# On the effects of mirror birefringence and its fluctuations to laser interferometric gravitational wave detectors

Yuta Michimura, Haoyu Wang, Francisco Salces-Carcoba, Christopher Wipf, Aidan Brooks, Koji Arai, Rana X Adhikari

arXiv:2308.00150v2. Published 2023-07-31T20:57:21Z. http://arxiv.org/abs/2308.00150v2
###### Abstract
Crystalline materials are promising candidates as substrates or high-reflective coatings of mirrors to reduce thermal noises in future laser interferometric gravitational wave detectors. However, birefringence of such materials could degrade the sensitivity of gravitational wave detectors, not only because it can introduce optical losses, but also because its fluctuations create extra phase noise in the arm cavity reflected beam. In this paper, we analytically estimate the effects of birefringence and its fluctuations in the mirror substrate and coating for gravitational wave detectors. Our calculations show that the requirements for the birefringence fluctuations in silicon substrates and AlGaAs coatings will be on the order of \(10^{-8}\) rad\(/\sqrt{\text{Hz}}\) and \(10^{-10}\) rad\(/\sqrt{\text{Hz}}\) at 100 Hz, respectively, for future gravitational wave detectors. We also point out that the optical cavity response needs to be carefully taken into account to estimate optical losses from depolarization.
## I Introduction
The first detections of gravitational waves from binary black holes [1] and binary neutron stars [2; 3] by Advanced LIGO [4] and Advanced Virgo [5] inaugurated gravitational wave physics and astronomy. Improvements in the sensitivity of these laser interferometric detectors in recent years enabled routine detections and more precise binary parameter estimation [6]. Further improvements in the astrophysical reach of these detectors will allow us to study the origin of massive black holes, the neutron star equation of state, alternative gravity theories, and cosmology.
The fundamental limitation to the sensitivity of these detectors in their most sensitive frequency band is set by thermal vibrations of the mirror surfaces [7]. KAGRA [8; 9] and other concepts for future gravitational wave detectors plan to utilize cryogenic crystalline test mass mirrors for thermal noise reduction, instead of fused silica mirrors at room temperature. KAGRA uses sapphire test masses, and plans to cool them down to 22 K [10]. Voyager is an upgrade plan for LIGO to use 123 K silicon to increase the astrophysical reach by a factor of 4-5 over the Advanced LIGO design [11]. Next generation detectors such as the Einstein Telescope [12; 13] also plan to use silicon test masses at cryogenic temperatures for the low frequency detectors, and Cosmic Explorer [14; 15] considers using them for an upgrade. In addition, crystalline coatings such as AlGaAs [16] and AlGaP [17] are considered promising candidates to reduce coating Brownian noise, instead of amorphous silica and tantala coatings.
Although crystalline materials are promising for reducing thermal noise, it has been pointed out that slight birefringence of mirror substrates and coatings could cause optical losses due to depolarization of the light, and degrade the interferometric contrast [18]. The birefringence and its inhomogeneity in the sapphire input test masses of KAGRA were found to be higher than expected [19; 20], and around 10% of the power was lost on reflection due to depolarization when the arm cavities are not on resonance [9]. Ideally, crystalline silicon is a cubic crystal and optically isotropic, but it could have strain-induced birefringence from crystal dislocations and from the support forces in the mirror suspension system. Birefringence measurements in silicon mirrors have revealed that the amount of static birefringence is \(\Delta n\sim 10^{-7}\) or less at laser wavelengths of 1.55 \(\mu\)m [21] and 2 \(\mu\)m [22] at room temperature, which satisfies the optical loss requirements for future detectors. Also, previous cavity experiments using AlGaAs coatings reported birefringence at the 1 mrad level [23; 16; 24].
These past studies have focused on static birefringence and the optical losses from depolarization. However, a recent measurement of thermal noise in crystalline mirror coatings at cryogenic temperatures reported excess birefringent noise, which could limit the sensitivity of future gravitational wave detectors [25]. Theoretical calculations on thermal fluctuations of birefringence in crystalline mirror coatings have also revealed that the noise from these fluctuations could be similar to Brownian noise [26]. It is also worth noting that experiments searching for vacuum magnetic birefringence, such as PVLAS and OVAL, have been suspected to be limited by thermal birefringence noise of mirrors [27; 28; 29; 30; 31]. These temporal birefringence fluctuations could also limit optical cavity based axion dark matter searches using the birefringence effect from axion-photon coupling [32; 33; 34; 35; 36].
In this paper, we study the effects of birefringence and its fluctuations on gravitational wave detectors based on
Fabry-Perot-Michelson interferometers. We show that the polarization axis and the crystal axes of the arm cavity mirrors need to be aligned to avoid optical losses and to reduce noises from birefringence fluctuations. We also show that the cavity response to birefringence needs to be correctly taken into account when estimating the noises and the optical losses of arm cavities. We start by analytically describing the cavity response to birefringence in Sec. II. In Sec. III, we focus on noises from substrate birefringence and coating birefringence, and derive requirements on their fluctuations for future gravitational wave detectors. In Sec. IV, we expand our formulation to include spatial higher order modes, and discuss power losses from inhomogeneous birefringence of the substrate and the coating. Our conclusions and outlook are summarized in Sec. V.
## II Cavity response to birefringence
Let us consider a Fabry-Perot cavity formed by an input test mass (ITM) and an end test mass (ETM), as shown in Fig. 1. We consider birefringence of the ITM substrate, the ITM high-reflective coating, and the ETM high-reflective coating. The ordinary axis of the ETM coating is rotated by \(\theta\) with respect to that of the ITM. The input beam is linearly polarized and its polarization is rotated by \(\theta_{\mathrm{pol}}\) with respect to the ordinary axis of the ITM. We assume that the crystal axes of the ITM substrate are aligned with those of its coating. This will not affect the results of this paper, as we will treat the substrate birefringence and the coating birefringence independently in the following sections.
For calculating the cavity response to birefringence, we can use the Jones matrix formalism [37]. In the basis of the ITM crystal axes, the electric field of the input beam can be written as
\[\vec{E}_{\mathrm{in}}=\left(\vec{e}_{\mathrm{o}}\ \ \vec{e}_{\mathrm{e}} \right)\vec{v}_{\mathrm{in}}E_{\mathrm{in}}, \tag{1}\]
where \(\vec{e}_{\mathrm{o}}\) and \(\vec{e}_{\mathrm{e}}\) are eigenvectors along with the ITM ordinary and extraordinary axes, and \(\vec{v}_{\mathrm{in}}\) is the vector representing the input polarization.
We suppose the ITM substrate is lossless, and that the amplitude reflectivity and the amplitude transmission of the whole ITM are determined by the high-reflective coating. Then the amplitude transmission of the ITM can be written as
\[T_{1}=\begin{pmatrix}t_{1}&0\\ 0&t_{1}e^{-i\frac{1}{2}\Delta\phi_{\mathrm{t_{1}}}}\end{pmatrix}, \tag{2}\]
where \(\Delta\phi_{\mathrm{t_{1}}}/2\) is the phase difference between the ordinary and extraordinary axes in the ITM transmission from both the substrate and the coating birefringence, and \(t_{1}\) is the amplitude transmission of ITM. Here, we assumed that the amplitude transmission is the same for both axes. Similarly, the amplitude reflectivity of ITM and ETM from the high-reflective coating side can be written as
\[R_{j}=\begin{pmatrix}r_{j}&0\\ 0&r_{j}e^{-i\Delta\phi_{r_{j}}}\end{pmatrix}, \tag{3}\]
where \(\Delta\phi_{\mathrm{r_{j}}}\) is the phase difference between the ordinary and extraordinary axes in ITM and ETM reflection, and \(r_{j}\) is the amplitude reflectivity of ITM and ETM. \(j=1\) is for ITM and \(j=2\) is for ETM. Also, the amplitude reflectivity of ITM from the substrate side can be written as
\[S_{1}=\begin{pmatrix}-r_{1}&0\\ 0&-r_{1}e^{-i\Delta\phi_{\mathrm{s_{1}}}}\end{pmatrix}, \tag{4}\]
where \(\Delta\phi_{s_{1}}\) is the phase difference between the ordinary and extraordinary axes in the ITM reflection from the substrate side. From energy conservation and time-reversal symmetry, \(\Delta\phi_{t_{1}}=\Delta\phi_{r_{1}}+\Delta\phi_{s_{1}}\). Here, we use the convention that \(r_{j}\) and \(t_{1}\) are real, and that the sign is flipped for reflection from the ITM substrate side. We keep the coordinate axes the same even if the propagation direction flips on mirror reflections, so that the sign for both polarizations will be the same.
For arm cavities in gravitational wave detectors, the mirrors are designed such that \(r_{2}\simeq 1\) and \(r_{1}<r_{2}\), so that almost all the light is reflected back. Cavity length changes from gravitational waves are read out from the phase of the cavity reflected beam. In the following subsections, we calculate the polarization eigenmodes in the cavity and the phase of the cavity reflected beam.
### Polarization eigenmodes in the cavity
The electric field inside the cavity that propagates from ITM to ETM can be written as
\[\vec{E}_{\mathrm{cav}}=\left(I-A\right)^{-1}T_{1}\vec{E}_{\mathrm{in}}, \tag{5}\]
with \(I\) being the identity matrix. Here,
\[A\equiv R_{1}R(-\theta)R_{2}R(\theta)e^{-i\phi}, \tag{6}\]
where \(\phi=4\pi L/\lambda\) is the phase acquired in the cavity round-trip, with \(L\) and \(\lambda\) being the cavity length and the laser wavelength, and

\[R(\theta)\equiv\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}. \tag{7}\]

Figure 1: The schematic of a Fabry-Pérot cavity with the mirror crystal axes and the input beam polarization axis illustrated. With respect to the ITM ordinary axis, the input polarization is rotated by \(\theta_{\mathrm{pol}}\) and the ETM ordinary axis is rotated by \(\theta\).
Note that \(\phi\) includes phase acquired in the ITM and ETM reflection for their ordinary axes. The resonant polarization mode is the eigenvectors of
\[M_{\rm cav}\equiv(I-A)^{-1}\,T_{1}. \tag{8}\]
The cavity enhancement factors for each mode will be the eigenvalues of \(M_{\rm cav}\).
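As a concrete check of this formalism, the eigenmodes and enhancement factors can be evaluated numerically. The sketch below builds Eqs. (2), (3), (6), and (8) with NumPy; all parameter values (reflectivities, birefringent phase differences, and angles) are illustrative assumptions, not values for any particular detector.

```python
import numpy as np

# Illustrative, assumed parameters (not detector values)
r1, r2 = 0.99, 0.9999
t1 = np.sqrt(1 - r1**2)                        # lossless ITM assumption
dphi_t1, dphi_r1, dphi_r2 = 2e-3, 1e-3, 1e-3   # birefringent phase differences [rad]
theta, phi = 0.3, 0.0                          # ETM rotation, round-trip phase [rad]

def rot(a):
    """Rotation matrix R(a) of Eq. (7)."""
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

T1 = np.diag([t1, t1 * np.exp(-0.5j * dphi_t1)])   # Eq. (2)
R1 = np.diag([r1, r1 * np.exp(-1j * dphi_r1)])     # Eq. (3), ITM
R2 = np.diag([r2, r2 * np.exp(-1j * dphi_r2)])     # Eq. (3), ETM

A = R1 @ rot(-theta) @ R2 @ rot(theta) * np.exp(-1j * phi)  # Eq. (6)
M_cav = np.linalg.inv(np.eye(2) - A) @ T1                   # Eq. (8)

w, v = np.linalg.eig(M_cav)  # eigenvalues: enhancement factors; columns of v: eigenmodes
print("|enhancement factors|:", np.abs(w))
print("eigenmodes (columns):")
print(v)
```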
When \(\theta=0\), the ITM axes and the ETM axes are aligned, and the eigenvectors will be
\[\vec{v}_{a}=\begin{pmatrix}1\\ 0\end{pmatrix},\qquad\vec{v}_{b}=\begin{pmatrix}0\\ 1\end{pmatrix}, \tag{9}\]
which means that the resonant modes are linear polarizations along the ITM ordinary axis \(\vec{e}_{\rm o}\) and the extraordinary axis \(\vec{e}_{\rm e}\). The cavity enhancement factors will be
\[w_{a}=\frac{t_{1}}{1-r_{1}r_{2}e^{-i\phi}},\qquad w_{b}=\frac{t_{1}e^{-i\frac{1}{2}\Delta\phi_{t_{1}}}}{1-r_{1}r_{2}e^{-i(\phi+\Delta\phi_{r_{1}}+\Delta\phi_{r_{2}})}}. \tag{10}\]
The resonant frequency difference between two eigenmodes therefore will be
\[\Delta\nu=\frac{\Delta\phi_{\rm r_{1}}+\Delta\phi_{\rm r_{2}}}{2\pi}\nu_{\rm FSR}, \tag{11}\]
where \(\nu_{\rm FSR}=c/(2L)\) is the free spectral range of the cavity.
When \(\theta=\pi/2\), the ITM ordinary axis and the ETM extraordinary axis are aligned, and the eigenvectors again will be the same as the ones given in Eq. (9). The cavity enhancement factors will be
\[w_{a}=\frac{t_{1}}{1-r_{1}r_{2}e^{-i(\phi+\Delta\phi_{\rm r_{2}})}},\quad w_{b }=\frac{t_{1}e^{-i\frac{1}{2}\Delta\phi_{t_{1}}}}{1-r_{1}r_{2}e^{-i(\phi+ \Delta\phi_{\rm r_{1}})}}. \tag{12}\]
The resonant frequency difference between two eigenmodes therefore will be
\[\Delta\nu=\frac{\Delta\phi_{\rm r_{1}}-\Delta\phi_{\rm r_{2}}}{2\pi}\nu_{\rm FSR}. \tag{13}\]
Since we defined the ITM and ETM axes such that \(\Delta\phi_{\rm r_{i}}\) have the same sign for ITM and ETM, when \(\theta=0\), the phase difference between the axes are added and the resonant frequency difference is maximized. When \(\theta=\pi/2\), it is minimized, as the phase difference is cancelled. When \(0<\theta<\pi/2\), the resonant frequency difference will be in between the maximum and the minimum.
When the resonant frequency difference is smaller than the cavity linewidth, i.e., \(\Delta\phi_{\rm r_{i}}\ll 2\pi/\mathcal{F}\), and when the effect from the ITM substrate birefringence is small, i.e., \(\Delta\phi_{\rm t_{1}}\ll\Delta\phi_{\rm r_{1}}\mathcal{F}/\pi\), the resonant frequency difference can be calculated with
\[\Delta\nu\simeq\frac{2\pi(\arg w_{a}-\arg w_{b})}{\mathcal{F}}\frac{\nu_{\rm FSR }}{2\pi}, \tag{14}\]
at \(\phi=0\), where
\[\mathcal{F}=\frac{\pi\sqrt{r_{1}r_{2}}}{1-r_{1}r_{2}} \tag{15}\]
is the finesse of the cavity. This can be further approximated as [38]
\[\Delta\nu\simeq\frac{\delta_{\rm EQ}}{2\pi}\nu_{\rm FSR}, \tag{16}\]
where
\[\delta_{\rm EQ}\equiv\sqrt{(\Delta\phi_{\rm r_{1}}-\Delta\phi_{\rm r_{2}})^{ 2}+4\Delta\phi_{\rm r_{1}}\Delta\phi_{\rm r_{2}}\cos^{2}\theta}, \tag{17}\]
when \(\delta_{\rm EQ}\ll 1\). Also, the cavity eigenmodes are linear polarizations approximated as
\[\vec{v}_{a}=\begin{pmatrix}\cos\theta_{\rm EQ}\\ \sin\theta_{\rm EQ}\end{pmatrix},\quad\vec{v}_{b}=\begin{pmatrix}-\sin\theta_{ \rm EQ}\\ \cos\theta_{\rm EQ}\end{pmatrix}, \tag{18}\]
where the polarization angle is defined by
\[\cos 2\theta_{\rm EQ}=\frac{\frac{\Delta\phi^{\prime}_{r_{1}}}{\Delta\phi_{\rm r _{2}}}+\cos 2\theta}{\sqrt{\left(\frac{\Delta\phi^{\prime}_{r_{1}}}{\Delta\phi_{\rm r _{2}}}-1\right)^{2}+4\frac{\Delta\phi^{\prime}_{r_{1}}}{\Delta\phi_{\rm r_{2}}} \cos^{2}\theta}}, \tag{19}\]
with
\[\Delta\phi^{\prime}_{r_{1}}\equiv\Delta\phi_{r_{1}}+\frac{\pi}{\mathcal{F}} \Delta\phi_{\rm t_{1}}. \tag{20}\]
When \(\Delta\phi^{\prime}_{r_{1}}\gg\Delta\phi_{r_{2}}\), \(\theta_{\rm EQ}\) is equal to zero; when \(\Delta\phi^{\prime}_{r_{1}}=\Delta\phi_{r_{2}}\), \(\theta_{\rm EQ}\) is equal to \(\theta/2\); and when \(\Delta\phi^{\prime}_{r_{1}}\ll\Delta\phi_{r_{2}}\), \(\theta_{\rm EQ}\) is equal to \(\theta\). Note that the polarization states resonating inside the cavity are elliptical polarizations given by \(R_{1}T_{1}\vec{v}_{a,b}/(r_{1}t_{1})\), and are different from the linear polarizations given by Eq. (18).
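The closed-form approximations of Eqs. (16)-(19) can be cross-checked against a direct eigendecomposition of the round-trip operator \(A\). A minimal sketch with assumed illustrative values:

```python
import numpy as np

# Assumed illustrative values, not detector parameters
r1, r2 = 0.99, 0.9999
F = np.pi * np.sqrt(r1 * r2) / (1 - r1 * r2)         # finesse, Eq. (15)
dphi_r1, dphi_r2, dphi_t1 = 1.0e-3, 0.5e-3, 2.0e-3   # [rad]
theta = 0.4                                           # ETM rotation [rad]

rot = lambda a: np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
R1 = np.diag([r1, r1 * np.exp(-1j * dphi_r1)])
R2 = np.diag([r2, r2 * np.exp(-1j * dphi_r2)])
A = R1 @ rot(-theta) @ R2 @ rot(theta)                # Eq. (6) with phi = 0

lam = np.linalg.eigvals(A)
delta_num = abs(np.angle(lam[0]) - np.angle(lam[1]))  # numerical round-trip phase split

# Eq. (17), which should agree with delta_num to leading order
delta_eq = np.sqrt((dphi_r1 - dphi_r2)**2 + 4 * dphi_r1 * dphi_r2 * np.cos(theta)**2)
print(delta_num, delta_eq)

# Eq. (19) with the substrate correction of Eq. (20)
a = (dphi_r1 + np.pi / F * dphi_t1) / dphi_r2
cos2t = (a + np.cos(2 * theta)) / np.sqrt((a - 1)**2 + 4 * a * np.cos(theta)**2)
print("theta_EQ [rad]:", 0.5 * np.arccos(cos2t))
```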
The mis-match between the cavity polarization mode and the input beam polarization can be calculated with
\[\Lambda^{2}=1-|\vec{v}_{a}\cdot\vec{v}_{\rm in}|^{2}\,. \tag{21}\]
When the input beam is linearly polarized with the polarization angle of \(\theta_{\rm pol}\) such that
\[\vec{v}_{\rm in}=R(\theta_{\rm pol})\begin{pmatrix}1\\ 0\end{pmatrix}=\begin{pmatrix}\cos\theta_{\rm pol}\\ \sin\theta_{\rm pol}\end{pmatrix}, \tag{22}\]
Eq. (21) reduces to
\[\Lambda^{2}=\sin^{2}{(\theta_{\rm EQ}-\theta_{\rm pol})}. \tag{23}\]
The mis-match will be less than 0.1% when \(|\theta_{\rm EQ}-\theta_{\rm pol}|\) is smaller than 1.8 degrees. For gravitational wave detectors, this is required for both arm cavities, which means that the axes of the two arm cavities need to be aligned to the same degree. Note that the mis-match does not directly translate into the same amount of power loss; the actual power loss also depends on the amount of birefringence, as we will discuss in Sec. IV.
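For reference, the 1.8-degree figure follows directly from Eq. (23):

```python
import numpy as np
# Eq. (23): mismatch = sin^2(theta_EQ - theta_pol)
print(np.sin(np.radians(1.8))**2)   # ~9.9e-4, i.e. ~0.1 %
```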
Figure 2 shows the polarization eigenmodes of the cavity as a function of the ETM rotation angle \(\theta\), calculated using Eqs. (16) and (19). As we have discussed earlier, the resonant frequency difference is maximized at \(\theta=0\), and minimized at \(\theta=\pi/2\). When \(\theta=\pi/2\) and \(\Delta\phi_{\mathrm{r}_{1}}=\Delta\phi_{\mathrm{r}_{2}}\), the phase difference between the ordinary and extraordinary axes is completely cancelled, and the two modes become degenerate. In this case, two linear polarizations and two circular polarizations are both valid sets of cavity eigenmodes, since the two modes have the same resonant frequency.
The bottom panel of Fig. 2 shows the mis-match calculated using Eq. (21), assuming the input polarization is linear and aligned with either of the ITM axes. The mis-match is nulled at \(\theta=0\) and \(\theta=\pi/2\). To minimize the mis-match and to make the resonant frequency difference large, aligning the ETM rotation such that \(\theta=0\) and aligning the input polarization to one of the ITM axes is the optimal choice. The requirement on the alignment will not be severe, since the dependence on the ETM rotation angle goes as \(\theta^{2}\) around \(\theta=0\).
For deriving the cavity reflected beam, we need to calculate the electric field inside the cavity that propagates from ETM to ITM. This can be written as
\[\vec{E}^{\prime}_{\mathrm{cav}} = R(-\theta)R_{2}R(\theta)e^{-i\phi}M_{\mathrm{cav}}\vec{E}_{ \mathrm{in}} \tag{24}\] \[\equiv M^{\prime}_{\mathrm{cav}}\vec{E}_{\mathrm{in}}. \tag{25}\]
The eigenvectors of \(M^{\prime}_{\mathrm{cav}}\) are the same as those of \(M_{\mathrm{cav}}\) within our approximations discussed above, but the cavity enhancement factors will be slightly different. When \(\theta=0\), the cavity enhancement factors will be
\[w^{\prime}_{a}=\frac{t_{1}r_{2}e^{-i\phi}}{1-r_{1}r_{2}e^{-i\phi}},\quad w^{\prime}_{b}=\frac{t_{1}r_{2}e^{-i(\phi+\frac{1}{2}\Delta\phi_{\mathrm{t}_{1}}+\Delta\phi_{\mathrm{r}_{2}})}}{1-r_{1}r_{2}e^{-i(\phi+\Delta\phi_{\mathrm{r}_{1}}+\Delta\phi_{\mathrm{r}_{2}})}}, \tag{26}\]
and when \(\theta=\pi/2\), those will be
\[w^{\prime}_{a}=\frac{t_{1}r_{2}e^{-i(\phi+\Delta\phi_{\mathrm{r}_{2}})}}{1-r_{1}r_{2}e^{-i(\phi+\Delta\phi_{\mathrm{r}_{2}})}},\quad w^{\prime}_{b}=\frac{t_{1}r_{2}e^{-i(\phi+\frac{1}{2}\Delta\phi_{\mathrm{t}_{1}})}}{1-r_{1}r_{2}e^{-i(\phi+\Delta\phi_{\mathrm{r}_{1}})}}. \tag{27}\]
Compared with \(w_{a}\) and \(w_{b}\), those have extra phase \(\phi\) from the cavity round trip and extra phase \(\Delta\phi_{\mathrm{r}_{2}}\) for the corresponding axis for one additional reflection from ETM.
### Phase of cavity reflected beam
The noises due to temporal fluctuations of birefringence will be imprinted in the phase of the cavity reflected beam. The electric field of the cavity reflection can be written as
\[\vec{E}_{\mathrm{refl}}=M_{\mathrm{refl}}\vec{E}_{\mathrm{in}} \tag{28}\]
where
\[M_{\mathrm{refl}}\equiv S_{1}+T_{1}M^{\prime}_{\mathrm{cav}}. \tag{29}\]
The first term corresponds to the prompt reflection from ITM, and the second term is the ITM transmitted beam from the cavity circulating beam. In general, when the input beam polarization component is
\[\vec{v}_{\mathrm{in}}=a\vec{v}^{\prime}_{a}+b\vec{v}^{\prime}_{b}, \tag{30}\]
the polarization component of the reflected beam is
\[M_{\mathrm{refl}}\vec{v}_{\mathrm{in}}=a(S_{1}+w^{\prime}_{a}T_{1})\vec{v}^{ \prime}_{a}+b(S_{1}+w^{\prime}_{b}T_{1})\vec{v}^{\prime}_{b}. \tag{31}\]
Since the resonant condition of each eigenmode is generally different, it is generally \(|w^{\prime}_{a}|\neq|w^{\prime}_{b}|\). Therefore, the polarization component of the cavity reflected beam will be different from the input polarization.
Figure 2: The polarization eigenmodes of a Fabry-Pérot cavity as a function of the ETM rotation angle \(\theta\). The top panel shows the round-trip phase difference between the eigenmodes in units of \(\Delta\phi_{\mathrm{r}_{1}}\), i.e., \(2\pi\Delta\nu/(\nu_{\mathrm{FSR}}\Delta\phi_{\mathrm{r}_{1}})\), which is proportional to the resonant frequency difference. The middle panel shows the polarization angle of the eigenmodes \(\theta_{\mathrm{EQ}}\) calculated using Eq. (19). The bottom panel shows the mis-match of the input beam polarization to the eigenmodes, when it is linear and aligned with the ITM axes, calculated using Eq. (21). Different line colors correspond to different \(\Delta\phi_{\mathrm{r}_{2}}/\Delta\phi_{\mathrm{r}_{1}}\) ratios. The blue lines for the \(\Delta\phi_{\mathrm{r}_{2}}=0\) case are zero in the bottom two plots.

When we use a Faraday isolator to extract the cavity reflection, we extract the polarization component which is the same as the input polarization. Therefore, the phase of the cavity reflected beam can be calculated with
\[\arg\left(E_{\rm out}\right)=\arg\left(E_{\rm ref\parallel}\right)=\arg\left(E_{ \rm in}M_{\rm ref\parallel}\vec{v}_{\rm in}\cdot\vec{v}_{\rm in}\right). \tag{32}\]
In the case when the input beam polarization is aligned to the ITM ordinary axis, this reflected phase is the phase of the (1,1) component of \(M_{\rm refl}\), and that for the ITM extraordinary axis is the phase of the (2,2) component of \(M_{\rm refl}\).
Let us first consider the effects from ITM. If we set \(\Delta\phi_{\rm r_{2}}=0\) and the input beam is linearly polarized with the polarization angle of \(\theta_{\rm pol}\) as shown in Eq. (22), the reflected electric field in the polarization parallel to \(\vec{v}_{\rm in}\) and in the orthogonal polarization will be
\[\frac{E_{\rm ref\parallel}}{E_{\rm in}}=M_{\rm refl}\vec{v}_{\rm in}\cdot\vec{v}_{\rm in}=(-r_{1}+w^{\prime}_{a}t_{1})\cos^{2}\theta_{\rm pol}+(-r_{1}e^{-i\Delta\phi_{\rm s_{1}}}+w^{\prime}_{b}t_{1}e^{-i\frac{1}{2}\Delta\phi_{\rm t_{1}}})\sin^{2}\theta_{\rm pol}, \tag{33}\]

\[\frac{E_{\rm ref\perp}}{E_{\rm in}}=M_{\rm refl}\vec{v}_{\rm in}\cdot R(\theta_{\rm pol})\begin{pmatrix}0\\ 1\end{pmatrix}=\left[(-r_{1}+w^{\prime}_{a}t_{1})-(-r_{1}e^{-i\Delta\phi_{\rm s_{1}}}+w^{\prime}_{b}t_{1}e^{-i\frac{1}{2}\Delta\phi_{\rm t_{1}}})\right]\frac{\sin\left(2\theta_{\rm pol}\right)}{2}. \tag{34}\]
These are similar to the electric fields of the bright reflection port and the dark anti-symmetric port for a Fabry-Perot-Michelson interferometer that has an unbalanced beam splitter.
The effects from the ETM birefringence can be calculated by setting \(\Delta\phi_{\rm s_{1}}=\Delta\phi_{\rm r_{1}}=0\), and replacing \(\Delta\phi_{\rm r_{1}}\) with \(\Delta\phi_{\rm r_{2}}\) and \(\theta_{\rm pol}\) with \(\theta+\theta_{\rm pol}\). If we combine the effects from the ITM and ETM, the phase of the reflected beam around the resonance can be approximated as
\[\arg\left(\frac{E_{\rm ref\parallel}}{E_{\rm in}}\right)=(\Delta\phi_{\rm s_{1}}-2\Delta\phi_{\rm t_{1}})\sin^{2}\theta_{\rm pol}-\frac{\cal F}{\pi}\left[\phi+\Delta\phi_{\rm r_{1}}\sin^{2}\theta_{\rm pol}+\Delta\phi_{\rm r_{2}}\sin^{2}\left(\theta+\theta_{\rm pol}\right)\right], \tag{35}\]
with the approximations that \(\Delta\phi_{\rm r_{1}}\ll 2\pi/{\cal F}\) and \(r_{2}=1\). It is clear that both the ETM rotation angle \(\theta\) and the input beam polarization angle \(\theta_{\rm pol}\) change the phase of the cavity reflected beam, and will contribute to the phase noise, unless \(\theta_{\rm pol}\) and \(\theta+\theta_{\rm pol}\) are either \(0\) or \(\pi/2\), where the effects are quadratic in these angles. The fluctuations of the phase differences between the ordinary and extraordinary axes also create phase noises, unless \(\theta_{\rm pol}\) and \(\theta+\theta_{\rm pol}\) are both \(0\).
It is worth noting that, even if we use this phase to lock the cavity, this does not generally mean that the cavity is locked on resonance to one of its polarization eigenmodes, as the cavity reflected beam contains the phase fluctuations from both polarization eigenmodes. To avoid the mixing of phase noises from two polarization eigenmodes, it is actually better to have higher static coating birefringence, i.e., \(\Delta\phi_{\rm r_{i}}\gg 2\pi/{\cal F}\). If the static coating birefringence is high such that one of the eigenmodes is out of resonance when the other is resonant, only \(\Delta\phi_{\rm s_{1}}\) and \(\phi\) terms remain in Eq. (35).
## III Noises from birefringence
In this section, we calculate the phase noises from temporal fluctuations of birefringence, and derive the requirements for current and future gravitational wave detectors. For calculating the requirements, we have used the interferometer parameters summarized in Table 1, and the displacement sensitivity curves shown in Fig. 3. In the last part of this section, we also discuss the noise from amplitude fluctuations in the orthogonal polarization at the anti-symmetric port of the Fabry-Perot-Michelson interferometer. Although different interferometers plan to use different materials for the mirrors, the discussions presented here do not depend on the choice of materials.
### Phase noises from substrate birefringence
The phase changes from the ITM substrate birefringence can be calculated from Eq. (35) by setting \(\Delta\phi_{\rm r_{1}}=\Delta\phi_{\rm r_{2}}=0\), and \(\Delta\phi_{\rm s_{1}}=\Delta\phi_{\rm t_{1}}\). In this case, Eq. (35) reduces to
\[\arg\left(\frac{E_{\rm ref\parallel}}{E_{\rm in}}\right)=-\Delta\phi_{\rm s_{1 }}\sin^{2}\theta_{\rm pol}-\frac{\cal F}{\pi}\phi. \tag{36}\]
Therefore, the length noise couplings from the fluctuations of \(\theta_{\rm pol}\) and \(\Delta\phi_{\rm s_{1}}\) can be calculated as
\[\frac{\delta L}{\delta\theta_{\rm pol}} = \frac{\lambda}{4\pi}\frac{\delta[\arg\left(E_{\rm ref\parallel} \right)]}{\delta\theta_{\rm pol}}\left(\frac{\delta[\arg\left(E_{\rm ref \parallel}\right)]}{\delta\phi}\right)^{-1} \tag{37}\] \[= \frac{\lambda}{4{\cal F}}\Delta\phi_{\rm s_{1}}\sin 2\theta_{\rm pol},\] \[\frac{\delta L}{\delta(\Delta\phi_{\rm s_{1}})} = -\frac{\lambda}{4{\cal F}}\sin^{2}\theta_{\rm pol}. \tag{38}\]
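To see how the coupling of Eq. (38) turns into the requirement curves of Fig. 4, one can divide the displacement noise level by the coupling. In the sketch below, the value of `S_L` is an assumed placeholder standing in for the Voyager curve of Fig. 3 at 100 Hz (not a number quoted in this paper); the detector parameters are the Voyager entries of Table 1.

```python
import numpy as np

lam, F = 2050e-9, 3000            # Voyager wavelength and finesse (Table 1)
theta_pol = np.radians(1.0)       # residual misalignment assumed in Sec. III C
S_L = 1e-21                       # [m/sqrt(Hz)] assumed displacement noise at 100 Hz

coupling = lam / (4 * F) * np.sin(theta_pol)**2   # |dL/d(dphi_s1)|, Eq. (38)
print(S_L / coupling)             # required dphi_s1 stability, ~2e-8 rad/sqrt(Hz)
```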
| Detector | \(L\) | \(\mathcal{F}\) | \(t\) | \(\lambda\) | Ref. |
| --- | --- | --- | --- | --- | --- |
| aLIGO | 4 km | 450 | 20 cm | 1064 nm | [4] |
| A+ | 4 km | 450 | 20 cm | 1064 nm | [39] |
| Voyager | 4 km | 3000 | 55 cm | 2050 nm | [11] |
| CE | 40 km | 450 | 27.3 cm | 2050 nm | [15] |
| ET-LF | 10 km | 900 | 57 cm | 1550 nm | [13] |
| ET-HF | 10 km | 900 | 30 cm | 1064 nm | [13] |

Table 1: Interferometer parameters of Advanced LIGO (aLIGO), A+, Voyager, Cosmic Explorer (CE), Einstein Telescope Low Frequency (ET-LF), and ET High Frequency (ET-HF) used for calculating requirements. \(L\): arm length, \(\mathcal{F}\): arm finesse, \(t\): ITM thickness, \(\lambda\): laser wavelength.
### Phase noises from coating birefringence
Next, we consider the phase changes from the coating birefringence. From Eq. (35), it is clear that the second term from \(\Delta\phi_{\rm r_{1}}\) and \(\Delta\phi_{\rm r_{2}}\) contributes more to the phase of the reflected beam, compared with the first term from \(\Delta\phi_{\rm s_{1}}\) and \(\Delta\phi_{\rm t_{1}}\), since the phase acquired inside the cavity is enhanced by a factor of \(\mathcal{F}/\pi\). The length noise couplings from the fluctuations of \(\theta_{\rm pol}\), \(\theta\), and \(\Delta\phi_{\rm r_{i}}\) can be calculated as
\[\frac{\delta L}{\delta\theta_{\rm pol}} = \frac{\lambda}{4\pi}[\Delta\phi_{\rm r_{1}}\sin 2\theta_{\rm pol} \tag{39}\] \[+\Delta\phi_{\rm r_{2}}\sin{[2(\theta+\theta_{\rm pol})]}],\] \[\frac{\delta L}{\delta\theta} = \frac{\lambda}{4\pi}\Delta\phi_{\rm r_{2}}\sin{[2(\theta+\theta_ {\rm pol})]},\] (40) \[\frac{\delta L}{\delta(\Delta\phi_{\rm r_{1}})} = -\frac{\lambda}{4\pi}\sin^{2}\theta_{\rm pol},\] (41) \[\frac{\delta L}{\delta(\Delta\phi_{\rm r_{2}})} = -\frac{\lambda}{4\pi}\sin^{2}{(\theta+\theta_{\rm pol})}. \tag{42}\]
### Requirements on birefringence fluctuations
The noise couplings discussed above are nulled when \(\theta_{\rm pol}=0\) and \(\theta=0\). For the KAGRA test masses, the sapphire \(c\)-axis was aligned to the cylindrical plane of the test mass within 0.1 deg [20]. For deriving the requirements on birefringence fluctuations for the substrate and the coating, we assume that the input beam polarization and the ETM axes are aligned to the ITM axes within \(\theta_{\rm pol}=1\) deg and \(\theta=1\) deg, respectively.
Figure 3: The designed displacement sensitivity for different gravitational wave detectors. The strain sensitivity data are taken from Refs. [40; 41; 42], and converted to displacement sensitivities by removing the frequency-dependent responses to gravitational waves [43].

Figure 4: The requirements on birefringence fluctuations from the axis rotations (top) and from the phase difference between the ordinary and extraordinary axes (middle) for different gravitational wave detectors. The bottom plot shows the requirement on the substrate birefringence converted from the phase difference requirement on \(\Delta\phi_{\rm s_{1}}\) in the middle plot, assuming uniform \(\Delta n\), using Eq. (43). The solid lines are for a substrate with a static birefringence of \(\Delta n=10^{-7}\) and the dashed lines are for a coating with a static birefringence of \(\Delta\phi_{\rm r_{i}}=1\) mrad. For deriving these requirements, we assumed that the input beam polarization and the ETM axes are aligned to the ITM axes within \(\theta_{\rm pol}=1\) deg and \(\theta=1\) deg, and no safety margin is considered.
The solid lines in Fig. 4 show the derived requirements for the substrate birefringence fluctuations. We assumed that the ITM substrate has a uniform birefringence \(\Delta n\), so that \(\Delta\phi_{\rm s_{1}}\) can be written using the mirror thickness \(t\) as
\[\Delta\phi_{\rm s_{1}}=\frac{4\pi}{\lambda}\Delta nt. \tag{43}\]
We used the static birefringence value of \(\Delta n=10^{-7}\), which is a typical measured value for silicon [21, 22]. The dashed lines in Fig. 4 show the derived requirements for the coating, using the static birefringence value of \(\Delta\phi_{\rm r_{i}}=1\) mrad, which is a typical measured value for AlGaAs coatings [23, 16, 24]. The requirements do not change for other materials with the same amount of static birefringence. For deriving the requirement for \(\Delta\phi_{\rm r_{j}}\), we used Eq. (42), as this gives a more stringent requirement than Eq. (41). All the requirements are divided by \(\sqrt{2}\) to account for the birefringence noises of the two arm cavities adding incoherently, assuming both cavities have similar levels of birefringence. The requirements will be relaxed for effects common to the two arms, such as fluctuations in the input beam polarization angle and birefringence induced by laser intensity fluctuations.
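For a sense of scale, Eq. (43) with the Voyager numbers of Table 1 and the quoted static value \(\Delta n=10^{-7}\) gives a static round-trip phase difference of roughly 0.3 rad:

```python
import numpy as np
lam, t, dn = 2050e-9, 0.55, 1e-7   # wavelength [m], ITM thickness [m], static Delta n
print(4 * np.pi * dn * t / lam)    # Eq. (43): ~0.34 rad
```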
The requirements on the axis rotations for future gravitational wave detectors are on the order of \(10^{-10}\) rad\(/\sqrt{\rm Hz}\). We note that the requirements on \(\theta_{\rm pol}\) and \(\theta\) presented here are also the requirements on the polarization fluctuations of the input beam and on the roll motion of the mirrors. As for the roll motion of the mirrors, the vertical seismic motion creates less than \(10^{-11}\) rad\(/\sqrt{\rm Hz}\) of roll motion above 10 Hz for the Advanced LIGO suspensions, if we conservatively assume that the coupling from vertical to roll motion is unity [35, 44]. Therefore, the birefringence noise from the roll motion of the mirrors is small enough.
The requirements on the phase differences between the ordinary and extraordinary axes for future gravitational wave detectors are on the order of \(10^{-8}\) rad\(/\sqrt{\rm Hz}\) for the substrate, and \(10^{-10}\) rad\(/\sqrt{\rm Hz}\) for the coating. Birefringence at the \(10^{-8}\) rad\(/\sqrt{\rm Hz}\) level can be feasibly evaluated with shot noise limited interferometry at a laser power of \(P=10\) mW, as the shot noise limited phase sensitivity of a Michelson interferometer is given by
\[\phi_{\rm shot}=\sqrt{\frac{hc}{2\lambda P}}, \tag{44}\]
where \(h\) is the Planck constant and \(c\) is the speed of light. Evaluation of birefringence at the \(10^{-10}\) rad\(/\sqrt{\rm Hz}\) level requires a 10-W class laser or cavity enhancements. Measurements can be done at relatively low power compared with gravitational wave detectors, as the phase noise from birefringence in the detectors is attenuated by \(\sin^{2}\theta_{\rm pol}\) and \(\sin^{2}\left(\theta+\theta_{\rm pol}\right)\) by aligning the polarization axis and the mirror crystal axes. In the evaluation setup, the phase noise can be enhanced by intentionally misaligning the axes.
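A quick numerical check of Eq. (44), assuming a 1550 nm probe at \(P=10\) mW:

```python
import numpy as np
h, c = 6.626e-34, 2.998e8
lam, P = 1550e-9, 10e-3                # assumed probe wavelength and power
print(np.sqrt(h * c / (2 * lam * P)))  # Eq. (44): ~2.5e-9 rad/sqrt(Hz)
```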
One of the possible sources of birefringence fluctuations is magnetic field fluctuations, through the Faraday effect. Measured magnetic field fluctuations at various gravitational wave detector sites are on the order of \(10^{-12}\) T\(/\sqrt{\rm Hz}\) at 10 Hz [45], and the Verdet constant for silicon is 15 rad\(/(\)T\(\cdot\)m\()\)[46]. These give \(\Delta\phi_{\rm s_{1}}\) fluctuations at the \(10^{-11}\) rad\(/\sqrt{\rm Hz}\) level for the mirror thicknesses in Table 1, which is below the requirements given above.
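The order-of-magnitude arithmetic behind this estimate, where the factor of 2 for a double pass through the substrate on reflection is our assumption:

```python
V, B, t = 15.0, 1e-12, 0.55   # Verdet [rad/(T*m)], field [T/sqrt(Hz)], thickness [m]
print(2 * V * B * t)          # ~1.7e-11 rad/sqrt(Hz)
```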
### Amplitude noise at the anti-symmetric port
So far, we have considered the phase noise in the arm cavity reflected beams in gravitational wave detectors. In gravitational wave detectors, the differential arm length caused by gravitational waves will be read out as the interference fringe changes at the anti-symmetric port. Birefringence fluctuations will also create power fluctuations in the orthogonal polarization, and it will be a noise source when the output Faraday isolator has a finite extinction ratio \(\epsilon\), and the orthogonal polarization is not completely rejected. A slight misalignment of the axes between the input Faraday isolator and the output Faraday isolator would also cause a finite extinction ratio.
From Eq. (34), the power of the cavity reflected beam in the orthogonal polarization from the birefringence in ITM can be written as
\[\frac{P_{\rm ref\perp}|_{\rm res}}{P_{\rm in}} \simeq \frac{1}{4}\left(\Delta\phi_{\rm s_{1}}-2\Delta\phi_{\rm t_{1}}-\frac{\mathcal{F}}{\pi}\Delta\phi_{\rm r_{1}}\right)^{2}\sin^{2}\left(2\theta_{\rm pol}\right) \tag{45}\]
when the cavity is on resonance. Here, \(P_{\rm in}=|E_{\rm in}|^{2}\) is the input power to the cavity, and we used that \(r_{2}=1\), \(r_{1}\simeq 1\) and \(t_{1}^{2}=1-r_{1}^{2}\), which are good approximations for arm cavities of gravitational wave detectors. We also assumed that the amount of birefringence is uniform and small, i.e., \(\Delta\phi_{\rm r_{i}}\ll 2\pi/\mathcal{F}\), \(\Delta\phi_{\rm s_{1}}\ll 1\) and \(\Delta\phi_{\rm t_{1}}\ll 1\).
As we can see from Eq. (34), the orthogonal polarization vanishes when there is no birefringence, or when \(\theta_{\rm pol}\) is \(0\) or \(\pi/2\). The orthogonal polarization component is generated from the unbalance of the reflected electric fields between the two eigenmodes. Therefore, when the amount of birefringence is small, the phase of \(E_{\rm ref\perp}\) is always around \(\pi/2\) away from the phase of \(E_{\rm ref\parallel}\). This means that the orthogonal polarization in the cavity reflection is always in the quadrature phase with respect to the gravitational wave signal, independent of the resonant condition of the cavity.
In the case of gravitational wave detectors, the anti-symmetric port will therefore be either at the bright or the dark fringe for the orthogonal polarization when it is at the dark fringe for the main polarization. When both arms are completely symmetric and the amount of birefringence is the same, the anti-symmetric port will be at the bright fringe for the orthogonal polarization. This is the same reason why the polarization signal from axion dark matter is present at the anti-symmetric port, as discussed in Ref. [35]. In reality, the beam splitter in the Fabry-Perot-Michelson interferometer adds an extra phase difference between the two polarization axes due to the \(\sim\)45 deg angle of incidence, and the fringe will be slightly shifted.
To derive the requirements for the extinction ratio \(\epsilon\) of the output Faraday isolator, let us assume that the power of the orthogonal polarization component at the anti-symmetric port can be roughly estimated from the power from one of the arms. By requiring the power fluctuation from the orthogonal polarization from one of the arms to be less than the shot noise of the local oscillator beam in the main polarization, we can require
\[\epsilon<\frac{1}{P_{\text{refl}\perp}}\sqrt{\frac{2hcP_{\text{LO}}}{\lambda}}, \tag{46}\]
where \(P_{\text{LO}}\) is the power of the local oscillator beam at the anti-symmetric port. When the requirements for the birefringence fluctuations derived in the previous subsections are met, the noise from the birefringence fluctuations are lower than the shot noise of the gravitational wave detector. Therefore, the requirement can be rewritten as
\[\epsilon\lesssim\sqrt{\frac{P_{\text{LO}}}{P_{\text{in}}}}\left(\Delta\phi_{ \text{s}_{1}}-2\Delta\phi_{\text{t}_{1}}-\frac{\mathcal{F}}{\pi}\Delta\phi_{ \text{r}_{1}}\right)^{-1}. \tag{47}\]
For gravitational wave detectors operating with the DC readout scheme [47], \(P_{\text{LO}}\) and \(P_{\text{in}}\) are on the order of 10 mW and 10 kW for the power-recycled case, respectively. Assuming that the birefringence terms \(\Delta\phi_{\text{s}_{1}}\), \(\Delta\phi_{\text{t}_{1}}\), and \(\Delta\phi_{\text{r}_{1}}\mathcal{F}/\pi\) are on the order of 1 rad, the requirement on the extinction ratio will be \(\epsilon\lesssim 0.1\%\). This means that the input Faraday isolator and the output Faraday isolator have to be aligned within 1.8 degrees.
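The closing numbers of this argument in code form, using the order-of-magnitude powers quoted above:

```python
import numpy as np
eps = np.sqrt(10e-3 / 10e3)                 # Eq. (47) with terms ~1 rad: ~1e-3
print(eps)                                  # extinction ratio ~0.1 %
print(np.degrees(np.arcsin(np.sqrt(eps))))  # isolator alignment: ~1.8 deg
```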
## IV Optical losses from inhomogeneous birefringence
Birefringence and its inhomogeneity in cavities create power losses through depolarization. The mode content of the cavity reflected beam in the orthogonal polarization will be different depending on the locations of the birefringence and the resonant condition of the cavity. In this section, we discuss the power of the cavity reflected beam in the orthogonal polarization to estimate the optical loss.
To show that the different locations of birefringence create different mode content, we first consider the effects from ITM, as we have considered in Eqs. (33) and (34). From Eq. (34), the power losses to orthogonal polarization when the cavity is out of resonance will be
\[\frac{P_{\text{refl}\perp}}{P_{\text{in}}}\simeq\frac{1}{4}(\Delta\phi_{\text {s}_{1}})^{2}\sin^{2}{(2\theta_{\text{pol}})}, \tag{48}\]
under same approximations used to derive Eq. (45).
So far, we have only considered the birefringence uniform over the substrate and the coating. When there is a perturbation from a uniform birefringence, spatial higher order modes are generated. The amount of the higher order modes in the orthogonal polarization can be estimated from inhomogeneous birefringence \(\Delta\phi_{\text{s}_{1}}^{\text{HOM}}\). The power in the higher order modes when the cavity is on resonance and out of resonance will be
\[\frac{P_{\text{refl}\perp}^{\text{HOM}}\big{|}_{\text{res}}}{P_{ \text{in}}} \simeq \frac{1}{4}\left(\Delta\phi_{\text{s}_{1}}^{\text{HOM}}-\Delta \phi_{\text{t}_{1}}^{\text{HOM}}\right)^{2}\sin^{2}{(2\theta_{\text{pol}})}, \tag{49}\] \[\frac{P_{\text{refl}\perp}^{\text{HOM}}\big{|}_{\text{off}}}{P_{ \text{in}}} \simeq \frac{1}{4}(\Delta\phi_{\text{s}_{1}}^{\text{HOM}})^{2}\sin^{2}{( 2\theta_{\text{pol}})}, \tag{50}\]
respectively. Note that the coefficient for \(\Delta\phi_{\text{t}_{1}}^{\text{HOM}}\) is 1, as opposed to 2 for \(\Delta\phi_{\text{t}_{1}}\) in Eq. (45), since higher order modes do not resonate in the cavity and higher order modes are generated in the ITM transmission of intra-cavity beam.
For considering the effect from the ITM substrate birefringence, we can set \(\Delta\phi_{\text{r}_{1}}=0\), \(\Delta\phi_{\text{s}_{1}}=\Delta\phi_{\text{t}_{1}}\) and \(\Delta\phi_{\text{s}_{1}}^{\text{HOM}}=\Delta\phi_{\text{t}_{1}}^{\text{HOM}}\). In this case, the amount of the fundamental transverse mode in the orthogonal polarization stays the same when the cavity is out of resonance or on resonance. However, the amount of higher order modes in the orthogonal polarization is suppressed to the second order, as we can see from Eq. (49). This is similar to the Lawrence effect for the thermal lensing of ITM [48]. It is worth noting that the cavity reflected power in the main polarization \(P_{\text{refl}\parallel}\) could increase when the cavity is on resonance due to this effect, if the optical loss in the cavity is small compared with the optical loss from inhomogeneous birefringence.
For KAGRA sapphire ITM, the transmission wavefront error difference between two polarizations was measured to be around 60 nm in RMS [19; 20], which corresponds to the round-trip phase difference \(\Delta\phi_{\text{s}_{1}}^{\text{HOM}}\) of 0.7 rad in RMS. If we attribute this all to inhomogeneous refractive index difference using Eq. (43), this corresponds to \(\Delta n^{\text{HOM}}\) of \(2\times 10^{-7}\) in RMS, using the KAGRA sapphire mirror thickness being 15 cm and laser wavelength being 1064 nm. For sapphire, the amount of birefringence along \(c\)-axis can be calculated with [49]
\[\Delta n=\frac{n_{o}(n_{o}^{2}-n_{e}^{2})\psi^{2}}{n_{e}^{2}}, \tag{51}\]
where \(n_{e}=1.747\) and \(n_{o}=1.754\) are the refractive indices along the \(c\)-axis and along the axes orthogonal to the \(c\)-axis, respectively, and \(\psi\ll 1\) is the inclination of the light propagation direction with respect to the \(c\)-axis. Using this equation, the amount of birefringence observed in KAGRA can be explained by \(\psi^{\text{HOM}}\) being 0.2 deg in RMS. This is larger than the nominal orientation of the beam propagation axis with respect to the \(c\)-axis, which was aligned within 0.1 deg [20]. This suggests that \(\theta_{\text{pol}}\) is also inhomogeneous and uncontrolled.
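Inverting Eq. (51) for \(\psi\) with the refractive indices and the \(\Delta n^{\text{HOM}}\) quoted above reproduces the 0.2 deg figure:

```python
import numpy as np
n_o, n_e, dn = 1.754, 1.747, 2e-7
psi = np.sqrt(dn * n_e**2 / (n_o * (n_o**2 - n_e**2)))  # Eq. (51) solved for psi
print(np.degrees(psi))                                  # ~0.22 deg
```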
Using Eq. (50), this inhomogeneous birefringence creates a power loss to the orthogonal polarization of around 10% when the arm cavity is out of resonance, if \(\theta_{\rm pol}\) is around \(\pi/4\). This is consistent with the measured value in KAGRA, as reported in Ref. [9]. The reduction of the power loss to the orthogonal polarization on resonance was also observed, which is consistent with the Lawrence effect described above. In the KAGRA case, the power of the orthogonal polarization inside the power recycling cavity was reduced by a factor of three when the arm cavity was locked on resonance.
To make the optical loss due to inhomogeneous birefringence of the ITM substrate always smaller than \(0.1\%\), \(\Delta\phi_{\rm s_{1}}\) and \(\Delta\phi_{\rm s_{1}}^{\rm HOM}\) need to be smaller than \(0.06\) rad in RMS. Achieving this with surface figuring alone could be challenging, as surface figuring cannot compensate for the phase difference between the two axes. This requirement can be eased by aligning the input polarization axis to \(\theta_{\rm pol}=0\) or \(\pi/2\).
When considering the effect from the ITM coating birefringence, we can set \(\Delta\phi_{\rm s_{1}}=\Delta\phi_{\rm r_{1}}\). However, \(\Delta\phi_{\rm s_{1}}\) is not exactly \(\Delta\phi_{\rm t_{1}}\), as the penetration length for the coating is different from the coating thickness. Therefore, the Lawrence effect does not completely suppress the higher order modes. If we can set \(\Delta\phi_{\rm s_{1}}=l\Delta\phi_{\rm t_{1}}\), where \(0<l<1\) is the ratio of the penetration length over the coating thickness, the higher order modes in the orthogonal polarization increase when the cavity is locked on resonance, for \(l<0.5\). The fundamental transverse mode in the orthogonal polarization increases for high finesse cavities with \(\mathcal{F}/\pi\gg 1\).
The mode content in the orthogonal polarization from the ETM coating birefringence can be obtained by replacing \(\Delta\phi_{\rm r_{1}}\) with \(\Delta\phi_{\rm r_{2}}\) and \(\theta_{\rm pol}\) with \(\theta+\theta_{\rm pol}\) in Eqs. (45), (48), (49) and (50), and by setting \(\Delta\phi_{\rm s_{1}}=\Delta\phi_{\rm t_{1}}=0\), as
\[\frac{P_{\rm ref\perp}\rm|_{\rm res}}{P_{\rm in}} \simeq \frac{1}{4}\left(\frac{\mathcal{F}}{\pi}\Delta\phi_{\rm r_{2}} \right)^{2}\sin^{2}\left[2(\theta+\theta_{\rm pol})\right]\!, \tag{52}\] \[\frac{P_{\rm ref\perp}\rm|_{\rm off}}{P_{\rm in}} \simeq 0,\] (53) \[\frac{P_{\rm ref\perp}^{\rm HOM}|_{\rm res}}{P_{\rm in}} \simeq 0,\] (54) \[\frac{P_{\rm ref\perp}^{\rm HOM}|_{\rm off}}{P_{\rm in}} \simeq 0. \tag{55}\]
Therefore, as for the effects from the ETM coating birefringence, the power in the orthogonal polarization increases when the cavity is locked on resonance, and the fundamental transverse mode dominates, because the higher order modes are suppressed in the cavity.
The discussion above highlights the fact that the optical losses from birefringence need to be correctly taken into account when measuring the optical losses in the arm cavity. It also suggests that, by measuring the mode content of the beam in the orthogonal polarization when the cavity is out of resonance and on resonance, we can estimate where the optical losses from birefringence are mainly coming from.
Future gravitational wave detector designs call for 10 dB of detected squeezing, requiring that the total optical loss be less than \(10\%\) [50]. From Eqs. (45) and (52), when the birefringence terms \(\Delta\phi_{\rm s_{1}}\), \(\Delta\phi_{\rm t_{1}}\), and \(\Delta\phi_{\rm r_{2}}\mathcal{F}/\pi\) are on the order of \(1\) rad, \(|\theta_{\rm pol}|\) and \(|\theta+\theta_{\rm pol}|\) need to be less than \(1.8\) degrees to keep the optical loss from birefringence below \(0.1\%\). Similar to the discussion around Eq. (23), the polarization of the injected squeezed vacuum also needs to be aligned to better than \(1.8\) degrees to achieve an optical loss of less than \(0.1\%\).
## V Conclusions and outlook
In this paper, we have discussed the effects of birefringence and its fluctuations in the mirror substrate and coating for laser interferometric gravitational wave detectors. We have shown that the polarization axis of the beam and the crystal axes of the mirrors need to be aligned to minimize the optical losses and the noises from birefringence fluctuations. The optical losses from birefringence can be feasibly reduced to less than \(0.1\%\) when the axes are aligned within a few degrees. We have also shown that the requirements for the birefringence fluctuations in the substrate and the coating will be on the order of \(10^{-8}\) rad\(/\sqrt{\rm Hz}\) and \(10^{-10}\) rad\(/\sqrt{\rm Hz}\) at \(100\) Hz, respectively, for future gravitational wave detectors with mirrors that have a \(\Delta n=10^{-7}\) level of substrate birefringence and a \(\Delta\phi_{\rm r_{1}}=1\) mrad level of coating birefringence. When the static coating birefringence is large enough that the resonant frequency difference between the two polarization eigenmodes is larger than the cavity linewidth, the requirements on the coating birefringence fluctuations will be relaxed. In addition, we have derived the equations for estimating the amount of optical losses due to depolarization from inhomogeneous birefringence of mirror substrates and coatings. Our results provide the basic theory to study the noises and optical losses from birefringence fluctuations of mirrors in gravitational wave detectors.
In our model, we assumed the amount of birefringence and the mis-orientation of axes to be small. We also assumed the two interferometer arms of gravitational wave detectors to be close to symmetric. Detailed interferometer modeling will be necessary to treat larger birefringence, mis-orientation of axes, inhomogeneity of birefringence and axis orientations, and asymmetry between the two arms, including birefringent beam splitter effects. These effects would create classical radiation pressure noise, as the intra-cavity power fluctuates from birefringence fluctuations. Including the power and signal recycling cavities in the model would also be important when these effects are not negligible and the resonant conditions in the recycling cavities differ between polarizations. We leave these studies to future work.
###### Acknowledgements.
We would like to thank Hiroki Fujimoto, Kevin Kuns, Stefan W. Ballmer, Valery Frolov and Martin M. Fejer for insightful discussions. This work was supported by the Gordon and Betty Moore Foundation, by the National Science Foundation under Grant No. PHY-1912677, by JSPS KAKENHI Grant No. JP20H05854, and by JST PRESTO Grant No. JPMJPR200B. FSC acknowledges support from the Barish-Weiss postdoctoral fellowship. This paper carries LIGO DCC No. LIGO-P2300220 and JGW Document No. JGW-P2315068.
---

# GateSeeder: Near-memory CPU-FPGA Acceleration of Short and Long Read Mapping

Julien Eudine, Mohammed Alser, Gagandeep Singh, Can Alkan, Onur Mutlu

arXiv:2309.17063v1. Published 2023-09-29T08:49:44Z. http://arxiv.org/abs/2309.17063v1
###### Abstract
**Motivation:** Read mapping is a computationally expensive process and a major bottleneck in genomics analyses. The performance of read mapping is mainly limited by the performance of three key computational steps: Index Querying, Seed Chaining, and Sequence Alignment. The first step is dominated by how fast and frequent it accesses the main memory (i.e., memory-bound), while the latter two steps are dominated by how fast the CPU can compute their computationally-costly dynamic programming algorithms (i.e., compute-bound). Accelerating these three steps by exploiting new algorithms and new hardware devices is essential to accelerate most genome analysis pipelines that widely use read mapping. Given the large body of work on accelerating Sequence Alignment, this work focuses on significantly improving the remaining steps.
**Results:** We introduce _GateSeeder_, the _first_ CPU-FPGA-based near-memory acceleration of both short and long read mapping. GateSeeder exploits near-memory computation capability provided by modern FPGAs that couple a reconfigurable compute fabric with high-bandwidth memory (HBM) to overcome the memory-bound and compute-bound bottlenecks. GateSeeder also introduces a new lightweight algorithm for finding the potential matching segment pairs. Using real ONT, HiFi, and Illumina sequences, we experimentally demonstrate that GateSeeder outperforms Minimap2, without performing sequence alignment, by up to 40.3\(\times\), 4.8\(\times\), and 2.3\(\times\), respectively. When performing read mapping with sequence alignment, GateSeeder outperforms Minimap2 by 1.15-4.33\(\times\) (using KSW2) and by 1.97-13.63\(\times\) (using WFA-GPU).
**Availability:** [https://github.com/CMU-SAFARI/GateSeeder](https://github.com/CMU-SAFARI/GateSeeder)
**Contact:** [email protected], [email protected], [email protected]
**Supplementary information:** Supplementary data are available at _Bioinformatics_ online.
## 1 Introduction
Read mapping is the first fundamental step in most genomic analyses [1; 2; 3; 4; 5; 6; 7; 8]. Read mapping compares fragments (known as _reads_) of an organism's genome generated by a sequencing machine against a well-studied reference genome. The main goal of read mapping is to locate each read sequence in a reference genome, attempting to reassemble the reads back into their entire genome sequence. Read mapping remains one of the major performance bottlenecks in many genomic analyses for the three prominent sequencing technologies, Oxford Nanopore Technologies (ONT), PacBio HiFi, and Illumina [9; 10]. This is true even for the widely-used, well-maintained, state-of-the-art read mapper for modern CPUs, Minimap2 [11].
To understand the reasons behind read mapping's large performance overhead, we first briefly describe the workflow of Minimap2 in five key steps: 1) _Index Construction_, 2) _Seed Extraction_, 3) _Index Querying_, 4) _Anchor Sorting_, and 5) _Seed Chaining_.
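To give a flavor of the Seed Extraction step named above, the sketch below shows a toy minimizer scheme of the kind used by minimizer-based mappers such as Minimap2. It is an illustrative simplification (the window size, k-mer length, and plain lexicographic ordering are arbitrary assumptions here), not GateSeeder's or Minimap2's actual implementation, which hashes k-mers and handles reverse complements.

```python
def minimizers(seq, k=15, w=10):
    """Toy minimizer-based seed extraction: for every window of w consecutive
    k-mers, keep the smallest (k-mer, position) pair as a seed."""
    kmers = [(seq[i:i + k], i) for i in range(len(seq) - k + 1)]
    seeds = set()
    for j in range(len(kmers) - w + 1):
        seeds.add(min(kmers[j:j + w]))
    return sorted(seeds, key=lambda s: s[1])

print(minimizers("ACGTACGTTGCAACGTACGTAGGCTTACG", k=5, w=4))
```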
the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer_ state of the art, which is a novel approach to the _Index Quer state of the art,
higher memory density close to the computation fabric, an order of magnitude more memory bandwidth, and much lower latency to access stored data compared to traditional off-chip DRAM devices. However, exploiting such modern FPGAs requires building efficient hardware architecture that handles the desired operations by leveraging only supported operations by the FPGA logic. Such modern FPGAs are already proven beneficial for sequence alignment [19] and pre-alignment filtering [20]. However, such new technology is not yet exploited for performing complete read mapping.
To this end, we introduce _GateSeeder_, the first near-memory CPU-FPGA co-design that alleviates both the compute-bound and memory-bound bottlenecks in short- and long-read mapping. GateSeeder is based on three **key ideas**: (1) We observe that potential mapping locations always have the largest number of seed matches compared to other locations in the reference genome, due to their high similarity with a given read. GateSeeder exploits this observation and proposes a new computational step, with a linear time complexity in the number of seed matches, that identifies the potential matching segment pairs based on the highest number of seed matches scattered around a region in the reference genome. We call this approach _Seed Voting_. (2) GateSeeder builds two new hardware architectures for performing _Seed Extraction_ and _Index Querying_ using modern FPGAs with HBM. Although _Seed Extraction_ is not memory-bound, it provides the input queries that are used for querying the index. Thus, minimizing the overall latency requires accommodating both steps, _Seed Extraction_ and _Index Querying_, within the same FPGA chip. (3) GateSeeder introduces the first HBM-friendly hash table, specially designed to exploit the access parallelism provided by modern FPGAs to fully maximize the querying throughput. We carefully orchestrate execution on the CPU and the FPGA to hide data transfer latency and increase parallelism. GateSeeder takes reads in FASTQ format and a reference genome (or a precomputed index) in FASTA format and outputs mapping information in PAF format.
We summarize the **contributions** of this paper as follows:
* We introduce GateSeeder, the first software/hardware co-designed read mapper that exploits modern FPGAs featuring high bandwidth memory (HBM). GateSeeder is fully synthesizable, open-source, and ready-to-be-used on real hardware.
* We provide, to our knowledge, the first FPGA accelerator for _Seed Extraction_ and _Index Querying_ for both short and long read mapping.
* We propose a new, efficient voting algorithm that replaces the compute-bound seed chaining algorithm while maintaining good accuracy.
* We experimentally demonstrate, using real ONT, HiFi, and Illumina sequences, that GateSeeder outperforms Minimap2 by up to 40.3x, 4.8x, and 2.3x, respectively, when mapping the reads against the entire human reference genome. When performing read mapping with sequence alignment, GateSeeder outperforms Minimap2 by 1.15-4.33x (using KSW2) and by 1.97-13.63x (using WFA-GPU).
Figure 1: (a) Roofline model and (b) execution time breakdown for the four key steps of the state-of-the-art read mapper, Minimap2, when mapping ONT and Illumina reads against the human reference genome (GRCh38). We use 12, 24, and 48 CPU threads on two Intel Xeon Gold 5118 CPUs.
## 2 Methods
### Overview
Fig. 2 shows the overview of GateSeeder, a CPU-FPGA co-design for accelerating read mapping. The pipeline can be divided into 7 stages: _Index Construction_ (1), _Read Parsing_ (2), _Seed Extraction_ (3), _Index Querying_ (4), _Location Adjustment_ (5), _Anchor Sorting_ (6), and _Mapping Location Voting_ (7). We explain each step in detail in the next subsections. Stages 1, 2, 6, and 7 are performed on the host CPU, as they better suit general-purpose CPUs and better benefit from CPU multithreading. Stages 3, 4, and 5 are performed on the FPGA featuring HBM, as they better suit near-data FPGA acceleration. **GateSeeder efficiently uses both a host CPU and a modern FPGA in order to enable four different levels of parallelism**. First, the host CPU and the FPGA kernels work concurrently. The host CPU launches the FPGA kernels asynchronously, so that it continues executing the other stages of GateSeeder (i.e., 6 and 7) without waiting for the FPGA. Second, GateSeeder exploits CPU multithreading for faster execution. GateSeeder allocates the available CPU threads (a user-defined parameter, \(N\)) and efficiently manages the tasks assigned to each CPU thread via the thread-pool design pattern [21]. Our thread-pool software design does not limit each CPU thread to processing a single read; rather, it keeps each CPU thread busy with any remaining task for any available read. This achieves high allocation efficiency and optimized concurrent execution. The CPU threads are orchestrated such that the following five tasks are quickly applied to each read sequence: (1) parsing the read sequences of a given FASTQ file (stage 2), (2) transferring the parsed read sequences in batches from the host CPU to the FPGA, (3) launching an FPGA kernel that executes stages 3, 4, and 5, (4) transferring the calculated anchors from the FPGA to the CPU, and (5) sorting the anchors (stage 6), performing _Mapping Location Voting_ (stage 7), and writing the mapping results in PAF format. Third, by carefully building an efficient hardware architecture as a Processing Element (PE) for performing stages 3, 4, and 5 on an FPGA chip, GateSeeder can run multiple (\(M\)) PEs concurrently on the same FPGA chip for a higher level of parallelism. Fourth, GateSeeder executes in a dataflow manner [22], where the PEs perform different tasks on the FPGA in parallel by allowing consumer tasks to start before producer tasks have completed. We describe the FPGA dataflow in more detail in Section S1.1.
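The per-batch orchestration can be pictured with a few lines of host code. The sketch below is illustrative, not GateSeeder's actual host program: it assumes a generic OpenCL-style runtime, a single kernel `krnl` implementing stages 3-5 with its arguments already set, and pre-allocated device buffers `d_reads` and `d_anchors`; error handling is omitted.

```
#include <CL/cl.h>
#include <stdint.h>

/* Illustrative per-batch host loop (simplified sketch). Tasks (2)-(4)
 * from the list above are chained through events, so one CPU thread can
 * enqueue the next batch while the previous one is still in flight.    */
void map_batch(cl_command_queue q, cl_kernel krnl,
               cl_mem d_reads, cl_mem d_anchors,
               const char *batch, size_t batch_len,
               uint64_t *anchors, size_t max_anchors) {
    cl_event xfer, run;
    /* (2) host -> device: transfer one parsed read batch */
    clEnqueueWriteBuffer(q, d_reads, CL_FALSE, 0, batch_len, batch,
                         0, NULL, &xfer);
    /* (3) launch the PE: Seed Extraction, Index Querying, and Location
     *     Adjustment run as one dataflow kernel on the FPGA            */
    clEnqueueTask(q, krnl, 1, &xfer, &run);
    /* (4) device -> host: read back the computed anchors (blocking)    */
    clEnqueueReadBuffer(q, d_anchors, CL_TRUE, 0,
                        max_anchors * sizeof(uint64_t), anchors,
                        1, &run, NULL);
    /* (5) Anchor Sorting and Mapping Location Voting then run on the
     *     CPU (not shown)                                              */
}
```

Because the write, the kernel launch, and the read-back are chained through events and issued asynchronously, data transfer overlaps with FPGA computation across batches, which is what hides the transfer latency measured in Section 3.1.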
### HBM Organization
To mitigate the memory bottleneck caused by the data transfer between the memory and the computing elements, modern FPGAs feature HBM. Fig. 3 depicts the internal organization of an HBM, which consists of two main components: 1) HBM stacks and 2) an HBM controller inside the FPGA. A stack comprises multiple memory sections (MSs), each of which is connected to the HBM controller through a 64-bit pseudo channel. In the HBM controller, each pseudo channel is connected to an AXI channel that interacts with the user logic. Each AXI channel is associated with a pseudo channel and can directly access the aligned MS. To make each AXI channel able to access the full HBM space (i.e., all the MSs), an AXI switch is integrated between the AXI channels and the pseudo channels. However, to reach the maximum bandwidth and the minimum latency for the HBM controller, direct routing from the AXI port to the aligned pseudo channel should be used, and accessing unaligned MSs should be avoided [23]. As a result, to optimize the throughput of our design, we 1) partition our data into batches smaller than the size of an MS (e.g., the Xilinx Alveo U55C features two 8GB HBM2 memories, each of which has 16 512MB MSs), and 2) carefully design the architecture of each PE such that each AXI channel accesses only a unique MS, i.e., limiting the size of the memory space accessed by each AXI channel to the size of one MS.

Figure 2: Overview of GateSeeder, which consists of a host CPU with main memory and a modern FPGA board equipped with HBM memory.
### Index Processing
#### 2.3.1 Index Construction
The purpose of the index is to efficiently store extracted information (e.g., _seeds_ and their start locations in the reference genome) from the subject reference genome and to facilitate efficient querying and retrieval of such information when needed. For a given reference genome and a set of parameters (e.g., seed length), the index only needs to be built once and can be reused for different sets of read sequences. As building the index is not a contributor to the total execution time of read mapping, we build the index on the host CPU. GateSeeder uses the minimizer algorithm [24] to choose the seeds to be stored in the reference genome index. The minimizer algorithm uses a _hash and compare_ heuristic: (1) it computes the hash value of \(w\) consecutive/overlapping k-mers (subsequences of length \(k\)), and (2) it compares all the hash values and outputs the k-mer with the smallest hash value as the resulting minimizer seed that represents the subject k-mers. The _Index Construction_ step of GateSeeder is fully configurable for different \(w\) and \(k\) values. The implementation is multi-threaded, and its execution time is of the same order of magnitude as that of Minimap2.
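The following C sketch illustrates the hash-and-compare heuristic. It is a minimal, unoptimized version written for clarity: the `hash64` mixer and the 2-bit base encoding are our assumptions rather than GateSeeder's exact choices, and it ignores the reverse-complement strand and does not deduplicate minimizers shared by overlapping windows.

```
#include <stdint.h>
#include <stdio.h>

/* Invertible 64-bit integer mixer (illustrative choice of hash). */
static uint64_t hash64(uint64_t key, uint64_t mask) {
    key = (~key + (key << 21)) & mask;
    key ^= key >> 24;
    key = (key + (key << 3) + (key << 8)) & mask;
    key ^= key >> 14;
    key = (key + (key << 2) + (key << 4)) & mask;
    key ^= key >> 28;
    key = (key + (key << 31)) & mask;
    return key;
}

/* Emit one minimizer per window of w consecutive k-mers.
 * seq holds 2-bit encoded bases (A=0, C=1, G=2, T=3). */
static void extract_minimizers(const uint8_t *seq, size_t n, int k, int w) {
    const uint64_t mask = (k < 32) ? (1ULL << (2 * k)) - 1 : ~0ULL;
    for (size_t win = 0; win + (size_t)(w + k - 1) <= n; win++) {
        uint64_t best = UINT64_MAX;
        size_t best_pos = win;
        for (int j = 0; j < w; j++) {          /* hash the w k-mers  */
            uint64_t kmer = 0;
            for (int b = 0; b < k; b++)        /* pack the k bases   */
                kmer = ((kmer << 2) | seq[win + j + b]) & mask;
            uint64_t h = hash64(kmer, mask);
            if (h < best) { best = h; best_pos = win + j; }
        }
        printf("minimizer hash %016llx at position %zu\n",
               (unsigned long long)best, best_pos);
    }
}
```

This naive scan recomputes O(w) hashes per window; production implementations instead update the k-mer incrementally and track the window minimum with a monotonic queue for amortized O(1) work per position.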
We build an index data structure similar to a _HashMap_ [25], as it offers (1) high data locality and (2) constant-time performance for adding or querying any seed. The high data locality leads to a higher throughput: the data are accessed in contiguous blocks, which leverages the memory architecture and enables hardware accelerations such as burst transfers. The constant-time performance leads to a constant number of clock cycles for performing index-dependent operations and a constant number of memory accesses for fetching indexed data. This helps in easily orchestrating the index querying step with all other steps that depend on its output, thus increasing task-level parallelism. This index data structure has two arrays: a map array and a key array. The map array stores pointers to the key array and is indexed by the seed value (i.e., a hash value of a seed). The key array stores the locations of the extracted seeds in the reference genome.
Some seeds can occur very frequently in the reference genome and, as a result, can increase the rate of false-positive mapping locations and unnecessarily increase the time spent to query the index and process the seed locations [26]. To overcome this issue, we remove from the index the seeds (along with their locations) that occur (based on each seed's number of locations) more frequently than a user-defined value of max_occ.
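One plausible host-side layout for this two-array index is sketched below; the names and the prefix-sum encoding are our assumptions for illustration, not GateSeeder's actual data structures. Seeds are assumed to be pre-extracted as (hash, location) pairs, and the max_occ filtering pass is indicated but omitted for brevity.

```
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* map[h] holds the end offset of hash h's location list inside key[],
 * so the locations of seed hash h occupy key[map[h-1] .. map[h]).     */
typedef struct {
    uint64_t *map;      /* one entry per possible seed hash value */
    uint32_t *key;      /* reference locations, grouped by seed   */
    size_t    n_buckets;
} seed_index_t;

/* Counting-sort construction from pre-extracted (hash, location) pairs. */
static void build_index(seed_index_t *idx, const uint64_t *hash,
                        const uint32_t *loc, size_t n) {
    uint64_t *cur = calloc(idx->n_buckets, sizeof *cur);
    memset(idx->map, 0, idx->n_buckets * sizeof *idx->map);
    for (size_t i = 0; i < n; i++)               /* 1. count occurrences */
        idx->map[hash[i]]++;
    /* (a max_occ pass would zero out over-frequent buckets here)        */
    for (size_t h = 1; h < idx->n_buckets; h++)  /* 2. prefix sum        */
        idx->map[h] += idx->map[h - 1];
    for (size_t h = 0; h < idx->n_buckets; h++)  /* 3. start cursors     */
        cur[h] = (h == 0) ? 0 : idx->map[h - 1];
    for (size_t i = 0; i < n; i++)               /* 4. scatter locations */
        idx->key[cur[hash[i]]++] = loc[i];
    free(cur);
}
```

With this encoding, the location list of every seed is a contiguous run of the key array, which is exactly the property that enables the burst transfers mentioned above.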
#### 2.3.2 Index Storing
As frequent accesses to the index stored in the main memory cause memory bottlenecks, GateSeeder stores the index directly in the HBM of the FPGA. This provides two key advantages: (1) minimizes data communication latency due to shorter interconnects between the FPGA chip and HBM compared to the interconnects between the CPU and the main memory, and (2) provides an order of magnitude more bandwidth than traditional main memory (e.g., DDR4). Since the size of each MS in HBM is limited to a fixed size and the size of the index depends on the subject reference genome, we partition both the map and key arrays of the index into subarrays, each of which has a size smaller than or equal to the size of an MS. By storing each subarray in a different MS, GateSeeder can handle any index of any size as long as the sizes of the index, one batch of read sequences, and one batch of anchors collectively do not exceed the HBM capacity (e.g., 16GB on the Xilinx Alveo U55C). The index is loaded in the HBM of the FPGA _only_ once before the execution of the read mapper.
#### 2.3.3 Index Querying
The purpose of the _Index Querying_ stage is to efficiently retrieve all occurrence locations in the reference genome for a given query seed. To maximize the throughput of this stage: (1) we minimize the number of memory accesses, and (2) our design only accesses consecutive memory addresses to leverage burst transfers.
Our _Index Querying_ is a two-step mechanism: accessing the map array and accessing the key array of the index. Both steps perform unique memory access (i.e., unique entry in the arrays) for each seed, and both steps are performed in parallel. The first step is to access the map array with the value of the seed, which returns two pointers (i.e., addresses to the corresponding memory section) that indicate the start and the end of the list of seed locations stored in the key array.
The second step is to fetch all the locations between the start and end entries in the key array. Each location fetched from the index (corresponding to a location in the reference genome) is then associated with the corresponding location of the seed within the read to form an anchor. To perform both steps in parallel (through pipelining), each PE is connected to the index through two different AXI channels: the first AXI channel is used to access the map array, and the second one to access the key array.
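Under the prefix-sum layout sketched in Section 2.3.1, the two-step query reduces to one access to the map array and one burst read of the key array. The function below is again an illustrative sketch, not GateSeeder's hardware description.

```
/* Query the seed_index_t sketched above: one read of map[] yields the
 * [start, end) pointers, and a single burst read of key[start..end)
 * then fetches every reference location of the seed. On the FPGA the
 * two accesses go through two separate AXI channels, so they pipeline. */
static inline uint32_t query_index(const seed_index_t *idx, uint64_t h,
                                   const uint32_t **locs) {
    uint64_t start = (h == 0) ? 0 : idx->map[h - 1];
    uint64_t end   = idx->map[h];
    *locs = &idx->key[start];
    return (uint32_t)(end - start);   /* number of occurrences */
}
```

Both memory accesses touch consecutive addresses, which is what lets the HBM controller serve them as burst transfers.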
### Read Processing
#### 2.4.1 Read Parsing & Storing
The goal of _Read Parsing & Storing_ is to convert the input read sequences stored as FASTQ files into sequences that can efficiently be stored in the HBM and processed by the FPGA logic. To efficiently overlap FPGA processing time with data transfer time and to minimize the HBM allocation size for accommodating read sequences, the reads are transferred and processed in batches. We construct the read batches on the CPU side and transfer them to the HBM of the FPGA. To maximize the bandwidth between the FPGA logic and the HBM, we limit the size of each read batch to the size of an MS of the HBM. Since our FPGA design only performs _Seed Extraction_ and _Index Querying_, there is no need to store any metadata (read ID, read length, number of reads) in the HBM. Each read batch consists of a stream of read sequences concatenated to each other, where consecutive read sequences are separated by a special character, E (a character distinct from the read alphabet A, C, G, T, and N). The metadata for a given batch of reads is stored on the CPU side.
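A batch can be assembled with a few lines of host code; the sketch below is illustrative (the sentinel choice and the capacity check mirror the description above, while the per-read metadata bookkeeping is omitted).

```
#include <string.h>

/* Concatenate reads into one batch, separated by the sentinel 'E'.
 * cap is bounded by the size of one HBM memory section (MS).
 * Returns the number of bytes written. */
static size_t pack_batch(char *batch, size_t cap,
                         char *const *reads, size_t n_reads) {
    size_t off = 0;
    for (size_t i = 0; i < n_reads; i++) {
        size_t len = strlen(reads[i]);
        if (off + len + 1 > cap)       /* batch full: stop packing */
            break;
        memcpy(batch + off, reads[i], len);
        off += len;
        batch[off++] = 'E';            /* read separator */
    }
    return off;
}
```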
Each read batch is transferred by a CPU thread to the HBM of the FPGA through the PCIe interface. The FPGA can process as many batches as the number of PEs implemented in the FPGA chip in parallel. Therefore we limit the number of batches to the number of PEs, which we discuss in Section S1.1.
#### 2.4.2 Seed Extraction
The goal of _Seed Extraction_ is to quickly extract the seeds of each read stored in the batches. As in the _Index Construction_ step, GateSeeder also uses the minimizer algorithm [24] to extract the seeds from read sequences.
Our hardware architecture for the _Seed Extraction_ step calculates one minimizer seed every cycle. To reach this performance, we use two key approaches: (1) we replicate the hardware logic responsible for computing the hash values \(w\) times, which allows us to compute the hash values of the subject \(w\) consecutive k-mers in parallel; (2) our implementation is pipelined, which means the critical path of each PE is shortened by dividing it into stages of smaller tasks. This allows GateSeeder to meet target timing constraints (e.g., maximum operating frequency) and to achieve more parallelism by calculating multiple minimizer seeds in parallel.
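In HLS terms, the two approaches map to an unrolled comparison tree inside a pipelined loop. The fragment below is an illustrative Vitis-HLS-style sketch (the function name, the `W`/`MASK` constants, and the reuse of the `hash64` mixer from the earlier sketch are our assumptions), not the actual PE description.

```
#define W    10                        /* window size (illustrative)  */
#define MASK ((1ULL << 30) - 1)        /* hash mask (illustrative)    */

/* One call per window: returns the minimizer of the W candidate k-mers.
 * PIPELINE II=1 lets the surrounding loop accept a new window every
 * clock cycle; UNROLL replicates the hash logic W times, as described
 * in approach (1) above.                                              */
void min_of_window(const uint64_t kmer_win[W],
                   uint64_t *min_hash, uint8_t *min_off) {
#pragma HLS PIPELINE II=1
    uint64_t best = ~0ULL;
    uint8_t  off  = 0;
    for (int j = 0; j < W; j++) {
#pragma HLS UNROLL
        uint64_t h = hash64(kmer_win[j], MASK);
        if (h < best) { best = h; off = (uint8_t)j; }
    }
    *min_hash = best;
    *min_off  = off;
}
```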
### Calculating the Mapping Locations
#### 2.5.1 Anchor Sorting
The goal of the _Anchor Sorting_ stage is to sort the anchors according to their location in the reference genome. Sorting the anchors allows us to quickly identify the potential matching segment pairs during the voting stage. Based on the literature [27] and our experimental evaluation, there is no FPGA implementation of a sorting algorithm that is faster than the multicore CPU implementations used in Minimap2 [11]. Our FPGA implementation of a pseudo-in-place merge sort algorithm shows one order of magnitude higher execution time than a 24-thread CPU implementation. For this reason, we decide to perform _Anchor Sorting_ on the CPU side and not on the FPGA chip. We implement two types of sorting algorithms: radix sort and merge sort [28]. We observe that for Illumina reads, merge sort is 1.84x faster than radix sort, while for ONT reads, radix sort is 1.32x faster than merge sort.
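For reference, an LSD (least-significant-digit) radix sort over 64-bit anchor keys can be written compactly. The sketch below is illustrative and assumes the anchor's sort key (e.g., strand and \(\delta\)) has already been packed into a bias-encoded unsigned integer, so that unsigned order matches the desired order.

```
#include <stdint.h>
#include <stddef.h>

/* LSD byte-wise radix sort: 8 stable counting-sort passes over the
 * 64-bit keys. After the even number of passes, the sorted data ends
 * up back in the caller's array a; tmp is scratch of the same size. */
static void radix_sort_u64(uint64_t *a, uint64_t *tmp, size_t n) {
    for (int pass = 0; pass < 8; pass++) {
        size_t cnt[257] = {0};
        int shift = pass * 8;
        for (size_t i = 0; i < n; i++)          /* histogram      */
            cnt[((a[i] >> shift) & 0xff) + 1]++;
        for (int b = 1; b < 257; b++)           /* prefix sum     */
            cnt[b] += cnt[b - 1];
        for (size_t i = 0; i < n; i++)          /* stable scatter */
            tmp[cnt[(a[i] >> shift) & 0xff]++] = a[i];
        uint64_t *t = a; a = tmp; tmp = t;      /* ping-pong      */
    }
}
```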
#### 2.5.2 Location Adjustment and Mapping Location Voting
The goal of the _Mapping Location Voting_ stage is to quickly find the potential matching segment pairs between a given read sequence and the reference genome. The key idea of _Voting_ is based on the observation that the correct mapping location always has the largest number of anchors compared to the other mapping locations due to the high similarity between the read sequence and the sequence extracted at the correct mapping location in the reference genome. Based on this observation, we develop a linear time (in the number of anchors) voting algorithm.
Our voting mechanism consists of two main steps. The first one is performed on the FPGA after _Index Querying_, and it consists of subtracting the location of the seed within the read sequence from the location of the seed within the reference
genome. The list of subtracted locations (\(\delta\)) along with the corresponding location within the read sequence, also called the list of anchors (\(A\)) constitutes the input of the second step. The second step is the core of our algorithm, and it is performed on the CPU after _Anchor Sorting_. During this step, we iterate once through the list of sorted anchors, and based on those, we output a list of matching segment pairs that have the highest number of votes.
Our voting mechanism differs from the one used in Genome-on-Diet [29] in two aspects. (1) GateSeeder performs only one round of voting on the whole read to identify all subsequences in the read that share a large number of votes with the reference genome. The goal of GateSeeder is only to identify the correct mapping locations in the reference genome for each of these subsequences and report them in the PAF file. Genome-on-Diet, on the other hand, performs multiple rounds of voting on multiple subsequences of the read to map one or more of the read subsequences (two subsequences with a large gap in between) together, so as to cover structural variations (SVs) occurring in the read. The linked subsequences are needed to generate a CIGAR string that represents the SV. (2) The index data structure, the _Seed Extraction_ algorithm, and the indexing parameters that GateSeeder uses are _all_ different from those used in Genome-on-Diet. GateSeeder uses minimizer seeds, while Genome-on-Diet uses sparsified seeds that span a much larger region in the reference genome. Consequently, GateSeeder also uses a different implementation and different parameters for its voting algorithm than those used in Genome-on-Diet.
To explain our voting algorithm, let the list of anchors be \(A\), where the \(i\)-th and \(j\)-th anchors are represented as pairs of integers \((L^{i}_{read},L^{i}_{ref})\) and \((L^{j}_{read},L^{j}_{ref})\), respectively. While \(L^{i}_{read}\) and \(L^{j}_{read}\) represent the locations of different seeds within the same read, \(L^{i}_{ref}\) and \(L^{j}_{ref}\) represent the locations of these seeds within the reference genome. Let \(e^{i}_{j}\) be the total number of deletions and insertions between the \(i\)-th and \(j\)-th anchors, such that the following inequality holds:
\[|(L^{j}_{read}-L^{i}_{read})-(L^{j}_{ref}-L^{i}_{ref})|\leq e^{i}_{j}\]
This inequality becomes equality if there are only deletions or insertions between the two seed matches. Let the subtracted locations for the two anchors be: \(\delta^{i}=L^{i}_{ref}-L^{i}_{read}\) and \(\delta^{j}=L^{j}_{ref}-L^{j}_{read}\), such that the following inequality holds:
\[|\delta^{j}-\delta^{i}|\leq e^{i}_{j}\]
Thus the difference between two subtracted locations \(\Delta^{i}_{j}=|\delta^{j}-\delta^{i}|\) gives us a lower bound for the total number of insertions and deletions between two anchors. If there are only insertions or deletions, then \(\Delta^{i}_{j}=e^{i}_{j}\). For anchors that are close to each other, we expect \(\Delta^{i}_{j}\) to be close to \(e^{i}_{j}\) since the number of consecutive insertions and deletions is small.
**Location Adjustment**. Therefore, it makes sense to sort the list of anchors based on \(\delta\) and then iterate through the list. To this end, the goal of the _Location Adjustment_ step is to compute the \(\delta\) values on the FPGA chip after performing the _Index Querying_ step. On the FPGA, the computation of the \(\delta\) values is performed in parallel with the other stages and thus has no cost in terms of execution time.
To match segments from the read sequence to segments from the reference genome, we define a voting distance vt_dist. We consider that two anchors \(i\) and \(j\) belong to the same segment if the total number of insertions and deletions between the two anchors is smaller than the user-defined voting distance (i.e., \(e^{i}_{j}\leq\) vt_dist). Computing the exact value of \(e^{i}_{j}\) is computationally expensive and requires dynamic programming (DP), since it requires performing alignment between the anchors. Since, for anchors that are close to each other, \(\Delta^{i}_{j}\) is a good approximation of \(e^{i}_{j}\), and the \(\Delta^{i}_{j}\) of all consecutive anchors can be computed with a linear time complexity, we consider that two anchors belong to the same segment if \(\Delta^{i}_{j}\leq\) vt_dist.
The voting distance can be arbitrarily large depending on the application. If we want to perform alignment on the output segments, the voting distance should have the same order of magnitude as the alignment bandwidth. Indeed, for a given segment pair, we might have a \(\Delta^{i}_{j}\) such that \(\Delta^{i}_{j}=\) vt_dist. Now if we consider that we only have insertions or deletions between the two anchors, the following holds \(\Delta^{i}_{j}=e^{i}_{j}\) and thus \(e^{i}_{j}=\) vt_dist. In order to align a segment pair having a section with only vt_dist deletions or insertions, we need a bandwidth of at least vt_dist. For each matching segment pair, we define a voting score corresponding to the number of anchors belonging to the given segment. We use the voting score as a metric to measure the quality of the matching segment pair. The higher the voting score is, the more anchors belong to the segment pair and the higher the probability of being the correct mapping location.
Our voting algorithm (Algorithm 1) takes the list of sorted anchors as input and outputs a list of matching segment pairs with the highest voting scores that meet some user-defined constraints, such as the minimum length of the segments. The algorithm starts by initializing two temporary mutable segment pairs, one corresponding to the positive strand and the other to the negative strand. The voting algorithm then iterates over the list of sorted anchors. At each iteration, we check whether the anchor belongs to the temporary segment pair of the corresponding strand (Line 4). If yes, we adjust the boundaries of the segment pair based on the anchor and increment the voting score (Line 5). Otherwise, we check whether the voting score of the temporary segment pair is greater than the lowest voting score in the list of matching segment pairs and whether the temporary segment pair meets the user-defined constraints. If both conditions are met, we append the segment pair to the list of matching segment pairs (removing the segment pair with the lowest voting score if necessary) (Line 7). We then re-initialize the temporary segment pair with the current anchor (Line 8).
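A single-strand, single-best-segment version of this linear pass can be sketched as follows. This is a simplification of Algorithm 1 for illustration (our assumed types, no top-K segment list, no minimum-length constraint): anchors are assumed sorted by \(\delta\), and consecutive anchors whose \(\delta\) values differ by at most vt_dist vote for the same segment.

```
#include <stdint.h>
#include <stddef.h>

typedef struct { int64_t delta; uint32_t l_read; } anchor_t; /* delta = L_ref - L_read */
typedef struct { size_t first, last; uint32_t votes; } seg_t; /* anchor index range     */

/* One linear pass over anchors sorted by delta: returns the segment
 * (run of near-equal deltas) with the highest number of votes. */
static seg_t vote(const anchor_t *a, size_t n, int64_t vt_dist) {
    seg_t best = {0, 0, 0};
    seg_t cur  = {0, 0, 1};                 /* open segment at anchor 0 */
    if (n == 0) { best.votes = 0; return best; }
    for (size_t i = 1; i < n; i++) {
        if (a[i].delta - a[i - 1].delta <= vt_dist) {
            cur.last = i;                   /* same segment: add a vote */
            cur.votes++;
        } else {
            if (cur.votes > best.votes) best = cur;
            cur.first = cur.last = i;       /* start a new segment      */
            cur.votes = 1;
        }
    }
    if (cur.votes > best.votes) best = cur; /* flush the last segment   */
    return best;
}
```

The boundaries of the matching segment pair in the read and in the reference are then recovered from the l_read values and deltas of the first and last anchors of the winning run.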
## 3 Results
We evaluate 1) the time for data transfer and processing per genomic base (bp), 2) the FPGA resource utilization, 3) the end-to-end speedup of GateSeeder compared to Minimap2, and 4) the accuracy of GateSeeder compared to Minimap2. We provide all the commands used to generate the results and a comprehensive treatment of all evaluation results on the GateSeeder GitHub page. Our evaluated datasets and presets are provided in the Supplementary Materials, Sections S1.2 and S1.3. We implement our accelerator designs on an Alveo U55C card featuring the Xilinx Virtex Ultrascale+ XCU55C with 16 GiB HBM2 connected to an AMD EPYC 7302P host system with 64 GiB of memory. All the experiments, including the CPU-only experiments, are run on the described system.
### Data Transfer and Processing Time Analysis
In Fig. 4, we evaluate the data transfer time from the host CPU to the FPGA board and from the FPGA board back to the host CPU, the FPGA kernel processing time, and the processing time of a CPU-optimized version of the seeding kernel running on the host CPU. We use 32 CPU threads for the CPU-based step and 8 FPGA PEs for the FPGA-based step. We perform our measurements when running the complete pipeline of GateSeeder. We use the nine presets and normalize the time to the number of bases by dividing the transfer time or processing time by the batch size. Based on Fig. 4, we make three key observations. (1) Our FPGA kernel is always faster than the CPU version, except for the ILMN3 preset, and provides up to 1.96x, 1.58x, and 1.47x speedup for ONT, HiFi, and Illumina reads, respectively, compared to the CPU kernel. The highest performance compared to the CPU kernel is reached when using small max_occ values. This is expected, as for large max_occ values, the average number of locations returned by the key table is larger; since the returned locations are stored contiguously in memory, the CPU cache hierarchy and the data prefetching mechanisms can be leveraged by the CPU to increase the overall throughput of the CPU kernel. (2) Increasing the max_occ value always increases the execution time of our FPGA kernel. This is expected, as increasing max_occ increases the number of locations to fetch from the HBM. (3) The transfer time is always more than 20x shorter than the FPGA kernel execution time. Transferring the locations from the device to the host is always slower than transferring the read sequences from the host to the device. Since we use asynchronous programming in our host program, we trigger the host-to-device and the device-to-host transfers in parallel. Thus, we are only limited by the device-to-host transfer time.

Figure 3: Near-memory FPGA design of GateSeeder.
We conclude that offloading the entire seeding stage to the FPGA is always beneficial, since it reduces the CPU workload, the processing time on the FPGA is always faster than or comparable to the CPU processing time, and the transfer time is negligible compared to the processing time.
### FPGA Resource Utilization
We list the resource utilization of GateSeeder on the FPGA in Table 1. For each sequencing technology, and from the FPGA design point of view, the three presets we choose differ only in the values of w and k, as max_occ and vt_dist impact only the CPU-based steps and the batch size impacts only the memory allocation. Thus, we report the resource utilization for each sequencing technology rather than for each preset. From Table 1, we observe that, regardless of the sequencing technology, there are always enough resources to theoretically accommodate more than 16 PEs. However, in practice, we are limited by the number of HBM AXI channels. Since each PE is designed to use 4 AXI channels, the maximum number of PEs that GateSeeder can accommodate is 8, matching the 32 memory channels offered by the HBM of the board we are using.
Table 1: FPGA resource utilization (in %) for different sequencing data types (i.e., w and k values) and different numbers of PEs.

|          | PEs | CLB   | LUT   | FF    | BRAM  |
|----------|-----|-------|-------|-------|-------|
| ONT      | 1   | 18.88 | 10.33 | 7.05  | 10.69 |
| ONT      | 8   | 31.62 | 18.22 | 13.49 | 16.07 |
| HiFi     | 1   | 19.30 | 10.51 | 7.21  | 10.69 |
| HiFi     | 8   | 33.89 | 19.67 | 14.72 | 16.07 |
| Illumina | 1   | 19.32 | 10.51 | 7.17  | 10.69 |
| Illumina | 8   | 33.38 | 19.66 | 14.41 | 16.07 |
Figure 4: Data transfer time and processing time per bp, when transferring the reads to the FPGA board, transferring the mapping locations back to the host CPU, executing the FPGA kernel, and executing a CPU implementation of the seeding kernel on the host CPU.
### End-to-End Speedup
We evaluate the end-to-end speedup that is provided by GateSeeder over Minimap2. As a baseline, we run Minimap2 without alignment using 32 CPU threads on the same host CPU used by GateSeeder for a fair comparison. We build the index used by GateSeeder and Minimap2 beforehand so that the execution time for _Index Construction_ steps is not accounted for in the total execution time. We run GateSeeder using 32 CPU threads and 8 FPGA PEs, as we discussed in the previous subsection. We load the read sequences and the index into DRAM before each run of GateSeeder and Minimap2 to reduce the impact of the I/O costs. Fig. 5 presents the speedup of GateSeeder over Minimap2 for the nine presets.
We make three key observations. (1) GateSeeder provides the largest speedup when using ONT reads. This is expected for two main reasons: 1) the ONT preset uses small k and w values, which causes the number of locations returned by the index to be large, and 2) the evaluated ONT reads are much longer (between 10k and 100k bps) than the evaluated HiFi and Illumina reads. Consequently, the number of extracted seeds, the number of queried seeds, and the number of returned locations per read are much larger than those for HiFi and Illumina reads. This larger workload benefits directly from the FPGA acceleration and high parallelism offered by GateSeeder. Minimap2 uses chaining, which has a quadratic time complexity in the number of returned locations, whereas the voting algorithm used in GateSeeder has a linear time complexity in the number of locations. So, for small k and w values and ultra-long reads, _Mapping Location Voting_ provides a non-negligible speedup compared to chaining. (2) Using a small max_occ, as in ONT1, leads to the highest speedup (40.3x). This is expected, as it reduces the number of locations returned when querying the index, and hence there is a smaller workload to sort and to perform the voting step on, which reduces the overall execution time. (3) For large values of k (HiFi and Illumina presets), the impact of max_occ on the end-to-end speedup is less important. Indeed, using large values of k increases the number of unique minimizers and decreases the average occurrence of each minimizer. Thus, increasing max_occ while the average occurrence is small has only a limited impact on the execution time. For ONT (k = 15), in contrast, max_occ has a large impact on the end-to-end speedup: the execution time when using max_occ = 10 is 2.4x shorter than when using max_occ = 50.
We conclude that, in terms of speedup, GateSeeder performs the best for long and inaccurate reads compared to Minimap2. We also conclude that, in terms of execution time, the choice of the max_occ value is impactful for long and inaccurate reads.
### Accuracy Analysis
We evaluate the accuracy of GateSeeder compared to Minimap2 using simulated human reads and the mapeval tool from the PAFtools library provided by Minimap2. Simulated reads were mapped to the complete human reference genome GRCh38. A read is considered correctly mapped if its longest mapping overlaps with the true interval and the overlap length is \(\geq\)10% of the true interval length. We run Minimap2 using its default presets for each read type. We measure the accuracy of GateSeeder for the nine presets. We provide in Fig. 6 the _(error rate, fraction of mapped reads)_ pairs that are above different mapping quality thresholds.

Figure 5: End-to-end speedup of GateSeeder over Minimap2 for the nine different presets. We run Minimap2 without performing sequence alignment.
Based on Fig. 6, we make three key observations. (1) For accurate read sequences (HiFi and Illumina), increasing max_occ always increases the accuracy. This does not hold for long noisy reads (ONT). A possible explanation is that increasing max_occ also increases the rate of false-positive seed matches (i.e., random seed matches due to, for example, highly repetitive seeds in human data). Since the number of false-positive seed matches is higher for noisy reads, it also leads to a higher number of false-positive votes. (2) For HiFi reads, GateSeeder always has better accuracy than Minimap2, even with max_occ set to 1. For ONT and Illumina reads, GateSeeder always has a slightly lower fraction (<2%) of mapped reads than Minimap2 for the same error rate. (3) For HiFi and Illumina, we observe that the accuracy converges to an upper bound: choosing a max_occ value above 5 and 450 for HiFi and Illumina, respectively, has only a limited effect on the fraction of mapped reads.
We conclude that, even though GateSeeder uses a lightweight pre-alignment filtering algorithm, _Mapping Location Voting_, instead of chaining, it provides high accuracy compared to Minimap2 for all sequencing data types.
Figure 6: Read mapping accuracy of GateSeeder compared to Minimap2, using mapeval from PAFtools.
### Performing Sequence Alignment
We examine in Table 2 the benefits of integrating existing state-of-the-art sequence aligners with GateSeeder to perform complete read mapping with sequence alignment. We choose one representative tool from each of the four directions for accelerating sequence alignment: 1) using modern processors that provide wide registers (e.g., 512-bit) for executing vector operations on multiple operands at once for high parallelism. We choose a recent, fast vectorized implementation [30] of the widely-used aligner KSW2 [31], which accelerates KSW2 by up to \(2.2\times\). We refer to this implementation in Table 2 as _KSW2 AVX_. 2) Building CMOS-based customized hardware architectures to speed up the alignment process. We choose a non-optimal alignment algorithm, called GACT [32], that has such an accelerator. It divides the DP matrix into overlapping submatrices and greedily processes each submatrix using systolic arrays. We refer to it in Table 2 as _GACT CMOS_. 3) Exploiting the large number of threads and large local memory provided by modern GPUs to compute alignments of many independent sequence pairs concurrently. We choose a recent GPU implementation [33] of the wavefront algorithm (WFA) [34], which reformulates the classic Smith-Waterman-Gotoh recursion and shows significant speedups for highly similar sequence pairs. The GPU implementation [33] of the WFA algorithm improves over the original CPU implementation by 1.5-7.7\(\times\) on long reads. We refer to it in Table 2 as _WFA GPU_. 4) Using a pre-alignment filtering algorithm to reduce the number of mapping locations to be verified by sequence alignment by providing an approximate edit distance calculation. We choose SneakySnake [35] as the representative, since it provides the highest accuracy and speedup compared to other algorithms [2]. We refer to it in Table 2 as _SneakySnake CPU_.
We present in Table 2 the read mapping throughput of Minimap2 (which uses KSW2) and of GateSeeder integrated with each of the representative tools discussed above. We observe that integrating existing tools for sequence alignment with GateSeeder is always beneficial: it provides up to \(13.63\times\), \(13.67\times\), and \(3.89\times\) higher read mapping throughput than Minimap2 for ONT, HiFi, and Illumina reads, respectively.
## 4 Conclusion
We demonstrate that we can use the HBM of modern FPGAs to mitigate the memory bottleneck of the index querying step. We propose an index data structure that leverages the HBM organization. We develop an FPGA+HBM design that performs the seed extraction and index querying steps. We implement our design on a real FPGA with 8 PEs. In addition, we propose a lightweight voting algorithm with a linear time complexity that replaces the computationally expensive chaining step while maintaining good accuracy. We integrate our FPGA design and our voting algorithm into a CPU-FPGA co-designed read mapping tool, GateSeeder. We experimentally demonstrate, using real ONT, HiFi, and Illumina sequences, that GateSeeder outperforms Minimap2 by up to 40.3x, 4.8x, and 2.3x, respectively, when mapping the reads against the entire human reference genome.
## Funding
We acknowledge the generous gifts of our industrial partners, including Intel and VMware. This work is also partially supported by the European Union's Horizon programme for research and innovation [101047160 - BioPIM] and the Swiss National Science Foundation (SNSF) \([200021\_213084]\).
Table 2: Read mapping throughput (number of mapped reads per second) of Minimap2 and of GateSeeder integrated with state-of-the-art pre-alignment filtering and sequence alignment tools. Bold marks the highest throughput per read type.

|          | GateSeeder + KSW2 AVX | GateSeeder + GACT CMOS | GateSeeder + WFA GPU | GateSeeder + SneakySnake CPU | Minimap2 |
|----------|-----------------------|------------------------|----------------------|------------------------------|----------|
| ONT      | 1,516 (4.33x)         | 3,037 (8.67x)          | **4,771 (13.63x)**   | 1,324 (3.78x)                | 350      |
| HiFi     | 1,287 (1.15x)         | 2,752 (2.47x)          | **15,237 (13.67x)**  | 4,774 (4.28x)                | 1,114    |
| Illumina | 281,827 (2.44x)       | 156,168 (1.35x)        | 228,205 (1.97x)      | **450,062 (3.89x)**          | 115,511  |
# An Enhanced System for the Detection and Active Cancellation of Snoring Signals
###### Abstract
Snoring is a common disorder that affects people's social and marital lives. The annoyance caused by snoring can be partially solved with active noise control systems. In this context, the present work aims at introducing an enhanced system based on the use of a convolutional recurrent neural network for snoring activity detection and a delayless subband approach for active snoring cancellation. Thanks to several experiments conducted using real snoring signals, this work shows that the active snoring cancellation system achieves better performance when the snoring activity detection stage is turned on, demonstrating the beneficial effect of a preliminary snoring detection stage in the perspective of snoring cancellation.
V. Bruschi\({}^{\star}\) M. Cantarini\({}^{\star}\) L. Serafini\({}^{\star}\) S. Nobili\({}^{\dagger}\) S. Cecchi\({}^{\star}\) S. Squartini\({}^{\star}\)+\({}^{\star}\) Department of Information Engineering - Universita Politecnica delle Marche, Ancona, Italy
\({}^{\dagger}\) Leaff Engineering Srl, Ancona, Italy +
Footnote †: This work was supported by the financial program DM MiSE 5 Marzo 2018, project ”_ChaALenge_”—F/180016/01-05/X43.
**Index Terms**: snoring activity detection, active snoring cancellation, convolutional recurrent neural network, adaptive subband algorithm
## 1 Introduction
The noise caused by snoring activity is an important problem in our society. The snoring noise can reach a sound level of \(90\,\)dB and have harmful implications, e.g., loss of productivity, attention deficit, and unsafe driving [1, 2]. Recently, various studies have identified significant similarities between snoring and vocal signal [3, 4]. In fact, both of them present high-order harmonics preceded by a fundamental frequency in the spectrum [4]. The snoring activity is composed of two phases, i.e., inspiration and expiration. The power of the snoring signal is mostly concentrated on lower frequencies of the spectrum. In particular, the inspiration produces a signal between \(100\,\)Hz and \(200\,\)Hz, while the expiration is focused between \(200\,\)Hz and \(300\,\)Hz. Thus, the fundamental frequency, which must be deleted, is located between \(100\,\)Hz and \(300\,\)Hz.
In the literature, several approaches can be found for snoring attenuation. Passive solutions involve physical devices, such as earplugs or special pillows [5], that may be troublesome for the user. Moreover, these techniques are ineffective at low frequencies and can be very expensive. In contrast, active noise control (ANC) systems can reduce low-frequency noises that passive approaches cannot attenuate. In particular, ANC techniques are based on the introduction of a secondary source that produces a signal capable of generating destructive interference in a desired area controlled by one or more microphones. ANC systems must be adaptive to follow the variations of the noise recorded at the error and reference microphones. They are usually implemented using the filtered-X least mean square (FxLMS) algorithm [6], where an estimate of the secondary path is used to calculate the output signal at the error microphone. Examples of FxLMS applications for active snoring cancellation can be found in [1, 7, 8, 9, 10, 11].
However, snoring is a non-stationary signal that can cause issues during the adaptation process. Specifically, its irregular nature can result in signal absence, which in turn can negatively impact the performance of the adaptive algorithm. Therefore, to ensure active snoring cancellation, it is crucial to support it with a snoring activity detection algorithm that can identify the presence of snoring.
In the literature, deep learning algorithms for sound event detection and classification have also been applied to snoring audio signals. To this end, several studies have employed 2D convolutional neural networks (2D-CNNs) that rely on feature learning of time-frequency representations computed from fixed-length audio segments [12, 13, 14]. In these studies, the high accuracy in snoring detection derives from both the acoustic features chosen and the wide signal analysis windows (\(\geq 1\,\)s) that entail a slow decision response of the algorithm. This issue can be solved by sequential models that analyze the signal over short frames, such as 1D convolutional neural networks (1D-CNNs) and recurrent neural networks (RNNs). In [15, 16], 1D-CNNs proved to be less performing than 2D-CNNs, but the low computational cost due to feature extraction from the raw audio signal makes them suitable for end-to-end systems. In [17, 18], RNNs exploited the features of past and present time-frequency representations of the audio signal over reduced temporal windows (\(25-30\,\)ms) for snoring activity detection, confirming their effectiveness in sequential data analysis. Promising results have also been obtained from the combination of convolutional and sequential models, which together form convolutional recurrent neural networks (CRNNs). The studies described in [19, 20] demonstrated that CRNNs with gated recurrent units (GRUs) or long short-term memory (LSTM) layers outperform 2D-CNNs in snoring detection. However, the performance of each approach is not easily comparable due to the different quantity, quality, and acquisition methods of the data used for training and testing the algorithms.
Given these premises, reliability in signal classification and the capability to generalize in the presence of different background noises are among the requirements for an effective active snoring cancellation system. In this context, an enhanced system for the detection and active cancellation of snoring signals is presented. In particular, starting from the use of a CRNN for snoring activity detection, a delayless subband approach for active snoring cancellation has been improved, showing good results in terms of convergence time and cancellation quality. The paper focuses on the performance of the active snoring cancellation system with and without the aid of the snoring detection stage; since our interest is in evaluating the active snoring cancellation performance, the comparison of our snoring activity detection system with others in the literature is out of our scope and is left to future work.
The paper is organized as follows. Section 2 and Section 3 describe the snoring activity detection and the active snoring cancellation algorithms, respectively. Experimental results obtained with real snoring signals are reported in Section 4. Finally, conclusions are drawn in Section 5.
## 2 Snoring Activity Detection
In this study, we address the snoring activity detection (SAD) methodology in three stages, as reported in Figure 1. The first stage involves audio signal processing for acoustic feature computation. The second stage consists of data analysis using a CRNN for a binary snoring/non-snoring classification task, where a snoring event represents the positive class (label \(1\)), and all non-snoring events constitute the negative class (label \(0\)). Finally, in the third stage, the predictions produced by the neural network are post-processed with the "Hangover" algorithm. This pipeline - binary classifier plus output filter (Hangover) - is common in Voice Activity Detection tasks.
More in detail, in the first stage, the stereo audio signal is converted to monophonic by channel averaging. Log-Mel spectrograms are computed, and \(40\) log-Mel coefficients are extracted using \(30\) ms windows with a shift of \(10\) ms.
The second stage involves the classification and is performed by the CRNN, which takes as input the log-Mel coefficients computed in the previous step. The convolutional part of the CRNN comprises three consecutive blocks, each consisting of a convolutional layer, a batch normalization layer, a dropout layer, and a max pooling layer. In each block, convolutional layers have \(32\) filters with size (3,3), and their output is normalized and regulated by the Leaky Rectified Linear Unit (Leaky ReLU) [21] activation function. All dropout layers are characterized by a rate equal to \(0.3\), while max-pooling layers have filters decreasing with each block, from (5,1) to (4,1) to (2,1). The output is then flattened and passed to the recurrent part of the network, composed of two blocks. Each consists of a \(32\)-unit GRU layer with tanh and hard sigmoid activation functions to update and reset the gates, respectively, and a dropout layer with a drop rate of \(0.3\). Finally, a time-distributed feed-forward output layer, with a single neuron and sigmoid as activation function, returns predictions in the range [0,1], each one representing the probability that a frame is associated with a snoring event. Then, the predictions are binary-encoded ("binarization"), using a threshold of \(0.5\), so that they can be leveraged by the ASC algorithm.
In the third stage, the predictions output by the CRNN are post-processed with the Hangover algorithm presented in Algorithm 1. It works with an input buffer, _buffIn_, which acts as a FIFO (First-In First-Out) register that is automatically updated with a new sample every \(10\) ms, and takes as input the number of predictions in the input audio file \(L\), the size of the input buffer \(X\), and the number of predictions \(k\) that we would like to use to characterize a snoring event. When the input buffer is filled with the first \(X\) samples, _buffInFull()_ returns the execution of the code to the caller; then the input buffer is read and a majority voting scheme is applied. In particular, if the input buffer contains more zeros than ones, its content is copied into the output buffer, _buffOut_. On the other hand, if it contains more ones than zeros, the Hangover algorithm considers the beginning of a snoring event by setting \(k\) consecutive predictions to one. In this way, a snoring event is more likely to be characterized by all predictions equal to one. This method aims to decrease the number of sporadic false negative (FN) predictions (i.e., snoring occurrences erroneously classified as non-snoring) within a snoring sequence, which could degrade the ASC performance. Although this method is not robust against false positives (FPs), it is able to reduce FNs, which are those to which the ASC algorithm is most susceptible.

Figure 1: Scheme of the SAD system.
```
Input:  L, k, X
Output: buffOut                          // output buffer
outIdx <- 0                              // index into output buffer
buffInFull(X)                            // returns when buffIn is full
while outIdx <= (L - 1) do
    buffIn <- readBuffIn()
    zeros  <- FindZeros(buffIn)          // number of 0s in buffIn
    ones   <- FindOnes(buffIn)           // number of 1s in buffIn
    if ones > zeros then
        startIdx <- outIdx
        buffOut[startIdx-X : startIdx+1] <- 1   // fill with ones
        i <- 1
        while true do
            _ <- readBuffIn()            // discard reading
            if (i <= k - X) and (outIdx <= L - 1) then
                outIdx <- startIdx + i
                buffOut[outIdx] <- 1
                i <- i + 1
            else
                outIdx <- outIdx + 1
                break
    else
        if outIdx = 0 then
            buffOut[outIdx : outIdx+X] <- buffIn
            outIdx <- outIdx + X
        else
            buffOut[outIdx] <- buffIn[-1]
            outIdx <- outIdx + 1
```
**Algorithm 1** Hangover algorithm
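For reference, below is a simplified offline Python sketch of the same hangover logic; the streaming FIFO of Algorithm 1 is replaced here by a sliding window over the binarized prediction array, which is an assumption made for illustration.

```python
import numpy as np

def hangover(preds, X=3, k=100):
    """Majority vote over a window of X frames (10 ms each); when ones win,
    the next k frames are forced to 1 to mark a snoring event."""
    out = np.array(preds, dtype=int)
    L = len(out)
    i = X
    while i <= L:
        window = out[i - X:i]
        if window.sum() > X // 2:       # more ones than zeros
            end = min(i - X + k, L)
            out[i - X:end] = 1          # k consecutive positive predictions
            i = end + 1                 # discard readings during the hold
        else:
            i += 1                      # copy-through: out already holds preds
    return out
```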
## 3 Active Snoring Cancellation
Active Snoring Cancellation (ASC) is developed considering a feed-forward filtered-X configuration and a subband implementation, as reported in [11]. Figure 2 shows the scheme of the algorithm. A reference microphone picks up the snoring source \(x(n)\), and an error microphone picks up the residual noise \(e(n)\) in the zone to be quieted. A loudspeaker then reproduces the interference signal \(y(n)\), generated by filtering \(x(n)\) with the adaptive filter \(w(n)\), which represents the estimate of the primary path \(p(n)\). The coefficients of this filter are produced by the subband adaptive filtering (SAF) block on the basis of \(x(n)\) filtered with the estimate of the secondary path \(s(n)\) (the path between the loudspeaker and the error microphone), the error \(e(n)\), and the predictions of the snoring detection block.
The SAF block has been developed considering a delayless subband adaptive filter algorithm, as first proposed in [22] and efficiently implemented in [11, 10]. In particular, the signal \(x^{\prime}(n)\) and the error \(e(n)\) are decomposed into subbands by an analysis filter bank, yielding \(x^{\prime}_{k}(n)\) and \(e_{k}(n)\) for the \(k\)-th subband, respectively. The weights of the \(k\)-th subband \(\textbf{w}_{k}^{SAF}(n)\) are updated following the normalized least mean square (NLMS) algorithm as
\[\textbf{w}_{k}^{SAF}(n+1)=\textbf{w}_{k}^{SAF}(n)+\mu_{w}\frac{\textbf{x}_{k}^ {\prime*}(n)e_{k}(n)}{\alpha+||\textbf{x}_{k}^{\prime}(n)||^{2}}, \tag{1}\]
where \(\textbf{x}_{k}^{\prime*}(n)\) is the complex conjugate of the input signal of the \(k\)-th subband \(x^{\prime}_{k}(n)\), \(\mu_{w}\) is the step size, and \(\alpha\) is a small coefficient that avoids division by zero. The fullband filter \(w(n)\) of length \(N\) is obtained by stacking all the subband weights following the steps below (a code sketch is given after the list):
* the subband weights are transformed to the frequency domain by an \((N/D)\)-point fast Fourier transform (FFT), with \(D=M/2\) the decimation factor and \(M\) the number of subbands;
* the first half of the array representing the fullband filter is built by stacking the complex FFT samples;
* the rest of the array is obtained as the complex-conjugate reversed version of the first half, and the central point is set to zero;
* the fullband filter is computed by an \(N\)-point inverse FFT of the array.
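As a sketch of both the update of Eq. (1) and the stacking steps above, the NumPy fragment below may help; the exact bin-to-subband mapping of the delayless structure depends on the analysis filter bank of [22], so the simple contiguous mapping used here is an assumption.

```python
import numpy as np

def nlms_update(w_k, x_k, e_k, mu_w=0.03, alpha=1e-8):
    """Per-subband NLMS update, Eq. (1); x_k is the complex regressor."""
    return w_k + mu_w * np.conj(x_k) * e_k / (alpha + np.vdot(x_k, x_k).real)

def stack_fullband(w_sub, N=512, M=64):
    """Fullband filter of length N from M/2 subband weight vectors."""
    D = M // 2                                   # decimation factor D = M/2
    W = np.fft.fft(w_sub, n=N // D, axis=1)      # (N/D)-point FFT per subband
    H = np.zeros(N, dtype=complex)               # fullband frequency response
    bins = N // M                                # fullband bins per subband
    for k in range(M // 2):                      # positive-frequency subbands
        H[k * bins:(k + 1) * bins] = W[k, :bins]
    H[N // 2] = 0.0                              # central point set to zero
    H[N // 2 + 1:] = np.conj(H[1:N // 2][::-1])  # conjugate-reversed first half
    return np.fft.ifft(H).real                   # N-point inverse FFT
```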
The SAF algorithm is activated when the SAD algorithm provides a prediction of snore presence.
## 4 Experimental Results
### Dataset
The A3-Snore dataset [19] has been selected for the experimental phase. It is a collection of audio files containing snoring events emitted by two male volunteers aged 48 and 55 during overnight sleep. The recording setup is a ZOOM-H1 Handy Recorder with two unidirectional microphones oriented perpendicularly. Acquisitions were made in a single room measuring \(4\times 2.5\) m with the sensors positioned near
the snorer's head. The corpus includes almost \(7\,\)h of audio material split into \(10\)-minute segments, selected according to the highest frequency of snoring events associated with each volunteer ("snorer 1" and "snorer 2"). All audio files are stereo WAV recordings with a sampling rate of \(44.1\,\)kHz and \(16\)-bit encoding. A metadata file reports annotations of the start and end timestamps of snoring events with a resolution of \(1\) second. The dataset is organized into two folders, one per snorer, with an unbalanced distribution between snoring and non-snoring events. Table 1 summarizes the composition of the A3-Snore audio collection. Files associated with Snorer 1 have been used for the training set, whereas Snorer 2's files have been split 50/50 between the validation and test sets.
### Snoring Activity Detection
In the experiments, training was performed in a supervised manner for \(500\) epochs by monitoring the Average Precision (AP), also known as the area under the precision-recall curve (AUC-PR), on the validation set, using early stopping to halt the learning process when the model does not improve for \(20\) consecutive epochs.
An adaptive learning rate according to the AdaDelta [23] optimization algorithm was selected, with an initial value equal to \(1\) and a decay rate of \(0.95\). The binary cross-entropy was used as the loss function. The experiments were carried out on an NVIDIA DGX Station A100 with dual 64-Core AMD EPYC 7742 @3.4 GHz and eight NVIDIA A100-SXM4-40 GB GPUs. The server was running Ubuntu 20.04.3 LTS. The neural network has been implemented with the Tensorflow [24] deep learning framework.
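A hedged sketch of the corresponding training configuration follows, reusing the `build_crnn` sketch above; the metric name and the dataset objects `train_ds`/`val_ds` are illustrative assumptions.

```python
import tensorflow as tf

model = build_crnn()
model.compile(
    optimizer=tf.keras.optimizers.Adadelta(learning_rate=1.0, rho=0.95),
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC(curve="PR", name="ap")],  # AUC-PR ~ AP
)
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_ap", mode="max", patience=20, restore_best_weights=True)
# model.fit(train_ds, validation_data=val_ds, epochs=500,
#           callbacks=[early_stop])   # train_ds / val_ds are placeholders
```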
The CRNN classification performance, evaluated in terms of AP, reached \(77.54\)%.
As for the Hangover algorithm, the size \(X\) of the input buffer has been chosen to reduce the number of FNs while keeping the latency as low as possible. Moreover, since the Hangover algorithm applies a majority voting scheme, \(X\) should be an odd number. We found a good trade-off by setting \(X=3\); in this way, the post-processing algorithm is able to improve the CRNN output while maintaining a relatively low latency (i.e., \(30\,\)ms). Since a 10-minute audio file yields \(60\,001\) predictions, we set \(L=60\,001\), whereas \(k\) was set to \(100\).
In order to evaluate the performance of the overall snoring activity detection system also from a graphical perspective, we report in Fig. 3(a) a 100-second excerpt of an audio signal employed in testing and the associated predictions generated by the overall system after the post-processing stage. Moreover, to visualize the Hangover algorithm performance, Fig. 3(b) shows the binary predictions output by the CRNN before and after the post-processing stage; a shorter time interval is shown to better highlight the difference.
### Active Cancellation with Snoring Activity Detection
The presented ASC algorithm has already been validated in [11, 10], by comparing its performance with the state-of-the-art algorithm of [25], considered as reference. In this paper, the ASC algorithm is improved by applying the SAD, and the experiments are mainly focused on evaluating the performance of the system with and without SAD.
| Snorer | Number of files | Total duration [s] | Snoring duration [s] | Snoring ratio [%] |
|--------|-----------------|--------------------|----------------------|-------------------|
| 1      | 18              | 10 800             | 1127                 | 10.4              |
| 2      | 23              | 13 800             | 2017                 | 14.6              |
| Total  | 41              | 24 600             | 3144                 | 12.8              |

Table 1: Statistics of the A3-Snore dataset.
Figure 3: Predictions post-processed by the Hangover algorithm of an audio signal (a), and their difference with respect to raw predictions before the processing stage (b).
Figure 2: Scheme of the ASC algorithm with SAD.
Starting from the snoring signals of the dataset described in Section 4.1, the primary path \(p(n)\) and the secondary path \(s(n)\) are simulated considering responses measured in a semi-anechoic chamber from the setup of [9]. Since \(p(n)\) and \(s(n)\) are modeled as FIR filters with a length of \(256\) samples, the length of the adaptive filter \(w(n)\) is set to \(512\) taps. For the subband structure, the length of the prototype filter is \(256\) samples, the number of subbands is \(M=64\), and the step size is \(\mu_{w}=0.03\). The performance of the proposed system has been evaluated in terms of primary path estimation, varying the signal-to-noise ratio (SNR) of the signal \(d(n)\) (cf. Figure 2).
The primary path estimated by the ASC with the SAD is compared with the one estimated without SAD and with the measured primary path. Figure 4 shows the obtained results considering \(\text{SNR}=10\) dB. The difference between the estimated responses and the measured one is evaluated by the log-spectral distance (LSD), in the frequency domain, and by the misalignment, in the time domain. The LSD evaluates the spectral difference between two frequency responses [26]. Similarly, the misalignment evaluates the difference between the measured and the estimated path in the time domain and gives a measure of the convergence rate [10, 11]. Denoting the measured primary path as \(p(n)\), the estimated primary path as \(w(n)\), and their respective transfer functions as \(P(k)\) and \(W(k)\), the LSD is computed as
\[\text{LSD}=\sqrt{\frac{1}{k_{2}-k_{1}+1}\sum_{k=k_{1}}^{k_{2}}\left[10\log_{10 }\frac{\left|P(k)\right|^{2}}{\left|W(k)\right|^{2}}\right]^{2}}, \tag{2}\]
where \(k_{1}\) and \(k_{2}\) delimit the frequency range within which the LSD is estimated, defined as \(B=[k_{1}\frac{f_{\rm s}}{K},\,k_{2}\frac{f_{\rm s}}{K}]=[100\text{ Hz},\,20\text{ kHz}]\), with \(K=4096\) the number of frequency bins for the FFT computation and \(f_{\rm s}=44.1\) kHz the sampling frequency. The misalignment is calculated as
\[\text{MIS}=20\log_{10}\frac{||p(n)-w(n)||}{||p(n)||}. \tag{3}\]
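Both metrics are straightforward to compute; the following NumPy sketch implements Eqs. (2) and (3) under the stated choices \(K=4096\) and \(f_{\rm s}=44.1\) kHz (zero-padding of the impulse responses to K samples is an assumption).

```python
import numpy as np

def lsd(p, w, fs=44100, K=4096, band=(100.0, 20000.0)):
    """Log-spectral distance (dB) between measured p(n) and estimate w(n),
    restricted to [100 Hz, 20 kHz] as in Eq. (2)."""
    P = np.fft.rfft(p, n=K)
    W = np.fft.rfft(w, n=K)
    k1 = int(np.ceil(band[0] * K / fs))
    k2 = int(np.floor(band[1] * K / fs))
    d = 10.0 * np.log10(np.abs(P[k1:k2 + 1])**2 / np.abs(W[k1:k2 + 1])**2)
    return np.sqrt(np.mean(d**2))

def misalignment(p, w):
    """Normalized misalignment in dB, Eq. (3)."""
    return 20.0 * np.log10(np.linalg.norm(p - w) / np.linalg.norm(p))
```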
Table 2 shows the values of the LSD and the misalignment for signals with different SNR levels. The estimation performance improves as the SNR increases, both with and without SAD, in terms of both LSD and misalignment. The lowest LSD values are obtained when the SAD is applied, i.e., when the adaptation algorithm of the ASC is executed only while the snoring signal is detected by the SAD. This result is confirmed by Figure 4(b), where the magnitude frequency response of the primary path is well estimated up to 10 kHz with SAD, while the frequency response estimated without SAD deviates from the measured one over the whole spectrum. In contrast, the difference in misalignment between the two cases is harder to discern. In fact, looking at Figure 4(a), the main peak of the impulse response is correctly identified both with and without SAD, but both cases introduce some late reflections not present in the measured impulse response.
## 5 Conclusions
In this paper, an enhanced system that combines detection and active cancellation of snoring signals has been proposed. For snoring activity detection, a convolutional recurrent neural network fed by log-Mel coefficients has been implemented to classify snoring and non-snoring events. For active snoring cancellation, a feed-forward filtered-X configuration based on a delayless subband adaptive filter algorithm has been developed. The combined use of the two algorithms results in a single improved system for ASC. This work is a preliminary study that offers large room for improvement. For the SAD, better-performing neural architectures based on unsupervised or semi-supervised deep learning strategies, coupled with larger and more challenging datasets, can be explored. The ASC can be improved by introducing non-uniform subband structures, and different environments with different reverberations could be taken into account to test the proposed system.
| SNR [dB] | LSD [dB] (SAD OFF) | LSD [dB] (SAD ON) | Misalignment [dB] (SAD OFF) | Misalignment [dB] (SAD ON) |
|----------|--------------------|-------------------|-----------------------------|----------------------------|
| 10       | 0.79               | **0.72**          | -4.05                       | **-6.05**                  |
| 15       | 0.49               | **0.37**          | -10.11                      | **-12.51**                 |
| 20       | 0.25               | **0.21**          | **-16.20**                  | -14.86                     |

Table 2: Values of the LSD and the misalignment obtained with SAD OFF and SAD ON for different SNR values of the input signal (best values in bold). The LSD is calculated in the frequency range [\(100\) Hz–\(20\) kHz].
Figure 4: Comparison between the measured primary path and the primary paths estimated with SAD OFF and SAD ON, (a) in the time domain and (b) in the frequency domain, considering an input signal with \(\text{SNR}=10\) dB.
|
2309.11412 | Creases and cusps in growing soft matter | The buckling of a soft elastic sample under growth or swelling has
highlighted a new interest in materials science, morphogenesis, and biology or
physiology. Indeed, the change of mass or volume is a common fact of any living
species, and on a scale larger than the cell size, a macroscopic view can help
to explain many features of common observation. Many morphologies of soft
materials result from the accumulation of elastic compressive stress due to
growth, and thus from the minimization of a nonlinear elastic energy. The
similarity between growth and compression of a piece of rubber has revived the
instability formalism of nonlinear elastic samples under compression, and in
particular Biot's instability. Here we present a modern treatment of this
instability in the light of complex analysis and demonstrate the richness of
possible profiles that an interface can present under buckling, even if one
restricts oneself to the two spatial dimensions. Special attention is given to
wrinkles, folds and cusps, a surprising observation in swelling gels or clays.
The standard techniques of complex analysis, nonlinear bifurcation theory and
path-independent integrals are revisited to highlight the role of physical
parameters at the origin of the observed patterns below and above the Biot
threshold. | Martine Ben Amar | 2023-09-20T15:36:52Z | http://arxiv.org/abs/2309.11412v1 | # Creases and cusps in growing soft matter
###### Abstract
The buckling of a soft elastic sample under growth or swelling has highlighted a new interest in materials science, morphogenesis, and biology or physiology. Indeed, the change of mass or volume is a common fact of any living species, and on a scale larger than the cell size, a macroscopic view can help to explain many features of common observation. Many morphologies of soft materials result from the accumulation of elastic compressive stress due to growth, and thus from the minimization of a nonlinear elastic energy. The similarity between growth and compression of a piece of rubber has revived the instability formalism of nonlinear elastic samples under compression, and in particular Biot's instability. Here we present a modern treatment of this instability in the light of complex analysis and demonstrate the richness of possible profiles that an interface can present under buckling, even if one restricts oneself to the two spatial dimensions. Special attention is given to wrinkles, folds and cusps, a surprising observation in swelling gels or clays. The standard techniques of complex analysis, nonlinear bifurcation theory and path-independent integrals are revisited to highlight the role of physical parameters at the origin of the observed patterns below and above the Biot threshold.
###### Contents
* Introduction
* Selection of creases in experiments
* A basic introduction to nonlinear elasticity
  * A brief reminder of the principles of linear elasticity
  * The principles of nonlinear elasticity
  * Constitutive equations in finite elasticity and definition of the elastic stresses
  * Simple geometry and stretches
* Competition between elasticity and continuous fields
  * The origin of the elastic stresses
    * Growth without stress generation
* Elastic energy evaluation, order by order
* Nonlinear coupling of quasi-singular profiles
* Nonlinear coupling of harmonic modes
* Intermediate algebra for the coupling of sinusoidal modes
  * Coupling two modes near the \(J_{B}\) threshold
  * Nonlinear three mode coupling in the vicinity of the \(J_{B}\) threshold
  * Super and subcritical bifurcations
* Role of surface tension
* How to escape the Biot threshold?
* Singular profiles below the Biot threshold
  * Physical origins of the patches
  * Patches as inner boundary layer
* Theoretical evidence for internal singularities
  * New elastic model for large stresses
  * The intermediate boundary layer analysis
  * The inner core
    * Rescaling the strains and the invariants
    * The energy density of the inner core
  * Energy of the patches
* Path independent contour integrals
  * The J-Integral
  * Constant growth and finite size effects
  * Inhomogeneous volumetric growth
  * The M-Integral
* Finite-size effects or the buckling of layers
  * Selection of a unique harmonic mode
  * Nonlinearity and creasing above threshold for growing layer
* Conclusion
* Acknowledgements
* Appendix A: Nonlinear elasticity at first order: stress and energy expansion
* Appendix B: Expansion of the elastic and capillary energy density
* Appendix C: Evaluation of the total energy for a single mode, double and triple mode
* Appendix D: Profiles and cartography of the stress
* Appendix E: Weakly nonlinear analysis for quasi-singular profiles
* Appendix F: Path-independent integrals
## I Introduction
The buckling of the outer surface of a living tissue during growth [1; 2; 3] and the corrugation of the surface of a swelling gel [4; 5; 6] are often observed in nature or in the laboratory. In the last three decades, a large number of studies have been devoted to such patterns in order to explain complex geometries in embryogenesis [7; 8; 9], botanical morphogenesis [10; 11; 12], but also in tumorigenesis [13; 14; 15] and organ pathologies (e.g. wound healing [16; 17; 18]). These shape instabilities affect thick samples that experience large volume variations in a non-isotropic manner. Obviously, in a free environment, the constant growth of a homogeneous sample does not generate stress, but if there is a constraint, such as a substrate, or if there is a material or growth inhomogeneity, then stress is generated that changes the shape of the body. It can buckle, but only if there is enough growth. This suggests a shape change once the relative volume increase exceeds a threshold, about 2 times the original. The origin of the observed patterns at free surfaces results from the compressive stress generated by growth coupled with the hyperelastic properties of soft tissues. These tissues exhibit large deformations even at low stress values, and classical linear elasticity cannot explain the observed shapes. Focusing on the simplest case of a gel layer of constant thickness \(H_{0}\) placed on a substrate, the growth process occurs mainly in the vertical direction and leads to a thickening of the layer with: \(H=J_{t}H_{0}\), where \(J_{t}\) is the relative growth per unit volume at a time \(t\) in this simple geometry. When \(J_{t}\) is increased to a critical value, the top surface begins to wrinkle. For neo-Hookean elasticity, this value \(J_{B}\) of order \(3.38\) can be related to the critical value found by Biot for samples under compression. Of course, this instability is common and not limited to the ideal gel layer. The threshold for wrinkling depends on the nonlinear elasticity model [19; 20], or on the initial geometry of the sample [21; 16], or possibly on the growth anisotropy [22], but the order of magnitude of this number seems quite robust.
The mechanical interpretation of a material under compression was first given by M.A. Biot in a seminal paper "Surface instability of rubber in compression" [23]. Surface instability means that the instability is more visible at the surface of the sample, but actually occurs throughout the volume, as opposed to the Asaro-Tiller-Grinfeld instability [24; 25], which results from surface diffusion. This instability is also different from wrinkles formed by a two-layer system where the top layer is thin and stiff and plays the role of a hard skin [26]. In this case, the surface topography can be realized in a very controlled way and is of enormous importance in industrial and biomedical applications [27]. Biot's instability was first demonstrated for a compressed neo-Hookean hyperelastic sample with a free surface in infinite geometry. It describes a two-dimensional infinite periodic pattern that occurs above a characteristic threshold for the compression level; when the material geometry is more complex, such as bilayers [20; 28], or when the compression results from anisotropic or inhomogeneous growth, the interface buckling is still recovered experimentally, although the analysis can be less straightforward. However, while smooth surface undulations can be considered analytically [29], the experimental patterns quickly evolve to nonlinear mode coupling [30; 31; 32; 33; 34] and even to wrinkles, which are less understood, although they are easily and commonly observed in experiments and are also noted in the physiology of the brain cortex, for example [35].
An even more puzzling observation concerns cusped interfaces, as shown in Fig.(1) (A1) to (A6). In one dimension, a cusp is a special point of a curve where the radius of curvature vanishes (or the curvature is infinite), while a "wrinkle" represents a more or less deep local folding of the interface. Other interpretations of surface wrinkles concern singular points at the origin of a self-contacting interface, which of course indicates a much more singular interface deformation, see Fig. (1) (A9) and [36; 37; 38; 39; 40]. Do they result from a highly nonlinear coupling of modes occurring after the bifurcation, or do they belong to another class of solutions? In the latter case, they can appear below the Biot threshold \(J_{B}\) and even inhibit the classical instability [41; 42]. More recently, the idea that there can be new families of solutions below the Biot threshold has been supported by matched asymptotic analysis [36; 37; 38; 39; 40] or by the nucleation of new solutions in more complex elasticity models and geometries [20; 43]. Some experimental evidence obtained on rubber in compression or on swelling gels also seems to favor the second hypothesis [36; 37; 44]. Of course, numerical evidence is always
difficult in the case of spatial singularities, but we must mention the finite element numerical investigation of [45; 46] in favor of a subcritical (or discontinuous) bifurcation before \(J_{B}\), which becomes supercritical (or continuous) at \(J_{B}\), with a strong sensitivity of the results to the conditions imposed on the substrate. Another way to study cusp formation experimentally and theoretically [38] is to create a localized defect in a controlled experiment, mimicking in some way experiments in viscous fluids where the defect is realized by counter-rotating cylinders [47]. It should be noted that localized singular structures also occur easily in tubes, but there the geometry favors the appearance of singular deformations [48; 49].
Despite the similarity that exists between compressive forcing and homogeneous growth in the neo-Hookean approach, this review article focuses on volumetric growth, which is ubiquitous in life. Most of our organs exhibit Biot's instability, which explains our fingerprints, the convolutions of our brains, the villi and the mucosa of the intestines. All these structures appear at a certain time after fertilization, during foetal life. They are present in most mammals, except for small rodents. These two observations support an interpretation in terms of morpho-elasticity: the shape of the organ is a determining factor, as is the volumetric growth, which increases with time from \(J=1\) (no growth expansion) up to critical values.
Before giving mathematical proofs concerning wrinkles, our presentation will begin with a selection of experiments (section II) and a brief introduction to the principles of nonlinear elasticity. In this field of study, positive quantities called invariants \(I_{J}\) are introduced to evaluate the elastic energy density. Since they are specific to finite elasticity, they will be introduced in detail in section III. In addition, the local growth per unit volume creates an external field that does not obey physical rules and is imposed a priori inside the sample. It is not fully comparable to an externally applied compressive dead load, see Sec. IV. We first revisit the original model of Biot for neo-Hookean elasticity in the incompressibility limit and in semi-infinite geometry [23; 50], but for the threshold determination \(J_{B}\) and for nonlinear buckling and wrinkling, we follow a different strategy based on variational principles. Euler-Lagrange equations derived by incremental perturbation techniques are at the origin of the periodic modes and also of \(J_{B}\), the threshold. We then apply the nonlinear techniques of bifurcations, combined with complex analysis, which greatly simplifies the intermediate algebra. The results of Biot are recovered in a much simpler way and nonlinearities are treated above and below the threshold without difficulty. First, subcritical bifurcations, as indicated by [51; 52; 53], are demonstrated by nonlinear sinusoidal mode coupling. Second, wrinkles above and below the Biot threshold are analytically justified by introducing singularities either inside or above the elastic sample.
This notion can be rather abstract, but has been successfully introduced for interfacial flows such as viscous fingering [54; 55; 56; 57], for bubbles in Laplacian and Stokes flows [54; 58], for vortices [59; 60], and for diffusive growth [61; 62]. In fluids, singularities outside the physical plane are used to select the length scale of the interface patterns, but they can be physically introduced into the flow in the experimental setup, leading to a complete change of the interface shape. For example, a coin or a bubble in front of a viscous finger completely changes the shape into a dendritic one [63], and a theoretical interpretation has been given in terms of a dipole.
This idea of a dipole was taken up later [64] in fluids and in linear elastic solids. Also, when vortices are created in viscous fluids, they generate cusps at the interface [65; 66] (in the mathematical sense), which are transformed into sharp wrinkles when a weak surface tension is included [47; 67]. Following a similar strategy, we will consider singularities outside and inside the physical domain, with the aim of discovering the main physical ingredients necessary to predict the observed wrinkles.
In conclusion, the understanding of wrinkles in growing soft materials benefits from many theoretical analyses carried out in recent decades on viscous flows (interfacial and vortex flows) and from treatments of singularities in elasticity based on the Noether theorem and path-independent integrals, see section XII. These classical but nontrivial techniques are presented in the following. We limit ourselves to a very simple modeling of hyperelasticity, being convinced that, once established, it will be possible to extend the mathematics to arbitrary geometries and complex structures of soft materials. After the presentation of some experimental examples in section II and a reminder of the foundations of nonlinear or finite elasticity (sections III to VI), we focus on a variational energy method (section VII), where buckling modes are treated at the linear (section VIII) and nonlinear (section IX) levels. We then study the possibility of stress focusing [68] inside the material just below the interface, which can induce interfacial wrinkles, in section X. While these zones can be perfectly characterized in morphoelastic growth (section XI), there is no clear threshold for their observation, as demonstrated by the technique of path-independent integrals (section XII). Finally, we come back to the buckling of thin films of finite thickness comparable to the wavelength in section XIII.
## II Selection of Creases in Experiments
The formation of wrinkles and creases in samples of elastomers or swelling gels has fascinated physicists for decades and probably still does. Examples of compressed elastomers are given in Fig.(1) panels \(A1,A2,A4\), and all the other panels concern swelling gels in different experimental setups. In fact, the nucleation of wrinkles in materials known to be highly deformable without plasticity is quite astonishing. It contrasts with the difficulty of nucleating a fracture in a 3D brittle sample under tensile loading: in this case, an initial notch or slit must be deliberately made [69; 70]. Experimentally, it is difficult to elucidate the threshold for the appearance of these wrinkles. Indeed, the homogeneous volumetric growth of a material is equivalent to a compression, but the linear instability threshold discovered by Biot has not been precisely verified experimentally. As for wrinkles, it seems even worse, although there is a tendency to detect them above the Biot
threshold. It is true that the geometry of the experimental setup influences the threshold, as does the fact that the material is attached to a rigid substrate or to another elastic sample. Another important point concerns the size of the experimental setup compared to the instability wavelength, and the fact that the neo-Hookean model (or any hyperelastic model) is not really adapted to swelling; the poroelastic model is more appropriate in this case [14; 36; 71]. Independently, R. Hayward and collaborators [72; 53; 42] point out in a series of articles that the bifurcation around the Biot threshold is probably subcritical, which makes a precise experimental determination difficult. However, singular profiles certainly exist, and the last panel (A9) shows the strong stress concentration that leads to the ejection of material pieces from the outer ring [73; 74] during the course of the experiment. Our main concern in the following will be the prediction of patterns around or below the Biot threshold. Nevertheless, let us first recall the theory of finite elasticity with or without growth, as a way to introduce the main principles as well as the mathematical tools. A short presentation of the theory of swelling gels is also included to emphasize the difference between swelling and volumetric growth.
## III A basic introduction to nonlinear elasticity
### A brief reminder of the principles of linear elasticity
Linear elasticity is limited to weak to moderate deformations corresponding to small strains, estimated by the ratio: deformation over a typical length of the deformed specimen. These deformations often occur under external loads, possibly under external fields such as temperature changes. Unlike other heuristic models, such as the Canham-Helfrich [78; 79] models for lipid membranes, elasticity requires knowledge of the initial shape of the body, which is assumed to be free of stress, and focuses on its deformation. Until recently, the goal was to explain and quantify the deformations of stiff materials: steel, wood, concrete, paper, nylon, etc., whose stiffness is usually given by the Young's modulus \(E\) in pascals. For these materials, the value of \(E\) is on the order of \(10^{9}\) to \(10^{12}\) pascals, which immediately indicates that it will be very difficult to stretch a cuboid by human force. Nevertheless, the field of linear elasticity remains very active: being closely related to geometry, any peculiarity leads to a strong interest and curiosity, such as the crumpling of paper [80; 81; 82; 83], the formation of folds [84; 85; 86], or the science of origami [87; 88]. The linearity of the relationship between displacement and load does not automatically imply that the equilibrium equations are linear, as demonstrated by the Föppl-von Kármán equations, where the Hooke formalism is applied but the deformation is extended to the third order [89]. In particular, origami and paper crumpling studies introduce geometric singularities that can be treated with linear elasticity [68], while folding involves nonlinear elasticity. The linearity of Hooke's law does not automatically imply simplicity of the theoretical treatment when the initial shape is complex. In fact, the formalism leads to partial differential equations, and this geometric complexity is also recovered in nonlinear elasticity. Thus, the main question is when nonlinear elasticity is required a priori.
### The principles of nonlinear elasticity
Once a material is soft, even very soft, with a Young's modulus \(E\) not greater than \(10^{5}\) Pa, the displacement of any point of the sample under load can be of the order of the sample size. Then, a precise description of the internal stresses and of the geometry of the deformations is required. Not all nonlinear descriptions of the elastic energy density \(W\) are possible, because they must satisfy strong mathematical properties dictated by the laws of mechanics, such as objectivity and convexity. Objectivity means that the elastic energy remains invariant under rigid rotation or translation. Convexity means that for small displacements \(u\), \(\delta W\sim\alpha u^{2}\) with \(\alpha>0\). We consider an undeformed body with no internal stresses, where each point \(M\) is represented by the capital letters \(M(X,Y,Z)\) (for simplicity, Cartesian coordinates are chosen and maintained throughout the manuscript). Then there exists a vectorial mapping function \(\chi\) that relates the coordinates of the displaced point \(m\) to those of the original point, such that \(\vec{Om}=\vec{OM}+\vec{u}\), where \(\vec{u}\) is the displacement vector, defined as in linear elasticity. One of the most important mathematical tools is the deformation gradient tensor, which reads:
\[\mathbf{F}=\nabla\chi\quad\text{where}\quad F_{ij}=\frac{\partial x_{i}}{ \partial X_{j}}=\delta_{ij}+\frac{\partial u_{i}}{\partial X_{j}}\,. \tag{1}\]
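As a minimal illustration of eq.(1), the sympy sketch below computes \(\mathbf{F}\) for a homogeneous triaxial stretch; the particular mapping \(\chi\) is an illustrative choice, not one taken from the text.

```python
import sympy as sp

X, Y, Z = sp.symbols("X Y Z")
l1, l2, l3 = sp.symbols("lambda_1 lambda_2 lambda_3", positive=True)

# Illustrative mapping chi: a homogeneous triaxial stretch
chi = sp.Matrix([l1 * X, l2 * Y, l3 * Z])
F = chi.jacobian([X, Y, Z])      # deformation gradient, eq.(1)
print(F)                         # diag(lambda_1, lambda_2, lambda_3)
```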
The hyperelastic energy density \(W\) must respect spatial isotropy (if there is no preferred direction in the structure of the body) and be invariant for any change in the coordinate system. Consequently, it must be represented by the trace or determinant of tensors constructed with \(\mathbf{F}\). We start with the simplest invariants, the most common ones being defined with the right Cauchy-Green tensor \(\mathbf{C}=\mathbf{F}^{\mathbf{T}}\mathbf{F}\) to satisfy the objectivity requirement.
\[I_{1}=\mathrm{Tr}(\mathbf{C}),\quad I_{2}=\frac{1}{2}\left\{(\mathrm{Tr}(\mathbf{C}))^{2}-\mathrm{Tr}(\mathbf{C}^{2})\right\}, \tag{2}\]
\(I_{1}\) can be written as \(I_{1}=F_{ij}F_{ij}\), where summation over repeated indices is implied. The third invariant \(I_{3}=\operatorname{Det}(\mathbf{F})\) is related to the local volume variation and must be a positive number. Homogeneous hyperelastic energy densities are basically functions of these \(3\) invariants, but can also be restricted to two of them, generally \(I_{1}\) and \(I_{3}\) as for the neo-Hookean energy density, while a linear combination of \(I_{1}\), \(I_{2}\) and \(I_{3}\) is called the Mooney-Rivlin model. One may wonder how to recover the weakly nonlinear energy density described by the Lamé coefficients. The simplest way is to first define \(\mathbf{H}=\mathbf{F}-\mathbf{I}\), and then the elastic energy density \(W\) as
\[W=\frac{1}{2}\left\{\mu_{L}\left(\mathrm{Tr}\left(\mathbf{H}^{\mathbf{T}}\mathbf{H}\right)+\mathrm{Tr}(\mathbf{H}^{2})\right)+\lambda_{L}\,\mathrm{Tr}(\mathbf{H})^{2}\right\}. \tag{3}\]
Figure 1: In (A1) and (A2), compression of a parallelepiped specimen: micrographs of a pair of wrinkles from the front and from the top view at a strain level of \(45\%\), above the Biot threshold [75]. The critical strain for wrinkling is \(37.2\%\). In (A3) an experimental herringbone array of a PDMS swelling gel sample, courtesy of Derek Breid [51; 52].
In (A4) confocal microscopy of elastomer surfaces under compressive strain with an initial thickness of \(23\,\mu\)m [76]. In (A5) and (A6) two optical micrographs of wrinkle growth for a gel containing \(15\) mol \(\%\) NaAc, obtained by cooling, with an initial thickness of \(15\,\mu\)m [77]: in (A5) from \(33.2^{\circ}\)C to \(31.7^{\circ}\)C, in (A6) down to \(25^{\circ}\)C.
In (A7) creases in circular geometry: a pioneering experiment by T. Tanaka _et al._ on the swelling of an ionized acrylamide gel in water. In (A8) a ring of charged polyacrylamide gel (yellow) around a hard disk of neutral polyacrylamide gel (transparent) viewed from above: initial diameter \(50\) mm and imposed thickness of \(1\) mm. The outer ring swells by immersion in distilled water; the swelling is highly inhomogeneous in this geometry. The inner disk acts as a constraint, and after the appearance of smooth undulations, wrinkles develop at the outer boundary above a certain threshold of volume variation [14]. In (A9) the same experimental setup as in (A8) with a focus on a single cuspidal point [74]. For clarity, the attached line of fracture or refolding has been underlined in black. Note that it may appear as a self-contacting interface or as a fracture in compression [38].
Note that such a formulation is not suitable for incompressible materials, since the coefficient \(\lambda_{L}\) diverges. In fact, for incompressible materials, \(I_{3}=1\), a limit corresponding to a Poisson ratio \(\sigma=0.5\) in linear elasticity. If a preferred direction is present in the materials, as is often the case in organs such as heart, arteries, and skeletal muscles, more invariants are needed, reflecting the increased stiffness in the fiber direction. These invariants will depend on \(\mathbf{C}\) and on the orientation of a unit vector \(\vec{e}_{0}\) which indicates the direction of the fibers, assuming that this direction is unique. The Helmholtz free energy for an incompressible sample is then
\[\mathcal{E}=\iiint_{\Omega}dV\ \left\{W(I_{1},I_{2},I_{4},I_{5})-Q\ (I_{3}-1)\right\}, \tag{4}\]
where dV is the volume element in the reference configuration and \(Q\) is a Lagrange multiplier that fixes the physical property of incompressibility. The energy density \(W\) is a positive scalar that vanishes for \(\mathbf{C}=\mathbf{I}\). If a material is anisotropic only in a single direction, defined by the unit vector \(\vec{e}_{0}\) in the reference configuration, then two invariants must be added, such as \(I_{4}\) and \(I_{5}\), given by \(I_{4}=\vec{e}_{0}.(\mathbf{C}\vec{e}_{0})\) and \(I_{5}=\vec{e}_{0}.(\mathbf{C}^{2}\vec{e}_{0})\)[90]. In the biological context, materials can have other directions of anisotropy, in which case other invariants are introduced with a new vector \(\vec{e}_{1}\). For compressible materials, the energy is composed of two terms: a volumetric term, which is a function of \(I_{3}\): \(\Psi(I_{3})\), and a strain energy function, where all components of the strains are divided by \(I_{3}^{1/3}\) in \(3D\) so \(\bar{I}_{1}=I_{1}/I_{3}^{2/3}\) and \(\bar{I}_{2}=I_{2}/I_{3}^{4/3}\) :
\[\mathcal{E}=\iiint_{\Omega}dV\ W(\bar{I}_{1},\bar{I}_{2})+\Psi(I_{3})\,. \tag{5}\]
Note that in \(2\)D, the new strains are divided by \(\sqrt{I_{3}}\). Compressible elasticity leads to much more complex calculations in practice, and simpler models can be found in the literature [91], such as the compressible Mooney-Rivlin model [92]:
\[\begin{cases}W_{MR}&=c_{1}(I_{1}-3)+c_{2}(I_{2}-3)+c(I_{3}-1)^{2}\\ &-2(c_{1}+2c_{2})Log(I_{3})\,.\end{cases} \tag{6}\]
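For concreteness, a small numerical sketch of the invariants of eq.(2) and the compressible Mooney-Rivlin density of eq.(6) is given below; the material constants \(c_{1}\), \(c_{2}\), \(c\) are illustrative values, not taken from the text.

```python
import numpy as np

def invariants(F):
    """I1, I2, I3 from the deformation gradient F (eq.(2); I3 = Det F)."""
    C = F.T @ F                     # right Cauchy-Green tensor
    I1 = np.trace(C)
    I2 = 0.5 * (np.trace(C)**2 - np.trace(C @ C))
    I3 = np.linalg.det(F)
    return I1, I2, I3

def W_mooney_rivlin(F, c1=1.0, c2=0.5, c=10.0):
    """Compressible Mooney-Rivlin density, eq.(6)."""
    I1, I2, I3 = invariants(F)
    return (c1 * (I1 - 3) + c2 * (I2 - 3) + c * (I3 - 1)**2
            - 2 * (c1 + 2 * c2) * np.log(I3))

print(W_mooney_rivlin(np.eye(3)))                 # 0: undeformed state
print(W_mooney_rivlin(np.diag([1.2, 0.9, 1.0])))  # small positive value (~0.21)
```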
Finally, if a body force \(\vec{B}\) is applied to the system and/or a traction \(\vec{T}\) to its surface, the work they exert on the sample must be added to eq.(4) or to eq.(6) according to:
\[\mathcal{E}_{add}=-\iiint_{\Omega}dV\ \vec{B}\cdot\vec{x}-\iint_{\partial \Omega}d\mathcal{A}\ \vec{T}\cdot\vec{x}. \tag{7}\]
Let us now derive the so-called constitutive equations, which are the counterpart of Hooke's law in linear elasticity theory.
### Constitutive equations in finite elasticity and definition of the elastic stresses
The constitutive equation is the relation between the stress tensor \(\mathbf{S}\) and the deformation gradient tensor \(\mathbf{F}\), which can be obtained from the variation of the elastic energy. The Euler-Lagrange equation results from the extremum of \(\mathcal{E}+\mathcal{E}_{add}\) with respect to the variation of the new position \(\delta x\) and also of \(Q\). Mathematically, it reads:
\[\delta[\mathcal{E}+\mathcal{E}_{add}](x,y,z;x_{i})=0\quad\text{and}\quad \delta\mathcal{E}(x,y,z;Q)=0\,, \tag{8}\]
for arbitrary variation of \(x_{i}\) and \(Q\). As before \(x_{i}\) means either \(x\), or \(y\), or \(z\), which are the current coordinates of the displaced point \(m\), initially located at \(M\). Then
\[\delta\mathcal{E}=\iiint_{\Omega}dV\left(\frac{\partial W}{\partial\mathbf{F} }-Q\,\mathbf{F}^{-\mathbf{T}}\right)\delta\mathbf{F}, \tag{9}\]
where we have used the tensorial relation for an arbitrary tensor \(\mathbf{A}\), which is \(\partial\) Det( \(\mathbf{A})/\partial\mathbf{A}=\text{Det}(\mathbf{A})\,\mathbf{A}^{-\mathbf{ T}}\). Then we derive the Piola stress tensor \(\mathbf{S}\) for an incompressible material:
\[\mathbf{S}=\frac{\partial W}{\partial\mathbf{F}}-Q\,\mathbf{F}^{-\mathbf{T}}\,. \tag{10}\]
Note that the Piola stress tensor, also called the first Piola-Kirchhoff stress tensor [91], is the transpose of the nominal stress tensor [93]. Once \(W\) is selected, this relation represents the constitutive relation of the material. Since we must perform the variation with respect to the current position \(\vec{x}\) in the coordinate system of the reference configuration \(\vec{X}\), an integration by parts gives for \(\delta\mathcal{E}+\delta\mathcal{E}_{add}\):
\[\begin{cases}\delta\mathcal{E}+\delta\mathcal{E}_{add}=\iint_{\partial\Omega}d\mathcal{A}\ (-\vec{T}+\mathbf{S}\cdot\vec{N})\cdot\delta\vec{x}\\ -\iiint_{\Omega}dV\,(\operatorname{Div}\mathbf{S}+\vec{B})\cdot\delta\vec{x}=0\,.\end{cases} \tag{11}\]
When the equilibrium is reached:
\[\operatorname{Div}\mathbf{S}+\vec{B}=0,\quad\mathbf{S}\cdot\vec{N}=\vec{T}\,. \tag{12}\]
The Piola stress tensor \(\mathbf{S}\) is not the only stress that can be defined in finite elasticity. In fact, by definition, a stress is the ratio between a force and a surface, and the value is not the same in the reference or in the current configuration where the Cauchy stress is evaluated according to:
\[\iint_{\partial\Omega}d\mathcal{A}\ (\mathbf{S}.\vec{N})=\iint_{\partial\Omega}d \,a\ (\mathbf{\sigma}.\vec{n})\,. \tag{13}\]
Using Nanson's formula, \(da\,\vec{n}=d\mathcal{A}\,\operatorname{Det}(\mathbf{F})\,\mathbf{F}^{-\mathbf{T}}\vec{N}\), we obtain the Cauchy stress \(\mathbf{\sigma}\):
\[\mathbf{\sigma}=\operatorname{Det}(\mathbf{F})^{-1}\mathbf{S}\mathbf{F}^{\mathbf{T }}\quad\text{and}\quad\mathbf{S}\mathbf{F}^{\mathbf{T}}=\mathbf{F}\mathbf{S} ^{\mathbf{T}}\,. \tag{14}\]
The Cauchy stress is required to be symmetric, unlike \(\mathbf{S}\), and the last equality expresses this symmetry in terms of the Piola stress tensor \(\mathbf{S}\), which itself is not symmetric. Note that although in this section the determinant of \(\mathbf{F}\) is equal to one, we keep this notation, which will change when growth is considered. In the literature and in classical textbooks (see [91; 21; 93] for instance) there are other alternative stress tensors, all of which are related to the Piola stress tensor, in contrast to linear elasticity where a single stress tensor suffices. Relations between them can be established as soon as \(\mathbf{F}\) is known.
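A one-line numerical sketch of the conversion of eq.(14) may be useful; the diagonal example tensors are arbitrary illustrative values.

```python
import numpy as np

def cauchy_from_piola(S, F):
    """Cauchy stress from the Piola stress tensor, eq.(14)."""
    return (S @ F.T) / np.linalg.det(F)

F = np.diag([2.0, 1.0, 0.5])    # isochoric deformation: Det F = 1
S = np.diag([1.0, 0.0, -0.5])
print(cauchy_from_piola(S, F))  # diag(2.0, 0.0, -0.25)
```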
### Simple geometry and stretches
When the specimen geometry is simple such as the cube, the cylinder and the sphere, the deformation gradient tensor can be diagonal in the corresponding coordinate system and the equations of elasticity become simpler if the deformations follow the same symmetry. Let us start with a parallelepiped with coordinates \(0<X<L_{X},0<Y<L_{Y},0<Z<L_{Z}\), subjected to a compressive force on the two opposite faces normal to \(\vec{e}_{Y}\) (see Fig.(2)). In this case, we expect a simple deformation \(x=\lambda_{1}X,y=\lambda_{2}Y\) and \(z=\lambda_{3}Z\) and the diagonal tensors \(\mathbf{F}\) and \(\mathbf{S}\) are easily obtained:
\[\begin{cases}\mathbf{F}=\mathrm{Diag}(\lambda_{1},\lambda_{2},\lambda_{3}), \\ \text{and}\\ \mathbf{S}=\mathrm{Diag}(\frac{\partial\mathrm{W}}{\partial\lambda_{1}}- \frac{\mathrm{Q}}{\lambda_{1}},\frac{\partial\mathrm{W}}{\partial\lambda_{2} }-\frac{\mathrm{Q}}{\lambda_{2}},\frac{\partial\mathrm{W}}{\partial\lambda_{3 }}-\frac{\mathrm{Q}}{\lambda_{3}})\,.\end{cases} \tag{15}\]
where \(\mathbf{S}\) follows the definition of eq.(10). In this simple geometry and for constant values of \(\lambda_{i}\), \(\mathbf{S}\) is diagonal with constant components, so it automatically satisfies the equilibrium equation eq.(12) in the absence of internal mechanical load \(\vec{B}\). The eigenvalues of \(\mathbf{F}\) are called stretches. Since there is no force acting on the surfaces perpendicular to \(\vec{e}_{X}\) and \(\vec{e}_{Z}\), the Lagrange parameter \(Q\) is then
\[Q=\lambda_{1}\frac{\partial\mathrm{W}}{\partial\lambda_{1}}\quad\text{and} \quad Q=\lambda_{3}\frac{\partial\mathrm{W}}{\partial\lambda_{3}}\,. \tag{16}\]
For an isotropic sample, \(W\) is a symmetric function of the stretches \(\lambda_{i}\), and there is no reason to distinguish between the two directions \(1\) and \(3\), so \(\lambda_{1}=\lambda_{3}=1/\sqrt{\lambda_{2}}\) due to the assumption of incompressibility. After applying a compressive load, we finally get:
\[\frac{\partial\mathrm{W}}{\partial\lambda_{2}}-\frac{\lambda_{1}}{\lambda_{2 }}\frac{\partial\mathrm{W}}{\partial\lambda_{1}}=-P_{0}\,. \tag{17}\]
Assuming a neo-Hookean material with a shear modulus \(\mu\) chosen as the unit of stress, then the energy density is \(W=1/2(I_{1}-3)\) and the stretch \(\lambda_{2}\) is the solution of the cubic equation:
\[\lambda_{2}^{3}+P_{0}\lambda_{2}^{2}-1=0, \tag{18}\]
which has a single positive root: \(\lambda_{2}\sim 1-P_{0}/3\) for small \(P_{0}\), while for large compression the stretch approaches zero as \(\lambda_{2}\sim 1/\sqrt{P_{0}}\). Note the simplicity of the derivation of such a solution, which, however, implies that the points at the bottom of the cube can slide freely, without friction.
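The root of eq.(18) and its two asymptotic limits are easily checked numerically, as in the short sketch below.

```python
import numpy as np

def lambda2(P0):
    """Positive root of lam^3 + P0*lam^2 - 1 = 0, eq.(18)."""
    roots = np.roots([1.0, P0, 0.0, -1.0])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real[real > 0][0]

print(lambda2(0.01))   # ~0.99667, close to 1 - P0/3
print(lambda2(100.0))  # ~0.0999, close to 1/sqrt(P0)
```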
## IV Competition between elasticity and continuous fields
Independent of local forces applied to the surface, the shape of a body can change due to various applied external fields, and elasticity may be only one cause of deformation among others. The nonlinear elastic formalism explained above concerns only a part of the globally visible deformation, and in practice it is not so easy to separate the elastic part from the overall shape. In the case of volumetric growth, each small piece of the sample which initially has a volume \(\delta\Omega\) becomes \(\delta\omega\) after a growth or drying process, resulting in a change in the total volume but also in a change in shape or morphology. In the following, the word growth will be used to refer to either an increase or a decrease in volume. Furthermore, growth can refer to cell proliferation, as in embryos, or to the swelling of gels, as already shown in the experiments mentioned in section II. It can also refer to drying or volume decrease.
To separate the growth from the elastic deformation, we keep the definition of the mapping \(\chi\) between the initial state and the observed state at time \(t\) as defined in eq.(1). This mapping gives only geometric information, and we split the tensor \(\mathbf{F}\) into two components: a tensor \(\mathbf{G}\) mimicking the growth and the tensor \(\mathbf{F_{e}}\) for the elasticity, so that:
\[\mathbf{F}=\mathbf{F_{e}G}\ \text{ so }\quad\mathbf{F_{e}}=\mathbf{F}\mathbf{G^{-1}}. \tag{19}\]
This relation, inspired by plasticity modeling and proposed in biomechanics by Rodriguez et al. [94], is local, and \(\mathbf{G}\) is the growth tensor at a point \(M\) of the sample, obtained after a period \(t\). This law is cumulative in time, meaning that the determinant of \(\mathbf{G}\) gives the local amount of growth variation between the initial time and the time of observation. This approach assumes that transient states are quickly eliminated to make room for a slowly varying, adiabatic growth state in which time is merely an index. Although not intuitive, this formalism actually allows one to translate quantitatively some aspects of biological growth, such as inhomogeneity, but also anisotropy of growth: \(\mathbf{G}\) is a tensor, so it simultaneously represents \(3\) directions and \(3\) eigenvalues, each associated with a direction.
A question that immediately comes to mind is the order of the two tensors \(\mathbf{F_{e}}\) and \(\mathbf{G}\) when they do not commute.
Figure 2: On the left, a schematic representation of a soft material (blue) in the initial configuration; on the right, the same sample in the current configuration. A normal pressure \(P_{0}\) is applied to both surfaces (X,Z) on the left, which becomes \(p_{0}\) on the right. For clarity, only one side is shown. To emphasize the deformation, a non-deformable substrate is shown in black, and it is assumed that the sample slides on this substrate without friction. Note that the pressures in the reference and current configurations differ due to the expansion of the lateral surfaces.
This and other questions have been discussed, see [95]. A physicist would argue that, since the stresses are due to the growth, the position of \({\bf G}\) is obviously on the right side. Another difficulty arises from the fact that growth is often associated with a process defined per unit time, and may be better represented in an Eulerian description, while here we are faced with a Lagrangian formulation that relates an initial state to a current state at time \(t\). This approach more or less intuitively assumes that the time scale of growth is extremely long compared to any time scale at the origin of the dissipation, reorganization, or remodeling of the samples [96]. Despite its apparent conceptual simplicity, this formalism has generated significant contributions in embryogenesis and morphogenesis, and also in the description of various pathologies such as wound healing, fibrosis and tumorigenesis. As suggested by eq.(19), growth induces stresses, and thus not only a change in volume but also a change in shape, and one may wonder whether this is always the case. In the next section, we will examine the origin of the stresses induced by growth.
### The origin of the elastic stresses
#### IV.1.1 Growth without stress generation
Materials can grow without stress if they can follow and adapt themselves to the imposed growth tensor. This is possible if there are no boundary conditions restricting the growth. Homogeneous growth \({\bf G}=G_{0}{\bf I}\) of a spherical object (without weight) does not generate any stress in the material. If the growth tensor is more complex, e.g. inhomogeneous and anisotropic, the shape of the body will change as it grows. The question of a stress-free process has recently been explored [97; 98] and examples from living systems have been given. If the deformation of the body can exactly follow the growth process, then \({\bf F}={\bf G}\) and is independent of the material properties of the body. Such a relation allows one to obtain the tensor \({\bf G}\) and thus the properties of the growth, which are mostly unknown for macroscopic samples. This process requires the absence of constraints from boundaries and external forces such as gravity. The best example can be given by fresh planar leaves [99]. To verify such a hypothesis, one possible test is to cut the material at right angles. If there is no crack opening, then the material is considered as stress-free. When the leaves have in-plane residual stresses due to growth, they cannot remain planar, as shown in [74], and they buckle. Recently, a general proof of stress-free growth by conformal mapping was given [100].
#### IV.1.2 Constrained growth process
Obviously, the main source of stress comes from boundary conditions, especially from rigid walls. Imagine a parallelepiped where one side is rigidly attached to a substrate, so that it cannot evolve freely. This is the case with gels, where the polymer chains adhere to the substrate, thus mimicking clamped conditions. It is also the case for parallel layers with different elastic properties that are attached to each other and grow according to their own rules. The best example concerns growing epithelia, always connected to their ECM (extracellular matrix), such as the imaginal disc of the Drosophila wing [28], the skin layers of the epidermis [101; 102], and also the cortex of the brain connected to the white matter [103; 22], in the embryonic period.
Finally, it is known that life is compatible with elastic stresses, which is the basis of the criterion of homeostasis for mammals: compressive stress above the homeostatic pressure reduces cell proliferation, while tensile stress favors it.
### Volumetric Growth and elasticity
The elasticity invariants defined in eq.(2) refer to the elastic tensor \({\bf F_{e}}\) and not to the deformation gradient \({\bf F}\). \({\cal E}\) must now take into account the growth per unit volume of the sample, which is represented by \(\mbox{Det}\;({\bf F})=\mbox{Det}\;({\bf G})=J\) for an incompressible material, and the elastic energy becomes
\[{\cal E}=\iiint_{\Omega}dV\;J\left\{W(I_{1},I_{2},I_{4},I_{5})-Q\;(I_{3}-1) \right\}. \tag{20}\]
The invariants are given by eq.(2) where \({\bf F}\) is replaced by \({\bf F_{e}}\). In eq.(20) the growth appears explicitly by the factor \(J\) which indicates that the material volume has changed and implicitly in the substitution of \({\bf F}\) by \({\bf F_{e}}\) in all the invariants. If we also consider this substitution in the definition of \({\bf S}\) in eq.(10) and of \(\sigma\) in eq.(14), we have
\[{\bf S}=J{\bf G^{-1}}\left\{\frac{\partial W}{\partial{\bf F_{e}}}-Q\,{\bf F_{ e}^{-T}}\right\};\;\mathbf{\sigma}={\bf F_{e}}\frac{\partial W}{ \partial{\bf F_{e}}}-Q\,{\bf I}. \tag{21}\]
In contrast to the Piola stress tensor \({\bf S}\), the Cauchy stress \(\sigma\) shows no signature of the growth, which can be surprising. At this stage, it is important to emphasize that, first, these two tensors are not defined in the same coordinate basis, and second, only forces acting on a given surface are invariant quantities, as will be shown later. To illustrate this point, we consider an anisotropic growth process in the example of section III.4. The elastic stretches are unchanged provided the compressive loading \(P_{0}\) becomes \(P_{G}=P_{0}\,g_{1}g_{3}\), in order to keep the same stress level. The stretches do not change, and \(\lambda_{2}\) is the solution of eq.(18) with \(P_{0}\). However, due to the growth, the new coordinates will be \(x_{i}=\lambda_{i}g_{i}X_{i}\).
Now consider the case where the bottom surface of the cuboid is attached to a rigid substrate, assuming anisotropic growth but no applied external compressive stress. Then, for \(X=0\), the points of this surface cannot move, and \(y=Y\) and \(z=Z\). If no displacement is possible in the \(Y\) and \(Z\) directions, the simplest choice is to keep the same rules \(y=Y\) and \(z=Z\) everywhere in the sample, so that the only allowed displacements are in the \(X\) direction, with \(x=JX\). The elastic stretches are then:
\[\lambda_{2}=\frac{1}{g_{2}};\quad\lambda_{3}=\frac{1}{g_{3}};\quad\lambda_{1}= \frac{J}{g_{1}}=g_{2}g_{3}. \tag{22}\]
According to eq.(21) the Piola stress tensor at the top in the neo-Hookean approach becomes:
\[S_{1}=g_{2}g_{3}\left(g_{2}g_{3}-Q\frac{1}{g_{2}g_{3}}\right)=0;\ Q=g_{2}^{2}g_{3 }^{2}. \tag{23}\]
In both horizontal directions, we have:
\[S_{2}=g_{1}g_{3}\left(\frac{1}{g_{2}}-g_{2}^{3}g_{3}^{2}\right);S_{3}=g_{1}g_{2 }\left(\frac{1}{g_{3}}-g_{2}^{2}g_{3}^{3}\right). \tag{24}\]
Note that the horizontal stresses are compressive when \(g_{i}>1\), indicating that compressive stresses must be applied to the vertical faces at \(\pm L_{Y}\) and at \(\pm L_{Z}\) to maintain such a deformation. Another possibility is an infinite sample in the \(Y\) and \(Z\) directions. However, growth can also induce a buckling instability, which will be studied in detail in the following. When buckling occurs, this simple choice of deformations must be modified, but the main deformation remains for low stress levels above the buckling threshold.
In conclusion, a substrate that prohibits any displacement at the bottom of the parallelepiped is an obstacle to free growth at the origin of compressive stresses, leading eventually to a shape bifurcation.
## V Swelling of gels
Swelling hydrogels have the advantage of mimicking the mechanical behavior of growing soft tissue while being precisely controllable. They consist of reticulated networks of polymer chains with a high proportion of small solvent molecules. A phase transition can be induced in the sample when it comes into contact with a reservoir of solvent, resulting in an amazing increase in volume. Although they are perfect candidates for mimicking growing tissues, growth and swelling have different microscopic origins. A swollen hydrogel is a system in both mechanical and thermodynamic equilibrium, and the swelling does not produce any new polymeric components, which constitute the only elastic phase and become increasingly dilute during the swelling process. In addition, the solvent has no reason to be uniformly distributed in the sample. For this reason, different poroelastic models have been proposed for the swelling [104; 105; 106] but also for plant or animal tissues [107; 108; 109; 110; 111]. Here, we choose a presentation by Hong et al. [6; 71], slightly modified to be as close as possible to section IV.
In fact, at equilibrium, the minimization concerns the grand potential \(\hat{\mathcal{W}}=\mathcal{W}(\mathbf{F},C)-\mu C\), where \(C\) is the solvent concentration and \(\mu\) is the chemical potential: \(\mu=\partial\mathcal{W}(\mathbf{F},C)/\partial C\). If the gel is in contact with a reservoir full of solvent, then at the interface between the reservoir and the swelling gel the chemical potentials of the two phases are equal: \(\mu=\mu_{s}\). If incompressibility is assumed, so that the gel volume is the sum of the volumes of its incompressible components, then \(C\) is related to \(\text{Det}(\mathbf{F})\) by the relation \(\text{Det}(\mathbf{F})=1+\nu C\), where \(\nu C\) is simply the ratio between the volume of the solvent molecules and that of the dry matrix. Obviously, although experiments on swelling gels are easier to perform and show interesting patterns similar to those observed in nature, we are still faced with two coupled fields: the elastic and the chemical one. Let us consider the variation of the free energy density:
\[\delta\hat{\mathcal{W}}=\delta W(\mathbf{F},C)-\mu\delta C=\frac{\partial W}{ \partial\mathbf{F}}\delta\mathbf{F}+\left(\frac{\partial W}{\partial C}-\mu \right)\delta C\,, \tag{25}\]
where \(\delta C\) is replaced by \(\delta\)\(\text{Det}(\mathbf{F})/\nu\)=\(\text{Det}(\mathbf{F})\,\mathbf{F}^{-\mathbf{T}}\delta\mathbf{F}/\nu\). Then, the corresponding stress becomes:
\[\mathbf{S}=\frac{\partial W}{\partial\mathbf{F}}+\frac{1}{\nu}\left(\frac{ \partial W}{\partial C}-\mu\right)\text{Det}(\mathbf{F})\mathbf{F}^{-\mathbf{ T}}\,. \tag{26}\]
The free energy density \(W(\mathbf{F},C)\) is often represented as the sum of two components: \(W_{e}(\mathbf{F})\), the elastic energy of the polymer matrix, and \(W_{c}(C)\), which depends only on \(C\). For \(W_{e}(\mathbf{F})\), a classical formulation due to Flory and Rehner [112] leads to:
\[W_{e}(\mathbf{F})=\frac{1}{2}NkT\left(I_{1}-3-2Log(\lambda_{1}\lambda_{2} \lambda_{3})\right), \tag{27}\]
for a compressible polymer matrix satisfying neo-Hookean elasticity, where \(N\) is the number of polymer chains, while for \(W_{c}(C)\) we have:
\[W_{c}(C)=-\frac{kT}{\nu}\left(\nu CLog[\frac{(1+\nu C)}{\nu C}]+\frac{\Upsilon }{1+\nu C}\right). \tag{28}\]
If we consider the case of a cuboid with clamped conditions at the bottom, then we can again imagine diagonal strain and stress tensors with \(\lambda_{2}=\lambda_{3}\) and \(\mathbf{F}_{11}=\lambda_{1}\), so that
\[S_{1}=NkT\left\{\lambda_{1}-\frac{1}{\lambda_{1}}-\frac{1}{N\nu}\lambda_{2}^{ 2}\left(w^{\prime}+\frac{\mu}{kT}\right)\right\}=0\,, \tag{29}\]
\[S_{2}=NkT\left\{\lambda_{2}-\frac{1}{\lambda_{2}}-\frac{1}{N\nu}\lambda_{1} \lambda_{2}\left(w^{\prime}+\frac{\mu}{kT}\right)\right\}, \tag{30}\]
with
\[w^{\prime}=-\left(Log(\frac{\lambda_{1}-1}{\lambda_{1}})+\frac{1}{\lambda_{1} }+\frac{\Upsilon}{\lambda_{1}^{2}}\right), \tag{31}\]
and a similar result for \(S_{3}\), which is equal to \(S_{2}\). The relative increase of the height \(\lambda_{1}\) in the vertical direction leads to a compressive stress in the horizontal directions, which is at the origin of the buckling of the sample. Here the control parameter is \(\mu/\nu\), which drives the swelling/deswelling. Although there is an analogy between volumetric growth and swelling, the theoretical approach is more uncertain in the second case and also more dependent on the experimental conditions. Therefore, for our purposes, and in the following, we will restrict ourselves to the simplest initial geometry and suggest how to interpret the experiments shown in section II.
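To make the discussion concrete, the equilibrium condition \(S_{1}=0\) of eq.(29) can be solved numerically for \(\lambda_{1}\) at fixed \(\lambda_{2}=1\). The sketch below assumes illustrative parameter values for \(N\nu\), \(\Upsilon\) and \(\mu/kT\), not taken from any specific experiment:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative (assumed) parameters: N*nu, Flory parameter, chemical potential.
Nnu, Upsilon, mu_kT = 1e-3, 0.1, 0.0

def S1(lam1, lam2=1.0):
    # w' of eq.(31) and the normal stress of eq.(29), divided by NkT.
    wp = -(np.log((lam1 - 1.0) / lam1) + 1.0 / lam1 + Upsilon / lam1**2)
    return lam1 - 1.0 / lam1 - (lam2**2 / Nnu) * (wp + mu_kT)

lam1 = brentq(S1, 1.01, 50.0)   # vertical swelling ratio of the clamped slab
print(lam1)
```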
## VI Biot's theory applied to rubber in compression versus volumetric growth
### Compression and critical instability threshold
Thick samples can buckle under compression. This instability occurs when the compressive stresses due to the load reach a threshold value. As mentioned in section II, experimentalists often characterize buckling by the compressive strain \(\lambda_{2}\) rather than by the compressive load, since the strain, the ratio of the specimen length to its initial length, is more easily evaluated. Biot studied this buckling instability in detail, in particular for the neo-Hookean and Mooney-Rivlin models, for a semi-infinite sample with a free surface subjected to a lateral compression, which we will call \(P_{0}\). This simple geometry allows a diagonal representation of the strains and stresses before the bifurcation, and the instability is often called a surface instability because it is most easily observed at the surface. His proof concerns a simple plane strain instability controlled by a parameter \(\xi\), above which the simple diagonal representation ceases to be valid. \(\xi\) and its critical value \(\xi_{B}\) are given by:
\[\xi=\frac{\lambda_{1}^{2}-\lambda_{2}^{2}}{\lambda_{1}^{2}+\lambda_{2}^{2}} \quad\text{and}\quad\xi_{B}=0.839287. \tag{32}\]
For the neo-Hookean model, Biot [23] has established the following relation for \(\xi_{B}\):
\[\mathcal{Q}_{B}=\xi_{B}^{3}+2\xi_{B}^{2}-2=0\,. \tag{33}\]
We will consider three different cases; the first two were considered in [113]. The stresses are defined in the current configuration and \(\mathbf{\sigma}\) represents the Cauchy stress. In the following three cases there is no stress on the top free surface, which leads to \(\sigma_{1}=\lambda_{1}^{2}-Q=0\) when the shear modulus is chosen as unity: \(\mu=1\). It gives \(Q=\lambda_{1}^{2}\). Remember that in this case \(\sigma_{1}-\sigma_{i}=-\sigma_{i}=\lambda_{1}^{2}-\lambda_{i}^{2}\) for \(i=2\) or \(3\).
#### vi.1.1 Case one
We assume that there is no strain in the \(Z\) direction and \(\lambda_{3}=1\)
\[\mathbf{F}=\mathrm{Diag}(\lambda_{1},\lambda_{2},1);\ \mathbf{\sigma}=\mathrm{ Diag}(0,\lambda_{2}^{2}-\lambda_{1}^{2},1-\lambda_{1}^{2}). \tag{34}\]
With this choice, incompressibility imposes: \(\lambda_{2}=1/\lambda_{1}\) and the parameter \(\xi\) becomes:
\[\xi=\frac{\lambda_{1}^{2}-1/\lambda_{1}^{2}}{\lambda_{1}^{2}+1/\lambda_{1}^{2} }\quad\text{so}\quad\lambda_{1}=\left(\frac{1+\xi}{1-\xi}\right)^{1/4}. \tag{35}\]
At the threshold of stability, the values of the stretches are given by \(\xi=\xi_{B}\): \(\lambda_{1}=1.839287\), so \(\lambda_{2}=0.543689\), and compressive stresses occur in both directions: \(\sigma_{2}=-3.0874\) in \(Y\) and \(\sigma_{3}=-2.38298\) in \(Z\).
#### vi.1.2 Case two
Choosing now \(\lambda_{1}=\lambda_{3}\)
\[\mathbf{F}=\mathrm{Diag}(\lambda_{1},\lambda_{2},\lambda_{1})\quad\mathbf{\sigma} =\mathrm{Diag}(0,\lambda_{2}^{2}-\lambda_{1}^{2},0)\,. \tag{36}\]
With this choice, the incompressibility imposes: \(\lambda_{2}=1/\lambda_{1}^{2}\) and the parameter \(\xi\) and \(\lambda_{1}\) become:
\[\xi=\frac{\lambda_{1}^{2}-1/\lambda_{1}^{4}}{\lambda_{1}^{2}+1/\lambda_{1}^{4 }}\quad\text{so}\quad\lambda_{1}=\left(\frac{1+\xi}{1-\xi}\right)^{1/6}\,, \tag{37}\]
which gives the instability when \(\lambda_{1}=1.50118\) and \(\lambda_{2}=0.443746\). The compressive stress occurs only in the \(Y\) direction with \(\sigma_{2}=-2.05663\).
#### vi.1.3 Case three
Finally for the third case, we assume that the compressive loads act similarly in both directions: \(Y\) and \(Z\).
\[\mathbf{F}=\mathrm{Diag}(\lambda_{1},\lambda_{2},\lambda_{2});\ \mathbf{\sigma}= \mathrm{Diag}(0,\lambda_{2}^{2}-\lambda_{1}^{2},\lambda_{2}^{2}-\lambda_{1}^{ 2}). \tag{38}\]
With this choice, incompressibility imposes \(\lambda_{2}=1/\sqrt{\lambda_{1}}\) and the parameter \(\xi\) and \(\lambda_{1}\) become:
\[\xi=\frac{\lambda_{1}^{2}-1/\lambda_{1}}{\lambda_{1}^{2}+1/\lambda_{1}}\quad \text{so}\quad\lambda_{1}=\left(\frac{1+\xi}{1-\xi}\right)^{1/3}, \tag{39}\]
which gives the instability when \(\lambda_{1}=2.25354\) and \(\lambda_{2}=0.666142\), with equal compressive stresses in the \(Y\) and \(Z\) directions: \(\sigma_{2}=\sigma_{3}=-4.6347\). Note that this last case was not considered by Biot.
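All three thresholds follow from the single root \(\xi_{B}\) of eq.(33); a short numerical check (Python, with the shear modulus set to unity as in the text):

```python
import numpy as np

# Real root of Biot's polynomial xi^3 + 2 xi^2 - 2 = 0, eq.(33).
roots = np.roots([1, 2, 0, -2])
xi_B = roots[np.abs(roots.imag) < 1e-9].real[0]   # ~0.839287
ratio = (1 + xi_B) / (1 - xi_B)

# Exponents of eqs.(35), (37), (39) and the relations lambda_2(lambda_1).
cases = {"case one":   (1/4, lambda l1: 1/l1),
         "case two":   (1/6, lambda l1: 1/l1**2),
         "case three": (1/3, lambda l1: 1/np.sqrt(l1))}
for name, (exponent, lam2_of) in cases.items():
    lam1 = ratio**exponent
    lam2 = lam2_of(lam1)
    print(name, lam1, lam2, lam2**2 - lam1**2)    # last column: sigma_2
```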
### Semi-infinite samples under volumetric growth
As shown earlier, the Biot instability is mostly controlled by the strains, which are directly observable for a solid under compression: there is then no difference between the elastic and the geometric strains, in contrast to growth. Assuming that the previous analysis remains valid, we will try to apply the Biot approach to volumetric growth. To do so, we reconsider the three cases defined above.
#### vi.2.1 Case one
This case concerns \(\lambda_{3}=1\), which means that in this direction the displacement is equal to the growth. Then the critical elastic strains evaluated in section VI.1.1 are \(\lambda_{1}\sim 1.839\) and \(\lambda_{2}=0.543689\). Several cases arise, depending on how the growth is organized in the sample. For isotropic growth without displacement in the \(Y\) direction, we have \(x=JX\), \(y=Y\) and \(z=gZ\), with \(\lambda_{1}=J/g\), \(\lambda_{2}=1/g\) and \(J=g^{2}\). So the expansion in the \(X\) and \(Z\) directions at criticality is \(g=1.839\). These values were determined directly in [19] and are recovered in a different way in section VII. The compressive stresses in the \(Y\) and \(Z\) directions become \(\sigma_{2}=-3.0874\) and \(\sigma_{3}=-2.383\). \(J\) can be evaluated by noting that \(\xi=(J^{2}-1)/(J^{2}+1)\), which, once introduced into eq.(33), leads to the polynomial for \(J_{B}\):
\[\mathcal{Q}_{J}=J_{B}^{3}-3J_{B}^{2}-J_{B}-1=0;\ J_{B}=3.38298\,. \tag{40}\]
This configuration will be examined in detail in all the following sections.
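The real root of eq.(40) is easily checked numerically (a one-line sketch with numpy):

```python
import numpy as np

# Real root of J^3 - 3 J^2 - J - 1 = 0, eq.(40).
roots = np.roots([1, -3, -1, -1])
print(roots[np.abs(roots.imag) < 1e-9].real[0])   # ~3.38298
```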
#### vi.2.2 Case two
This case concerns the growth of a sample with two stress-free sides. Assuming \(x=J_{1}X\), \(y=J_{2}Y\) and \(z=J_{1}Z\), then at the threshold \(\lambda_{1}=J_{1}/g=\lambda_{3}=1.5012\) and \(\lambda_{2}=J_{2}/g=0.4437\), with \(g\) defined by \(g^{3}=J_{1}^{2}J_{2}\). There is only a compressive stress in the \(Y\) direction, with the same value as in section VI.1.2: \(\sigma_{2}=-2.0567\).
#### vi.2.3 Case three
In this case it is assumed that \(x=J_{1}X\), \(y=J_{2}Y\) and \(z=J_{2}Z\). If the displacement is forbidden along the \(Y\) and \(Z\) directions, then \(J_{2}=J_{3}=1\) and \(J=J_{1}=g^{3}\).
\[\begin{cases}\mathbf{G}=\mathrm{Diag}(g,g,g);\\ \mathbf{F}=\mathrm{Diag}(g^{3},1,1);\\ \mathbf{F}_{\mathbf{e}}=\mathrm{Diag}(g^{2},\frac{1}{g},\frac{1}{g});\\ \boldsymbol{\sigma}=\mathrm{Diag}(0,\frac{1}{g^{2}}-g^{4},\frac{1}{g^{2}}-g^{4})\,.\end{cases} \tag{41}\]
This unidirectional growth process produces lateral compressive stresses when \(g\) and \(J_{1}\) are greater than one. In the opposite case \(J_{1}<1\), the stresses are tensile. This case is similar to eq.(38) and
\[\xi_{B}=\frac{g^{4}-1/g^{2}}{g^{4}+1/g^{2}}=\frac{J_{1}^{2}-1}{J_{1}^{2}+1}\,. \tag{42}\]
At the threshold, expressing \(\xi_{B}\) in terms of \(J_{B}\) in eq.(33), we obtain the critical threshold for such a growth process, given by:
\[\mathcal{Q}_{J}=J_{B}^{3}-3J_{B}^{2}-J_{B}-1=0\,. \tag{43}\]
The solution for \(J_{B}\) is then \(J_{B}=3.38298\); the critical strains are then \(\lambda_{1}=2.25354\) and \(\lambda_{2}=0.666142\). Note that we recover the same threshold for the growth parameter as in section VI.2.1.
Growth anisotropy enlarges the space of possible instability parameters. Here we limit ourselves to these three cases and to homogeneous growth. The Biot instability is generic, but depending on the situation the thresholds can differ and must be evaluated in each case. In the following, we will consider only one case, with a different theoretical approach that does not use Biot's method, which imposes a critical parameter \(\xi_{B}\) [23]. We prefer a presentation in terms of variational analysis.
## VII Growth of a semi-infinite sample
It is impossible to list all the publications on volumetric growth in soft matter. While growing layers, multilayers, shells, disks and spheres are the most frequently chosen geometries [21], numerical treatments with advanced finite-element software make it possible to represent a variety of shapes closer to reality [114]. Our purpose is different, since we want to give exact results with the simplest standard hyperelastic model, that is, the neo-Hookean model [93; 21; 91] for incompressible materials. In addition, instead of considering all possible growth processes that can be found in nature, anisotropic [7] or space dependent [97; 98], we focus on a spatially constant growth that evolves on a rather long time scale, in order to neglect any transient dynamics. Since elasticity goes hand in hand with geometry [89], we start with the geometry of the sample to fix the notations used in the following.
### The geometry
We consider a semi-infinite growing sample bounded by the plane \(X=0\), infinite in the positive \(X\) direction and extending laterally between \(-\infty\) and \(+\infty\) in the \(Y\) and \(Z\) directions. We assume \(\lambda_{3}=1\), so that no elastic strain exists in the third direction. The growth is assumed to be isotropic and homogeneous, with a constant relative volume expansion \(g^{3}\). Due to the Biot instability (see the previous section), periodic patterns will appear on top of the sample with a spatial periodicity \(\Lambda\), chosen as the length unit. This geometry orients the growth mostly in the \(X\) direction, and the new position of an arbitrary material point inside the sample leads to compressive stresses in the \(Y\) direction, as described before in section VI.2.1. Thus, defining a Cartesian coordinate system \(X,Y\) in the initial configuration, the position of each point after growth and elastic deformation becomes, at leading order, \(x\sim JX\) and \(y\sim Y\), with the plane-strain expansion \(J_{2D}=J=g^{2}\). Since an adiabatic approach to the growth process is assumed, i.e. transient deformations are quickly eliminated, a free energy describes the possible patterns resulting from a symmetry breaking. Our approach, which is seldom adopted in the mechanics community, is based on energy variation and avoids tensorial algebra.
### The variational method based on the free energy minimization
#### vii.2.1 The free energy: elasticity and capillarity
The Euler-Lagrange equations or the equilibrium equations result from the extremum of the free energy, the sum of the elastic and possibly surface energy. Assuming a perfect periodicity of the patterns, we make a virtual partition of the initial domain into stripes of unity width and focus on \(\mathcal{P}\) the domain between \(-1/2<Y<1/2\), see the blue domains in Fig.(3). The neo-Hookean model depends on only two invariants: \(I_{1}\), for the elastic deformations and \(I_{3}\) for the relative
volume change due to elastic stresses, which we renormalize into the geometric invariants: \(\tilde{I}_{1}=JI_{1}\) and \(\tilde{I}_{3}=JI_{3}:\)
\[\tilde{I}_{1}=x_{X}^{2}+x_{Y}^{2}+y_{X}^{2}+y_{Y}^{2}-2J;\tilde{I}_{3}=x_{X}y_{Y }-y_{X}x_{Y}-J\,, \tag{44}\]
where the subscript \(X\) (resp. \(Y\)) denotes the partial derivative of any function with respect to the variable \(X\) (resp. \(Y\)).
The invariants \(I_{1}\) and \(I_{3}\) have already been defined in section III.2. The energy unit is chosen as the product \(\mu\cdot(\Lambda^{2}t_{3})\), where \(t_{3}\) is the thickness of the sample in the orthogonal direction, which is irrelevant for plane strain deformations. The elastic energy of a single strip then reads:
\[\mathcal{E}_{e}=\frac{1}{2}\iint_{\mathcal{P}}dS\left(\tilde{I}_{1}-2Q\tilde{ I}_{3}\right). \tag{45}\]
The Lagrange parameter \(Q\) is also a function of \(X\) and \(Y\) fixing the incompressibility constraint \(I_{3}=1\) or \(\tilde{I}_{3}=0\) and \(dS=dXdY\). The capillary energy is often written in Eulerian coordinates:
\[\tilde{\mathcal{E}}_{c}=\gamma_{0}\int_{\partial\mathcal{P}}dy\sqrt{1+x_{y}^{ 2}}\,. \tag{46}\]
Considering the upper boundary \(\partial\mathcal{P}\):
\[X=0;\quad Y\in[-1/2,1/2],\]
where the capillary energy is defined, the following relations hold :
\[dy=\frac{\partial y}{\partial Y}|_{{}_{X=0}}\,dY\quad\text{and}\quad dx=\frac{ \partial x}{\partial Y}|_{{}_{X=0}}\,dY, \tag{47}\]
then eq.(46) is transformed into:
\[\mathcal{E}_{c}=\gamma_{0}\int_{\partial\mathcal{P}}dY(\sqrt{x_{Y}^{2}+y_{Y}^ {2}}-1)\,, \tag{48}\]
where \(\gamma_{0}\) is the rescaled capillary coefficient, equal to \(\gamma_{0}=\gamma/(\mu\Lambda)\) (\(\gamma\) is the surface tension). Capillarity represents the average energy difference between the microscopic components of the sample (atoms, molecules) located in the bulk or at the interface. It is positive when the interface separates a dense material from a more dilute phase. In practice, the capillary coefficient \(\gamma_{0}\) is very weak for ordinary gels and plays a significant role only when the sample size is of the order of \(0.1\) mm and for extremely soft gels [115]. However, a skin effect can occur on top of elastic samples due to inhomogeneity of the shear modulus or to the growth process itself. This is especially true for the swelling of gels. Despite the weakness of this energy, it plays a crucial role in the determination of the wavelength and in the local regularization of singular profiles.
#### vii.2.2 The Euler-Lagrange equations
They simply result from the first variational derivative of the functional \(\mathcal{E}_{e}\) with respect to the small variation of \(x\) and \(y\):
\[\begin{cases}x_{XX}+x_{YY}=Q_{X}\,y_{Y}-Q_{Y}\,y_{X}=\{Q,y\}\,,\\ y_{XX}+y_{YY}=-Q_{X}\,x_{Y}+Q_{Y}\,x_{X}=-\{Q,x\}\,.\end{cases} \tag{49}\]
The left-hand side of equation (49) is the Laplacian \(\Delta\) in Cartesian coordinates, and \(\{Q,x_{i}\}\) is the Poisson bracket of \(Q\) and \(x_{i}\). This mathematical symbol has important properties in mechanics [116]. The zero-order solution, \(x=JX\) and \(y=Y\), satisfies these equations when the Lagrange parameter is a constant, \(Q=Q_{0}\). Boundary conditions are also derived from the first variational derivative of \(\mathcal{E}_{e}\) and \(\mathcal{E}_{c}\) with respect to the elementary variations of \(x\) and \(y\), a process which imposes the cancellation of the normal components \(S_{11}\) and \(S_{21}\) of the Piola stress tensor \(\mathbf{S}\) [91; 93] at the free boundary \(\partial\mathcal{P}\):
\[S_{11}=x_{X}-Q\,y_{Y}\quad\text{and}\quad S_{21}=y_{X}+Q\,x_{Y}\,. \tag{50}\]
On top, for \(X=0\), the cancellation of \(S_{11}\) gives \(Q_{0}=J\) while \(S_{21}=0\) is automatically obtained for the zero order solution. Capillarity appears for buckled solutions and is responsible for the normal \(\Gamma_{11}\) and tangential \(\Gamma_{21}\) components:
\[\Gamma_{11}=\gamma_{0}\frac{\partial}{\partial Y}\frac{x_{Y}}{(x_{Y}^{2}+y_{Y}^ {2})^{1/2}}\,, \tag{51}\]
and
\[\Gamma_{21}=\gamma_{0}\frac{\partial}{\partial Y}\frac{y_{Y}}{(x_{Y}^{2}+y_{Y}^ {2})^{1/2}}\,, \tag{52}\]
which must be added to the normal stresses at \(X=0\). Note the strong nonlinearities in the surface energy. However, since \(\gamma_{0}\) is in practice a very small parameter, the role of the capillary stresses is probably negligible for smooth patterns, but may become important in the case of creases. For completeness, the other two components of the stresses are also given:
\[S_{12}=x_{Y}+Q\,y_{X}\quad\text{and}\quad S_{22}=y_{Y}-Q\,x_{X}\,. \tag{53}\]
So far, it is assumed that the interface is regular and admits a regular curvature everywhere. Self-contacting interfaces are not considered, although in the last panel (A9) of Fig.(1), on the right, such a property can explain the highly singular pattern obtained in the radial geometry. Assuming that self-contact happens at a position \(Y=0\), then two additional stress boundary conditions must be imposed locally [38; 39; 40],
\[S_{22}|_{Y=0^{+}}=S_{22}|_{Y=0^{-}}\text{ and }S_{12}|_{Y=0^{+}}=S_{12}|_{Y=0^{-}}, \tag{54}\]
the second condition indicates the absence of friction on the singular line.
Finally, it is easy to show that the Euler-Lagrange equations, eq.(49), are equivalent to the cancellation of the divergence of the Piola stress tensor, see also section III.3 and eq.(12). In Cartesian coordinates, \(\text{Div}(\mathbf{S})_{i}=\partial S_{ij}/\partial X_{j}\).
### Incremental approach and solution of the Euler-Lagrange equations
The classical way to detect a bifurcation in elasticity is to expand the general solution by adding a small perturbation scaled by a small parameter \(\epsilon\). The following expressions are obtained for \(x\), \(y\) and \(Q\):
\[\begin{cases}Q=J+\epsilon q(X,Y)\,,\\ x=JX+\epsilon U(X,Y)\quad\text{with}\quad\Delta U=q_{X}\,,\\ y=Y+\epsilon V(X,Y)\quad\text{with}\quad\Delta V=Jq_{Y}\,.\end{cases} \tag{55}\]
The incompressibility condition at order \(\epsilon\) imposes the constraint \(U_{X}+JV_{Y}=0\), and \(q\) is easily eliminated by cross-differentiation of the previous equations, eq.(55): \(J\partial_{Y}\Delta U-\partial_{X}\Delta V=0\), which can be differentiated a second time to decouple \(U\) from \(V\). Defining \(\Delta_{J}=\partial_{XX}^{2}+J^{2}\partial_{YY}^{2}\):
\[\Delta_{J}(\Delta U)=\Delta_{J}(\Delta V)=0. \tag{56}\]
and \(\Delta_{J}q=0\). The fourth-order operator \(\Delta_{J}\Delta=\Delta\Delta_{J}\) admits as possible solutions \(\Re\left(\Phi_{1}(Z)+\Phi_{2}(Z_{1})\right)\), where the two functions are holomorphic in \(Z=X+IY\) and \(Z_{1}=JX+IY\), respectively. Nevertheless, due to the boundary conditions for \(X=0\) (on the top of the strip), \(\Phi_{1}\) and \(\Phi_{2}\) are related and we finally get for \(x\) and \(y\):
\[\begin{cases}x&=JX+J\epsilon\Re\left[\Phi(Z)+\tau_{1}\Phi(Z_{1})\right],\\ y&=Y-\epsilon\Im\left[\Phi(Z)+J\tau_{1}\Phi(Z_{1})\right],\\ Q&=J+\epsilon\tau_{0}\tau_{1}\Re\left[\Phi_{Z_{1}}\right];\ \tau_{0}=J^{2}-1\,, \end{cases} \tag{57}\]
where \(\Re\) and \(\Im\) are the real and imaginary parts of the holomorphic function \(\Phi\). The notation \(\tau_{0}=J^{2}-1\) is introduced for convenience, and \(\tau_{1}\) is free at this stage and will be determined later. A priori \(\Phi\) is arbitrary, with the restriction that it must vanish for \(X\to\infty\), which automatically implies that \(\Phi\) is singular. Any singularity occurring outside the domain \(\mathcal{P}\) (for \(X<0\)) is physically appropriate, while singularities within the physical domain \(\mathcal{P}\) must be considered with caution. The balance of the elastic and capillary stresses at the surface \(\partial\mathcal{P}\) gives the value of \(\tau_{1}\) as well as the threshold \(J_{B}\) for the buckling instability. Let us first evaluate the stresses at linear order in \(\epsilon\); the calculation is not difficult and can easily be done, for instance, with Mathematica (see also the Appendix, section XV.1):
\[\begin{cases}S_{11}&=\epsilon\Re\left[2J\Phi_{Z}+(1+J^{2})\tau_{1}\Phi_{Z_{1} }\right],\\ S_{21}&=-\epsilon\Im\left[(1+J^{2})\Phi_{Z}+2J^{2}\tau_{1}\Phi_{Z_{1}}\right], \\ S_{12}&=-\epsilon J\Im\left[2\Phi_{Z}+(1+J^{2})\tau_{1}\Phi_{Z_{1}}\right],\\ S_{22}&=1-J^{2}-\epsilon\Re\left[(1+J^{2})\Phi_{Z}+2J^{3}\tau_{1}\Phi_{Z_{1}} \right].\end{cases} \tag{58}\]
Only \(S_{22}\) shows a zero order contribution in \((-\tau_{0})=1-J^{2}\), which is negative for a growing material since \(J>1\). This compressive stress explains the buckling instability and is associated with an elastic strain along \(Y\).
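The factorized structure of eq.(56) is easy to verify; the following sympy sketch checks that the real parts of harmonic modes of \(Z\) and \(Z_{1}\) are annihilated by \(\Delta_{J}\Delta\):

```python
import sympy as sp

X, Y, J = sp.symbols('X Y J', positive=True)

def lap(f):   # Laplacian
    return sp.diff(f, X, 2) + sp.diff(f, Y, 2)

def lapJ(f):  # anisotropic operator Delta_J of eq.(56)
    return sp.diff(f, X, 2) + J**2 * sp.diff(f, Y, 2)

u1 = sp.exp(-2*sp.pi*X) * sp.cos(2*sp.pi*Y)      # Re exp(-2 pi Z),  Z  = X + iY
u2 = sp.exp(-2*sp.pi*J*X) * sp.cos(2*sp.pi*Y)    # Re exp(-2 pi Z1), Z1 = JX + iY
print(sp.simplify(lapJ(lap(u1))), sp.simplify(lap(lapJ(u2))))   # 0 0
```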
### The boundary conditions at the top of the sample, the Biot threshold and the harmonic modes
To derive the condition for a buckling instability, the quantities of interest are the normal \(S_{11}\) and shear \(S_{21}\) stresses at the top, which must include the capillary contribution. Only the normal capillary stress \(\Gamma_{11}\) is of order \(\epsilon\), while \(\Gamma_{21}\) is of order \(O(\epsilon^{2})\) and can be discarded; so, for \(X=0\), \(S_{11}\) reads:
\[\epsilon\left(2J+(1+J^{2})\tau_{1}\right)\cdot\Re\left[\Phi_{Z}\right]- \epsilon\gamma_{0}J(1+\tau_{1})\Re\left[\Phi_{ZZ}\right], \tag{59}\]
where \(\tau_{1}\) is not modified. We first neglect the surface tension. Then the cancellation of \(S_{21}\) gives the value of \(\tau_{1}\): \(\tau_{1}=-(1+J^{2})/(2J^{2})\). Once this value is introduced into \(S_{11}\), there are two possibilities:
* Cancellation of \(\mathcal{Q}(J)\), leading to the determination of \(J_{B}\) such that: \[\mathcal{Q}(J_{B})=(J_{B}^{3}-3J_{B}^{2}-J_{B}-1)=0\,,\] (60) for any profile function \(\Phi(Z)\). This value was also found by Biot; see section VI.2.1, where another derivation following Biot is proposed [113].
* \(\Re\left[\Phi_{Z}\right]=0\) which defines a family of suitable profiles but not a threshold value for observing the interface buckling. It requires that \(\Phi\) is an even function of \(Z\).
The second case does not imply any specific value of \(J\), but selects shape profiles, unlike the first case which occurs above \(J_{B}\) for any profile. It suggests and explains the diversity of experimental observations for the layer buckling: Indeed, the absence of mode selection at the Biot threshold automatically induces a spontaneous coupling of harmonic modes. The only real root \(J_{B}\) of \(\mathcal{Q}(J)\) is
\[J_{B}=\frac{1}{3}(3+6^{1/3}\{(9-\sqrt{33})^{1/3}+(9+\sqrt{33})^{1/3}\})\,, \tag{61}\]
\(J_{B}\sim 3.38\). But, as mentioned above, all holomorphic periodic functions of \(Z\) that vanish for \(X\to+\infty\) are possible eigenmodes of deformation occurring at the same threshold value. In the original papers, Biot focused only on the harmonic modes \(\Phi_{B}=e^{-2\pi nZ}\), which appear for a compressed rubber sample. The polynomial that gives the threshold is not always the same, depending on the experiment. Any modification of the physics of the problem, such as more sophisticated hyperelasticity (Mooney-Rivlin, Ogden or Fung models [91; 93]), anisotropy of the material [90] or of the growth [7; 8], or external loading, will modify the incremental result eq.(57) and the critical polynomial \(\mathcal{Q}\), but not the fundamental nature of the instability.
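As a consistency check of this derivation, one can verify symbolically that, once \(\tau_{1}=-(1+J^{2})/(2J^{2})\) cancels \(S_{21}\), the surviving coefficient of \(\Re\left[\Phi_{Z}\right]\) in \(S_{11}\) vanishes exactly on Biot's cubic (a minimal sympy sketch):

```python
import sympy as sp

J = sp.symbols('J', positive=True)
tau1 = -(1 + J**2) / (2 * J**2)                 # from S_21 = 0 at X = 0
coeff = sp.together(2*J + (1 + J**2) * tau1)    # coefficient in S_11, eq.(59)
print(sp.factor(sp.numer(coeff)))
# -(J - 1)*(J**3 - 3*J**2 - J - 1): the cubic factor is Q(J) of eq.(60)
```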
However, this model does not provide a choice of wavelength at threshold, unlike similar fluid instabilities such as Rayleigh-Bénard or Bénard-Marangoni convection [34]. Above the threshold, the determination of the wavelength for periodic fluid instabilities remains a difficult theoretical challenge [117; 118; 34], which has given rise to an extensive literature and sometimes to controversies, as for Rayleigh-Bénard convection or the diffusive patterns of directional solidification. In [19], a surface-tension selection mechanism is proposed for a layer of finite height \(d\). It induces a shift of the threshold \(J_{B}\) and a selection of the ratio of the wavelength to the height \(d\) of the sample, this ratio being of order one. Here the sample height is infinite and the wavelength is chosen as the length unit, which means that the selection must in principle provide the value of the critical threshold \(J_{C}\). A discussion of finite-size effects is deferred to the last section XIII. When capillarity is introduced, the normal stress \(S_{11}\) and the shear stress \(S_{21}\), given by eq.(59), are modified by the capillary contribution. Only the periodic function \(\Phi=e^{-2n\pi Z}\) (where \(n\) is an integer) gives a simple solution, with a shift of the bifurcation threshold due to \(\Gamma_{11}\):
\[\delta J=J_{C}-J_{B}=n\pi\gamma_{0}J_{B}\frac{(1+J_{B})}{(3J_{B}^{2}-6J_{B}-1) }\,. \tag{62}\]
It is possible to recover this threshold directly by minimizing the total energy, elastic plus capillary. In the next section, we give examples of such an evaluation, which takes advantage of the expansion given in section XV.2.
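For orientation, the shift of eq.(62) can be evaluated for the fundamental mode; the capillary number below is an assumed illustrative value, not a measured one:

```python
import numpy as np

J_B = 3.38298
Q_B = 3*J_B**2 - 6*J_B - 1          # eq.(73)
gamma0, n = 0.1, 1                  # assumed capillary number, fundamental mode
dJ = n * np.pi * gamma0 * J_B * (1 + J_B) / Q_B
print(dJ)   # ~0.357: J_C sits slightly above J_B
```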
Figure 3: At the top, panels A and B, the physical domain \(\mathcal{P}\), corresponding to one wavelength in cyan, is restricted to \(-1/2\leq Y\leq 1/2\), \(X\in[0,+\infty]\). Only one wavelength is plotted. To the right of each panel is the \(\mathcal{C}\) plane with the unit Riemann disk in yellow. The red arrow indicates the correspondence between the point \(\mathcal{O}\) (\(X=0,Y=0\)) of the physical space and the point \(\mathcal{O}^{\prime}\) (\(1,0\)) of the \(\mathcal{C}\) plane. The region \(X\to+\infty\) is contracted and associated with the center of the Riemann disk (green arrow). Only the interior and the contour of the Riemann disk mimic the physical domain. The blue dots represent the possible singularities, which are not singular for the physical domain in panel (A), but can generate sharp interface variations. Note that the dashed purple lines are images of each other under the conformal mapping. In (B), the same plot, except that a singular point enters both the Riemann disk and the physical plane \(\mathcal{P}\). Below, in (b), the interface profile for an outer pole, see eq.(63), with the parameter \(a=0.1\) for the red curve, \(a=1\) for the blue curve, \(a=5\) for the green curve. The profile appears quite singular for \(a=0.1\), but it remains finite. In (c), the real part of the derivative which contributes to the stresses, see \(S_{11}\) in eq.(58). One wavelength, \(\mathcal{P}\), is shown and the rainbow colors indicate the sharp variations near the origin \(\mathcal{O}\). Note also that the scale for \(X\) varies between \(0\) and \(0.1\). Although quite singular near \(\mathcal{O}\), the stress remains finite and decreases rapidly. Similarly, (d) shows the opposite of the imaginary part appearing in \(S_{21}\) of eq.(58).
## VIII Periodic profiles and the Riemann theorem
### Construction of periodic profiles
The choice of the periodic functions \(\Phi(Z)\) follows from the Riemann mapping theorem, which states that there exists a bijective conformal mapping between a non-empty, simply connected domain and the unit disk \(\mathcal{C}\). In our case the domain is \(\mathcal{P}\), which covers one period, see Fig.(3). Introducing the complex variable \(\zeta=e^{-2\pi Z}\), Fig.(3) shows the corresponding points; in particular, the upper boundary (\(X=0\)) of \(\partial\mathcal{P}\) is mapped onto the outer circle \(\partial\mathcal{C}\), and the zone at infinity of \(\mathcal{P}\) (\(X\rightarrow+\infty\)) is concentrated at the center of the unit disk. The exterior of the unit disk corresponds to the non-physical half-plane where \(X<0\). The central vertical axis of the strip, \(X>0,Y=0\) (dashed purple line in Fig.(3)), is associated with the horizontal axis of the \(\mathcal{C}\) plane (purple dashed lines), which we extend into the non-physical domain. Every holomorphic function, except constants, \(\Phi(Z)\) or \(\Psi(\zeta)\), has singularities. If they are located in the non-physical plane, these solutions are physically relevant, since they contribute a finite elastic energy density. But this is not the case when they are located inside \(\mathcal{P}\) or \(\mathcal{C}\), where they require special attention. When they are near the boundary \(\partial\mathcal{C}\) of the Riemann disk, they become good candidates for generating creases. We will consider the regular profiles first.
### Regular patterns above the Biot threshold.
The patterns of interest are of course the harmonic modes \(\zeta_{k}=e^{-2\pi kZ}\) and their superpositions: \(\Phi(Z)=a_{k}\zeta^{k}\), where the Einstein summation convention on repeated indices is assumed and \(k\) is a positive integer. The Biot solution is simply \(\zeta=e^{-2\pi Z}\). All these modes, without specific parity under the change \(Z\curvearrowright-Z\), occur strictly at the Biot threshold and can easily overlap. However, when focusing on folds occurring at the interface, a more appropriate choice must be made, with singularities located near the interface. The word crease is chosen to describe a sharp and localized variation of the interface shape \(\partial\mathcal{P}\), which is mathematically represented by a continuous function, such that the profile \(x(Y)\) remains at least twice differentiable. Another possible definition is that the elastic and/or capillary energy density remains locally finite. A fancy representation in complex analysis has been given by the viscous interfacial flow experts [119; 120; 54; 121]. The creases are then simply generated at the threshold by using the conformal mapping technique [122; 123]. Defining the neighborhood of the central line \(\zeta_{a}=\zeta-1-a\), with \(a>0\), possible solutions with a pole, a logarithm or a square root can be good representations of quasi-singular profiles in the neighborhood of the center of the strip \(\mathcal{O}\) or of \(\mathcal{O}^{\prime}\):
\[\Phi=\frac{a}{\zeta_{a}};\ \Phi=-\frac{Log(-\zeta_{a})}{Log(a)};\ \Phi=(-\zeta_{a})^{1/2}\,, \tag{63}\]
\(a\) decreases as one approaches the point \(\mathcal{O}^{\prime}\), or the interface near \(\mathcal{O}\) (see Fig.(3)). The amplitude of the singular profile is normalized in the definition given by eq.(63). Fig.(3) shows different profile solutions for \(a=0.1\) (red curve), \(a=1\) (blue curve) and \(a=5\) (green curve) for a pole singularity (first choice of eq.(63), on the left), corresponding to a distance \(d_{a}\) from the point \(\mathcal{O}\) of the physical plane with \(d_{a}=-0.0152,-0.110,-0.285\), respectively. Since \(\Phi_{Z}\) enters directly into the stress definition, its value gives information about the stresses, see eq.(58) and Fig.(3), panels (c) and (d). Plotted on a single wavelength \(-0.5<Y<0.5\) (with \(a=0.1\), so \(d_{a}=-0.0152\)), the real and imaginary parts of \(\Phi\) show a strong localization near the interface and quickly decay with increasing values of \(X\). However, even if the stresses at the interface are large, the solution is not singular and the linear expansion remains valid for sufficiently small values of \(\epsilon\). For the logarithm and square-root choices presented in eq.(63), see the Appendix (XV.4).
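The sharpening of the pole profile as \(a\to 0\) can be reproduced with a few lines of Python (the amplitude prefactor \(\epsilon\tau_{0}/(2J)\) is omitted here):

```python
import numpy as np

# Interface shape at X = 0 for the pole profile of eq.(63):
# Phi = a / (zeta - 1 - a), with zeta = exp(-2 pi Z) and Z = iY on the top.
Y = np.linspace(-0.5, 0.5, 1001)
zeta = np.exp(-2j * np.pi * Y)
for a in (0.1, 1.0, 5.0):
    phi = a / (zeta - 1.0 - a)
    print(a, phi.real.min(), phi.real.max())   # dip at Y = 0 sharpens as a -> 0
```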
### The case of even holomorphic functions of \(Z\)
The second way to satisfy the cancellation of the stress \(S_{11}\) for \(X=0\) is to choose an even function of \(Z\), which means \(\Phi(Z)=a_{k}(\zeta^{k}+\zeta^{-k})\); such a function automatically diverges at the center of the Riemann disk, i.e. at infinity in \(\mathcal{P}\). The only way to ensure convergence at infinity is to introduce a singularity inside the Riemann disk. The range of possible singularities is huge, but \(2D\) elasticity allows only logarithmic and square-root singularities if the elastic energy is to converge. In linear elasticity, it is well known that square roots correspond to fractures and logarithms to wedge dislocations, see [124]. Before proceeding further in this direction, let us start with the nonlinear coupling of regular modes.
## IX Nonlinear bifurcation analysis via energy minimisation and mode coupling
All modes emerge at the Biot threshold, and the mechanisms of selection are never a simple matter in the physics of continuous media. For example, in diffusion-limited growth, the selection of the size and velocity of the needle-crystal remained a puzzle for five decades. Ivantsov [125; 126; 127; 128] established as early as 1947 the relationship giving the product of the crystal tip radius and the growth velocity as a function of the undercooling, but disentangling the two quantities by the appropriate physical process generated many conflicting hypotheses and discussions. The role of the surface tension, much debated due to mathematical difficulties, is now understood by including the surface tension anisotropy [129; 61]. In the same class of problems, the displacement of a viscous finger in a channel remained unsolved for about thirty years [130], and it was again demonstrated that the surface tension selects a discrete set of solutions among a continuum [55; 56; 57]. When an energy emerges, as in our formulation of volumetric growth, solutions can be selected by energy minimization, which was not the case for the two examples mentioned above. However, our continuum here is a continuum of possible functions, not just a selection of a
pure number characteristic of the pattern. Using an expansion of the deformation at \(\epsilon\) order, the energy density can be expanded in \(E=\tau_{0}/2+\delta E\), as follows:
\[\delta\mathcal{E}=\epsilon\int_{\mathcal{P}}dS(E_{1}+\epsilon E_{2}+\epsilon^{2} E_{3})=\epsilon\mathcal{E}_{1}+\epsilon^{2}\mathcal{E}_{2}+\epsilon^{3} \mathcal{E}_{3}\,, \tag{64}\]
where each order is given in the Appendix (XV.2).
If the system is governed by an energy, it is possible to analyze the bifurcation in more detail, to deduce its characteristics and finally to obtain the amplitude \(\epsilon\) of the selected mode by expanding the energy. To prepare such a calculation, which can be really tedious and even impossible for an arbitrary choice of \(\Phi\), we take advantage of the complex formulation.
### Elastic energy evaluation, order by order
Such an evaluation requires surface integrals covering the entire domain of interest \(\mathcal{P}\) (Fig.(4), top left panel), which can be obtained in two ways: either in \(X,Y\) coordinates, or in \(Z,\bar{Z}\) coordinates. The latter choice makes the calculus much easier at first and second order, once holomorphic functions are chosen in the rectangular geometry. First, we define these surface integrals on \(\mathcal{P}\) as:
\[\begin{cases}K^{(1)}(f,\bar{g})=\frac{1}{2I}\iint_{\mathcal{P}}dZd\bar{Z}\,f_{Z}\bar{g}_{\bar{Z}}\,,\\ K^{(2)}(f_{1},\bar{g}_{1})=\frac{1}{2IJ}\iint_{\mathcal{P}}dZ_{1}d\bar{Z}_{1}\,f_{Z_{1}}\bar{g}_{\bar{Z}_{1}}\,,\\ K^{(3)}(f,\bar{g}_{1})=\frac{1}{I(J+1)}\iint_{\mathcal{P}}dZd\bar{Z}_{1}\,f_{Z}\bar{g}_{\bar{Z}_{1}}\,,\\ K^{(4)}(f,g_{1})=\frac{1}{I(J-1)}\iint_{\mathcal{P}}dZdZ_{1}\,f_{Z}g_{Z_{1}}\,.\end{cases} \tag{65}\]
According to [131], these integrals can be transformed into contour integrals such that:
\[\begin{cases}K^{(1)}(f,\bar{g})&=\frac{1}{2I}\oint_{\partial\mathcal{P}}dZf_{ Z}\bar{g}(\bar{Z})\,,\\ K^{(2)}(f_{1},\bar{g}_{1})&=\frac{1}{2IJ}\oint_{\partial\mathcal{P}}dZ_{1}f_{ Z_{1}}\bar{g}(\bar{Z}_{1})\,,\end{cases} \tag{66}\]
and for \(K^{(3)}\) and \(K^{(4)}\) which mix \(Z\) and \(Z_{1}\) using :
\[Z_{1}=Z\frac{1+J}{2}+\bar{Z}\frac{J-1}{2}\,, \tag{67}\]
it comes:
\[\begin{cases}K^{(3)}(f,\bar{g}_{1})&=\frac{1}{I(J+1)}\oint_{\partial\mathcal{P }}dZf_{Z}\bar{g}(\bar{Z}_{1})\,,\\ K^{(4)}(f,g_{1})&=\frac{1}{I(J-1)}\oint_{\partial\mathcal{P}}dZf_{Z}g(Z_{1})\,. \end{cases} \tag{68}\]
The first order corresponds to
\[\begin{cases}\mathcal{E}_{1}&=\tau_{0}\int_{\mathcal{P}}dS\,\Re\left(\Phi_{Z}+J\tau_{1}\Phi_{Z_{1}}\right)\\ &=\tau_{0}\,\Re\left[K^{(1)}(\Phi,1)+J\tau_{1}K^{(2)}(\Phi,1)\right]\,.\end{cases} \tag{69}\]
Since \(\Phi\) has no singularity inside the sample, the contour integral of \(\Phi\) vanishes and \(\mathcal{E}_{1}=0\). \(\mathcal{E}_{2}\) and \(\mathcal{E}_{3}\) can be found in section XV.2, eq.(S9). Using eq.(S9), the expansion of \(\mathcal{E}\) at second order gives for \(\mathcal{E}_{2}\):
\[\begin{cases}\mathcal{E}_{2}&=\frac{1}{2}(1+3J^{2})(K_{1}+J^{2}\tau_{1}^{2}K_{ 2})\\ &+\frac{J\tau_{1}}{2}((J+1)^{3}K_{3}-(J-1)^{3}K_{4})\,,\end{cases} \tag{70}\]
with \(K_{1}=K^{(1)}(\Phi,\bar{\Phi})\); \(K_{2}=K^{(2)}(\Phi_{1},\bar{\Phi}_{1})\); \(K_{3}=K^{(3)}(\Phi,\bar{\Phi}_{1})\); \(K_{4}=K^{(4)}(\Phi,\Phi_{1})\). All these quantities reduce to contour integrals along \(\partial\mathcal{P}\), see Fig.(4)(A), top. We divide the outer contour into horizontal lines and vertical lines travelled in the negative sense. Because of the periodicity, the two vertical contour integrals cancel each other out (blue lines of Fig.(4), top). At infinity \(\Phi_{Z}\) vanishes, so only the integral along \(\mathcal{C}_{0}\) contributes to the energy at this order. This result is valid since there is _no singularity_ inside the physical domain \(\mathcal{P}\). Finally, we get \(K_{1}\):
\[K_{1}=-\frac{1}{2}\int_{-1/2}^{1/2}dY\bar{\Phi}(-IY)\Phi_{Z}(IY)\,, \tag{71}\]
and \(K_{2}=K_{1}/J;\quad K_{3}=2K_{1}/(J+1);\quad K_{4}=0\). The energy density at second order simplifies:
\[\mathcal{E}_{2}=-\mathcal{Q}(J)\frac{(1+J)(1-J)^{2}}{8J^{3}}K_{1}\,. \tag{72}\]
Near the Biot threshold, \(\mathcal{E}_{2}\) behaves as \(\mathcal{E}_{2}\sim-E_{f}(J-J_{B})\). Defining first
\[Q_{B}=\frac{d\mathcal{Q}}{dJ}|_{J=J_{B}}=(3J_{B}^{2}-6J_{B}-1). \tag{73}\]
\(E_{f}\) reads:
\[E_{f}=K_{1}\mathcal{Q}_{2};\,\text{where}\quad\mathcal{Q}_{2}=Q_{B}\frac{(J_{B}-1)^{2}(J_{B}+1)}{8J_{B}^{3}}. \tag{74}\]
At this order of perturbation, we have recovered the linear stability result. It is possible to go one step further and specify the nature of the bifurcation that occurs near \(J_{B}\). For this we consider \(\mathcal{E}_{3}\). At third order, it reads:
\[\frac{\mathcal{E}_{3}}{p_{e}}=L_{1}+J^{2}\tau_{1}^{2}L_{2}+\frac{\tau_{1}(J+1)^ {2}}{2}L_{3}-\frac{\tau_{1}(J-1)^{2}}{2}L_{4}\,, \tag{75}\]
with \(p_{e}=J\tau_{0}\tau_{1}\) and:
\[\begin{cases}L_{1}=<\Re\left[\Phi_{Z}\Phi_{Z}\Phi_{Z_{1}}\right]>,\\ L_{2}=<\Re\left[\Phi_{Z_{1}}\Phi_{\bar{Z}_{1}}\Phi_{Z_{1}}\right]>,\\ L_{3}=<\Re\left[\Phi_{Z_{1}}\right]\Re\left[\Phi_{Z_{1}}\Phi_{Z_{1}}\Phi_{Z} \right]>,\\ L_{4}=<\Re\left[\Phi_{Z_{1}}\right]\Re\left[\Phi_{Z_{1}}\Phi_{Z}\right]>\,.\end{cases} \tag{76}\]
These formulas make it possible to calculate the third order for any profile function \(\Phi\). The calculation is not always easy but can be done, as demonstrated hereafter for the logarithmic function defined in eq.(63).
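Before treating the logarithmic mode, eq.(71) can be checked numerically on the fundamental mode \(\Phi=\zeta\), for which \(K_{1}\) must equal \(\pi\):

```python
import numpy as np

# K1 of eq.(71) for Phi = exp(-2 pi Z): bar(Phi)(-iY) * Phi_Z(iY) = -2 pi.
Y = np.linspace(-0.5, 0.5, 20001)
integrand = np.exp(2j*np.pi*Y) * (-2*np.pi*np.exp(-2j*np.pi*Y))
K1 = -0.5 * np.mean(integrand)       # the Y interval has unit length
print(K1.real, np.pi)                # ~3.14159 in both cases
```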
### Nonlinear coupling of quasi-singular profiles
The purpose of this paragraph is to estimate the amplitude of the profile and the nature of the bifurcation near \(J_{B}\). Since each case is special, we limit ourselves to one of them, namely the logarithmic mode of eq.(63): \(\Phi=-Log(1+a-e^{-2\pi Z})/Log(a)\), with \(a>0\), shown in Fig.(8)(e). In this figure, only \(\Re\left[\Phi\right]\) is shown for \(X=0\), and
the true profile function must be multiplied by \(\epsilon\tau_{0}/(2J)\). Obviously, the desired profile is chosen with a positive value of \(\epsilon\) to have the sharp-pointed shape in the positive direction. Such a solution appears a priori at the Biot threshold and remains a regular solution, even with stresses accumulated at the interface. The corresponding elastic energy starts at the second order and the elastic energy expansion is written as:
\[\mathcal{E}=\mathcal{E}_{2}\epsilon^{2}+\mathcal{E}_{3}\epsilon^{3}=-E_{f} \left(\delta J\epsilon^{2}+e_{3}\epsilon^{3}\right)\,, \tag{77}\]
where \(\delta J=J-J_{B}\) and \(e_{3}=-\frac{\mathcal{E}_{3}}{E_{f}}\). \(E_{f}\), \(K_{1}\) and \(\mathcal{Q}_{2}\) have been defined previously, see Eqs.(71,74). Thus, minimizing the energy with respect to \(\epsilon\) leads to:
\[\epsilon=-\frac{2}{3}\frac{\delta J}{e_{3}};\quad\text{so}\quad\mathcal{E}=- \frac{4}{27}E_{f}\frac{\delta J^{3}}{e_{3}^{2}}\,. \tag{78}\]
To observe a bifurcation, such an expansion in \(\epsilon\) requires a negative value of \(\mathcal{E}\), so \(\delta J\) must be positive for positive values of \(E_{f}\) and \(K_{1}\). \(K_{1}\) reflects the logarithmic dependence of the profile, and can be estimated as:
\[K_{1}\sim-\pi\frac{Log(2a)}{Log(a)^{2}}\quad\text{for}\quad 0<a<<1\,. \tag{79}\]
The evaluation of the third order \(\mathcal{E}_{3}\) is given in section XV.5 and the corresponding result in eq.(S25). So when \(a\) is a small quantity, we get for \(e_{3}\):
\[e_{3}=\frac{2J\pi\tau_{1}\Pi_{1}}{(J-1)^{2}p_{a}Q_{B}};\text{ where }\ p_{a}=aLog(a)Log(2a). \tag{80}\]
The numerical value of \(e_{3}\) is then \(e_{3}\simeq-11.71/p_{a}\), which decreases as \(a\) increases. Since \(e_{3}<0\), \(\delta J\) must be positive to obtain \(\epsilon>0\), which is required for the profile shown in Fig.(8); a negative sign would be counterintuitive, with cusps oriented upward. In this way, a bifurcation and a crease can effectively be observed. Nevertheless, the cusp amplitude remains tiny, approximately given by \(\left(2/3\,p_{a}\delta J\right)/11.71\sim 0.01\delta J\) for \(a=0.01\). This treatment does not include surface tension, because of obvious technical difficulties [47].
### Nonlinear coupling of harmonic modes
An efficient treatment of mode coupling near a threshold is to multiply the main harmonic mode, here \(\zeta\), by a slowly varying amplitude satisfying the so-called amplitude equation derived from the separation of scales. This method is easily illustrated by the Euler Elastica [132]; an explicit solution of the Elastica bending problem can be found in [124], section 19, exercise 3. Depending on the boundary conditions applied at both ends, the threshold value of the force \(F_{c}\) responsible for the bending is found, and the nonlinearities give the amplitude of the bending profile as a function of the force above the threshold value. In this simple case, the bifurcation is supercritical, since the amplitude varies as \(\pm\sqrt{F-F_{c}}\) above the threshold. For this simple example, there is also a free energy that includes the elastic energy of bending and the work of the forcing. Then, another way to study the bifurcation is provided by the analysis of the free energy above the threshold; this is the choice made in Appendix A of [89], which we will follow here. In fact, the treatment by separation of scales is more tedious in our case for at least two reasons: first, three unknown functions \(x,y,Q\) are coupled, and second, it requires an initial guess for the scaling dependence of the coupled functions, which is not easy to find a priori. The energy analysis turns out to be much more efficient and is chosen here. We start with the coupling of two harmonics and then of three harmonics. At linear order, we have shown that all harmonic modes appear at the same threshold value \(J_{B}\).
### Intermediate algebra for the coupling of sinusoidal modes
Consider the superposition of several modes, \(\Phi(Z)=\sum_{k}^{k_{0}}a_{k}\zeta^{k}\) with \(k\leq k_{0}\), \(k\) and \(k_{0}\) being positive integers [133]. Then \(K_{1}=\pi\sum_{k}k|a_{k}|^{2}\), so that \(K_{1}\) is always positive and \(\mathcal{E}_{2}\) is negative above the Biot threshold. Unfortunately, at third order the calculus becomes much more tedious, even when sinusoidal modes are imposed. Each integral involves a triple series of the mode amplitudes \(a_{n}\).
\[\begin{cases}\tilde{L}_{1}&=\sum_{p,q,r}\frac{pqr\,a_{p}a_{q}a_{r}}{p+q+Jr}(\delta_{p-q-r}+\delta_{p-q+r})\\ &=2\sum_{0<p<q}\frac{pq(q-p)\,a_{p}a_{q}a_{q-p}}{p(1-J)+q(1+J)}\,,\\ \end{cases} \tag{81}\] \[\begin{cases}\tilde{L}_{2}&=\sum_{p,q,r}\frac{pqr\,a_{p}a_{q}a_{r}}{J(p+q+r)}(\delta_{p-q-r}+\delta_{p-q+r})\\ &=\frac{1}{J}\sum_{0<p<q}p(q-p)\,a_{p}a_{q}a_{q-p}\,,\\ \end{cases}\] \[\begin{cases}\tilde{L}_{3}&=\sum_{p,q,r}\frac{pqr\,a_{p}a_{q}a_{r}}{J(p+q)+r}(\delta_{p-q+r}+\delta_{p+q-r})\\ &=\sum_{0<p\leq q}(2-\delta_{p-q})\frac{pq\,a_{p}a_{q}a_{p+q}}{J+1}+\tilde{L}_{4}\,,\\ \tilde{L}_{4}&=\sum_{p,q,r}\frac{pqr\,a_{p}a_{q}a_{r}}{J(p+q)+r}\delta_{p-q-r}\\ &=\sum_{0<p<q}\frac{pq(q-p)\,a_{p}a_{q}a_{q-p}}{(J-1)p+(J+1)q}\,,\end{cases} \tag{82}\]
with \(\tilde{L}_{i}=-2\pi^{2}L_{i}\). Note that a non-vanishing third order in the energy exists if and only if modes are coupled.
#### ix.4.1 Coupling two modes near the \(J_{B}\) threshold
In the case of two modes, \(K_{1}=\pi(1+k|a_{k}|^{2})\). For the third order in \(\epsilon^{3}\), the only non-vanishing contributions to \(\mathcal{E}_{3}\), eq.(75), are obtained for the exponents \(k=1\) and \(k=2\). Thus, the two-mode profile is limited to \(\zeta+a_{2}\zeta^{2}\), where \(|a_{2}|\) is assumed to be of order \(1\), greater than \(\epsilon\), and \(K_{1}=\pi(1+2|a_{2}|^{2})\). Another scaling can be found below, in section IX.5. We have already found the second order of the energy \(\mathcal{E}_{2}\), see eqs.(72,74). Assuming \(a_{2}\) real, the results for the associated \(L_{i}\), eq.(81), are:
\[L_{1}=4a_{2}/(3+J);\ L_{2}=a_{2}/J;\ L_{4}=2a_{2}/(3J+1)\,, \tag{83}\]
and \(L_{3}=L_{4}+a_{2}/(1+J)\) which gives \(\mathcal{E}_{3}\)
\[\begin{cases}\mathcal{E}_{3}&=-\pi^{2}a_{2}\mathcal{Q}_{3};\\ \mathcal{Q}_{3}&=\frac{(J-1)^{4}(J+1)\left(J^{2}+1\right)\left(11J^{2}+16J+3 \right)}{4J^{4}(J+3)(3J+1)}\,,\end{cases} \tag{83}\]
and the generic results found in eqs.(77) and (78) apply and give:
\[\epsilon=-\frac{2}{3}\frac{\mathcal{Q}_{2}}{\mathcal{Q}_{3}}\delta J\frac{K_{1}}{\pi^{2}a_{2}}\,;\ \ \mathcal{E}=-\frac{4}{27}\frac{K_{1}^{3}}{\pi^{4}a_{2}^{2}}\frac{\mathcal{Q}_{2}^{3}}{\mathcal{Q}_{3}^{2}}\delta J^{3}\,. \tag{84}\]
We then deduce that the two-mode profile is a minimizer of the elastic energy above the Biot threshold, \(\delta J>0\). Such a solution exists for every finite value of \(a_{2}\). The bifurcation occurs for \(\epsilon a_{2}<0\) and is transcritical [118; 32].
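Numerically, eqs.(73), (74), (83) and (84) give, at \(J=J_{B}\) and for an assumed illustrative choice \(a_{2}=0.5\), \(\delta J=0.1\):

```python
import numpy as np

J = 3.38298                                   # J_B
QB = 3*J**2 - 6*J - 1                         # eq.(73)
Q2 = QB * (J - 1)**2 * (J + 1) / (8 * J**3)   # eq.(74)
Q3 = ((J - 1)**4 * (J + 1) * (J**2 + 1) * (11*J**2 + 16*J + 3)
      / (4 * J**4 * (J + 3) * (3*J + 1)))     # eq.(83)

a2, dJ = 0.5, 0.1                             # assumed mode amplitude and offset
K1 = np.pi * (1 + 2*a2**2)
eps = -(2/3) * (Q2/Q3) * dJ * K1 / (np.pi**2 * a2)
print(Q2, Q3, eps)    # eps ~ -8e-3: a weak, negative selected amplitude
```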
#### ix.4.2 Nonlinear three-mode coupling in the vicinity of the \(J_{B}\) threshold
We now consider the following shape deformation given by the three-mode coupling: \(\Phi(Z)=\zeta+a_{2}\zeta^{2}+a_{3}\zeta^{3}\). For simplicity, we choose real values for all the coefficients \(a_{i}\) and \(K_{1}=\pi(1+2a_{2}^{2}+3a_{3}^{2})\). Similarly, the expansion of the elastic energy up to the third order reads:
\[\mathcal{E}=-\mathcal{Q}_{2}\delta JK_{1}\epsilon^{2}-\pi^{2}a_{2}\mathcal{Q} _{3}(1+a_{3}\mathcal{Q}_{33})\epsilon^{3}\,, \tag{85}\]
where
\[\mathcal{Q}_{33}=\frac{4(3+J)(1+3J)\tilde{\mathcal{Q}}_{33}}{(2+J)(5+J)(1+2J)( 1+5J)(3+16J+11J^{2})}\,,\]
with \(\tilde{\mathcal{Q}}_{33}=10+97J+254J^{2}+196J^{3}+37J^{4}\). The numerical value is \(\mathcal{Q}_{33}=3.8845\) for \(J=J_{B}\). The introduction of \(\zeta^{3}\) does not modify the result of section IX.4.1 unless \(a_{3}\mathcal{Q}_{33}<-1\). The function \(e_{3}\), which enters eq.(78), becomes \(e_{3}=\pi^{2}a_{2}\mathcal{Q}_{3}(1+a_{3}\mathcal{Q}_{33})\) and is shown numerically in density plots in Fig.(8)(a). Again, the minimum non-trivial value of \(\mathcal{E}\) is found for \(\delta J>0\), with no possibility of obtaining a stable solution below the Biot threshold. Due to the complexity of the formula, we give here only the numerical value of the selected amplitude of the profile and the corresponding energy:
\[\epsilon=-\beta_{1}\frac{K_{1}\delta J/\pi}{a_{2}(1+\beta_{2}a_{3})};\,\, \mathcal{E}=-\beta_{3}\frac{(K_{1}\delta J/\pi)^{3}}{a_{2}^{2}(1+\beta_{2}a_{ 3})^{2}}\,, \tag{86}\]
with \(\beta_{1}=0.02575\), \(\beta_{2}=3.88446\), \(\beta_{3}=0.0007271\). As shown in section XV.3, the surface tension creates an additive contribution in \(\epsilon^{4}\) which can change the present result, giving two solutions for \(\epsilon\) instead of one. An exhaustive study is really difficult due to the large number of degrees of freedom, such as \(a_{2}\), \(a_{3}\) and \(\gamma_{0}\), the dimensionless capillary number. However, this number is rather weak, and for the numerical study we choose \(\gamma_{0}=0.1\). The amplitude \(\epsilon\) which minimizes the elastic energy is a solution of a quadratic equation, so there are two solutions in addition to \(\epsilon=0\). A first numerical investigation, for coefficients \(a_{2}=0.1\) and \(0.5\) and \(a_{3}=\pm a_{2}\), is shown in Fig.(4)(b,c) and demonstrates nonlinear modes occurring after or before the Biot threshold. Only stable solutions are considered, i.e. only the continuous lines of Fig.(4)(b,c). From these two examples, one can notice that the \(\epsilon\) values are rather weak, negative and less than \(0.1\) in absolute value. The interface profiles are shown in Fig.(4)(d): at the top, strongly distorted profiles for \(J=3.50\), with \(\epsilon=-0.019\) (\(a_{3}=-a_{2}\), \(a_{2}=0.1\)) and \(\epsilon=-0.0065\) for \(a_{3}=0.1\); below, for \(a_{2}=0.5=-a_{3}\), \(\epsilon=-0.0096\) and \(J=3.58\), and for \(a_{3}=0.5\), \(\epsilon=-0.003560\) and \(J=3.18\). Fig.(4)(d) respects the scale, but the height is magnified by a factor of \(10\). In conclusion, this nonlinear treatment shows that nonlinear modes can occur before the Biot threshold, but only for strong capillary numbers. As the amplitude of the coefficient \(a_{2}\) increases, the mode becomes more and more distorted from the single sinusoidal solution, but the amplitude of the interface remains small due to the small value of \(\epsilon\). To better understand the bifurcation plots, we choose a more appropriate representation for the profile functions in the next section.
### Super and subcritical bifurcations
In the previous paragraph, we assumed that all the coupled harmonics are of the same order of magnitude. Now we construct profile functions where the harmonics slightly perturb the principal mode \(\zeta\), such as \(\Phi(Z)=\zeta-\epsilon A_{2}\zeta^{2}(1+\alpha_{3}\zeta)\), where \(A_{2}\) and \(\alpha_{3}\) are constants of order \(1\). In this case we get:
\[\mathcal{E}=-\mathcal{Q}_{2}\delta J\pi\epsilon^{2}+\pi^{2}A_{2}\mathcal{Q}_{3 }\epsilon^{4}\,. \tag{87}\]
For positive values of \(A_{2}\), we recover the classical supercritical bifurcation, with \(\epsilon\sim\pm\sqrt{\delta J}\) above the Biot threshold. For the opposite sign of \(A_{2}\) and a small \(\alpha_{3}\) of order \(\epsilon\) (which implies a very weak perturbation of the main mode \(\zeta\)), the selected profile becomes \(\Phi(Z)=\zeta+\epsilon B_{2}\zeta^{2}-\epsilon^{2}B_{3}\zeta^{3}\), with \(B_{2}\) and \(B_{3}\) positive, and in this case:
\[\mathcal{E}=-\mathcal{Q}_{2}\delta J\pi\epsilon^{2}-\pi^{2}B_{2}\mathcal{Q}_{3 }\epsilon^{4}+\pi^{2}B_{3}B_{2}\mathcal{Q}_{3}\mathcal{Q}_{33}\epsilon^{6}\,. \tag{88}\]
The extrema of \(\mathcal{E}\) are obtained for \(\epsilon\) values given by:
\[\epsilon(J)=\pm\frac{1}{\sqrt{3B_{3}\mathcal{Q}_{33}}}\left(1\pm\sqrt{1+\frac{3}{\pi}\frac{\mathcal{Q}_{2}}{\mathcal{Q}_{3}}\frac{B_{3}}{B_{2}}\mathcal{Q}_{33}\delta J}\right)^{1/2}\,. \tag{89}\]
and
\[J_{C}=J_{B}-\frac{\pi}{3}\frac{B_{2}}{B_{3}}\frac{\mathcal{Q}_{3}}{\mathcal{Q}_{ 33}\mathcal{Q}_{2}}\,. \tag{90}\]
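The size of the hysteresis loop then follows from eqs.(89) and (90); in the sketch below the coefficients \(B_{2}\), \(B_{3}\) are assumed illustrative values:

```python
import numpy as np

J_B = 3.38298
QB = 3*J_B**2 - 6*J_B - 1
Q2 = QB * (J_B - 1)**2 * (J_B + 1) / (8 * J_B**3)
Q3 = ((J_B - 1)**4 * (J_B + 1) * (J_B**2 + 1) * (11*J_B**2 + 16*J_B + 3)
      / (4 * J_B**4 * (J_B + 3) * (3*J_B + 1)))
Q33 = 3.8845                       # numerical value quoted above at J = J_B

B2, B3 = 0.1, 1.0                  # assumed coefficients of the perturbed mode
J_C = J_B - (np.pi/3) * (B2/B3) * Q3 / (Q33 * Q2)      # eq.(90)
eps_G = np.sqrt(2.0 / (3*B3*Q33))  # jump at J_B on the way up
eps_D = np.sqrt(1.0 / (3*B3*Q33))  # jump at J_C on the way down
print(J_C, eps_G, eps_D)
```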
Fig.(4)(e) and (f) show the evolution of the profile amplitude \(\epsilon\) as the volumetric growth coefficient \(J\) increases and decreases in the vicinity of \(J_{B}\). As \(J\) increases while remaining below \(J_{B}\), the selected value of \(\epsilon\) remains zero (red solid curve), corresponding to purely axial growth, but this solution loses its stability at \(J_{B}\) (red dashed curve). Then the value of \(\epsilon\) makes a positive (or negative) jump \(\epsilon_{G}=\sqrt{2}/\sqrt{3B_{3}\mathcal{Q}_{33}}\) (resp. \(-\epsilon_{G}\)). Then \(\epsilon\) rises slightly above \(J_{B}\) following the
Figure 4: Domain \(\mathcal{P}\) for the integration of the energy density, restricted to one wavelength \(-1/2\leq Y\leq 1/2\), \(X\in[0,+\infty]\). Only contours of interest are labeled, such as \(\mathcal{C}_{i}\). Panel (A) is similar to panel (A) of Fig.(3); only the contour \(\mathcal{C}_{0}\) contributes to the elastic energy, as shown in section IX.1. In (B) and (C), the latter also being panel (B) of Fig.(3), the relevant contours are around each singular point \(\mathcal{S}\) and \(\mathcal{S}_{J}\). In panels (b) and (c), the bifurcation diagram, \(\epsilon\) versus \(J-J_{B}\), for a triple mode coupling with surface tension: \(\gamma_{0}=0.1\). In (b), \(a_{2}=0.1\) and \(a_{3}=-0.1\) (curves in red and magenta) and \(a_{2}=0.1\) and \(a_{3}=0.1\) (curves in brown and green). Dashed curves indicate unstable solutions. In (c), \(a_{2}=0.5\) and \(a_{3}\leq 0\) with the same color code. For \(a_{3}=0.5\), the stable solution appears below the Biot threshold \(J_{B}\), in contrast to the other cases. In (d), interface profiles (multiplied by 10 for both axes); \(J-J_{B}=0.12\) for the upper case, corresponding to the data of panel (b), \(J-J_{B}=-0.2\) for the lower case. In blue \(a_{3}=-a_{2}\), in red \(a_{3}=a_{2}\), with the exact values of \(\epsilon\) chosen in each case. In (e), a subcritical bifurcation diagram for the amplitude \(\epsilon\). Continuous lines indicate locally stable solutions, dashed lines unstable solutions, not observed experimentally. Red arrows indicate the trajectory for increasing values of \(J\), while black arrows indicate the trajectory for decreasing values. Note the complete symmetry between positive and negative values of \(\epsilon\). The hysteresis cycle extends between the two vertical arrows indicating the jumps of the \(\epsilon\) amplitude at \(J_{C}\) (see eq.(89)) and at \(J=J_{B}\). In (f), the elastic energy density for \(J=J_{B}+0.1\) in blue and \(J=J_{B}-0.1\) in red, as a function of \(\epsilon\), with three stable solutions below \(J_{B}\) (three red minima) and only two (in blue) above.
blue trajectory in the direction of the red arrow. If \(J\) decreases from \(J>J_{B}\), \(\epsilon\) decreases along the blue line, which is stable until \(J=J_{C}\), where the blue trajectory loses its stability (blue dashed curve) and the flat pattern \(\epsilon=0\) is restored. At the transition there is also a jump \(\epsilon_{D}=1/\sqrt{3B_{3}\mathcal{Q}_{33}}\). Note that \(\epsilon\) can be either positive or negative; only \(\epsilon>0\) is shown for clarity, but both signs are equivalent (see Fig.(4)(f), which gives the energy minima for two values of \(J-J_{B}\)). To our knowledge, only E. Hohlfeld and L. Mahadevan have found this subcritical bifurcation, by numerical means (finite elements, ABAQUS), while experimentally the hysteresis associated with such a configuration was revealed by J. Yoon, J. Kim and R. C. Hayward in [77]. This scheme nicely represents the hysteresis observed in experiments. Before closing this discussion of nonlinear wrinkling patterns, studied with the classical techniques of bifurcation theory, let us outline a recent analysis performed with group theory methods concerning, first, the case of a compressed inextensible beam resting on a nonlinear foundation [43] and, second, the case of a thick compressed nonlinear sample [20] with types of elastic energy different from the simple one considered here. Focusing on the first case, the very interesting point is that the authors succeed in capturing localized patterns, and one may wonder whether a nonlinear solitonic solution could be established for the spatial modes detected here.
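The hysteresis cycle described above can be traced directly from eqs.(88)-(90). The following minimal Python sketch, with purely illustrative values for \(B_{2}\), \(B_{3}\), \(\mathcal{Q}_{2}\), \(\mathcal{Q}_{3}\) and \(\mathcal{Q}_{33}\) (all set to unity, not derived from the model), computes the nonzero extrema of eq.(89), the fold point of eq.(90) and the two jumps \(\epsilon_{G}\) and \(\epsilon_{D}\):

```python
import numpy as np

# Illustrative coefficients, all assumed positive (not values from the model)
B2, B3, Q2, Q3, Q33 = 1.0, 1.0, 1.0, 1.0, 1.0

def eps_branches(dJ):
    """Nonzero extrema of eq.(88), i.e. eq.(89), as a function of dJ = J - J_B."""
    disc = 1.0 + (3.0/np.pi)*(Q2/Q3)*(B3/B2)*Q33*dJ
    if disc < 0.0:                  # below the fold J_C only epsilon = 0 survives
        return ()
    base = 1.0/(3.0*B3*Q33)
    vals = [base*(1.0 + s*np.sqrt(disc)) for s in (+1.0, -1.0)]
    return tuple(np.sqrt(v) for v in vals if v >= 0.0)

dJ_C = -(np.pi/3.0)*(B2/B3)*Q3/(Q33*Q2)   # fold point, eq.(90): J_C - J_B
eps_G = np.sqrt(2.0/(3.0*B3*Q33))         # jump when J crosses J_B from below
eps_D = np.sqrt(1.0/(3.0*B3*Q33))         # jump when J decreases through J_C

print(f"J_C - J_B = {dJ_C:.4f}, eps_G = {eps_G:.4f}, eps_D = {eps_D:.4f}")
print("branches at J = J_B:", eps_branches(0.0))  # upper branch equals eps_G
```

With unit coefficients the fold sits at \(J_{C}-J_{B}=-\pi/3\), and the two jumps satisfy \(\epsilon_{G}=\sqrt{2}\,\epsilon_{D}\), in agreement with eqs.(89)-(90).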
### Role of surface tension
Surface tension is a weak parameter in processes controlled by elasticity. A typical order of magnitude is given by the dimensionless number \(\gamma_{0}=\gamma/\mu H\), where \(\gamma\) is in Newtons per meter, the shear modulus \(\mu\) is in Pascals and \(H\) is a typical length, in our case the wavelength. The exact value depends on the nature of the elastic sample and possibly of the fluid above it. Measurements based on elasto-capillary waves [134; 135; 115] made with extremely soft materials (\(\mu\sim 100\) Pa) give a value of about \(0.05\) N/m. Recently, the role of surface tension on creases has been considered and, obviously, surface tension plays an enormous role in the vicinity of quasi-singular profiles, as naively explained by the Laplace law of capillarity [136; 137]. For small deformations, well represented by a few harmonics, and for ordinary elastic materials with a shear modulus around \(10^{4}\) Pa, the surface tension may be relevant only in the vicinity of the bifurcation examined in the previous section IX.5. We will first consider the case where the coupling with the first harmonic \(\zeta\) is weak, as in section IX.5. The expansion of the energy density, eq.(88), must now include the capillary terms order by order:
\[\mathcal{E}_{c}=\mathcal{E}_{cs}+\frac{\gamma_{0}}{2}\left(\frac{\pi(J^{2}-1)\epsilon}{2J}\right)^{2}\left(\Bigl(4B_{2}^{2}+B_{2}\frac{(J-1)^{2}\pi}{J}\Bigr)\epsilon^{2}+e_{2c}\epsilon^{4}\right)\,. \tag{91}\]
\(\mathcal{E}_{cs}\) is the capillary energy associated with the main mode \(\zeta\); it is given by eq.(S12) and eq.(S13), while \(e_{2c}\) is given by eq.(S15) in section XV.3. Regarding the sign, the fourth and sixth order terms can be positive or negative, so they can change the nature of the bifurcation, which can go from subcritical to supercritical if \(\gamma_{0}\) is strong enough. One can now examine in more detail the case where the coupling of the \(3\) modes has an equivalent weight, according to section IX.2. In this parameter range, the surface tension becomes a small parameter for the standard range of values of \(\gamma_{0}\) and capillarity plays a critical role at the fourth order. We rescale the free energy and rewrite equation (78) as follows:
\[\mathcal{E}_{t}=-E_{f}\epsilon^{2}\left(\delta J+2\gamma_{0}g_{2}+(e_{3}+2\gamma_{0}g_{3})\epsilon+2\gamma_{0}g_{4}\epsilon^{2}\right)\,,\]
where \(E_{f}\) and \(e_{3}\) have been defined in eq.(78). Here we give only \(g_{2}\):
\[g_{2}=-\frac{\pi J(J+1)\left(1+4a_{2}^{2}+9a_{3}^{2}\right)}{\left(3J^{2}-6J-1 \right)\left(1+2a_{2}^{2}+3a_{3}^{2}\right)}\,. \tag{92}\]
Each coefficient of the capillary energy is a function of \(J\), \(a_{2}\) and \(a_{3}\) and is listed in section XV.3. The order of magnitude of these coefficients as \(a_{2}\) and \(a_{3}\) vary can be found in Fig.(8) in section XV.3. In fact, for normal values of the shear modulus, there is little chance that capillarity will alter the results given by eq.(78). Since \(g_{2}\) is negative, the bifurcation threshold is shifted to higher values by capillarity. This shift depends on the representation of the profile.
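As a rough numerical illustration (all input values assumed for the estimate, not measured): \(\gamma=0.05\) N/m, \(\mu=10^{4}\) Pa and \(H\sim 1\) mm give \(\gamma_{0}\simeq 5\times 10^{-3}\), and the shift of the threshold read off from the rescaled energy above is \(\delta J=-2\gamma_{0}g_{2}\). A minimal sketch using eq.(92) at an illustrative value of \(J\):

```python
import numpy as np

def g2(J, a2, a3):
    """Capillary coefficient of eq.(92)."""
    num = -np.pi*J*(J + 1)*(1 + 4*a2**2 + 9*a3**2)
    den = (3*J**2 - 6*J - 1)*(1 + 2*a2**2 + 3*a3**2)
    return num/den

gamma, mu, H = 0.05, 1.0e4, 1.0e-3    # N/m, Pa, m (assumed values)
gamma0 = gamma/(mu*H)                 # dimensionless surface tension

J = 3.0                               # illustrative growth value, not the threshold
shift = -2.0*gamma0*g2(J, a2=0.5, a3=0.5)
print(f"gamma0 = {gamma0:.1e}, threshold shift delta J = {shift:.2e}")
```

With these numbers \(g_{2}\simeq-8.9\) and the shift is about \(0.09\): positive, so capillarity indeed delays the bifurcation, but it remains modest for ordinary shear moduli.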
Post-buckling creases were studied extensively a decade ago [29; 44; 53; 138]. These studies suggest that creases can appear before the Biot threshold due to a subcritical bifurcation, as shown here in section IX.5. Note that the numerically detected creases in these studies require the introduction of periodic defects. Cao and Hutchinson [138] demonstrate the remarkable sensitivity of wrinkling results to physical imperfections and defects. This is not surprising, since it is a general property of bifurcation theory [33].
The case of the self-contacting interface is much more difficult to handle, since analyticity is not preserved on a line (or a segment) in the plane, so the elasticity equations are no longer valid. If we approximate the two Heaviside distributions that mimic the self-contacting interfaces by an analytic function such as \(\Phi=-b^{2}\sqrt{Z^{2}+a^{2}}\), where \(a\) is a tiny quantity, there is no reason to assume real contact between the two surfaces, which will remain separated by \(2a\). Thus, self-contacting interfaces must be created intentionally, like fractures. They can be nucleated by defects, and they then have a better chance of being observed in thin samples, i.e. in two dimensions rather than three (see Dervaux's thesis [74]). Nevertheless, such triggered singularities remain a very attractive area of study, as shown by the experiment of a deflated cavity localized at a finite distance from the upper boundary [38]. Before the generation of the self-contact, a quasi-singular profile is obtained with the scaling \(x\sim|Y|^{2/3}\), which is similar to our last profile function \(\Phi\) of eq.(63), on the right, but with a different exponent. The curvature at the singularity varies as \(|Y|^{-1/3}\), like \(x^{-1/2}\). This experiment is strongly reminiscent of the equivalent one realized in viscous
flow by Jeong and Moffatt [47] with counter-rotating motors. Although the interface behavior recovers the same exponent at some distance from the singularity, the curvature remains finite and parabolic at the singularity, the only unknown being the radius of curvature at the tip, which is set by the surface tension.
In conclusion, the observation of a bifurcation occurring before the Biot threshold is possible if at least \(3\) harmonic modes are initially coupled in the nonlinear regime. For the quasi-singular profile, the answer depends too much on the mathematical description of the profile. However, here we have presented a way to fully analyze the nature of the bifurcation in the neighborhood of the Biot threshold in order to obtain valuable predictions.
## X How to escape the Biot threshold?
In the previous sections, the existence of creases occurring at or just after the Biot threshold was examined. There is no difficulty in generating such creases as shown above, using the tools of complex analysis. However, it has been suggested by heuristic arguments [29; 139; 44] that singularities inside elastic samples can induce a bifurcation below the threshold \(J_{B}\). Singularities induced by stresses are not forbidden in plane strain or plane stress elasticity, provided that the local elastic energy remains finite even if the energy density does not. In practice in \(2D\), this means that the strains are not more singular than \(R^{1/2}\) and the elastic deformation gradient or the stresses are not more singular than \(R^{-1/2}\), where \(R\) represents the distance to the singular point \(R\to 0\). In linear plane strain elasticity, this is the case for fracture tips and also for edge dislocations. The main difference with the present work is that linear elasticity does not consider the nucleation of such defects, which exist prior to loading, and focuses more on the opening and/or the displacement of the fracture. There are very few theoretical or experimental investigations of fracture nucleation [140; 141; 142; 143; 144; 70]. The hope here is indeed to generate these peculiar structures by volumetric growth or by compression. The main question we have to solve is the following: is it possible to lower the bifurcation threshold by considering singularities inside the sample?
As already mentioned in eq.(60), the solvability condition to observe periodic solutions implies either \(J=J_{B}\) or \(\Re\left[\Phi_{Z}\right]=0\) for \(X=0\), so \(\Phi\) is an even function of \(Z\). Here we avoid a singularity at the interface \(X=0\), which would require a modification of the elastic model with a surface energy [45]. A nonlinear singular solution emerges, but it does not satisfy the simultaneous cancellation of normal and shear stress at \(X=0\) [46]. So we focus on singularities inside the sample. An even function of \(Z\), which can be represented by \(\Phi(Z)=F(Z)+F(-Z)\), automatically exhibits singularities inside the sample if convergence at positive infinity is required: a holomorphic function, other than a constant, cannot converge at both infinities without singularities. The choice is then a periodic function with admissible singularities, eliminating poles and avoiding extended branch cuts as much as possible, since they are always a source of stress.
### Singular Profiles below the Biot threshold
In finite elasticity, such as neo-Hookean elasticity, allowed singularities in plane strain must induce a locally finite elastic energy, which in practice means that \(|F_{Z}|\) cannot be more singular than \(|Z-X_{0}|^{-1/2}\), where the positive constant \(X_{0}\) denotes the central position of the singularity. Larger exponents are allowed since they do not contribute locally to the total elastic energy. Existing branch-cut singularities must remain limited in size. However, singular solutions can locally invalidate the hypotheses of the initial elastic model and may require more complexity in the elastic representation and even in the growth process. As an example, in fast fracture dynamics [145; 146; 147], three possible zones for the crack tip have been identified: first the viscoelastic, then the nonlinear elastic, and finally the traditional linear-elastic zone, which produces the square root singularity [148]. Of course, such a description requires different physical models at different length scales. Whatever the complexity introduced into the modeling, such as multiple invariants of finite elasticity, compressibility, plasticity or strain hardening, and eventually growth variations, it must remain localized in a small domain and must be treated as an inner boundary layer. Let us fix this domain around \(X_{0}\) and choose \(\Phi\) as:
\[\Phi(Z)=F(Z-X_{0})+F(-Z-X_{0})\,, \tag{93}\]
where \(F(\bar{Z})=\overline{F(Z)}\). The function \(F\) is periodic with period one, is real in the sense that its Laurent series has only real coefficients, and converges for \(Z\to\infty\). Calculating the derivative of \(\Phi\) for \(Z=IY\), it is easy to see that \(\Re\left[\Phi_{Z}\right]=0\), so the normal stress \(S_{11}\) vanishes and there is no need to cancel \(\mathcal{Q}(J)\), see Eqs.(58,59,60). For \(F\), two square root singularities are chosen, located at \(X_{0}\pm l_{0}\) and separated by a branch cut. For symmetry reasons, we will fix the branch cut along the \(X\) axis as follows:
\[F_{Z}=\frac{1}{\sqrt{\tanh^{2}(\pi Z)-\tanh^{2}(\pi l_{0})}}-\cosh(\pi l_{0})\,. \tag{94}\]
The last term ensures the cancellation of \(F_{Z}\) when \(Z\to\infty\). \(l_{0}\) is a tiny parameter that specifies the size \(2l_{0}\) of the branch cut. For later use, \(F(Z)\), the primitive of eq.(94), reads:
\[F(Z)=f(Z)-Z\cosh(\pi l_{0})\,, \tag{95}\]
where
\[f(Z)=\sqrt{\frac{\sinh^{2}(\pi Z)-\sinh^{2}(\pi l_{0})}{\tanh^{2}(\pi Z)- \tanh^{2}(\pi l_{0})}}\frac{h(Z)}{\pi\cosh(\pi Z)}\,, \tag{96}\]
and
\[h(Z)=\tanh^{-1}\left(\frac{\sinh(\pi Z)}{\sqrt{\sinh^{2}(\pi Z)-\sinh^{2}(\pi l _{0})}}\right)\,. \tag{97}\]
Several observations must be made at this stage:
* The choice of \(F_{Z}\) is somewhat arbitrary, but takes into account symmetry arguments and is also dictated by its simplicity.
* Less singular functions with the square root replaced by a power law \((w)^{1/2}\to(w)^{a}\), where \(a>1/2\), are also possible a priori.
* As soon as we introduce a singular zone around \(X_{0}\), this automatically produces another singularity around \(X_{0}/J\), which influences the interface at \(X=0\) more strongly. We are thus faced with two boundary layers, which are easier to treat independently, so we take \(J>1\) and \(l_{0}\ll 1\).
* When introducing such profiles, we are faced with a minimal list of parameters such as \(l_{0}\), \(X_{0}\), \(\epsilon\) and \(J\). We hope to find \(J\) below the Biot threshold \(J_{B}\) and to find constraints on these parameters. These parameters must be fixed in a consistent way.
* Finally, such localized singularities are often found in periodic viscous flows where the equivalent of our "blob" is the bubble in the viscous flow [119; 120; 58; 149]. Note that in the case of bubbles, there are several families of solutions that depend on the bubble location and its symmetries.
The contribution of such a function \(F(Z)\) to the interface and to the stress accumulated inside the sample is shown in Fig.(5). In order to show that such a scenario can exist, it is necessary to establish the existence of semi-singular patches where the stresses are concentrated. It will also be necessary to relate the four parameters \(l_{0}\), \(X_{0}\), \(\epsilon\) and \(J\) to other physical processes occurring in the inner boundary layer, which are neglected in the outer zone.
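The two key properties claimed for eqs.(93)-(94), the decay of \(F_{Z}\) at infinity and \(\Re\left[\Phi_{Z}\right]=0\) on the interface \(Z=IY\), can be checked numerically. A minimal sketch, assuming that numpy's principal branches realize the branch cut of eq.(94):

```python
import numpy as np

l0, X0 = 0.01, 0.1                    # parameters in the range of Fig.(5)

def F_Z(Z):
    """Eq.(94); the +0j forces complex square roots."""
    return 1.0/np.sqrt(np.tanh(np.pi*Z)**2 - np.tanh(np.pi*l0)**2 + 0j) \
           - np.cosh(np.pi*l0)

def Phi_Z(Z):
    """Derivative of eq.(93): Phi(Z) = F(Z - X0) + F(-Z - X0)."""
    return F_Z(Z - X0) - F_Z(-Z - X0)

Y = np.linspace(-0.5, 0.5, 1001)
print(abs(F_Z(5.0 + 0j)))                  # tiny: F_Z decays at infinity
print(np.max(np.abs(Phi_Z(1j*Y).real)))    # ~1e-16: S_11 vanishes at X = 0
```

Both outputs vanish to numerical precision as long as \(X_{0}>l_{0}\), confirming that the even combination eq.(93) cancels the normal stress on the free surface without any condition on \(J\).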
### Physical origins of the patches
Several origins can explain the existence of patches, sites of focusing of the elastic stress. Such focusing locally destroys the linear expansion of the elastic deformation in \(\epsilon\), making its validity questionable. At large strains, it can also invalidate the model itself: the choice of the neo-Hookean elastic energy, the incompressibility hypothesis, and the constant volumetric growth. Let us examine each of these possible causes:
* The neo-Hookean model, very convenient for its simplicity, fails to describe a focusing that is either too strong or too localized, and must be corrected by a more sophisticated hyperelasticity model involving nonlinearities in \(I_{1}\) or other invariants such as \(I_{2}\).
* The incompressibility limit is a mathematical limit that is not appropriate for large strains. A more physically relevant model may be a quasi-incompressible approximation with a strong coefficient multiplying the invariant \(I_{3}\) in the elastic energy [44].
* Spatially constant volumetric growth is a naive approximation of a true biological process. In fact, for growing living species, an excess of compressive stress is known to inhibit cell proliferation, a fact reported as the principle of homeostasis [150].
* This phenomenon also occurs in swelling. Everyone knows how to expel water from a wet sponge, simply by applying pressure to it.
The complete solution of the nonlinearities is hopeless, since it is impossible to find a nonlinear solution inside the patch that asymptotically recovers the expansion given by Eqs.(94,96). But we know similar situations in solid mechanics where nonlinearities play an essential role in a localized region around a singular point, as in fracture mechanics or dislocation theory, and are responsible for a constant stress intensity factor. As shown by Barenblatt [151; 152], these nonlinearities, if they remain localized, do not prevent the validity of the linear theory, except in the zones of high predicted stress, where they soften. In the next section, we relax some of the limitations of the model, by adding nonlinearities and compressibility, while keeping the volume growth constant.
### Patches as inner boundary layer
In the first patch, located at \(Z=X_{0}\), a new coordinate \(U\) such that \(Z-X_{0}=\frac{1}{\pi}\tanh^{-1}(\tanh(\pi l_{0})U)\) gives the following expansion of eq.(96) in the limit of small \(l_{0}\):
\[F(Z)\sim\frac{1}{2\pi}\left\{Log\left(\frac{U+\sqrt{U^{2}-1}}{-U+\sqrt{U^{2}- 1}}\right)+\phi_{0}\right\}\,, \tag{98}\]
and \(x\sim JX+\epsilon J(\Re\left[F(Z)\right]+x_{1})\) with \(x_{1}=\Re\left[\phi_{0}+F(-X_{0})\right]+J\tau_{1}\Re\left[F((J-1)X_{0})+F(-(J+1)X_{0})\right]\). This expansion shows that \(F(Z)\) has two singular points \(U=\pm 1\) separated by a branch cut. For \(|Z-X_{0}|>l_{0}\), i.e. \(U>1\) and \(l_{0}<Z-X_{0}<1\), the asymptotic behavior of \(F(Z)\) is logarithmic, which gives for the outer profile:
\[\begin{cases}x&\sim JX+\epsilon\left\{\frac{J}{2\pi}Log\left(\frac{(X-X_{0}) ^{2}+Y^{2}}{l_{0}^{2}}\right)+x_{1}\right\}\,,\\ y&\sim Y-\frac{\epsilon}{\pi}\tan^{-1}\frac{Y}{X-X_{0}}\,.\end{cases} \tag{99}\]
For the second patch \(\mathcal{S_{J}}\), located at \(X_{0}/J\), an analogous expansion for \(Z_{1}\), with \(U_{1}=JU_{X}+IU_{Y}\), gives a similar result, the only difference being the expansion of the coordinates \((x,y)\):
\[\begin{cases}x&\sim JX+\frac{\epsilon J\tau_{1}}{2\pi}\left\{Log\left(\frac{( JX-X_{0})^{2}+Y^{2}}{l_{0}^{2}}\right)+x_{2}\right\}\,,\\ y&\sim Y-\frac{\epsilon\tau_{1}}{\pi}\tan^{-1}\frac{Y}{JX-X_{0}}\,.\end{cases} \tag{100}\]
The fact that we have two independent small parameters (\(\epsilon\) and \(l_{0}\)) suggests a complicated double boundary layer. An example is also given by fracture mechanics, where a separation of length scales is necessary and has been estimated in [148; 153]. Indeed, in soft and highly deformable materials, we cannot neglect nonlinear deformations and the dissipation for fast dynamics, even if Hookean elasticity gives a reasonable picture of cracks. The separation of length scales in [148] includes the inner scale (the crack tip) due to a large amount of dissipation, the intermediate scale due to the nonlinearities of elasticity, and finally the outer scale where linear elasticity is allowed. Again, two different length scales, \(\zeta\) for dissipation and \(l\) for nonlinearities, coexist and have been experimentally verified [148]. In our case, we can disregard dissipation since both growth and gel swelling are slow processes, see Fig.(1)(A8,A9) and [74]. However, as with fractures, additional physical properties are needed in order to supplement the neo-Hookean model with incompressibility, which has an inherent lack of parameters and length scales. In the next section, we extend the neo-Hookean model by incorporating nonlinearities in the elastic energy such as \(I_{1}^{2}\) and \(I_{2}\) and also the third invariant \(I_{3}\), which is responsible for the compressibility.
## XI Theoretical evidence for internal singularities
### New elastic model for large stresses
We start by locally assuming a more complex elastic energy and we focus on the patches located around \(X_{0}\) and \(X_{0}/J\). They are separated by a distance \(X_{0}(J-1)/J\sim 2/3X_{0}\). We assume two boundary layers around each patch: an inner zone of size \(l_{0}\) and an outer zone of size \(l_{p}\) such that \(l_{0}<l_{p}\ll X_{0}\), to avoid overlapping between the two patches. The existence of two boundary layers is required, first, to eliminate the square root singularity and, second, to recover the logarithmic asymptotics before the linear expansion in \(\Phi(Z)\). Besides nonlinearities in the modeling of the hyperelastic energy density, the limit of complete incompressibility can be questionable when the strains become very large. There are a significant number of models in the literature that treat compressibility, see chapters \(6\) and \(8\) of [91]. Due to the large strains localized in the patches, the best approach is the compressible model of R. Ogden [91; 93], which separates the elastic energy density into two parts: \(\Psi_{iso}\) and a volumetric part, also called the bulk part \(\Psi_{vol}\). In practice, however, it is very difficult to find explicit solutions that occur on a scale smaller than \(l_{0}\). So we focus first on the intermediate regime where \(|Z-X_{0}|>l_{0}\) and we restrict ourselves to nonlinearities in \(I_{1}\) and a compressibility penalty treated as a quadratic expansion. For the elastic energy density, either \(\Psi^{(p)}\) (nonlinear model) or \(\Psi^{(mr)}\) (Mooney-Rivlin model), the following expressions are given in terms of geometric invariants corrected by the growth according to eq.(44):
\[\begin{cases}\Psi^{(p)}&=\tilde{I}_{1}+\frac{\omega_{p}}{2^{2p-1}}\tilde{I}_{1}^{2p}+\frac{\kappa}{J}(\tilde{I}_{3}-J)^{2}\\ \Psi^{(mr)}&=\tilde{I}_{1}+\frac{\omega_{mr}}{2}\tilde{I}_{2}+\frac{\kappa}{J}(\tilde{I}_{3}-J)^{2}\,,\end{cases} \tag{101}\]
where \(p\) is a real number greater than \(1\), \(\omega_{p}\) and \(\omega_{mr}\) are small parameters, positive or negative, representing a first correction to the neo-Hookean elasticity, while \(\kappa\) is expected to be a large positive quantity to mimic quasi-incompressibility.
Figure 5: On the left, profiles corresponding to the deformation given by eq.(96). The parameters are \(J=3,\epsilon=0.1\). For the blue profile \(X_{0}=0.01\) and \(l_{0}=0.001\); for the red curve, the same \(X_{0}\) value and \(l_{0}=0.005\); for the brown profile, \(l_{0}=0.001\) and \(X_{0}=0.1\). Note that \(l_{0}\) has little effect on the shape profile, which depends strongly on the distance of \(X_{0}\) from \(X=0\). The difference in height of the averaged surface has no physical meaning. On the right, the stress \(S_{11}\) in logarithmic scale (\(Log(1+S_{11}^{2})\)), multiplied by a scaling factor of \(0.01\) for visualization purposes, with \(l_{0}=0.01\) and \(X_{0}=0.1\). Note the two singular zones around \(X_{0}\) and \(X_{0}/J\). These singularities will merge in the boundary layer. The stress decreases rapidly, so the map is confined to a limited area of the sample between \(0.0<X<0.25\) and \(-0.2<Y<0.2\).
### The intermediate boundary layer analysis
For the first patch, we define the rescaled quantities for the space coordinates in the initial configuration:
\[\hat{X}=(X-X_{0})/l_{p};\quad\hat{Y}=Y/l_{p}\,. \tag{102}\]
A localized patch \(\mathcal{S}\) or \(\mathcal{S}_{J}\) requires \(l_{p}\leq X_{0}\), and to recover the asymptotics of \(F(Z)\), we assume a solution for the deformation \((x_{s},y_{s})\) given by:
\[x_{s}=\frac{\epsilon}{\pi}f_{s}(\hat{R})+JX_{0}\quad\text{and}\quad y_{s} \sim\frac{-\epsilon}{\pi}T\,, \tag{103}\]
where \(\hat{R}=\sqrt{\hat{X}^{2}+\hat{Y}^{2}}\) and \(T=\arctan(\hat{Y}/\hat{X})\). Assuming a possible correspondence with eq.(99), the unknown function \(f_{s}\) must satisfy \(f_{s}(\hat{R})\sim JLog\hat{R}\), for large values of \(\hat{R}\), but must remain finite for small values of \(\hat{R}\), a property which is not verified by eq.(99). This gives the following scaling for the invariants:
\[\tilde{I}_{1}\sim\epsilon^{2}/(\pi l_{p})^{2}\,,\quad\tilde{I}_{1}^{2p}\sim \epsilon^{4p}/(\pi l_{p})^{4p}\,,\quad\tilde{I}_{3}\sim\epsilon^{2}/(\pi l_{p})^{2}. \tag{104}\]
The size of the boundary layer given by \(l_{p}\) must eliminate arbitrary coefficients as much as possible. Note that \(l_{p}\) is a tiny value, unlike \(\kappa\) which can take large values to represent quasi-incompressible material. Define:
\[\begin{cases}\tilde{I}_{1}=\frac{\epsilon^{2}}{\pi^{2}l_{p}^{2}}\hat{I}_{1};\quad \hat{I}_{1}=\frac{1}{\hat{R}^{2}}+f_{s}^{\prime}(\hat{R})^{2},\\ \tilde{I}_{3}-J=-J\hat{K};\quad\hat{K}=1+|\omega_{p}|^{-1/p_{0}}\frac{f_{s}^{ \prime}(\hat{R})}{\hat{R}},\end{cases} \tag{105}\]
where \(p_{0}=2p-1\) and \(l_{p}=(|\epsilon|/\pi)\,|\omega_{p}|^{1/(2p_{0})}J^{-1/2}\); then the elastic energy, for the new elasticity model given by eq.(101), is transformed to:
\[\mathcal{E}_{s}=\frac{\epsilon^{2}}{\pi}\int_{0}^{\infty}\hat{R}d\hat{R}\left( \hat{I}_{1}+\hat{I}_{1}^{2p}+J\kappa\hat{K}^{2}\right). \tag{106}\]
Variation with respect to \(f_{s}\) leads to a second order Euler-Lagrange equation, which can be integrated once without difficulty, and finally we are faced with a first order nonlinear differential equation:
\[\hat{R}f_{s}^{\prime}(\hat{R})\left(1+2p\left(f_{s}^{\prime}(\hat{R})^{2}+ \frac{1}{\hat{R}^{2}}\right)^{p_{0}}\right)+\kappa_{0}\frac{f_{s}^{\prime}( \hat{R})}{\hat{R}}=C_{s}, \tag{107}\]
where \(\kappa_{0}=J\kappa\,|\omega_{p}|^{-1/p_{0}}\) and \(p\geq 1\). \(C_{s}\) is an arbitrary integration constant at this stage. In the quadratic case, \(p=p_{0}=1\), the Euler-Lagrange equation for \(\hat{x}_{s}\) is easily solved, and the second Euler-Lagrange equation for \(\hat{y}\) is automatically satisfied once the relation eq.(103) is imposed. Even if an exact solution can be found, we focus on the two limits of interest, first for \(\hat{R}\to 0\) and then for \(\hat{R}\to\infty\):
\[\text{For}\quad\hat{R}\to 0\quad f_{s}^{\prime}(\hat{R})=\frac{C_{s}\hat{R}^{4p-3}}{2p +\kappa_{0}\hat{R}^{4(p-1)}}\,. \tag{108}\]
Regardless of the value of \(p\geq 1\), \(f_{s}\) is a regular function of \(\hat{R}\), as required for a physical solution. Note that \(\kappa_{0}\) only plays a critical role if \(p=1\), which leads to:
\[\text{For}\quad\hat{R}\to 0\quad f_{s}^{\prime}(\hat{R})=\frac{C_{s}}{2p+ \kappa_{0}}\hat{R}\,. \tag{109}\]
Finally, choosing \(C_{s}=J\) leads to the convenient asymptotics for \(f_{s}\) when \(\hat{R}\to\infty\), whatever the value of \(p\). So the outer boundary layer seems to satisfy the physical requirements, see eq.(99). Since we have introduced two boundary layers, an inner core of size \(l_{0}\) and an intermediate zone of size \(l_{p}>l_{0}\), this allows us to easily perform the adjustment to the asymptotics of \(F(Z)\) given by eq.(98). Another standard way to modify the neo-Hookean model is the Mooney-Rivlin model, which is even simpler:
\[\hat{R}\left(\frac{df_{s}}{d\hat{R}}\right)\left(1+\frac{1}{\hat{R}^{2}}\right) =-\frac{\kappa_{mr}}{\hat{R}}\left(\frac{df_{s}}{d\hat{R}}\right)+C_{s}\,, \tag{110}\]
with \(\kappa_{mr}=J\kappa/\omega_{mr}\) and \(l_{mr}=(\epsilon/\pi)\sqrt{\omega_{mr}/J}\). The solution for \(f_{s}\) is easily found, and in physical units we have:
\[x_{s}=\frac{J\epsilon}{2\pi}Log\left\{\frac{R^{2}}{l_{mr}^{2}}+\left(1+\frac{ \kappa_{mr}}{\omega_{mr}}\right)\right\}\,, \tag{111}\]
with no change for \(y_{s}=-\epsilon T/\pi\).
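Since eq.(107) is algebraic in \(f_{s}^{\prime}\), its solution can be obtained pointwise by root finding. A short sketch with illustrative parameter values (quadratic case \(p=1\)), checking the limit eq.(109) and the outer logarithmic regime \(f_{s}^{\prime}\sim C_{s}/\hat{R}\) obtained for \(C_{s}=J\):

```python
import numpy as np
from scipy.optimize import brentq

p, J, kappa0 = 1.0, 1.5, 10.0         # illustrative values; p = 1: quadratic case
p0 = 2*p - 1
Cs = J                                # selects f_s ~ J Log(R) at infinity

def residual(fp, R):
    """Eq.(107): R f'(1 + 2p (f'^2 + 1/R^2)^p0) + kappa0 f'/R - Cs."""
    return R*fp*(1.0 + 2*p*(fp**2 + 1.0/R**2)**p0) + kappa0*fp/R - Cs

R = np.logspace(-3, 2, 200)
fp = np.array([brentq(residual, 0.0, 1.0e3, args=(r,)) for r in R])

inner, outer = R < 1e-2, R > 30.0
print(np.allclose(fp[inner], Cs*R[inner]/(2*p + kappa0), rtol=1e-2))  # eq.(109)
print(np.allclose(fp[outer], Cs/R[outer], rtol=5e-2))                 # log regime
```

The same pointwise inversion works for the Mooney-Rivlin variant eq.(110), whose solution eq.(111) is anyway explicit.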
However, in the inner region very close to \(X_{0}\), where \(|Z-X_{0}|\leq l_{0}\), the solutions given by Eqs.(108,110) present a singularity of the strain in \(1/R\), due to the choice of \(y_{s}\), which limits their validity in this inner zone.
Note that, for both models, \(x_{s}\) has the same asymptotic limit for \(R>l_{p}\) and is regular for \(R\to 0\), so even if we are not able to find the inner core solution, we can recover the result of eq.(99) and the correct asymptotics of \(\Phi(Z)\).
The same analysis applies to the upper singularity centered around \(X_{0}/J\). For the second patch \(\mathcal{S}_{\mathcal{J}}\), the same strategy is followed, and it is easy to show that the associated deformations \((\hat{x}_{S_{J}},\hat{y}_{S_{J}})\), according to eq.(100), satisfy:
\[\begin{cases}\hat{x}_{s_{J}}=\frac{1}{2\pi}Log\left(\frac{1}{\pi^{2}}+J^{2} \hat{X}^{2}+\hat{Y}^{2}\right)\\ \text{and}\\ \hat{y}_{s_{J}}=-\frac{1}{\pi}\tan^{-1}\frac{\hat{Y}}{J\hat{X}}\,.\end{cases} \tag{112}\]
The matching of the deformations in the second patch is done via \(\epsilon\tau_{1}(Jx_{s_{J}},y_{s_{J}})\), which leads to the same analysis for both patches. So both singularities can be treated in the same way.
In summary, once linearized, the primary incompressible model can have two patches \(\mathcal{S}\) and \(\mathcal{S}_{J}\), each with a singularity inside. The actual patch size is \(l_{p}\). Each patch contains an inner core of size \(l_{0}^{p}\) and an intermediate zone of size \(l_{p}\), corresponding to two boundary layers with a more regular stress distribution. The main information we obtain for the treatment of the intermediate zone is in fact its size \(l_{p}\), given by \(\epsilon\) and the first nonlinear correction of the neo-Hookean energy scaled by the constant \(|\omega_{p}|\).
### The inner core
In the inner core, the strains and stresses are expected to increase again, and perhaps the specimen is strongly modified by nonlinearities, going from stress hardening to plasticity [154]. Such material transformations have been studied in detail in the fracture mechanics literature [147; 148]. Here we try to stay as close as possible to our original model, although we agree that any modification of the material structure is possible and may change some conclusions. Therefore, we keep the compressible hyperelasticity point of view but we reinforce the compressibility analysis of the material with the Ogden model, where the elastic energy is decoupled into a purely volumetric elastic response and an isochoric part [93]. As explained in section III.2 at the very beginning of the paper, this requires a new definition of the strains according to eq.(5).
#### iii.3.1 Rescaling the strains and the invariants
We retain the quadratic compressibility model of the previous paragraph, which is still a function of \(I_{3}\) although other approximations may be more appropriate for \(R\sim l_{0}^{p}\) (see Chapter \(6.5\) of [91]). Within the inner core, we suggest the following description for the current coordinates \((x_{c},y_{c})\):
\[x_{c}=\frac{\epsilon}{\pi}l_{0}^{s}\bar{F}(\rho,\theta);\;y_{c}=-\frac{ \epsilon l_{0}^{q}}{\pi}\bar{G}(\rho,\theta)\,. \tag{113}\]
with \(\rho=R/l_{0}^{p}\) and \(\theta=T/l_{0}^{q}\). We restrict ourselves to a cubic nonlinearity \(I_{1}^{3}\) for reasons of simplicity, but the method can be applied to any kind of dilatational hyperelasticity. We impose that \(p\) and \(q\) are positive and that \(x_{c}\) and \(y_{c}\) are regular for \(\rho\to 0\). For \(\rho\to\infty\), the matched asymptotic analysis requires that \(x_{c}\) and \(y_{c}\) coincide with the behavior of \(x_{s}\) and \(y_{s}\) for \(R\to 0\). For \(y_{c}\), this simply requires \(\bar{G}\to-\theta\). For \(x_{c}\), we first consider \(x_{s}\) from eq.(109) in the neighborhood of \(R\to 0\):
\[x_{s}=\frac{\epsilon J}{\pi}\frac{R^{2}}{6\,l_{3/2}^{2}}\quad\mbox{where}\quad l _{3/2}=\frac{\epsilon}{\pi\sqrt{J}}|\omega_{3/2}|^{1/4}. \tag{114}\]
Thus, at infinity, \(x_{c}\) must behave as \(x_{c}\sim\rho^{2}\) and matching with \(x_{s}\) gives a more precise result:
\[x_{c}=\frac{\epsilon}{\pi}l_{0}^{s-2p}R^{2}\,\Rightarrow\,l_{0}=\left(\frac{6 \,l_{3/2}^{2}}{J}\right)^{1/(2p-s)}. \tag{115}\]
If these conditions are satisfied, the inner coordinates \((x_{c},y_{c})\) can correctly match the corresponding ones \((x_{s},y_{s})\) in the intermediate zone, allowing us to recover eq.(112). However, our analysis involves many degrees of freedom in addition to the unknown material parameters \(\kappa\) and \(\omega_{3/2}\). Most likely, there will be several possibilities and, to limit them, we recapitulate the different constraints. Since the singularity of the strains calculated with \(x_{s}\) and \(y_{s}\) comes from the derivative with respect to \(T\), we impose that the corresponding strains for \(x_{c}\) and \(y_{c}\) dominate. After evaluating the amplitude of the strains, we get:
\[\begin{cases}\frac{\partial x_{c}}{\partial R}&=\frac{\epsilon}{\pi}l_{0}^{s- p}\frac{\partial\bar{F}}{\partial\rho}<<\frac{1}{R}\frac{\partial x_{c}}{ \partial T}\sim\frac{\epsilon}{\pi}l_{0}^{s-p-q}\frac{1}{\rho}\frac{\partial \bar{F}}{\partial\theta},\\ \\ \frac{\partial y_{c}}{\partial R}&=-\frac{\epsilon l_{0}^{q-p}}{\pi}\frac{ \partial\bar{G}}{\partial\rho}<<\frac{1}{R}\frac{\partial y_{c}}{\partial T} \sim-\frac{\epsilon}{\pi\rho}l_{0}^{-p}\frac{\partial\bar{G}}{\partial\theta }\,.\end{cases} \tag{116}\]
Since \(\frac{\partial\bar{F}}{\partial\rho}\), \(\frac{\partial\bar{F}}{\partial\theta}\) and \(\frac{\partial\bar{G}}{\partial\rho}\) are quantities of order one, the exponent \(q\) must be positive. Now we examine \(I_{1}\), reduced to the shear strains:
\[I_{1}\simeq\frac{\epsilon^{2}}{\pi^{2}}l_{0}^{-2p}\frac{1}{\rho^{2}}\left(l_{ 0}^{2(s-q)}\left(\frac{\partial\bar{F}}{\partial\theta}\right)^{2}+\left( \frac{\partial\bar{G}}{\partial\theta}\right)^{2}\right). \tag{117}\]
The two terms in \(I_{1}\) are of different weight. However, only the first one gives the correct asymptotics for the Euler-Lagrange equations, such that \(x_{c}\sim\rho^{2}\). This imposes \(s<q\). We then define \(\hat{I}_{1}\) and \(\hat{I}_{3}\):
\[\begin{cases}I_{1}=\frac{\epsilon^{2}}{\pi^{2}}l_{0}^{2(s-q-p)}\hat{I}_{1}, \quad I_{3}=\frac{\epsilon^{2}}{\pi^{2}}l_{0}^{s-2p}\hat{I}_{3},\\ \hat{I}_{1}=\frac{1}{\rho^{2}}\left(\frac{\partial\bar{F}}{\partial\theta} \right)^{2},\\ \hat{I}_{3}=\frac{1}{\rho^{2}}\left(\frac{\partial\bar{F}}{\partial\rho}\frac{ \partial\bar{G}}{\partial\theta}-\frac{\partial\bar{F}}{\partial\theta}\frac{ \partial\bar{G}}{\partial\rho}\right).\end{cases} \tag{118}\]
#### iii.3.2 The energy density of the inner core
Although families of possible deformations have been published for arbitrary elastic energy densities, very few concern compressible materials [155; 156; 157]. Since we can expect a high compression in the inner core, we modify the bulk energy and adopt the Ogden model [91; 93] of compressible constrained materials, but keep the penalty term as before. There are then two choices, depending on the value of \(\vartheta=|\omega_{3/2}|l_{0}^{2s-4q}/J^{2}\). In fact \(|\omega_{3/2}|\) is a tiny quantity, but \(s<q\), so \(\vartheta\) is arbitrary.
* \(\vartheta\ll 1\), defining \(K_{1}=\frac{\kappa}{J}\frac{\epsilon^{4}}{\pi^{4}}l_{0}^{s-4p+2q}\) \[W_{c}=l_{0}^{s-2q}\left(\frac{\hat{I}_{1}}{\hat{I}_{3}}+K_{1}\hat{I}_{3}^{2} \right). \tag{119}\]
* \(\vartheta\gg 1\), and \(K_{2}=\frac{J\kappa}{|\omega_{3/2}|}\frac{\epsilon^{4}}{\pi^{4}}l_{0}^{6q-4p-s}\) \[W_{c}=\frac{l_{0}^{3s-6q}|\omega_{3/2}|}{J^{2}}\left(\pm\left(\frac{\hat{I}_{1}}{ \hat{I}_{3}}\right)^{3}+K_{2}\hat{I}_{3}^{2}\right). \tag{120}\]
A good way to get very low values of \(l_{0}\) is to choose \(p\simeq s/2\); however, it is sufficient to have \(2p<2+s\). In addition, one must reduce the elastic energy which is confined in the core. This means that \(2q-s\) must be as small as possible. In this context, the elastic energy trapped in the core can be estimated to be around \(l_{0}^{s-2q+2p}\), which means that \(s-2q+2p\) must be positive for the first hypothesis; for the second, a necessary condition is \(3s-6q+2p>0\). It is relatively easy to choose good parameters, such as \(s=0,p>q\) or \(s=0,p>3/2q\) for the second choice. However, these conditions may not be sufficient and must be compared with the elastic energy in the intermediate range; a numerical scan of these constraints is sketched below. Note that the nonlinear eigenvalues \(K_{1}\) and \(K_{2}\) are numbers that are difficult to predict, even as an order of magnitude. Whatever the hypothesis, eq.(119) or eq.(120), the corresponding asymptotics perfectly match the overlap with \(x_{s}\) and \(y_{s}\) for \(\rho\to\infty\); the behavior for \(\rho\to 0\) for eq.(119) is \(\bar{F}(\rho,\theta)\sim\rho\), and the same for \(\bar{G}\). The analysis for the second hypothesis is less obvious. This study proves that the matching is possible and that the deformation remains regular but, due to the degrees of freedom such as \(p\) and \(q\), we do not get any information about \(J\), \(\epsilon\), \(X_{0}\) and \(l_{0}\) as functions of the material parameters \(\kappa\) and \(\omega_{3/2}\). At this stage, it remains to evaluate how the zero order elastic energy is modified by the localized compression zones. In fact, a bifurcation is possible if the elastic energy is reduced.
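A minimal sketch of such a scan, keeping only the constraints quoted above for the first hypothesis (\(q>0\), \(s<q\), \(2p<2+s\), \(s-2q+2p>0\), plus \(2p-s>0\) required by eq.(115)) and an assumed small value of \(l_{3/2}\):

```python
import numpy as np
import itertools

J, l32 = 3.0, 1e-2                    # l_{3/2} assumed small, cf. eq.(114)

def feasible(s, p, q):
    """Constraints of the first hypothesis, as listed in the text."""
    return q > 0 and s < q and 2*p < 2 + s and s - 2*q + 2*p > 0 and 2*p - s > 0

grid = np.round(np.arange(0.1, 1.0, 0.1), 1)
for s, p, q in itertools.product([0.0], grid, grid):   # s = 0 as in the text example
    if feasible(s, p, q):
        l0 = (6.0*l32**2/J)**(1.0/(2*p - s))           # eq.(115)
        if l0 < 1e-3:
            print(f"s={s}, p={p}, q={q} -> l0 = {l0:.1e}")
```

As expected, with \(s=0\) the smaller \(p\) is (with \(p>q\)), the smaller the core size \(l_{0}\), consistent with the recommendation \(p\simeq s/2\).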
### Energy of the patches
The goal now is to evaluate the elastic energy over the entire physical plane (the zero-order and linear expansion in \(\epsilon\)) and the energy due to the patches, including the inner core and the outer ring of both singular patches. As shown before in section IX, the expansion of the elastic field has \(3\) contributions in powers of \(|\epsilon|\), see eq.(69). If there are no singularities inside the physical plane, the linear term \(\mathcal{E}_{1}\) vanishes. If there are singularities, this is not the case and one has to evaluate this term. For this, we use the same method of complex analysis as described in IX.1:
\[\begin{cases}\mathcal{E}_{1}&=\tau_{0}\iint_{\mathcal{P}}dS\,\Re\left[\Phi_{Z}+ J\tau_{1}\Phi_{Z_{1}}\right]\\ &=\frac{\tau_{0}^{2}}{2J^{2}}\Re\left[\frac{1}{2I}\iint_{\mathcal{P}}dZd\bar{ Z}\,\Phi_{Z}\right]\end{cases} \tag{121}\]
Without singularities inside the physical plane, the integration contour, called \(\partial\mathcal{P}\), which can be observed in Fig.(4), first row on the left, corresponds to the outer boundary of the physical strip, and because of the periodicity of \(\Phi\) the integration gives no contribution. This is not the case now, since the calculation requires crossing the cell in the middle along paths that go from the point \(M_{0}\) to the first and second singularities \(\mathcal{S}\) and \(\mathcal{S}_{J}\) and back up along the neighboring path on the right (see Fig.(4), first row). So the additional contribution comes from the two circular contours around the singularities. After simple simplifications, and taking the size of the singularity radius as \(l_{S}\sim l_{1}\) for both patches, we get:
\[\begin{cases}\mathcal{E}_{1}&=\frac{\tau_{0}^{2}}{2J^{2}}\Re\left[\frac{1}{2I}\oint_{\partial\mathcal{P}}dZ\,\Phi(\bar{Z})\right]\\ &=\frac{\tau_{0}^{2}}{4J}\frac{l_{1}}{\pi}\Re\left[\int_{-\pi}^{\pi}dT\,e^{IT}(-IT)\right]=\frac{\tau_{0}^{2}}{2J}l_{1}\,.\end{cases} \tag{122}\]
Thus, the singularities inside the sample give a correction to the elastic energy of order \(\epsilon l_{1}\) which, if \(\epsilon<0\), indicates a possible bifurcation: for negative \(\epsilon\), the singular solution is less energetic than the uniform axial growth. However, one must also evaluate the energy inside the core of the patches, for both singularities. Evaluating the energy inside the inner core of each singular patch leads to:
\[\mathcal{E}_{c}\simeq|\omega_{3/2}|l_{3/2}^{\varpi};\quad\varpi=\frac{2}{2p-s }(3s-6q+2p). \tag{123}\]
In order to compare with the energy of the intermediate zone, \(\varpi\) must be as large as possible. Unfortunately, the value \(s=2p\) is not compatible with our constraints, but \(s=0\), \(q\) small and \(p=1\) leads to \(\varpi\simeq 2\), which is enough for \(\mathcal{E}_{c}\) to be negligible. This quantity has to be compared with \(\epsilon l_{3/2}=\epsilon^{2}|\omega_{3/2}|^{1/4}/\pi\). Also, the smallness relation \(|\omega_{3/2}|^{1/4}>|\omega_{3/2}|\) justifies the fact that the intermediate singular zone dominates. We conclude that the dominant energy density corresponds to the uniform axial growth corrected by:
\[\delta\mathcal{E}\sim\epsilon\frac{(J^{2}-1)^{2}}{2J}|\epsilon||\omega_{3/2}| ^{1/4}\,. \tag{124}\]
Thus, singularities inside the sample lower the instability threshold and are at the origin of a bifurcation. At this stage, there is no restriction on the values of \(J\), except that the distance between the two singularities must be greater than \(2l_{1}\): \(2l_{1}<X_{0}(J-1)/J\), which means that \(\delta J=J-1\) must be larger than \(2Jl_{1}/X_{0}\). This is a necessary condition for our analysis, based on the separation of the two patches; a numerical illustration is given below. Thus, the new bifurcation threshold results from controlled deviations from the neo-Hookean model.
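A short numerical illustration of these orders of magnitude (\(\epsilon\), \(\omega_{3/2}\) and \(X_{0}\) are assumed values here; \(X_{0}\) is actually fixed by the \(J\)-integral in the next section), evaluating the energy correction eq.(124) and the patch-separation condition with \(l_{1}\sim l_{3/2}\):

```python
import numpy as np

eps, omega, J = -0.1, 1e-3, 3.0       # illustrative values; eps < 0 is required
X0 = 0.1                              # assumed here, fixed by the J-integral below
l32 = abs(eps)/(np.pi*np.sqrt(J))*omega**0.25    # eq.(114)

# Energy correction of eq.(124): negative only for eps < 0
dE = eps*(J**2 - 1)**2/(2*J)*abs(eps)*omega**0.25
print(f"l_3/2 = {l32:.2e}, delta_E = {dE:.2e}")

# Separation of the patches: delta J = J - 1 > 2 J l1/X0, with l1 ~ l_3/2
print("patches separated:", J - 1 > 2*J*l32/X0)
```

For these values \(\delta\mathcal{E}\simeq-1.9\times 10^{-2}\) and the separation condition is met comfortably, so the singular solution is indeed energetically favorable.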
## XII Path independent contour integrals
A fancy and easy way to determine unknown parameters in singular elasto-static fields is to use path-independent integrals [158; 159; 160]. They result from Noether's first theorem [161], as recently demonstrated and recalled by J. Rice and collaborators [162]. This theoretical method relates the geometric parameters of the singularities to the boundary conditions imposed on the far-field elasticity. It has been successfully applied to many topics of elasticity [163], but also to other physical fields as soon as they are governed by variational principles. One can think of interfacial potential flows (Darcy or Euler flows [164]) and electrostatic fields [165; 162]. It is widely used in all aspects of solid mechanics, such as fracture [166], dislocations [167; 168; 169], notches [160] and erosion [170]. It is not limited to time-independent formulations [171], nor to linear elasticity, although the finite elasticity singularity must be treated with care [168], especially in the case of non-quadratic formulations. Nonlinear problems, sometimes time-dependent, are often interpreted in terms of internal forces acting on defects present in materials, and path-independent integrals have also been established in these cases [166; 172]. It remains, however, that some applications in nonlinear elasticity have been questioned [173; 174], more precisely for the so-called M integrals. Proofs of the application of this technique are justified in the Appendix section XV.6. Our goal is to discover relationships between \(l_{0}\), \(X_{0}\) and the neo-Hookean parameters with growth such as \(\epsilon\) and \(J\).
### The J-Integral
This approach is not fully nonlinear because we perform an incremental expansion. In addition, our sample is pre-stretched by growth, which destroys the spatial isotropy. Thus, we cannot claim that the J-integral methods are directly applicable. Knowles and Sternberg have demonstrated the validity of the \(J\) integral in fully nonlinear elasticity and also for incremental deformations, but only when the initial state is stress-free, which is a different case. Therefore, it is important to verify that the J-integral remains valid for the model described in section VII.2. This is done in Appendix (XV.6), and we define \(\mathcal{J}\), which is a contour integral, see Fig.(4), panel (B) or (C) above:
\[\mathcal{J}=\oint ds\,(E\,\vec{\mathbf{N}}.\vec{\mathbf{e}}_{x}-N_{k}S_{ik}F_{i1})=0\,. \tag{125}\]
The stress \(S_{ik}\) was introduced in section VII.2, eq.(50) and eq.(53), and the strains are simply given by \(F_{ij}=\partial_{j}x_{i}\). Note that the J-integral is a vector, so \(\mathcal{J}\) is only one component, and not every component gives pertinent information. The contour, shown in Fig.(4), panel (B) on top, first goes from \(M\) to \(M_{0}\), the center of \(\mathcal{C}_{0}\); then it goes down to avoid the two singularities \(\mathcal{S}_{J}\) and \(\mathcal{S}\), climbs back up to \(M_{0}\) to join the point \(M_{1}\), continues along \(M_{1}M_{2}\), then \(M_{2}M_{3}\) and finally \(M_{3}M\). Only the brown paths can contribute; the blue paths cancel each other out for reasons of periodicity and symmetry. Focusing on the contour \(\mathcal{C}_{0}\), which is \(MM_{1}\) at the top of the domain, only the energy density contributes, since both normal stress components vanish, a necessary condition for a free boundary. Decomposing \(\mathcal{J}\) into \(\mathcal{J}^{(0)}+\epsilon\mathcal{J}^{(1)}+\epsilon^{2}\mathcal{J}^{(2)}\), we get for the upper boundary \(\mathcal{C}_{0}\):
\[\mathcal{J}^{(0)}_{\mathcal{C}_{0}}=\frac{(J-1)^{2}}{2};\,\mathcal{J}^{(2)}_{ \mathcal{C}_{0}}=\frac{\tau_{0}^{3}}{8J^{2}}\int_{-1/2}^{1/2}dY(\phi_{Z})^{2} |_{{}_{Z=IY}}\,. \tag{126}\]
and \(\quad\mathcal{J}^{(1)}_{\mathcal{C}_{0}}=0\). The last integral is difficult to evaluate exactly, but in the limit of small \(l_{0}\), it gives:
\[\mathcal{J}^{(2)}_{\mathcal{C}_{0}}\sim-\frac{\tau_{0}^{3}}{2J^{2}}\left(\coth (2\pi X_{0})-1\right). \tag{127}\]
If we now focus on the singularities \(\mathcal{S}_{J}\) and \(\mathcal{S}\), the vertical brown contours have no contribution, so only the singularities \(\mathcal{S}\) and \(\mathcal{S}_{\mathcal{J}}\) play a role. By defining a small radius around each singularity, \(R=((X-X_{0})^{2}+Y^{2})^{1/2}\) with \(l_{0}<R<1\), one can approximate \(\phi_{Z}\) close to \(\mathcal{C}_{\mathcal{S}}\) by:
\[\phi_{Z}\sim\frac{e^{-IT}}{\pi R}-\coth(2\pi X_{0})\,, \tag{128}\]
and in the neighborhood of \(\mathcal{C}_{\mathcal{S}_{\mathcal{J}}}\) by:
\[\phi_{Z}\sim\frac{1}{\pi R(J\cos(T)+I\sin(T))}-\coth(2\pi X_{0})\,, \tag{129}\]
and derive the contributions of these singularities to \(\mathcal{J}\), first for \(\mathcal{S}\):
\[\mathcal{J}^{(1)}_{\mathcal{S}}=2\tau_{0}\quad\mathcal{J}^{(2)}_{\mathcal{S}} \sim 2\tau_{0}\coth(2\pi X_{0})\,, \tag{130}\]
and for \(\mathcal{S}_{J}\):
\[\mathcal{J}^{(1)}_{\mathcal{S}_{\mathcal{J}}}=2J\tau_{0}\tau_{1};\quad J^{(2)} _{\mathcal{S}_{\mathcal{J}}}=-2J^{2}\tau_{0}\tau_{1}^{2}\coth(2\pi X_{0})\,, \tag{131}\]
Finally, after adding the integral contribution \(\mathcal{J}\) at \(\mathcal{C}_{0}\), around the singularities \(\mathcal{S}\) and \(\mathcal{S}_{\mathcal{J}}\), and on the two trajectories between \(M_{0}\) and the singularities and on the two vertical lines
Figure 6: Panel (a), continuous lines: the ratio \(\mathcal{R}_{0}\) between \(X_{0}\) and \(|\epsilon|\) for different values of \(J\); dashed lines: \(d_{j}=\mathcal{R}_{0}(J-1)/J\), the distance (divided by \(|\epsilon|\)) between \(S_{J}\) and \(S\), which must be greater than \(2l_{1}/|\epsilon|=2\sqrt{|\omega_{1}|}\), represented by the dot-dashed curve in black. Panel (b), continuous lines: \(\mathcal{R}_{0}\); dotted and dot-dashed lines: corrected values due to the change of \(J\) at large distances, \(j_{0}\pm 0.1\), with dot-dashed lines for \(+\) and dotted lines for \(-\), see eq.(134). In (c), the quantity \(j_{0}d\) (jump of the growth values between the bottom and the top of the strip times the thickness \(d\) of the sample) as a function of \(J\), required for two sizes of the singularity, \(l_{0}=0.01\) and \(l_{0}=0.001\).
\(M_{1}M_{2}\) and \(M_{3}M\), and after simplifications, we determine the value of \(\mathcal{J}_{P}=\mathcal{J}_{\mathcal{C}_{0}}+\mathcal{J}_{\mathcal{S}}+ \mathcal{J}_{\mathcal{S}_{J}}=\mathcal{J}_{\mathcal{C}_{1}}\), where \(\mathcal{J}_{\mathcal{C}_{1}}\) is restricted to the horizontal lower segment. If at \(+\infty\), the volumetric growth is kept at \(J\), then \(\mathcal{J}_{\mathcal{C}_{1}}=\mathcal{J}_{\mathcal{C}_{0}}^{(0)}\), which eliminates the zero order and results in:
\[\mathcal{J}=-\frac{\epsilon\tau_{0}(J-1)^{2}}{J}\left\{1+\frac{\epsilon(1+J)^{ 2}}{2J}\left(\frac{2}{\tanh(2\pi X_{0})}-1\right)\right\}\,. \tag{132}\]
This evaluation is correct for \(X_{0}>l_{0}\), which results in \(X_{0}\sim-\epsilon(1+J)^{2}/(2\pi J)\). Again, the negative sign of \(\epsilon\) is confirmed. The numerical values of the \(X_{0}\) solution of eq.(132) are shown in Fig.(6), which gives the ratio \(\mathcal{R}_{0}=X_{0}/|\epsilon|\) in panel (a) with continuous lines, while the distance between the two singularities, also divided by \(|\epsilon|\), \(d_{J}=X_{0}(J-1)/J\), is represented by dashed curves. Both sets of curves show a decrease at small \(\epsilon\) and then an increase, so we can deduce that \(X_{0}\sim-\epsilon\). Since \(d_{J}\) must be greater than \(2l_{p}=2|\epsilon||\omega_{p}|^{1/2p}\), a threshold value for \(J\) depending on \(\sqrt{|\omega_{1}|}\) can be proposed: as an example, from Fig.(6), the value \(J=1.1\) is obviously too low. Our analysis assumes that the growth conditions are maintained at infinity and that the sample is infinite, which is not true in real experiments. If the sample has a finite depth \(d\geq 1\), we know that the elastic deformations decay exponentially beyond a distance from the interface of the order of the wavelength; our approach remains valid near the interface, but we must consider a substrate that may alter our estimate of \(\mathcal{J}_{\mathcal{C}_{1}}\). In addition, the growth law may change away from the interface. These two points will be explored below.
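The root of eq.(132) is easily extracted numerically; a minimal sketch comparing it with the small-\(\epsilon\) estimate \(X_{0}\sim-\epsilon(1+J)^{2}/(2\pi J)\) (a root exists only for \(\epsilon<0\), consistent with the sign argument above):

```python
import numpy as np
from scipy.optimize import brentq

def X0_root(eps, J):
    """Zero of the bracket of eq.(132) as a function of X0."""
    g = lambda X0: 1.0 + eps*(1 + J)**2/(2*J)*(2.0/np.tanh(2*np.pi*X0) - 1.0)
    return brentq(g, 1e-6, 10.0)

J = 3.0
for eps in (-0.02, -0.05, -0.1):
    approx = -eps*(1 + J)**2/(2*np.pi*J)
    print(f"eps = {eps}: X0 = {X0_root(eps, J):.4f}, estimate = {approx:.4f}")
```

The agreement improves as \(|\epsilon|\) decreases, and \(X_{0}>l_{0}\) is easily satisfied for the core sizes considered here.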
#### v.1.1 Constant growth and finite size effects
We now assume that our sample has a finite size \(d\) and is attached to a solid substrate. For \(X=d\), the growing material cannot penetrate the substrate but can slide freely on it. The deformation \(\Phi(Z)\), estimated from the top of the layer, decreases exponentially as \(\Phi(Z)\sim-2X_{0}e^{-2\pi Z}\) when \(|Z|\gg 1\). We need to adjust this deformation near the substrate when \(Z\sim d\). Following eq.(57) at a distance \(d\), the profile function can be represented by:
\[\begin{cases}x&=JX-d_{1}+\epsilon_{1}\cos(2\pi Y)\left(e^{-2\pi X}-\tilde{\tau }e^{-2\pi JX}\right),\\ y&=Y-\epsilon_{1}\sin(2\pi Y)\left(e^{-2\pi X}-J\tilde{\tau}e^{-2\pi JX} \right).\end{cases} \tag{133}\]
where \(d_{1}=d(J-1)\), \(\epsilon_{1}=-2X_{0}\epsilon\) and \(\tilde{\tau}=-e^{2\pi d(J-1)}\). \(\epsilon_{1}\) results from the matching with the lower expansion and is of the order of \(\epsilon^{2}\). \(\mathcal{J}_{\mathcal{C}_{1}}\) is easy to compute and reads: \(\mathcal{J}_{\mathcal{C}_{1}}=(J-1)^{2}/2\,\{1-(4\epsilon\pi X_{0}e^{-2\pi d})^{2}(J+1)\}\). Obviously, once \(d\) is of the order of the wavelength or larger, this correction becomes negligible: for \(d=1\), \(e^{-4\pi d}=3.46\times 10^{-6}\); for \(d=2.5\), \(e^{-4\pi d}=2\times 10^{-14}\). Note that a sliding substrate allows an easy estimation of finite size effects. Clamped conditions, as discussed later in section XIII, are much more difficult to fit to our singular deformation mode.
#### v.1.2 Inhomogeneous volumetric growth
If the growth becomes slightly inhomogeneous at large distances, becoming \(\tilde{J}=J+\epsilon j_{0}\) at the bottom of the sample, then estimating \(\mathcal{J}_{\mathcal{C}_{1}}=(1-J)^{2}/2+\epsilon(J-1)j_{0}\) will change the \(X_{0}\) value into:
\[X_{0}\sim-\epsilon\frac{(J-1)^{2}(1+J)^{3}}{2\pi J(1-J-J^{2}+J^{3}+j_{0}J)}\,. \tag{134}\]
This estimate for \(X_{0}\) is given for small \(\epsilon\), see Fig.(6)(b). Increasing the volumetric growth at the bottom (\(j_{0}<0\)) also increases the value of \(X_{0}\). However, such an estimate is valid only for a change in volumetric growth localized at the bottom.
### The M-Integral
Despite debates about the validity of the M-integrals in finite elasticity, let us now consider these integrals, which have the advantage of explicitly introducing the size of the elastic samples. Unlike the \(J\) and \(L\) integrals, proved valid by Knowles and Sternberg, the M-integral technique turns out not to be always applicable for arbitrary energy densities. Nevertheless, when applicable, it remains a very useful tool for demonstrating properties of nonlinear fields, such as creeping closure. As before for the \(J\) integral, it is better to convince ourselves that a path-independent integral \(\mathcal{M}\) can be constructed, and this is realized in the Appendix section XV.6. For our modeling, the definition of \(\mathcal{M}\) follows:
\[\mathcal{M}=\oint ds\,\Bigl[\Bigl(E-\tfrac{(J-1)^{2}}{2}\Bigr)\vec{\mathbf{X}}.\vec{\mathbf{N}}-S_{jk}U_{ji}X_{i}N_{k}-(J+1)\left(U_{X}N_{X}+U_{Y}N_{Y}\right)\Bigr]=0\,. \tag{135}\]
where \(U_{X}=x-JX\) and \(U_{Y}=y-Y\), equation (135) being valid up to \(O(\epsilon^{3})\). As before for \(\mathcal{J}\), \(\mathcal{M}\) results from \(4\) contributions: the horizontal axis \(X=0\), the two patches \(\mathcal{C}_{\mathcal{S}}\) and \(\mathcal{C}_{\mathcal{S}_{J}}\), and the far field. As before, the vertical lines do not contribute. Considering the upper boundary where \(X=0\), only the third term in eq.(135), of order \(|\epsilon|\), contributes. The contribution of the two patches is of order \(\mathcal{J}X_{0}\) for the first two terms of eq.(135); each factor, either \(\mathcal{J}\) or \(X_{0}\), is of order \(|\epsilon|\), so this contribution will be neglected. The last term is of order \(|\epsilon|l_{1}\), which is even smaller than \(|\epsilon|X_{0}\), so the patches make a subdominant contribution. Consequently, the only way to compensate \(\mathcal{M}_{X=0}\) is to close the contour at a finite distance \(d\), as done before, and to assume a slight difference in the volumetric growth. Let us first evaluate \(\mathcal{M}_{X=0}\) for a very small value of \(l_{0}\):
\[\begin{cases}\mathcal{M}_{X=0}&=-(J+1)\int_{-1/2}^{1/2}(x-JX)dY\\ &=-J(J+1)(1+\tau_{1})\epsilon\int_{-1/2}^{1/2}dYI_{f}\,,\end{cases} \tag{136}\]
with \(I_{f}=F(IY-X_{0})+F(-IY-X_{0})\). A careful analysis of the integral of \(I_{f}\) gives:
\[-2X_{0}+2Log(\sinh(\pi l_{0}))/\pi\]
for small values of \(l_{0}\), so the last term dominates, which finally leads to:
\[\mathcal{M}_{X=0}\sim-\frac{1}{J\pi}(J-1)(J+1)^{2}\epsilon Log(\sinh(\pi l_{0}))\,. \tag{137}\]
Now let us evaluate the contribution of the lower boundary at \(X=d\), where \(\tilde{J}=J+\epsilon j_{0}\). The first and last terms contribute, giving \((J-1)d\epsilon j_{0}-(J+1)d\epsilon j_{0}\), and then:
\[\mathcal{M}_{X=d}=-2d\epsilon j_{0}\quad\text{so}\quad l_{0}\sim\frac{1}{\pi} e^{\left(\frac{2\pi dJ_{0}}{(J-1)(J+1)^{2}}\right)}\,. \tag{138}\]
Since \(l_{0}\) is a tiny quantity, the model is validated if \(j_{0}<0\), that is, if the volumetric growth is greater at the bottom than at the top. Noticing that values of \(l_{0}\) of order \(10^{-2}\) or \(10^{-3}\) require both a negative jump value \(j_{0}\) and a finite thickness \(d\) for the sample, a graph representing the product \(j_{0}\times d\), see Fig.(6)(c), shows that this product must be of the order of several units for suitable \(l_{0}\) values, except for very low growth values, \(J-1\sim 0.1\). In other words, thin shells, \(d\sim 1\), are more likely to reach low values of \(l_{0}\).
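Inverting eq.(138) gives the product needed for a prescribed core size, \(j_{0}d=(J-1)(J+1)^{2}Log(\pi l_{0})/(2\pi)\); a two-line sketch reproducing the trend of Fig.(6)(c):

```python
import numpy as np

def j0d(J, l0):
    """Growth jump times thickness required by eq.(138) for a core size l0."""
    return (J - 1)*(J + 1)**2*np.log(np.pi*l0)/(2*np.pi)

for J in (1.5, 2.0, 3.0):
    print(f"J = {J}: j0*d = {j0d(J, 1e-2):.2f} (l0 = 1e-2), "
          f"{j0d(J, 1e-3):.2f} (l0 = 1e-3)")
```

The product is negative, as required by \(j_{0}<0\), and of the order of a few units except at weak growth, in line with Fig.(6)(c).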
In conclusion, our solution is based on the determination of \(\Phi(Z)\), which involves two parameters, \(X_{0}\) and \(l_{0}\). \(X_{0}/J\) gives the position of the singularity of our doublet closest to the top, while \(X_{0}\) indicates the position of the second singularity. \(l_{0}\) is a parameter that determines the outer solutions. Since these two constants are relevant to the outer solution, they are automatically eliminated in the boundary layer treatment, which concerns the inner solutions, as shown by eq.(111), and so are not detected at this level. To capture them, the path-independent integral treatment is appropriate, since this fancy technique introduces the boundary conditions at the level of the whole domain. In fracture theory, the \(J\) integral relates the singular stress at the fracture tip to the dimensions of the specimen, while the M-integral gives access to more complex singular fields and in particular to interfacial fractures. Here, the \(X_{0}\) value is determined by the \(J\) integral, and can be slightly modified by growth inhomogeneity at large distances from the interface. The balance of the \(\mathcal{M}\) integral is dominated by the two horizontal boundaries, above and below, when the volumetric growth varies at both ends. This is due to the fact that the \(\mathcal{M}\) integrals associated with the singularities are subdominant, being of order \(\epsilon^{2}\); they are evaluated in section XV.6. Obviously, introducing growth heterogeneity at the bottom is the best way to fix \(l_{0}\). One may wonder whether our results concerning either \(\mathcal{J}\) or \(\mathcal{M}\) remain valid when we add the growth heterogeneity. In fact, the initial axial growth makes the elastic model anisotropic; therefore, we check the validity of these approaches in section XV.6, where we also add local heterogeneity at the bottom. It can be shown that the method, which is valid at order \(\epsilon^{2}\) for constant volumetric growth, remains valid only at order \(\epsilon\) in the case of heterogeneity. This is also the reason why we assume that the growth jump is localized at the bottom. Finally, at this stage, \(X_{0}\) and \(l_{0}\) are completely determined by \(J\), \(j_{0}\) and \(\epsilon\). Since \(J\) is given by the nonlinearity of the hyperelastic model, the only unknown is \(\epsilon\).
Thus, in order to conveniently treat the two boundary layers, the neo-Hookean model must be modified: a weak compressibility is required, as well as a nonlinearity of the elastic energy in \(I_{1}^{p}\) and, finally, a variation of the growth in the far field. Surprisingly, a case studied by Pandurangi _et al._ [20] for a semi-infinite sample considers an elastic energy model that also includes compressibility and a quadratic energy in \(I_{2}\). However, they introduce a graded material property in the vertical direction, while our choice consists in a graded growth in this direction. We can conclude that, although the two approaches are different, the physics of creases requires going beyond simple incompressible neo-Hookean hyperelasticity.
## XIII Finite-size effects or the buckling of layers
One may wonder whether the degeneracy of the solutions presented above is not due either to the simplicity of the neo-Hookean model or to the fact that the initial geometry is too simple. It is obvious that a length scale is missing in our formulation, since we arbitrarily set the wavelength to unity. Consider the case of a gel layer whose height is given by the parameter \(d\). In order to keep as much as possible the same definitions and equations given in the previous section, we continue to use the wavelength as the unit of length. This situation was also considered by Biot in 1963, but with different boundary conditions [23] and with a different point of view: the limit of small height \(d\) compared to the wavelength, where the analogy with Euler's buckling becomes more explicit [23]. Sinusoidal modes were found, and the dispersion relation giving the wavelength as a function of \(d\) was obtained numerically. In this section, our aim is to revisit his results with different boundary conditions at the substrate and considering \(d>1\). For a single layer, there is no need to change the main equations; we just have to remember that boundary conditions must be applied on both horizontal sides, \(X=0\) and now \(X=d\), and that during growth the layer can be free or glued at the bottom. In the first case, and unlike the second case, symmetry allows us to choose half of the sample with boundary conditions for \(X=d/2\), which is now the bottom. In any case, we will have strain conditions at the bottom and free stress conditions at the top: \(X=0\). These two cases are physically similar and will only differ in numerical values. The two sets of boundary conditions to be applied either at the top or at the bottom are different in nature, and due to the finite extension of the layer, divergent solutions at infinity are now relevant. They were not allowed and were eliminated in the previous section. The non-symmetric case adapts more easily to bilayers and is therefore of more interest. This is especially true when the second layer is stiffer than the upper layer (see [8]), although the case of a soft substrate is more often considered in the literature [175; 176; 177; 178]. Under growth, the description of the new positions \((x,y)\) can follow the same perturbation scheme as before, but must now include \(2\) holomorphic functions: \(\Phi_{e}(Z)\) (an even function of \(Z\)) and \(\Phi_{o}(Z)\) (an odd function of \(Z\)). Then, defining \(\tilde{Z}=Z-d=X+IY-d\) and \(\tilde{Z}_{1}=Z_{1}-Jd=J(X-d)+IY\),
the Euler-Lagrange equation associated with the incompressibility condition gives the following results for the deformations:
\[\begin{cases}x=JX-(J-1)d+J\epsilon\Re\left[F_{1}(\tilde{Z})+F_{2}(\tilde{Z}_{1}) \right],\\ y=Y-\epsilon\Im\left[F_{1}(\tilde{Z})+JF_{2}(\tilde{Z}_{1})\right].\end{cases} \tag{139}\]
where
\[\begin{cases}F_{1}=\Phi_{e}(\tilde{Z})+a_{1}\Phi_{o}(\tilde{Z}) \,,\\ F_{2}=b_{1}\Phi_{e}(\tilde{Z}_{1})+b_{2}\Phi_{o}(\tilde{Z}_{1})\,.\end{cases} \tag{140}\]
With this definition, the incompressibility condition valid everywhere in the sample:
\[\frac{\partial(x-JX)}{\partial X}+J\frac{\partial(y-Y)}{\partial Y}=0\quad\forall\;X\;\text{and}\;Y\,, \tag{141}\]
is automatically satisfied at first order in \(\epsilon\). The boundary conditions of anchoring to the solid substrate impose \(x=d\) and \(y=Y\) for \(X=d\). Since \(\Re\left[\Phi_{o}(IY)\right]\) and \(\Im\left[\Phi_{e}(IY)\right]\) vanish independently of the \(Y\) values, the anchorage then imposes \(b_{1}=-1\) and \(b_{2}=-a_{1}/J\). It remains to apply the free stress conditions involving \(S_{11}\) and \(S_{21}\) on the upper boundary (\(X=0\)), which must be verified for arbitrary \(Y\) values. Let us limit ourselves to the harmonic modes.
### Selection of a unique harmonic mode
Selecting \(\Phi_{e}(Z)=\cosh(2\pi Z)\) and \(\Phi_{o}(Z)=\sinh(2\pi Z)\) to represent the current \((x,y)\) coordinates, the first order incremental correction in \(\epsilon\) becomes \(\delta x=J\cos(2\pi Y)f_{1}(\tilde{X},\tilde{X}_{1})\) and \(\delta y=\sin(2\pi Y)f_{2}(\tilde{X},\tilde{X}_{1})\), where \(\tilde{X}=2\pi(X-d)\) and \(\tilde{X}_{1}=2\pi J(X-d)\) and
\[\begin{cases}f_{1}&=\cosh\tilde{X}-\cosh\tilde{X}_{1}+a_{1}(\sinh\tilde{X}- \frac{\sinh\tilde{X}_{1}}{J})\,,\\ f_{2}&=a_{1}(\cosh\tilde{X}_{1}-\cosh\tilde{X})+J\sinh\tilde{X}_{1}-\sinh \tilde{X}\,.\end{cases}\]
Now, considering the cancellation of the normal stress \(S_{11}\) and of the shear stress \(S_{21}\) at the top of the strip, we derive, as a first step, the value of the coefficient \(a_{1}\):
\[a_{1}=\frac{J\left(2J\sinh(\tilde{d})-(1+J^{2})\sinh(J\tilde{d})\right)}{2J^{2}\cosh(\tilde{d})-(1+J^{2})\cosh(J\tilde{d})}\,, \tag{142}\]
where \(\tilde{d}=2\pi d\). The dispersion relation \(\mathcal{D}\) gives the new threshold \(J_{d}\) as a function of \(\tilde{d}\), the scaled ratio of thickness to wavelength: \(J_{d}\) is the solution of the transcendental equation:
\[\begin{split}\mathcal{D}&=-4J_{d}^{2}(1+J_{d}^{2})+(1+2J_{d}^{2}+5J_{d}^{4})\cosh(\tilde{d})\cosh(J_{d}\tilde{d})\\ &\quad-J_{d}(1+6J_{d}^{2}+J_{d}^{4})\sinh(\tilde{d})\sinh(J_{d}\tilde{d})=0\,.\end{split} \tag{143}\]
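For a given scaled thickness \(\tilde{d}\), the root of eq.(143) is easily extracted numerically; a minimal sketch (assuming SciPy, with an illustrative bracket that may need adjusting for very thin or very thick layers) is:

```python
import numpy as np
from scipy.optimize import brentq

def dispersion(J, d_tilde):
    # Left-hand side of eq.(143), with d_tilde = 2*pi*d.
    return (-4 * J**2 * (1 + J**2)
            + (1 + 2 * J**2 + 5 * J**4) * np.cosh(d_tilde) * np.cosh(J * d_tilde)
            - J * (1 + 6 * J**2 + J**4) * np.sinh(d_tilde) * np.sinh(J * d_tilde))

def threshold(d_tilde, lo=1.5, hi=30.0):
    # J = 1 is always a trivial root, hence the bracket starting above 1;
    # for large d_tilde the hyperbolic functions overflow and hi must be reduced.
    return brentq(lambda J: dispersion(J, d_tilde), lo, hi)

# For a thick layer the threshold should approach the Biot value, about 3.3829.
print(threshold(2 * np.pi * 2.0))
```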
### Nonlinearity and creasing above threshold for growing layer
Why focus on a single harmonic mode? Each harmonic mode corresponds to \(\cosh(2\pi mZ)\) and \(\sinh(2\pi mZ)\) and has a different threshold \(J_{md}\), as opposed to the unique threshold, independent of \(m\), obtained for infinite thickness, see section (IX.3). Thus, we cannot simply combine different modes and evaluate the nonlinearities. In fact, nonlinear profiles do not result from a single mode, and traditional techniques become more difficult. Other asymptotic techniques consist of the coupled-mode approach of classical bifurcation theory, such as the Landau formalism of amplitude equations [32, 33, 118, 34], but it remains complex or even impossible
Figure 7: The selected threshold \(J_{d}\) as a function of \(2\pi d\), where \(d\) is the ratio of thickness to pattern wavelength. For a thin film, the threshold increases dramatically as \(J_{d}\sim 4.93347/(2\pi d)\). When the thickness is of order or greater than \(\Lambda/\pi\), \(J_{d}\) reaches the asymptotic limit \(J_{d}\sim J_{\infty}=J_{b}\simeq 3.3829\), represented by a solid blue line in (a). In the inset, the critical amplitude \(\mathcal{A}\) defined in eq.(145) (in units of the wavelength) for observing a crease with a single harmonic mode; a plateau of order 0.3 is reached very quickly. In (b) a normalized pattern is plotted for \(\mathcal{A}\) at the critical value \(0.3\), with \(d=\Lambda/\pi\) and \(J_{c}\) the threshold value. The amplitude \(x\) is divided by \(\mathcal{A}\) for normalization. A cusp can be observed for \(Y=0\), which repeats periodically for \(Y=n\pi\). In (c), superposition of the profiles for increasing amplitudes: \(\mathcal{A}=0.1\) in blue, \(\mathcal{A}=0.2\) and \(\mathcal{A}=0.3\).
to use them with partial differential equations with boundary conditions. One method, different from the present one, concerns the use of a nonlinear stream function introduced in [19], which treats the incompressibility exactly and transfers the nonlinearities to the elastic energy [7; 8; 52]. This method, valid only in \(2D\) geometry or when the elastic deformations are reduced to a two-dimensional space, has allowed an expansion up to third order to be carried out [19]:
\[\begin{cases}x=JX-(J-1)d+\epsilon J\cos(2\pi y)f_{1}\,,\\ Y=y-\epsilon\sin(2\pi y)f_{2}\,.\end{cases} \tag{144}\]
The parameter \(\epsilon\) is then predicted at third order. Of course, this prediction depends on the size of the layer \(d\). This formulation assumes that all inversion formulas can be achieved, which is obviously not the case when creases occur at the interface. They appear when \(\partial Y/\partial y\) vanishes for \(X=0\), according to the implicit function theorem [32], which gives the critical value \(\mathcal{A}=\epsilon Jf_{1}\) of the deformation at the cusp position (see the inset in Fig.(7), panel (a), and eq.(144)). It can be noticed that the required amplitude saturates to a finite value around 0.3 for values of the sample width \(d\) of the order of the wavelength or more, and that very thin samples very easily exhibit cusps as they grow, although their threshold \(J_{d}\) is obtained for a higher value. In Fig.(7), panel (b), we plot the profile of the cusp function over one period. It is divided by \(\mathcal{A}\) for normalization so that the amplitude varies between \(-1\) and \(1\). Evaluating whether the amplitude \(\mathcal{A}\) can be reached in practice requires an analytical treatment of the nonlinearities (not reported here, see Ref.[8]), which approximates the amplitude of the regular wavy pattern above the bifurcation threshold \(J_{d}\) as
\[x\sim-(J-1)d\pm 0.537\sqrt{J-J_{d}}\cos(2\pi Y) \tag{145}\]
where the coefficient \(0.537\) is predicted analytically, and the zeroth order in eq.(145) indicates the increase in height due to growth, which appears negative due to our choice of coordinate system. The nonlinear treatment assumes a regular wave pattern and does not assume a priori singularities such as cusps. This estimate must be compared with the amplitude \(\mathcal{A}\): a crease becomes possible when the growth parameter satisfies \(\mathcal{A}\sim 0.3=0.537\sqrt{J-J_{d}}\), which implies a distance from the threshold approximately equal to \(0.3\), or \(10\%\) of the Biot threshold. Thus, for a thickness of the order of the pattern wavelength, \(d/\Lambda\sim 1\), creases appear rather quickly at the interface once the threshold value is exceeded. Although nonlinearities can be responsible for creases, they always appear above the Biot threshold and not below.
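Explicitly, equating the nonlinear amplitude of eq.(145) to the critical value \(\mathcal{A}\sim 0.3\) gives
\[0.537\sqrt{J-J_{d}}\simeq 0.3\;\Longrightarrow\;J-J_{d}\simeq\left(\frac{0.3}{0.537}\right)^{2}\simeq 0.31\,,\qquad\frac{0.31}{J_{B}}\simeq\frac{0.31}{3.38}\simeq 9\%\,,\]
consistent with the figures quoted above.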
### Conclusion
We have shown that the sinusoidal Biot profile is not the only solution that occurs in a growing hyperelastic sample. Restricting ourselves to the simplest configuration of a semi-infinite two-dimensional neo-Hookean sample, growing with an isotropic constant growth rate \(J\), we show that other candidates are possible solutions that appear exactly at the same threshold. Among them, quasi-singular solutions with a periodic array of cusps can be found at the Biot threshold. Nonlinearities can be evaluated by classical nonlinear treatments; supercritical bifurcations are rather common, but subcritical bifurcations can also appear slightly below the Biot threshold when several harmonics are coupled. This explains the diversity of experimental observations independent of the elasticity model, since this diversity occurs at the level of the simplest growth formalism. Independent of these patterns, which are always related to the Biot threshold, it has been suggested that patterns can occur well below the Biot threshold if local singularities also occur within the material. We consider this conjecture and show that it can be the source of new families of solutions. In this case, tiny linear singularities occur in pairs near (but not at) the interface, where the compressive elastic field is concentrated. The high level of stress generated requires a slight local modification of the elastic model. Relevant parameters such as the linear extension of the singularities and their positions are recovered by path-independent integrals. In addition, this study proposes a threshold value for the volumetric growth below the Biot threshold, determined by the nonlinearities beyond the neo-Hookean approach.
## XIV Acknowledgements
I would like to thank Linda Cummings, Darren Crowdy and Saleh Tanveer for insightful discussions during the programme "Complex analysis: techniques, applications and computations" (Fall 2019, July 2023) of the Isaac Newton Institute for Mathematical Sciences, Cambridge, which I thank for its support and hospitality. I acknowledge the support of the ANR (Agence Nationale de la Recherche) under the contract MecaTiss (ANR-17-CE30-0007) and the contract EpiMorph (ANR-2018-CE13-0008).
## XV Appendix
### Nonlinear elasticity at first order: stress and energy expansion
This appendix is written with the elastic fields given by a complex function. It is written in the initial frame of coordinates, and the stress corresponds to the Piola stress tensor [93].
\[x = JX+\epsilon J\Re\,\left[\Phi(Z)+\tau_{1}\Phi(Z_{1})\right],\] \[y = Y-\epsilon\Im\left[\Phi(Z)+\tau_{1}J\Phi(Z_{1})\right].\] (S1)
In this work, we restrict to \(\Phi=\bar{\Phi}\), i.e. to functions \(\Phi\) having a real expansion in \(Z\). The Jacobian in \(2D\) gives \(I_{3}=x_{X}\,y_{Y}-x_{Y}\,y_{X}-J\), which must vanish for incompressibility. At linear order it reads:
\[x_{X} = J+\epsilon J\Re\left[\Phi_{Z}+J\tau_{1}\Phi_{Z_{1}}\right],\] \[y_{Y} = 1-\epsilon\Re\left[\Phi_{Z}+J\tau_{1}\Phi_{Z_{1}}\right],\] \[x_{Y} = -\epsilon J\Im\left[\Phi_{Z}+\tau_{1}\Phi_{Z_{1}}\right],\] \[y_{X} = -\epsilon\Im\left[\Phi_{Z}+J^{2}\tau_{1}\Phi_{Z_{1}}\right].\] (S2)
With this choice for the deformation field, we can verify that at linear order in \(\epsilon\), \(I_{3}=0\) as required.
The Euler-Lagrange equations for the neo-Hookean model are given by:
\[\Delta x=x_{XX}+x_{YY}=Q_{X};\ \Delta y=y_{XX}+y_{YY}=JQ_{Y}\,.\] (S3)
Obviously, only the terms depending on \(Z_{1}\) and \(\bar{Z}_{1}\) are concerned by these equations, which implies that the Lagrange parameter \(Q\) is only a function of \(Z_{1}\) and \(\bar{Z}_{1}\). Both equations of eq.(S3) determine \(Q\) as:
\[Q=J+\frac{1}{2}\epsilon\tau_{0}\tau_{1}\left\{\Phi_{Z_{1}}+\bar{\Phi}_{Z_{1}}\right\}\] (S4)
It remains to check the cancellation of the shear stress \(S_{21}\) and the normal stress \(S_{11}\) at the free surface \(X=0\). Using eq. (S2), we first obtain the components of the Piola stress tensor. For the diagonal elements:
\[\begin{cases}S_{11}&=\epsilon\Re\left[2J\Phi_{Z}+(1+J^{2})\tau_{1}\Phi_{Z_{1}} \right],\\ S_{22}&=-\tau_{0}-\epsilon\Re\left[(1+J^{2})\Phi_{Z}+2J^{3}\tau_{1}\Phi_{Z_{1} }\right].\end{cases}\] (S5)
For the off-diagonal elements, we have:
\[S_{21}=y_{X}+Qx_{Y}\quad\text{and}\quad S_{12}=x_{Y}+Qy_{X}\,,\] which read, at linear order in \(\epsilon\): \[S_{21}=-\epsilon\Im\left[(1+J^{2})\Phi_{Z}+2J^{2}\tau_{1}\Phi_{Z_{1}}\right],\quad S_{12}=-\epsilon J\Im\left[2\Phi_{Z}+(1+J^{2})\tau_{1}\Phi_{Z_{1}}\right].\] (S6)
Note that, for \(X=0\), so for \(Z=IY\), the stresses at the top surface become:
\[S_{11}=\frac{1}{2}(2J+(1+J^{2})\tau_{1})(\Phi_{Z}+\bar{\Phi}_{Z})\,,\] (S7)
\[S_{21}=\frac{I}{2}(1+J^{2}+2J^{2}\tau_{1})(\Phi_{Z}-\bar{\Phi}_{Z})\,.\] (S8)
The boundary conditions impose \(S_{11}=S_{21}=0\), but if \(\Phi(Z)\) is an even function of \(Z\), the derivative \(\Phi^{\prime}(Z)\) is odd, so \(S_{11}\) vanishes automatically, and we only need to choose \(\tau_{1}=-(1+J^{2})/(2J^{2})\) to cancel \(S_{21}\). If \(\Phi(Z)\) is an odd function of \(Z\), then \(S_{21}\) cancels automatically and \(S_{11}=0\) imposes \(\tau_{2}=-2J/(1+J^{2})\). Our choice in this manuscript corresponds to the first case. The Biot solution \(\Phi(Z)=e^{-2\pi Z}\) has no parity, so both normal stress components must vanish at the top of the strip \(X=0\), which explains the existence of the threshold \(J=J_{B}\). Regardless of the choice of \(\tau_{1}\) or \(\tau_{2}\), the threshold value \(J_{B}\) is identical.
### Expansion of the elastic and capillary energy density
Expansion of the elastic energy density \(E\) given by eq.(45) at third order in the parameter \(\epsilon\), \(E=E_{0}+\epsilon E_{1}+\epsilon^{2}E_{2}+\epsilon^{3}E_{3}\), reads:
\[\begin{cases}E_{0}=\frac{1}{2}(J-1)^{2}\,,\\ \\ E_{1}=\tau_{0}\Re\left[\Phi_{Z}+J\tau_{1}\Phi_{Z_{1}}\right],\\ \\ E_{2}=\frac{1}{2}(3J^{2}+1)(|\Phi_{Z}|^{2}+J^{2}\tau_{1}^{2}|\Phi_{Z_{1}}|^{2})\\ +\frac{J\tau_{1}}{2}\Re\left[(J+1)^{3}\Phi_{Z}\bar{\Phi}_{Z_{1}}-(J-1)^{3}\Phi_{Z}\Phi_{Z_{1}}\right],\\ \\ E_{3}/(J\tau_{0}\tau_{1})=\Re\left[\Phi_{Z_{1}}\right]\times(|\Phi_{Z}|^{2}+J^{2}\tau_{1}^{2}|\Phi_{Z_{1}}|^{2}\\ +\frac{\tau_{1}}{2}\Re\left[(J+1)^{2}\Phi_{Z}\bar{\Phi}_{Z_{1}}-(J-1)^{2}\Phi_{Z}\Phi_{Z_{1}}\right])\,.\end{cases}\] (S9)
Note that if \(J=1\) (no growth) all the coefficients \(E_{i}\) vanish whatever the function \(\Phi(Z)\).
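As a consistency check, at \(J=1\) the prefactor \(\tau_{0}\) vanishes (as required for \(E_{1}\) and \(E_{3}\) to cancel), while \(\tau_{1}=-1\) and \(Z_{1}=Z\); hence
\[E_{2}\big|_{J=1}=2\left(|\Phi_{Z}|^{2}+|\Phi_{Z}|^{2}\right)-\frac{1}{2}\,\Re\left[8\,\Phi_{Z}\bar{\Phi}_{Z}\right]=4|\Phi_{Z}|^{2}-4|\Phi_{Z}|^{2}=0\,.\]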
The capillary energy density evaluated at \(X=0\), so for \(Z=IY\), can be expanded up to fourth order:
\[\mathcal{E}_{c}=\gamma_{0}\epsilon(E_{1c}+\epsilon E_{2c}+\epsilon^{2}E_{3c}+ \epsilon^{3}E_{4c})\,,\] (S10)
where \(\gamma_{0}\) is the ratio between the capillary energy and the shear modulus multiplied by the wavelength and where:
\[\begin{cases}E_{1c}=\frac{(J-1)^{2}}{2J}\Re\left[\Phi_{Z}\right],\\ \\ E_{2c}=\frac{\tau_{0}^{2}}{8J^{2}}(\Im\left[\Phi_{Z}\right])^{2}\,,\\ \\ E_{3c}=-\frac{\tau_{0}^{2}(J-1)^{2}}{16J^{3}}\Im\left[\Phi_{Z}\right]^{2}\Re \left[\Phi_{Z}\right],\\ \\ E_{4c}=\frac{\tau_{0}^{2}(J-1)^{2}}{128J^{4}}\Im\left[\Phi_{Z}\right]^{2}\times \\ \{(5-6J+5J^{2})\Re\left[\Phi_{Z}\right]^{2}-(J+1)^{2}|\Phi_{Z}|^{2}\}\,.\end{cases}\] (S11)
### Evaluation of the total energy for single, double and triple modes
We consider first a single mode: \(\zeta=e^{-2\pi Z}\). In this case, only \(E_{2c}\), \(E_{4c}\) and \(E_{6c}\) contribute to the capillary energy \(E_{cs}\). After integrating on the top interface and defining the following quantities:
\[\begin{cases}\alpha_{c}=\left(\frac{(J-1)\pi}{2J}\right)^{2};\;E_{s0}=\gamma_{0}\alpha_{c}\epsilon^{2}(1+J)^{2},\\ \\ \mathcal{Q}_{1c}=\frac{1}{4}(J^{2}-14J+1),\\ \\ \mathcal{Q}_{2c}=\frac{1}{4}(1-12J+102J^{2}-12J^{3}+J^{4}).\end{cases}\] (S12)
it reads:
\[\mathcal{E}_{cs}=E_{s0}\left(1+\epsilon^{2}\alpha_{c}\mathcal{Q}_{1c}+\epsilon^ {4}\alpha_{c}^{2}\mathcal{Q}_{2c}\right)\] (S13)
If the base state includes other harmonics such as \(\zeta^{2}\) and \(\zeta^{3}\) with decreasing amplitudes as in section IX.5, \(\phi[Z]=\zeta+\epsilon B_{2}\zeta^{2}-\epsilon^{2}B_{3}\zeta^{3}\), the capillary energy includes \(\mathcal{E}_{cs}\) but also additive terms in \(\epsilon^{4}\) and \(\epsilon^{6}\):
\[\mathcal{E}_{c}=\mathcal{E}_{cs}+E_{s0}\epsilon^{2}\left(e_{1c}+\epsilon^{2}e_{ 2c}\right).\] (S14)
\[\begin{cases}e_{1c}=B_{2}(4B_{2}+\pi(J-1)^{2}/J),\\ e_{2c}=9B_{3}^{2}-\frac{3\alpha_{c}}{\pi}B_{3}(8JB_{2}+\pi(1+J)^{2})\\ +4\alpha_{c}Q_{1c}e_{1c}.\end{cases}\] (S15)
We now consider the 3-mode coupling with \(\Phi(Z)=\zeta+a_{2}\zeta^{2}+a_{3}\zeta^{3}\) of section IX.4.2. The goal is to evaluate the weight of each term in the expansion of the elastic energy, eq.(85), and to compare it with the capillary energy. Writing the total energy, elastic plus capillary, we obtain \(\mathcal{E}_{t}=-E_{f}\epsilon^{2}\tilde{\mathcal{E}}_{t}\) (see eq.(85)), where:
\[\begin{cases}E_{f}=\pi\mathcal{Q}_{2}(1+2a_{2}^{2}+3a_{3}^{2})\,,\\ \tilde{\mathcal{E}}_{t}=\delta J+2\gamma_{0}g_{2}+(e_{3}+2\gamma_{0}g_{3}) \epsilon+2\gamma_{0}g_{4}\epsilon^{2}.\end{cases}\] (S16)
where \(\mathcal{Q}_{2}\) has been given in eq.(74), \(e_{3}\) in eq.(87) and \(e_{4}=0\):
\[e_{3}=\pi^{2}a_{2}\mathcal{Q}_{3}(1+a_{3}\mathcal{Q}_{33})/E_{f};\,g_{i}=-E_{ ic}/E_{f}\] (S17)
Figure 8: In (a) and (b), density plots of the coefficients \(e_{3}\) and \(g_{2}\) entering the expansion of the free energy, eq.(S16), as a function of \(a_{2}\) and \(a_{3}\). The numerical values can be estimated from the legend to the right of each panel. In (c) and (d), density plots of the coefficients \(g_{3}\) and \(g_{4}\) entering the expansion of the free energy, eq.(S16), as a function of \(a_{2}\) and \(a_{3}\).
We first define:
\[f_{g}=-\frac{(-1+J)^{2}(1+J)}{(1+2a_{2}^{2}+3a_{3}^{2})(3J^{2}-6J-1)}\] (S18)
\[\begin{cases}g_{2}=(1+4a_{2}^{2}+9a_{3}^{2})\frac{J\pi f_{g}}{(J-1)^{2}},\\ g_{3}=a_{2}(1+6a_{3})\pi^{2}f_{g},\\ g_{4}=\frac{\pi^{2}f_{g}}{4J}\{3a_{3}(1+J)^{2}+Q_{1c}\times(1+36a_{3}^{2}\\ +81a_{3}^{4}+16a_{2}^{2}(1+a_{2}^{2}+3a_{3}+9a_{3}^{2}))\}\end{cases}\] (S19)
\(E_{f}\) is positive, and all polynomials must be evaluated at \(J=J_{B}\). The order of magnitude of each coefficient \(e_{3},g_{2},g_{3},g_{4}\) is a function of the two parameters \(a_{2}\) and \(a_{3}\), once \(J=J_{B}\) is imposed, and is represented by density plots (see Fig.(8)). More specifically, Fig.(8)(a) gives the order of magnitude of \(e_{3}\), while panel (b) gives the \(g_{2}\) coefficient as a function of the same quantities. \(g_{2}\) is negative and so is responsible for shifting the threshold towards higher values; a similar result was found in [19]. Fig.(8)(c) and Fig.(8)(d) are dedicated to \(g_{3}\) and \(g_{4}\), respectively. Note that all \(g_{i}\) must be multiplied by the capillary number \(\gamma_{0}\), and only \(g_{4}\) appears alone in the asymptotics of eq.(S16).
### Profiles and Cartography of the stress
The profiles and the stress of both a logarithmic singularity and of a square root singularity, localized outside the elastic sample, are displayed in Fig. (8). Panels (e,f,g) are devoted to \(\Phi(Z)=-Log(1+a-e^{-2\pi Z})/Log(a)\) for \(a=0.01,0.1\) and to \(\Phi(Z)=Log(2-e^{-2I\pi Y})\) for \(a=1\). This corresponds to singularities in the physical plane located at a distance \(d_{a}=-0.00158,-0.01516,-0.110318\) outside the elastic sample. Panels (h,i,j) concern the square root singularity: \(\sqrt{e^{-2\pi Z}-1-a}\). An indication of the normal stress \(S_{11}\), which includes the real part of \(\Phi_{Z}\), and of the shear stress \(S_{21}\), which includes the imaginary part, is also shown in panels (f,i) and (g,j) of Fig.(8). The stress contributes significantly at the boundary, but decreases very rapidly. \(S_{11}\) and \(S_{21}\) are given by eq.(S5).
### Weakly nonlinear analysis for quasi-singular profiles
These profiles have been discussed in section (VIII.2). A fully exhaustive study cannot be achieved, and it is not certain that the method of weakly nonlinear analysis will converge for these choices of quasi-singularity. The evaluation of the integrals is not easy in \(X\) and \(Y\) geometry, and we choose a particular mode: \(\phi(Z)=-(Log(1+a-e^{-2\pi Z})-Log(1+a))/Log(a)\). Obviously, an expansion in sinusoidal modes \(\zeta^{p}\) has little chance to converge quickly, so we have to modify our strategy. The algebra is more complex even with formal mathematical software, and we cannot find in handbooks of integrals the result for the previously defined integrals that enter the energy expansion. Thus, the results cannot be obtained completely analytically without approximations. It is also not possible, except for \(L_{3}\), to integrate in the complex plane of \(\zeta\) or the physical plane \(\Omega\), due to the juxtaposition of \(Z\) and \(Z_{1}\). Let us give an estimate. A first integration on \(Y\) induces the evaluation of two kinds of integrals, \(\mathcal{L}_{m,n}\):
\[\mathcal{L}_{m,n}=\int_{0}^{\infty}\frac{e^{-2\pi(m+n)u}du}{\left(\alpha^{2}-e ^{-2\pi mu}\right)\left(\alpha^{2}-e^{-2\pi nu}\right)}\,,\] (S20)
where \(m\) and \(n\) are positive integers and \(\alpha=a+1\). Using Watson's lemma [179] and the fact that the denominator is singular near \(u=0\) for vanishing \(a\) values, once the parameter \(l_{m}\) is introduced with \(l_{m}=a(2+a)/(2m\pi)\), we approximate the denominator by a linear expansion, giving
\[\mathcal{L}_{m,n}\sim\int_{0}^{\infty}\frac{du}{p_{a}}e^{-2\pi(m+n)u}\left( \frac{1}{l_{m}+u}-\frac{1}{l_{n}+u}\right)\,,\] (S21)
with \(p_{a}=2\pi a(2+a)(m-n)\). The two last integrals can be computed explicitly with the exponential integral function \(E_{i}\), and the limit \(l_{m}\to 0\) finally gives the following result:
\[\mathcal{L}_{m,n}\sim\frac{1}{4\pi a}l_{m,n}+O(a)\text{ with }l_{m,n}=\frac{Log(m/n)}{m-n}\,.\] (S22)
For \(m=n\), \(l_{n,n}=1/n\). Then, taking into account only the contribution in \(a^{-1}\) and defining \(m_{a}=-2\pi^{2}/(aLog(a)^{3})\), we get:
\[\begin{cases}L_{1}&\sim m_{a}l_{2,1+J}=\frac{m_{a}}{1-J}Log\frac{2}{1+J}\,,\\ L_{2}&=m_{a}l_{2J,2J}=\frac{m_{a}}{2J}\,,\\ L_{4}&\sim\frac{m_{a}}{2}l_{2J,1+J}=\frac{m_{a}}{2(J-1)}Log\frac{2J}{1+J}\,,\\ L_{3}&\sim\frac{m_{a}}{2}(l_{1+J,1+J}+l_{2J,1+J})\\ &\sim\frac{m_{a}}{2}\left(\frac{1}{1+J}+\frac{Log(2J/(1+J))}{J-1}\right)\,. \end{cases}\] (S23)
A comparison between numerical values of the integrals (see eq.(S20)) and the estimate given by eq.(S22) shows agreement for \(a\sim 0.01\), but smaller values are necessary for eq.(S23). This treatment can always be done numerically; the advantage here is to find the scaling of \(\mathcal{E}_{3}\). For the logarithmic choice, we finally derive:
\[\begin{cases}\Pi_{1}\sim 4J^{2}(J+1)^{2}Log\left(\frac{2}{J+1}\right)\\ +\left(J^{2}+1\right)\left(4J^{2}Log(J)+(J-1)^{2}\right)\,,\end{cases}\] (S24)
and the third order correction gives:
\[\mathcal{E}_{3}=-\frac{(J+1)}{8J^{2}}\Pi_{1}\tau_{1}m_{a}\simeq-\frac{38.38}{aLog (a)^{3}}\,.\] (S25)
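The comparison between eq.(S20) and the estimate of eq.(S22) mentioned above can be reproduced in a few lines; a minimal sketch (assuming SciPy) is:

```python
import numpy as np
from scipy.integrate import quad

def L_numeric(m, n, a):
    # Direct evaluation of eq.(S20); the integrand is sharply peaked near u = 0.
    alpha2 = (1.0 + a) ** 2
    f = lambda u: (np.exp(-2 * np.pi * (m + n) * u)
                   / ((alpha2 - np.exp(-2 * np.pi * m * u))
                      * (alpha2 - np.exp(-2 * np.pi * n * u))))
    val, _ = quad(f, 0.0, np.inf, limit=200)
    return val

def L_estimate(m, n, a):
    # Leading-order term of eq.(S22); l_{n,n} = 1/n in the m = n limit.
    l_mn = 1.0 / n if np.isclose(m, n) else np.log(m / n) / (m - n)
    return l_mn / (4 * np.pi * a)

for a in (0.1, 0.01, 0.001):
    print(a, L_numeric(2, 1, a), L_estimate(2, 1, a))
```

The two values approach each other as \(a\) decreases, as stated above.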
### Path-independent integrals
It may seem pointless to demonstrate that conservation laws remain valid in incremental models of finite elasticity after the pioneering work of Knowles and Sternberg [171], but there are some slight differences with our approach. First, we are concerned with growth, so our initial state is an anisotropic (axially pre-stretched) state. Second, our demonstration is achieved at second order in \(\epsilon\) and not at first order. For this reason, if we can predict the result for the \(J\) integral, it is less obvious for the \(M\) integral, which is not believed to be valid in nonlinear elasticity. The strategy to prove the existence of path-independent integrals is simple: it consists in relating a scalar or a vector to a vector or a tensor which is divergence free. In \(2\) dimensions, we shall demonstrate that this is the case for \(\mathcal{J}\) and \(\mathcal{M}\). Let us begin with \(\mathcal{J}\):
\[\mathcal{J}_{i}=\iint dS\left\{\frac{\partial(E\delta_{ik})}{\partial X_{k}}- \frac{\partial(S_{jk}F_{ji})}{\partial X_{k}}\right\}\,.\] (S26)
where we use the Einstein convention for repeated indices, \(dS=dXdY\), and the brackets represent \(\mathcal{T}_{i}\). The index \(i\) reminds us that the \(J\) integral is indeed a vector, but here only the component along \(\mathbf{\vec{e}}_{X}\) is important. \(\mathbf{F}\) is the gradient of the deformation tensor, whose components are \(F_{ij}=\partial x_{i}/\partial X_{j}\). It can be replaced by \(\mathbf{U}\) so that \(U_{ij}=F_{ij}\) if \(i\neq j\), \(U_{11}=F_{11}-J\) and \(U_{22}=F_{22}-1\). We want to show that \(\mathcal{T}_{i}\) is a divergence. In \(2\) dimensions, we have:
\[\mathcal{T}_{1}=\frac{\partial E}{\partial X}-\frac{\partial(S_{11}U_{11}+S_{ 21}U_{21})}{\partial X}-\frac{\partial(S_{12}U_{11}+S_{22}U_{21})}{\partial Y}\,.\] (S27)
where \(E\) has been given in eq.(45), \(\mathbf{S}\) is the Piola stress tensor already defined in eq.(58). At equilibrium, the divergence of the Piola stress tensor cancels:
\[\frac{\partial S_{11}}{\partial X}+\frac{\partial S_{12}}{\partial Y}=0\quad \frac{\partial S_{21}}{\partial X}+\frac{\partial S_{22}}{\partial Y}=0\,.\] (S28)
Substituting the equilibrium conditions eq.(S28) into eq.(S27), we obtain: \[\begin{cases}\mathcal{T}_{1}&=\frac{\partial E}{\partial X}-\left(S_{11}\frac{\partial^{2}x}{\partial X^{2}}+S_{21}\frac{\partial^{2}y}{\partial X^{2}}\right)\\ \\ &\quad-\left(S_{12}\frac{\partial^{2}x}{\partial X\partial Y}+S_{22}\frac{\partial^{2}y}{\partial X\partial Y}\right).\end{cases}\] (S29)
We recall the Piola stress components, given in section VII.2, Eqs.(50,53), that we evaluate at linear order in \(\epsilon\):
\[\begin{cases}S_{11}=\frac{\partial x}{\partial X}-Q\frac{\partial y}{ \partial Y}&S_{12}=\frac{\partial x}{\partial Y}+Q\frac{\partial y}{\partial X }\,,\\ S_{21}=\frac{\partial y}{\partial X}+Q\frac{\partial x}{\partial Y}&S_{22}= \frac{\partial y}{\partial Y}-Q\frac{\partial x}{\partial X}\,.\end{cases}\] (S30)
From eq. (S29) it is easy to show that all terms of the neo-Hookean part are eliminated by the stress contribution of the same equation, and that terms proportional to \(Q\) also cancel each other, a relation which is always true even without expansion in \(\epsilon\). So \(\mathcal{T}_{1}\) vanishes, and the same result is obtained for \(\mathcal{T}_{2}\). In our case, \(\mathcal{J}_{1}\) and \(\mathcal{J}_{2}\) vanish rigorously on a closed contour, and so also up to second order. Although \(\mathcal{J}_{2}\) vanishes identically in our geometry because of the chosen contour, we write its generic form, which will be useful to establish the \(\mathcal{M}\) integral:
\[\begin{cases}\mathcal{T}_{2}=\frac{\partial E}{\partial Y}\\ -\frac{\partial(S_{11}U_{12}+S_{21}U_{22})}{\partial X}-\frac{\partial(S_{12} U_{12}+S_{22}U_{22})}{\partial Y}\,.\end{cases}\] (S31)
Let us now consider \(\mathcal{M}=\iint dS\,N\), where
\[N=\frac{\partial(\delta E\,X_{i})}{\partial X_{i}}-\frac{\partial\left(S_{jk}U_{ji}X_{i}\right)}{\partial X_{k}}-\frac{\tau_{0}}{J}\frac{\partial U_{X}}{\partial X}\,.\] (S32)
where \(\delta E=E-(J-1)^{2}/2\) and \(U_{i}\) is the displacement: \(U_{1}=x-JX\) and \(U_{2}=y-Y\). The last term can be replaced by \(-(1+J)Div(U)=-(1+J)(\partial_{X}U_{X}+\partial_{Y}U_{Y})\). This result, which is easy to demonstrate with Mathematica software, turns out to be less obvious to show analytically and differs from the result obtained by Knowles and Sternberg for incremental elasticity. The reason is due to growth, since \(J^{2}-1\) is different from zero in our case. Nevertheless, our definition of \(M\) satisfies the criterion for defining a path-independent integral similar to the classical \(M\) integral of linear elasticity, up to order \(\epsilon^{2}\). After some algebra with this definition, which is slightly different from the one given by [171], we can construct a path-independent integral, valid up to second order for a pre-stretched sample. Compared to eq.(3.36) of [171], the only difference comes from the last term of the previous equation. Transforming \(\mathcal{M}\) into a closed contour integral, \(\mathcal{M}=\oint ds\,m=0\), with
\[\begin{cases}m=\left(E-\frac{(J-1)^{2}}{2}\right)\mathbf{\vec{X}}.\mathbf{ \vec{N}}-S_{jk}U_{ji}X_{i}.N_{k}\\ -(J+1)\left\{(x-JX)\cdot N_{X}+(y-Y)\cdot N_{Y}\right\}\,.\end{cases}\] (S33)
This result is true up to \(O(\epsilon^{3})\). Then the horizontal boundary at the top contributes at order \(\epsilon\). We have already discussed the case of two boundaries in the main text, section (XII.2). For completeness, we will also consider the patches here, at least to confirm our approach. Considering now the \(2\) patches where the function \(F\) or \(\Phi\) has singularities, at \(X=X_{0}\) and \(X=X_{0}/J\), we separate \(\mathcal{M}_{\mathcal{J}_{S}}\) into three contributions, \(\mathcal{M}^{(1)}+\mathcal{M}^{(2)}+\mathcal{M}^{(3)}\), evaluated for the two patches one after the other. Around the patch located at \(X=X_{0}\), the contour being a circle with radius \(R\) greater than \(l_{0}\) but less than \(X_{0}\), we define
\[\begin{cases}m_{1}&=E-\frac{(J-1)^{2}}{2}-S_{11}U_{11}-S_{21}U_{21},\\ m_{2}&=S_{12}U_{11}+S_{22}U_{21},\\ m_{3}&=E-\frac{(J-1)^{2}}{2}-S_{12}U_{12}-S_{22}U_{22},\\ m_{4}&=S_{11}U_{12}+S_{21}U_{22}.\end{cases}\] (S34)
Note that each \(m_{i}\) is of order \(\epsilon^{2}\). The first contribution reads:
\[\mathcal{M}^{(1)}_{\mathcal{J}_{S}}=R\int_{-\pi}^{\pi}dT(X_{0}+R\cos T)\left\{\cos T\,m_{1}-\sin T\,m_{2}\right\}\,.\]
This relation can eventually be simplified to give:
\[\mathcal{M}_{\mathcal{J}_{S}}^{(1)}=X_{0}\mathcal{J}_{\mathcal{S}}+R^{2}\int_{- \pi}^{\pi}dT\cos T(\cos Tm_{1}-\sin Tm_{2})\,.\]
Defining in the same way:
\[\mathcal{M}_{\mathcal{J}_{S}}^{(2)}=R^{2}\int_{-\pi}^{\pi}dT\sin T\left\{\sin Tm _{3}-\cos Tm_{4}\right\},\]
and finally:
\[\mathcal{M}^{(3)}_{\mathcal{J}_{S}}=\tau_{0}R\int_{-\pi}^{\pi}dT\sin T(y-Y)\,.\]
So the leading order for \(\mathcal{M}_{\mathcal{J}_{S}}\) is \(X_{0}\mathcal{J}_{S}\), i.e. of order \(\epsilon^{2}\). For the second patch, which is around \(X=X_{0}/J\), the difference comes from the fact that \(X_{0}\) has to be changed to \(X_{0}/J\), and of course each integral is different due to the local behavior of the function \(\Phi\) in each patch, given by Eqs.(128,129). From this expansion, we can deduce that \(\mathcal{M}^{(3)}_{S}\) and \(\mathcal{M}^{(3)}_{S,\mathcal{J}}\) are negligible for \(R\to 0\) compared to the other contributions. It is also clear that \(X_{0}\mathcal{J}_{S}\sim X_{0}\epsilon\sim\epsilon^{2}\), as are the integrals in \(R^{2}\). We then conclude that only the upper and lower boundaries contribute to the \(\mathcal{M}\) integral, and that the inner singularities contribute only at order \(\epsilon^{2}\).
|
2309.15674 | Speech collage: code-switched audio generation by collaging monolingual
corpora | Designing effective automatic speech recognition (ASR) systems for
Code-Switching (CS) often depends on the availability of the transcribed CS
resources. To address data scarcity, this paper introduces Speech Collage, a
method that synthesizes CS data from monolingual corpora by splicing audio
segments. We further improve the smoothness quality of audio generation using
an overlap-add approach. We investigate the impact of generated data on speech
recognition in two scenarios: using in-domain CS text and a zero-shot approach
with synthesized CS text. Empirical results highlight up to 34.4% and 16.2%
relative reductions in Mixed-Error Rate and Word-Error Rate for in-domain and
zero-shot scenarios, respectively. Lastly, we demonstrate that CS augmentation
bolsters the model's code-switching inclination and reduces its monolingual
bias. | Amir Hussein, Dorsa Zeinali, Ondřej Klejch, Matthew Wiesner, Brian Yan, Shammur Chowdhury, Ahmed Ali, Shinji Watanabe, Sanjeev Khudanpur | 2023-09-27T14:17:53Z | http://arxiv.org/abs/2309.15674v1 | # Speech Collage: Code-Switched Audio Generation by Collaging Monolingual Corpora
###### Abstract
Designing effective automatic speech recognition (ASR) systems for Code-Switching (CS) often depends on the availability of the transcribed CS resources. To address data scarcity, this paper introduces _Speech Collage_, a method that synthesizes CS data from monolingual corpora by splicing audio segments. We further improve the smoothness quality of audio generation using an overlap-add approach. We investigate the impact of generated data on speech recognition in two scenarios: using in-domain CS text and a zero-shot approach with synthesized CS text. Empirical results highlight up to 34.4% and 16.2% relative reductions in Mixed-Error Rate and Word-Error Rate for in-domain and zero-shot scenarios, respectively. Lastly, we demonstrate that CS augmentation bolsters the model's code-switching inclination and reduces its monolingual bias.
Amir Hussein \({}^{\dagger 1}\), Dorsa Zeinali \({}^{\dagger 2}\), Ondrej Klejch\({}^{3}\), Matthew Wiesner\({}^{1}\), Brian Yan\({}^{4}\),
Shammur Chowdhury\({}^{5}\), Ahmed Ali\({}^{5}\), Shinji Watanabe \({}^{4}\), Sanjeev Khudanpur\({}^{1}\)\({}^{1}\)Johns Hopkins University, USA, \({}^{2}\)Northeastern University, USA, \({}^{3}\) University of Edinburgh, UK,
\({}^{4}\)Carnegie Mellon University, USA, \({}^{5}\) Qatar Computing Research Institute, Doha
Code-switching, ASR, data augmentation, end-to-end, zero-shot learning
## 1 Introduction
In multilingual societies, code-switching (CS) is integral to communication, enabling clearer expression and reflecting cultural nuances [1, 2]. While CS is prevalent in daily conversations, it is underrepresented in transcribed datasets. This linguistic phenomenon, where speakers interweave languages within a conversation or utterance, poses challenges for voice technologies like automatic speech recognition (ASR). Given the abundance of monolingual data and the scarcity of labeled CS speech, there is a pressing need to harness monolingual resources for CS applications. The prime challenge lies in developing robust ASR systems for CS in zero-shot settings, where no CS training data is available.
Several approaches have been proposed to build CS ASR directly from monolingual data by utilizing multilingual training [3, 4, 5, 6, 7, 8]. Further studies advocate for the joint modeling of CS and monolingual ASR, effectively breaking down bilingual tasks into monolingual components [9, 10, 11]. A prominent issue with monolingual training is the model's monolingual bias, which impedes seamless language switching [12]. To address this issue, several data augmentation strategies have been proposed, including textual data augmentation, text-to-speech synthesis, and concatenation-based speech generation. In [13], the authors proposed a methodology to generate code-switching text from monolingual text to improve ASR performance with language model rescoring. In [8, 14], researchers propose merging monolingual utterances to mimic code-switching. However, this strategy tends to primarily capture inter-sentential switches, often sidelining the nuances of intra-sentential CS. On another front, text-to-speech (TTS) based synthetic audio has gained traction for CS data generation [15, 16, 17, 18, 19, 20, 21]. Despite its potential, TTS-based augmentation suffers from limited speaker variability compared to real data. Consequently, there is a growing interest in using audio segment splicing as augmentation to cover more speaker variations and acoustic environments [22, 23]. However, in previously proposed splicing, speech segments and their corresponding words are randomly selected, and the potential of the splicing method for code-switching remains unexplored.
In this paper, we introduce _Speech Collage_1, a data augmentation technique that constructs synthetic code-switched audio from monolingual data. Our method is inspired by traditional concatenation-based speech synthesis techniques [24, 25]. We demonstrate the efficacy of _Speech Collage_ in two scenarios: a) In-domain CS text, where target-domain CS text is leveraged, and b) Zero-shot CS, where synthesized CS text is used. Our study covers two language pairs: Mandarin-English and Arabic-English. Experimental results show the substantial improvements _Speech Collage_ brings to code-switching ASR in both scenarios. Our contributions include: (i) a novel speaker-agnostic CS data augmentation derived from monolingual resources, (ii) further improving ASR performance with enhanced audio quality in the generated data, and (iii) a zero-shot learning framework tailored for CS. As an additional contribution, we conduct an ablation study to assess the significance of each component on the final performance. We also perform a modified Code-Mixed Index (CMI) analysis to identify where the primary gains are achieved through our augmentation method.
Footnote 1: Visit our repository for audio samples and implementation [https://github.com/JSALT2022CodesSwitchingASR/generating-code-switched-audio](https://github.com/JSALT2022CodesSwitchingASR/generating-code-switched-audio)
## 2 Speech Collage
We propose a framework designed to splice speech units extracted from monolingual corpora. These units are based on code-switched text, either real or synthesized, as depicted in Figure 1. For the merging process, we select word units for English and Arabic, and characters for Mandarin. While smaller units, such as phones, offer greater adaptability, they tend to degrade audio quality [26]. The data constructed from segment splicing encompasses variations from multiple speakers and diverse acoustic environments. We first obtain the unit-to-audio alignments from the monolingual data by training a standard Hidden Markov Model-Gaussian Mixture Model
(HMM-GMM) 2 using the Kaldi ASR toolkit [27]. Utilizing these alignments, in conjunction with the CS text and monolingual audio, our Speech Collage framework generates the CS audio dataset. In cases where the training data possesses multiple segments for a single unit, a segment is selected at random. The generated audio quality is further enhanced using the overlap-add technique, energy normalization, and n-gram matching, as detailed below. The audio enhancement and segment splicing were implemented using the Lhotse toolkit [28].
Footnote 2: [https://github.com/kaldi-asr/kaldi/tree/master/egs/aishell/s5](https://github.com/kaldi-asr/kaldi/tree/master/egs/aishell/s5)
[https://github.com/kaldi-asr/kaldi/tree/master/egs/mdp2_arabic/s5](https://github.com/kaldi-asr/kaldi/tree/master/egs/mdp2_arabic/s5)
### Overlap-add
To enhance the quality of the generated CS audio, we employ overlap-add with a Hamming window to mitigate discontinuity effects resulting from spliced units. To ensure that each unit is fully captured, we extend the unit segments by \(0.05\) seconds at the start and end of the segment. This extension provides an extra \(0.05\) seconds, which is utilized as the overlap in the overlap-add process.
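A minimal sketch of this cross-fading step (assuming 1-D NumPy waveforms whose boundaries have already been extended by the extra \(0.05\) s; this is not the actual Lhotse-based implementation) is:

```python
import numpy as np

def overlap_add_splice(segments, sr=16000, overlap_s=0.05):
    # Concatenate unit segments with a Hamming cross-fade of overlap_s seconds;
    # every segment is assumed to be at least one overlap long.
    n = int(overlap_s * sr)
    win = np.hamming(2 * n)
    fade_out, fade_in = win[n:], win[:n]  # decaying / rising half-windows
    out = segments[0].astype(np.float64)
    for seg in segments[1:]:
        seg = seg.astype(np.float64)
        out[-n:] = out[-n:] * fade_out + seg[:n] * fade_in
        out = np.concatenate([out, seg[n:]])
    return out
```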
### Energy normalization
Additionally, we normalize the synthesized utterance by the average energy of the unit segments to remove artifacts introduced by energy variations between segments. For a speech sequence \(X\) of length \(T\), \(X=\{x_{t}\in\mathbb{R}|t=1,\cdots,T\}\), the energy-normalized sequence is calculated as follows:
\[X^{\prime}=\left\{\frac{x_{t}}{\sqrt{\frac{1}{T}\sum_{t}x_{t}^{2}}}|t=1, \cdots,T\right\} \tag{1}\]
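In code, Eq. (1) amounts to dividing by the root-mean-square amplitude; a sketch:

```python
import numpy as np

def normalize_energy(x, eps=1e-10):
    # Eq. (1): scale the spliced utterance to unit average energy (RMS = 1);
    # eps guards against silent inputs (an implementation choice, not part of Eq. (1)).
    return x / max(np.sqrt(np.mean(x ** 2)), eps)
```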
### N-gram units
To further enhance the quality of the generated CS, we explore splicing consecutive units (n-grams), in line with the selection of longer units in concatenative speech synthesis [29]. Given a CS sentence, our approach starts by matching the largest consecutive unit from the monolingual alignments. If a specific n-gram is unavailable, the algorithm backs off to a smaller unit. It is worth noting that in this study we only experimented with unigrams and bigrams. A detailed description of the n-gram Speech Collage implementation is given in Algorithm 1. Using the alignments from monolingual data and the maximum n-gram size, SetupSupervisions(\(\cdot\)) creates a collection \(\mathcal{D}\) of audio segments corresponding to each n-gram unit. Consecutive n-gram units are matched from alignments, starting with \(n\) and progressing to unigrams. If an n-gram is absent, the algorithm backs off to an (\(n-1\))-gram unit. In GenerateCollage(\(\cdot\)), the function getConsecUnits(\(\cdot\)) returns all consecutive (\(1:n\)) units. Each n-gram unit is randomly drawn from its respective collection using SampleUnit(\(\cdot\)). These segments are appended to the current spliced utterance with overlapAdd(\(\cdot\)), described in §2.1, and the resulting combined utterance undergoes energy normalization, NormalizeEnergy(\(\cdot\)), from Eq. (1).
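A compact sketch of this greedy longest-match with back-off (the `unit_db` structure and names are illustrative, not the paper's actual implementation) is:

```python
import random

def generate_collage_segments(cs_tokens, unit_db, max_n=2):
    # cs_tokens: unit sequence of a CS sentence; unit_db: dict mapping an
    # n-gram tuple to a list of candidate audio segments cut from the
    # monolingual alignments (cf. SetupSupervisions).
    segments, i = [], 0
    while i < len(cs_tokens):
        for n in range(max_n, 0, -1):  # back off from n-gram to unigram
            key = tuple(cs_tokens[i:i + n])
            if key in unit_db:
                segments.append(random.choice(unit_db[key]))  # SampleUnit
                i += n
                break
        else:
            raise KeyError(f"no audio segment for unit {cs_tokens[i]!r}")
    # The segments are then joined with overlap-add and energy-normalized.
    return segments
```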
Figure 1: High level illustration of the proposed Speech Collage CS generation approach.
### Zero-shot CS framework
In this case study we focus on generating Arabic-English code-switching (CS) data, operating under the assumption that no Arabic-English CS training data is available. To generate speech data using the Speech Collage method, we require CS text. We generate the CS text from monolingual resources using the lexicon-based (Random) replacements approach described in [13]. The approach entails the following steps:
1. **Parallel Text Translation**: We leverage a public Arabic-English Machine Translation System3 to generate the parallel English text from the Arabic transcription. Footnote 3: API access available from [https://mt.qcri.org/api](https://mt.qcri.org/api)
2. **Word Level Alignments**: After translation, we fine-tune multilingual BERT (mBERT) [30] to obtain the word-level alignments.
3. **Random Replacement**: Given the alignments, Arabic words are randomly substituted with their corresponding English words at a rate of \(20\%\), as suggested by [13]; a minimal sketch of this step is given after the list.
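A minimal sketch of step 3 (the `ar2en` alignment mapping is an assumed data structure built from the mBERT word alignments of step 2):

```python
import random

def random_replace(ar_tokens, ar2en, rate=0.20, seed=0):
    # ar_tokens: Arabic sentence tokens; ar2en: dict mapping a token index
    # to its aligned English word.
    rng = random.Random(seed)
    out = list(ar_tokens)
    candidates = sorted(i for i in ar2en if 0 <= i < len(out))
    k = int(round(rate * len(candidates)))
    for i in rng.sample(candidates, k):
        out[i] = ar2en[i]  # switch this word to English
    return " ".join(out)
```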
### End-to-End Speech Recognition
In this work, we utilized the end-to-end (E2E) ASR conformer architecture [31], with the ESPnet toolkit [32]. The E2E-ASR implementation consists of a conformer encoder and a transformer decoder. Both are multiblock self-attention architectures, with the encoder further enhanced by an additional convolution module. The ASR task is formulated as a Bayesian decision: finding the most probable target word sequence \(\hat{\mathbf{Y}}\), from all possible outputs \(\mathbf{Y}^{*}\), by selecting the sequence which maximizes the posterior likelihood \(P(\mathbf{Y}|\mathbf{X})\), given a T-length sequence of D-dimensional speech features, \(\mathbf{X}=\{\mathbf{x}_{\mathbf{t}}\in\mathbb{R}^{D}|t=1,\cdots,T\}\). For text tokenization, we used word-piece byte-pair encoding [33]. The total loss function \(\mathcal{L}_{\text{\tiny{asr}}}\) is a multi-task learning objective that combines the decoder cross-entropy (CE) loss \(\mathcal{L}_{\text{ce}}\) and the CTC loss [34] \(\mathcal{L}_{\text{ctc}}\).
\[\mathcal{L}_{\text{\tiny{asr}}}=\alpha\mathcal{L}_{\text{\tiny{ctc}}}+(1- \alpha)\mathcal{L}_{\text{ce}} \tag{2}\]
where \(\alpha\) is used for interpolation. In our approach, the conformer is initially pre-trained on monolingual data and subsequently fine-tuned on monolingual and synthetic CS speech combined.
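The interpolation of Eq. (2) is straightforward once both losses are computed; a PyTorch-style sketch (the tensor shapes and padding convention are assumptions, not ESPnet's exact interface):

```python
import torch.nn.functional as F

def hybrid_asr_loss(ctc_log_probs, dec_logits, targets, in_lens, tgt_lens,
                    alpha=0.3, pad_id=-1):
    # ctc_log_probs: (T, B, V) log-probabilities from the encoder CTC head;
    # dec_logits: (B, L, V) decoder outputs; targets: (B, L) token ids,
    # padded with pad_id beyond tgt_lens (padding is never read by ctc_loss).
    l_ctc = F.ctc_loss(ctc_log_probs, targets, in_lens, tgt_lens, blank=0)
    l_ce = F.cross_entropy(dec_logits.transpose(1, 2), targets,
                           ignore_index=pad_id)
    return alpha * l_ctc + (1 - alpha) * l_ce  # Eq. (2)
```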
### Code-Mixing Index
To quantify the amount of code-switching, we use the _Code-Mixing Index_ (CMI) metric [35]. The CMI for an utterance is defined as:
\[CMI=\frac{\frac{1}{2}(N-\text{max}_{i})+\frac{1}{2}P}{N} \tag{3}\]
where \(max_{i}\) represents the number of words in the dominant language \(i\), \(N\) is the total word count in the utterance, and \(P\) is the number of code alternation points, with the constraint \(0\leq P<N\). A low CMI score indicates monolingualism in the text, whereas a high CMI score implies a high degree of code-mixing in the text.
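Eq. (3) is straightforward to compute given per-token language tags; a sketch (the tagging function is an assumption):

```python
def cmi(tokens, lang_of):
    # Code-Mixing Index of Eq. (3); lang_of maps a token to a language id.
    langs = [lang_of(t) for t in tokens]
    N = len(langs)
    if N == 0:
        return 0.0
    max_i = max(langs.count(l) for l in set(langs))    # dominant language count
    P = sum(a != b for a, b in zip(langs, langs[1:]))  # alternation points
    return (0.5 * (N - max_i) + 0.5 * P) / N
```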
## 3 Data and Experimental Setup
**In-domain:** The target domain we are considering is Mandarin-English code-switching, specifically SEAME [36]. In this scenario, we utilize monolingual training data from Chinese AISHELL-1 [37], \(100\)h of English data randomly sampled from Tedlium3 [38], and SEAME text [36] to generate \(62.2\) hours of CS data. Evaluation is performed on the SEAME test sets (devman and devsge), measuring the mixed error-rate (MER), which considers word-level English and character-level Mandarin. We also report WER on monolingual English and CER on monolingual Chinese subsets.
**Zero-shot:** For this scenario, we use monolingual training data from MGB-2 [39] and Tedlium3. We generate \(80\) hours of CS data using the synthetic CS text described in §2.4. Evaluation is conducted on ESCWA [8], which is a real Arabic-English CS dataset.
**Data pre-processing:** All audios are augmented with speed perturbations (\(0.9\), \(1.0\) and \(1.1\)) and transformed into \(83\)-dimensional feature frames (\(80\) log-mel filterbank coefficients plus \(3\) pitch features). Additionally, we augment the features with specaugment, with mask parameters \((mT,mF,T,F)=(5,2,27,0.05)\) and bi-cubic time-warping.
**Models:** the conformer encoder consists of \(12\) blocks, each with \(2048\) feed-forward dimensions, \(256\) attention dimensions, and \(4\) attention heads. The transformer decoder has \(6\) blocks with configurations similar to the encoder. We combine \(2622\) Mandarin characters with \(3000\) English BPE units for **In domain** scenario. As for the **Zero-shot** scenario we use a shared Arabic-English vocabulary of size \(5000\) BPE. Our training configuration utilizes Adam optimizer with a learning rate of \(0.001\), warmup-steps of \(25\)K, a dropout-rate of \(0.1\) and \(40\) epochs. We use joint training with hybrid CTC/attention by setting CTC weight \(\alpha\), Eq 2, to \(0.3\). During inference, we use a beam size of \(10\) with no length penalty. For the language model (LM), we train a long short term memory (LSTM) with \(4\) layers, each of \(2048\) dimensions, over \(20\) epochs. When integrating LM with E2E-ASR, we apply an LM weight of \(0.2\).
## 4 Results and Analysis
### In-domain CS text
We examine the impact of augmenting data with CS speech generated from monolingual data, particularly by integrating in-domain CS text. The results, presented in Table 1, are based on the SEAME evaluation. The results from _Mono_, obtained by training on monolingual Chinese and English data, act as our baseline. A shallow fusion with a _SEAME-LM_, trained on SEAME text data, results in a marginal relative reduction: up to \(2\)% in MER. However, simple CS augmentation using unigram units yields up to \(15.3\)% relative reductions in MER compared to _Mono_. By further enhancing the audio quality of the generated data, we achieve an overall relative improvement of up to \(34.4\)% in MER compared to _Mono_. Finally, comparing our best results to an ASR trained on SEAME, the
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**DevMan**} & \multicolumn{3}{c}{**DevSge**} \\ \cline{2-7} & CER-MAN & WER-EN & MER & CER-MAN & WER-EN & MER \\ \hline Mono & 37.2 & 67.4 & 32.9 & 56.7 & 47.5 & 38.4 \\ + SEAME-LM & 36.4 & 65.9 & 32.2 & 55.2 & 46.5 & 37.6 \\ + CS-Unigram & 31.5 & 53.3 & 28.4 & 47.5 & 42.2 & 34.4 \\ + CS-Unigram-SE & 29.7 & 53.7 & 27.2 & 44.0 & 40.9 & 33.0 \\ + CS-Bigram-SE & **27.2** & **47.9** & **25.4** & **39.7** & **38.1** & **31.4** \\ \hline SEAME-ASR (topline) & 15.1 & 28.8 & 16.5 & 21.7 & 28.7 & 23.5 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of the CER/WER/MER results on SEAME. **CS**: generated CS using in-domain SEAME text. **Mono**: baseline trained on monolingual data, **(Unigram, Bigram)**: generated CS using (unigram, bigram) units, **SE**: signal enhancement from §2, **SEAME-ASR**: topline model trained on SEAME.
absolute gap is up to \(8.9\)% MER. Given that we utilize SEAME text for data generation, this gap can be attributed to audio mismatches. Thus, we anticipate that further enhancements in audio quality to align with SEAME will bridge this gap.
### Zero-shot CS
We investigate the effects of augmenting the dataset with CS speech generated from monolingual data and synthetic CS text. This synthetic CS text is produced from the monolingual Arabic MGB-2 and English Tedlium3 datasets, as described in §2.4. Our evaluations, detailed in Table 2, utilize the ESCWA dataset. Operating under our assumption that we do not have access to real CS data, we use the merged evaluation sets from MGB-2 and Tedlium3 to select the best epochs for the model. The observations align with those from §4.1: the _CS-Unigram_ yields relative reductions of \(12.3\)% in WER and \(22.8\)% in CER. Interestingly, the results from shallow fusion with \(Mono+CS\)-\(LM\) consistently underperform when compared to \(Mono\). Moreover, enhancing the quality of the generated audio further improves results, leading to an overall relative improvement of \(16.2\)% in WER and \(27.6\)% in CER compared to \(Mono\). It is noteworthy that, on monolingual data, performance deteriorates with CS augmentation. This suggests a model bias towards code-switching and a reduced inclination for monolingual data. We further analyze this observation in §4.4.
### Generated CS data size
We explore the impact of the amount of generated CS data on ASR system performance. Figure 2 illustrates the error rates at different percentages of generated CS data. In this experiment, we generated CS data with bigrams at 10%, 50%, and 100%. The 0% condition represents the monolingual baseline, while 100% corresponds to 80 hours for Arabic-English and 62.2 hours for Chinese-English. It can be observed that there is a substantial improvement when using 10% of the generated CS data. However, as the percentage of generated CS data increases, the rate of improvement decreases. This suggests that with more data, further gains can be expected, albeit at a diminishing rate.
### Analysis
To understand the effect of our proposed CS augmentation, we measure the average CMI. Notably, the conventional CMI does not account for the accuracy of the sentence. To address this, we select predictions that closely align with the reference, using a heuristic WER threshold of \(\leq 20\)%. It can be observed from Table 3 that employing CS data augmentation consistently elevates the CMI. This affirms our assumption that CS augmentation enhances the model's aptitude for code-switching.
## 5 Conclusion
We introduced a framework that generates synthetic code-switched data from monolingual corpora. Our findings demonstrate that integrating this CS data augmentation yields substantial improvements that surpass results from training exclusively on monolingual sources or simply combining with a code-switched language model. The enhancement of the generated audio's quality further improves the performance. Additionally, in a zero-shot learning scenario, our CS augmentation is superior to solely monolingual training. Finally, we show that improvements from using CS data augmentation stem from the model's increased propensity for code-switching and a decreased bias towards monolingual input.
## 6 Acknowledgements
This work was carried out during the 2022 Jelinek Memorial Summer Workshop on Speech and Language Technologies at Johns Hopkins University, which was supported by Amazon, Kanari AI, Microsoft and Google. This work was also partially supported by NSF CCRI Grant No 2120435.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**MGB-2**} & \multicolumn{2}{c}{**TED3**} & \multicolumn{2}{c}{**ESCWA**} \\ \cline{2-7} & CER & WER & CER & WER & CER & WER \\ \hline Mono & **6.1** & **12.9** & **4.4** & **8.5** & 31.1 & 48.7 \\ + CS-LM & 6.3 & 12.5 & 4.6 & 8.7 & 38.0 & 57.0 \\ + CS-Unigram & 6.9 & 14.6 & 5.2 & 10.1 & 24.0 & 42.7 \\ + CS-Unigram-SE & 7.0 & 14.7 & 5.4 & 10.4 & 23.1 & 42.0 \\ + CS-Bigram-SE & 7.0 & 14.7 & 5.2 & 10.2 & **22.5** & **40.8** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison of the CER/WER results on ESCWA. CS: data generated using synthetic CS text. **Mono**: baseline trained on monolingual data, **(Unigram, Bigram)**: generated CS using (unigram, bigram) units, **SE**: signal enhancement from §2
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Dataset** & **Ref** & **Mono** & **CS-Uni** & **CS-Uni-SE** & **Bi-SE** \\ \hline ESCWA & 15.6 & 8.7 & 10.6 & 11.6 & 10.5 \\ SEAME & 10.4 & 3.3 & 5.4 & 6.2 & 7.3 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison of the average CMI. **Mono**: baseline trained on monolingual data, **SE**: Signal enhancement from §2, **Ref**: reference, **(Uni, Bi)**: generated CS using (unigram, bigram) units.
Figure 2: MER/WER at different percentages of generated CS data, where **0%** represents monolingual training and **100%** represents monolingual training with all generated CS.
2309.13205 | A Practical Survey on Zero-shot Prompt Design for In-context Learning | The remarkable advancements in large language models (LLMs) have brought
about significant improvements in Natural Language Processing(NLP) tasks. This
paper presents a comprehensive review of in-context learning techniques,
focusing on different types of prompts, including discrete, continuous,
few-shot, and zero-shot, and their impact on LLM performance. We explore
various approaches to prompt design, such as manual design, optimization
algorithms, and evaluation methods, to optimize LLM performance across diverse
tasks. Our review covers key research studies in prompt engineering, discussing
their methodologies and contributions to the field. We also delve into the
challenges faced in evaluating prompt performance, given the absence of a
single "best" prompt and the importance of considering multiple metrics. In
conclusion, the paper highlights the critical role of prompt design in
harnessing the full potential of LLMs and provides insights into the
combination of manual design, optimization techniques, and rigorous evaluation
for more effective and efficient use of LLMs in various NLP tasks. | Yinheng Li | 2023-09-22T23:00:34Z | http://arxiv.org/abs/2309.13205v1 | # A Practical Survey on Zero-shot Prompt Design for In-context Learning
###### Abstract
The remarkable advancements in large language models (LLMs) have brought about significant improvements in Natural Language Processing(NLP) tasks. This paper presents a comprehensive review of in-context learning techniques, focusing on different types of prompts, including discrete, continuous, few-shot, and zero-shot, and their impact on LLM performance. We explore various approaches to prompt design, such as manual design, optimization algorithms, and evaluation methods, to optimize LLM performance across diverse tasks. Our review covers key research studies in prompt engineering, discussing their methodologies and contributions to the field. We also delve into the challenges faced in evaluating prompt performance, given the absence of a single "best" prompt and the importance of considering multiple metrics. In conclusion, the paper highlights the critical role of prompt design in harnessing the full potential of LLMs and provides insights into the combination of manual design, optimization techniques, and rigorous evaluation for more effective and efficient use of LLMs in various NLP tasks.
## 1 Introduction
In recent years, transformer-based language models (such as [14], [17], [18]) have emerged as a transformative force in the field of artificial intelligence, revolutionizing Natural Language Understanding(NLU) and Generation(NLG). As model size and training data have evolved, the GPT series has exhibited extraordinary capabilities in a wide range of natural language tasks by relying on a paradigm known as in-context learning. According to [17], in-context learning harnesses the context provided by input data to generate appropriate responses or predictions, contrasting with traditional methods that necessitate explicit task-specific training and fine-tuning on labeled datasets. In-context learning enables large language models to capitalize on vast amounts of data and adapt to various tasks in a flexible and dynamic manner. There are several categories of in-context learning, including zero-shot, one-shot, and few-shot learning. In all types of in-context learning, the key to success lies in effective prompt design, which is occasionally referred to as an "art." This survey paper aims to categorize each type of in-context learning, discuss the core principles, examine state-of-the-art design techniques, and explore recent advancements in in-context learning, with a particular focus on zero-shot discrete in-context learning.
## 2 Definition
Although there is no formal definition for prompt design optimization, we follow the principle from [17] and provide the definition in (1) for prompt design in in-context learning:
\[P^{\star}=\operatorname*{arg\,max}_{P}\mathbb{E}_{x_{i},y_{i}\in\mathcal{D}}[S(f_{\theta}(P,x_{i}),y_{i})] \tag{1}\]
Here, \(x_{i}\) represents input sentences and features, while \(y_{i}\) denotes the target labels. \(\theta\) signifies the parameters for any Large Language Models (LLMs) or Pretrained Language Models (PLMs), which remain frozen in the case of in-context learning. \(f_{\theta}\) represents the output from LLMs given input \(x_{i}\) and prompt \(P\). \(S\) is a scoring function that measures the performance of the model output in relation to the ground truth label \(y_{i}\). The objective of in-context learning (or prompt engineering) is to identify the optimal prompt \(P^{\star}\) that maximizes the score \(S\) in the test distribution.
Based on the structure of \(P\), in-context learning can be further classified into discrete (hard) prompting, where \(P\) consists of a list of tokens, or continuous (soft) prompting, where \(P\) represents an embedding vector (see Figure 1). Additionally, for zero-shot in-context learning, \(P\) is independent of \(x_{i}\), whereas for one-shot or few-shot in-context learning, \(P\) can be a function of \(x_{i}\) (drawn from the training data). This survey focuses on zero-shot in-context learning with discrete prompts and examines its application exclusively in decoder-only LLMs, such as the GPT series.
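To make Eq. (1) concrete, the sketch below scores each candidate prompt on a small labeled set and keeps the best one. Here `model_output` and `score` are hypothetical stand-ins for the LLM call \(f_{\theta}\) and the scoring function \(S\); they are not part of any specific API.

```python
# Minimal sketch of Eq. (1): pick the prompt P* that maximizes the
# average score S(f_theta(P, x), y) over a labeled evaluation set.

def select_best_prompt(candidate_prompts, eval_set, model_output, score):
    """candidate_prompts: list of prompt strings.
    eval_set: list of (x, y) pairs.
    model_output: callable (prompt, x) -> prediction, i.e. f_theta.
    score: callable (prediction, y) -> float, i.e. S."""
    best_prompt, best_score = None, float("-inf")
    for prompt in candidate_prompts:
        avg = sum(score(model_output(prompt, x), y)
                  for x, y in eval_set) / len(eval_set)
        if avg > best_score:
            best_prompt, best_score = prompt, avg
    return best_prompt, best_score
```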
## 3 Relevant Work
### Prompts for Encoder-only Transformer Models (BERT)
Before the advent of in-context learning, some research efforts have been devoted to studying how to design effective prompts to enhance the performance of BERT models. As depicted in Figure 2, prompts in BERT are usually combined with input to form a cloze-style structure, while for transformer decoder-based models, prompts are more flexible.
Numerous studies have investigated prompt design in BERT. In the work by Jiang et al. (2020), the authors proposed heuristic-based approaches for designing discrete prompts. Dependency parsing is employed to identify useful prompts from Wikipedia. In Gao et al. (2021), the authors utilized T5 as a prompt generator with beam search to create a set of diversified prompts. They then used a development set \(D_{dev}\) to select the single best-performing prompt. In Shin et al. (2020), a gradient-based prompt search approach was proposed, wherein each prompt token is learned by directly optimizing LMs on the downstream task.
In addition to prompt design strategies, other research focuses on enriching the prompt candidates and ensembling the outputs from multiple prompts for the same input. To enrich prompts, Jiang et al. (2020) employed back-translation to paraphrase prompts. Building on this work, Haviv et al. (2021) trained a separate BERT model to rewrite prompts using the nearest BERT vector embedding.
The concept of in-context learning originates from the work by Brown et al. (2020). However, BERT models can also perform similar tasks by using a single token as output. For example,
France's capital is [MASK].
Only the output for the [MASK] position is used for inference. This characteristic enables ensembling answers from different prompts, although similar practices are less straightforward for GPT-style models. In Jiang et al. (2020), the authors proposed rank-based ensemble and optimized ensemble methods to aggregate answers generated from different prompts.
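A sketch of this idea using the Hugging Face fill-mask pipeline (assuming a recent `transformers` version): token scores are simply summed across prompts, which is a simplification of the rank-based and optimized ensembles in Jiang et al. (2020), and the paraphrased prompts are illustrative.

```python
# Ensemble cloze answers from multiple prompts by summing token scores.
from collections import defaultdict
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "France's capital is [MASK].",
    "The capital of France is [MASK].",
    "[MASK] is the capital city of France.",
]

totals = defaultdict(float)
for p in prompts:
    for cand in unmasker(p, top_k=10):  # dicts with 'token_str', 'score'
        totals[cand["token_str"].strip()] += cand["score"]

answer = max(totals, key=totals.get)
print(answer)  # expected: "paris"
```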
Among the studies designing prompts for BERT models, the majority focus on discrete prompts (i.e., hard prompts). To the best of our knowledge, we did not find any work attempting to generate continuous prompts. In general, optimizing prompts in BERT brings only marginal improvements to the original model. Given the size and structure of BERT, it is more favorable to fine-tune on downstream tasks.
### Prompts for Decoder-only Transformer (GPT)
#### 3.2.1 Continuous Prompt
Figure 1: Prompt categorization by prompt form

Another line of research has focused on optimizing soft prompts, which eliminate the constraint that prompts have to be natural language. Soft prompts can be learned and optimized directly within the same language model. The key difference between soft prompt tuning and fine-tuning is that prompt tuning typically fixes the weights of the language model and only performs gradient updates on the network that generates the prompt. Prefix-Tuning Li and Liang (2021) is one of the early works that tunes prompts on GPT-2 with a small amount of data per task, achieving comparable performance to the full-data fine-tuning setting. Prefix-Tuning does not use a separate network; instead, it utilizes the same transformer network but only optimizes the input embedding of the prompt. In P-Tuning V1 Liu et al. (2021) and V2 Liu et al. (2022), a separate trainable prompt encoder is used to generate the input prompt for the language model. While using soft prompts provides more flexibility in prompt design, it requires access to either the weights of language models or the ability to input vectors into language models. As recent language models are hosted as cloud services and large language models are difficult to access via vector inputs, this practice becomes less feasible when using GPT-3 or PaLM Chowdhery et al. (2022).
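A minimal PyTorch sketch of the soft-prompt idea described above: a small block of learnable embeddings is prepended to the input embeddings of a frozen causal LM. The LM interface assumed here (`get_input_embeddings`, `inputs_embeds`) follows the Hugging Face convention and is an assumption, not a claim about any specific method's code.

```python
# Soft prompt tuning sketch: only `self.prompt` receives gradients.
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, lm, n_prompt_tokens=20):
        super().__init__()
        self.lm = lm
        for p in self.lm.parameters():      # freeze the language model
            p.requires_grad = False
        d = lm.get_input_embeddings().embedding_dim
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, d) * 0.02)

    def forward(self, input_ids):
        tok = self.lm.get_input_embeddings()(input_ids)        # (B, T, d)
        soft = self.prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        # Prepend the learned prompt embeddings to the token embeddings.
        return self.lm(inputs_embeds=torch.cat([soft, tok], dim=1))
```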
#### 3.2.2 Few-Shot Learning
In the GPT paper Brown et al. (2020), few-shot learning demonstrates strong NLP capabilities across various benchmarks. As the title suggests, Language Models are Few-Shot Learners. In the few-shot setting, a task description along with a few examples are presented to the model, which is then asked to complete the task for an unseen example. Numerous studies have been conducted to optimize few-shot examples and prompts to enhance performance. In Liu et al. (2021), the authors discovered that GPT-3 generally performs better when in-context examples are similar to the test examples. As a result, they proposed an in-context example algorithm based on example similarities. Similarity is measured using RoBERTa embedding distance in Euclidean space or cosine distance. Other works, such as Rubin et al. (2021) and Gutierrez et al. (2022), have adopted similar example selection logic and demonstrated better performance over randomly selected examples. In addition to example selection methods, research efforts like Wu et al. (2022) and Kumar and Talukdar (2021) have been made to optimize the rank and order of retrieved examples.
While few-shot learning exhibits remarkable performance, according to the no free lunch (NFL) theorem Wolpert and Macready (1995, 1997), providing examples inevitably introduces bias into the prediction algorithm. When out-of-distribution samples occur, applying few-shot learning can therefore hinder the inference process.
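A sketch of similarity-based example selection in the spirit of Liu et al. (2021): embed the candidate pool and the test input, then keep the \(k\) nearest training examples by cosine similarity. `embed` is a placeholder for any sentence encoder (e.g., a RoBERTa-based embedder), not a specific API.

```python
# Pick the k most similar in-context examples for a test input.
import numpy as np

def select_examples(test_input, pool, embed, k=4):
    """pool: list of (x, y) training pairs; embed: callable str -> np.ndarray."""
    q = embed(test_input)
    sims = []
    for x, _y in pool:
        v = embed(x)
        sims.append(float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v))))
    top = np.argsort(sims)[::-1][:k]
    return [pool[i] for i in top]  # most similar first
```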
## 4 Zero-Shot Discrete Prompts
With the recent success of Large Language Models such as GPTs, designing zero-shot discrete prompts has become increasingly popular in practice. In the experiments conducted by Reynolds and McDonell (2021), the authors demonstrate that carefully engineered zero-shot prompts can actually outperform few-shot prompts. They argue that providing examples does not always help because examples tend to be interpreted as part of a narrative rather than serving as categorical guidance.
On the other hand, the advantages of using zero-shot discrete prompts can be listed as follows: (1) zero-shot prompts are highly interpretable, (2) little training data and few examples are required, (3) the design process is more straightforward, as we only need to deal with task instructions, and (4) the prompt structure is flexible, allowing us to insert our input wherever needed. Zero-shot discrete prompts are also known as task instructions. There are two primary approaches to obtaining a good discrete prompt. The first is heuristic-based manual design, while the second relies on an optimization algorithm to find the optimal prompt. In this section, we focus on reviewing research on prompt design for transformer decoder-style models (e.g., GPT), which has been the focus of a majority of research efforts.

Figure 2: Prompt categorization by model types
### Manual Design
In their work Reynolds and McDonell (2021), the authors argue that GPT (or other LLMs) resemble a superposition of human authors. Therefore, it can be helpful to ask GPT to pretend to be a character in the prompt or use the prompt to signify a dialogue between people (i.e., task specification by memetic proxy). The authors also discuss the idea of MetaPrompts, which encapsulate a general intention that will develop towards specific meanings when additional information, such as a task question, is provided. The example prompts they provide, such as "Let's solve this problem by splitting it into steps," have been proven to be significantly helpful by subsequent works.
In the work Mishra et al. (2021), the authors propose five principles for designing prompts for GPT-3 based on their observations of GPT-3's failures. These principles include: (1) using simple patterns to specify expected output, (2) using bulleted lists and assertions, (3) breaking down complex tasks into multiple simpler ones, (4) adding explicit textual statements of output constraints, and (5) customizing the instructions so that the model can directly output the results. These principles can be a good starting point for manual design.
Another line of work focuses on improving the reasoning capabilities of large language models via prompt design. Chain-of-Thought (CoT) prompting Wei et al. (2022) was initially proposed in the few-shot setting, where the reasoning steps are presented as part of the solution for several few-shot examples. The zero-shot version of CoT was later proposed in Kojima et al. (2022), which demonstrates that inserting the single prompt "Let's think step by step" into the task instruction significantly improves performance on mathematical reasoning. The authors also experimented with different prompt templates and found that instructive prompts improve the model's performance on mathematical reasoning, while misleading or irrelevant prompts do not contribute to performance.
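A sketch of the two-stage protocol from Kojima et al. (2022): the first call elicits the reasoning, and a second call extracts the final answer. `generate` stands in for any text-completion call to an LLM; the exact prompt wording beyond "Let's think step by step" is illustrative.

```python
# Two-stage zero-shot Chain-of-Thought prompting.

def zero_shot_cot(question, generate):
    # Stage 1: elicit the reasoning chain.
    reasoning_prompt = f"Q: {question}\nA: Let's think step by step."
    reasoning = generate(reasoning_prompt)
    # Stage 2: extract the final answer from the reasoning.
    extract_prompt = reasoning_prompt + reasoning + "\nTherefore, the answer is"
    return generate(extract_prompt).strip()
```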
### Prompt Optimization
Finding the optimal prompt can also be treated as an optimization process, where the goal is to maximize performance on the target task. Similar to finding the best soft prompt or the optimal examples for few-shot learning, algorithms can be implemented to find the best zero-shot prompt. However, such work typically requires a small set of evaluation data to assess prompt performance. In the work by Zhou et al. (2022), the authors proposed Automatic Prompt Engineer (APE) for zero-shot prompt design. An LLM is used to generate a group of prompts given a task example or human description, and an iterative Monte Carlo search method is used to search for the optimal prompt under the objective function. In addition to using Monte Carlo search for prompt optimization, a gradient-free, edit-based search approach called Gradient-free Instructional Prompt Search (GrIPS) is introduced in Prasad et al. (2022). GrIPS starts from a manually designed instruction and iteratively searches among generated prompts from four operations (delete, add, swap, paraphrase) to find the optimal prompt for a target task.
Another line of research uses gradient-based methods, but to generate discrete zero-shot prompts. FluentPrompt Shi et al. (2022) follows the idea from AutoPrompt Shin et al. (2020), using a gradient-based method to generate discrete prompts; it additionally uses a fluency constraint to encourage human-readable prompts, which helps improve performance. Another gradient-based prompt generation method, RLPrompt, is introduced in Deng et al. (2022). This work uses a reinforcement learning framework to generate prompts that optimize a task-based reward function. The prompts generated by this framework are often incoherent gibberish but are reported to achieve significant performance improvements.
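A GrIPS-style search, heavily simplified, might look like the sketch below: start from a manual instruction, apply random phrase-level edits, and keep a candidate only if it scores higher on a small evaluation set. `score_prompt` is a placeholder for execution accuracy on held-out data, and the edit set here (delete, swap, duplicate) is a reduced version of the four operations in the paper.

```python
# Gradient-free, edit-based prompt search sketch.
import random

def edit(phrases):
    p, op = phrases[:], random.choice(["delete", "swap", "duplicate"])
    if op == "delete" and len(p) > 1:
        p.pop(random.randrange(len(p)))
    elif op == "swap" and len(p) > 1:
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
    else:  # duplicate an existing phrase at a random position
        p.insert(random.randrange(len(p) + 1), random.choice(p))
    return p

def search(instruction, score_prompt, n_iters=50):
    best = instruction.split(". ")
    best_score = score_prompt(". ".join(best))
    for _ in range(n_iters):
        cand = edit(best)
        s = score_prompt(". ".join(cand))
        if s > best_score:  # greedy: keep only improving edits
            best, best_score = cand, s
    return ". ".join(best), best_score
```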
### Evaluation
Evaluating prompt design is very challenging. As there is no ground truth dataset for prompt generation, there is no "best" prompt but only better prompts. Therefore, the evaluation of the prompt performance for in-context learning usually falls into the following categories.
**Conditional Probability (Likelihood)**: To evaluate the performance of a text generation model, we can measure the probability of the generated text. In our case, we can calculate the conditional probability of the ground truth (\(y\)) given the prompt (\(p\)) and input (\(x\)), or calculate the joint probability of \(x,y,p\), averaged over the training data, as shown in (2):
\[\mathbb{E}_{x,y\in X,Y}\left[\mathrm{Prob}(y\,|\,x,p)\right] \tag{2}\]
This is a simple strategy because the models used for in-context learning are generative language models, which produce the likelihood automatically. However, this metric sometimes fails to represent the actual performance on the downstream task.
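A sketch of likelihood-based prompt evaluation: sum the log-probabilities that a causal LM assigns to the gold answer tokens given the prompt and input. The tokenizer/model interface assumed here follows the Hugging Face convention, and variable names are illustrative.

```python
# Compute log P(y | p, x) of the answer tokens under a causal LM.
import torch
import torch.nn.functional as F

@torch.no_grad()
def answer_logprob(model, tokenizer, prompt, x, y):
    ctx = tokenizer(prompt + x, return_tensors="pt").input_ids
    ans = tokenizer(y, add_special_tokens=False, return_tensors="pt").input_ids
    ids = torch.cat([ctx, ans], dim=1)
    logits = model(ids).logits[:, :-1]          # position t predicts token t+1
    logprobs = F.log_softmax(logits, dim=-1)
    tgt = ids[:, 1:]                            # shifted targets
    # Positions whose targets are exactly the answer tokens.
    answer_positions = range(ctx.size(1) - 1, ids.size(1) - 1)
    return sum(logprobs[0, t, tgt[0, t]].item() for t in answer_positions)
```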
**Execution Accuracy**: A more direct method to measure the performance of a prompt is to use metrics from the target task Zhou et al. (2022), as ultimately the performance on the task is what we care about. In addition to measuring the execution accuracy directly on the entire training set, there are ways to efficiently estimate the performance on a subset of training data to save computational cost Zhou et al. (2022), Li et al. (2022).
**Prompt Transferability** is another evaluation metric, reported in Zhou et al. (2022) and Deng et al. (2022), which is used to demonstrate the quality of prompt generation methods. However, this metric is more useful for selecting a prompt design method than for evaluating the performance of a single prompt.
**General Metrics for Language Models** should also be considered when using large language models via zero-shot in-context learning. It is important to measure performance from additional aspects. For example, if we are to build a question-answering system, we need to measure the risk of hallucination Ji et al. (2022). If we are to build an email generation system, we may need to measure toxicity and prevent generating any aggressive content. The work on Holistic Evaluation of Language Models (HELM) Liang et al. (2022) provides a great example of evaluating the performance of language models via in-context learning. Although various metrics have been reported in HELM for existing models, it is worth noting that the design of the prompt directly impacts the models' performance.
## 5 Conclusion
The rapid development of large language models (LLMs) has significantly influenced various NLP tasks. Among the techniques to harness their capabilities, in-context learning with different types of prompts--discrete, continuous, few-shot, and zero-shot--has shown remarkable promise.
Discrete prompt engineering emphasizes human-readable prompts that can enhance model performance, while continuous prompt optimization involves soft prompts that can be learned and optimized directly in the same language model. Few-shot learning leverages a small number of examples to guide the model in the right direction, whereas zero-shot discrete prompts only require task instructions, offering a more straightforward design process.
Manual design of prompts can be guided by principles based on model behavior, and optimization algorithms can be used to find optimal prompts. Evaluating the performance of prompts is challenging, as there is no single "best" prompt, and various metrics need to be considered.
In conclusion, as LLMs continue to evolve, prompt design remains a crucial factor in harnessing their full potential across a wide range of applications. A combination of manual design, optimization techniques, and rigorous evaluation can lead to more effective and efficient use of LLMs in diverse NLP tasks.
|
2309.10724 | Sound Source Localization is All about Cross-Modal Alignment | Humans can easily perceive the direction of sound sources in a visual scene,
termed sound source localization. Recent studies on learning-based sound source
localization have mainly explored the problem from a localization perspective.
However, prior arts and existing benchmarks do not account for a more important
aspect of the problem, cross-modal semantic understanding, which is essential
for genuine sound source localization. Cross-modal semantic understanding is
important in understanding semantically mismatched audio-visual events, e.g.,
silent objects, or off-screen sounds. To account for this, we propose a
cross-modal alignment task as a joint task with sound source localization to
better learn the interaction between audio and visual modalities. Thereby, we
achieve high localization performance with strong cross-modal semantic
understanding. Our method outperforms the state-of-the-art approaches in both
sound source localization and cross-modal retrieval. Our work suggests that
jointly tackling both tasks is necessary to conquer genuine sound source
localization. | Arda Senocak, Hyeonggon Ryu, Junsik Kim, Tae-Hyun Oh, Hanspeter Pfister, Joon Son Chung | 2023-09-19T16:04:50Z | http://arxiv.org/abs/2309.10724v1 | # Sound Source Localization is All about Cross-Modal Alignment
###### Abstract
Humans can easily perceive the direction of sound sources in a visual scene, termed sound source localization. Recent studies on learning-based sound source localization have mainly explored the problem from a localization perspective. However, prior arts and existing benchmarks do not account for a more important aspect of the problem, cross-modal semantic understanding, which is essential for genuine sound source localization. Cross-modal semantic understanding is important in understanding semantically mismatched audio-visual events, e.g., silent objects, or off-screen sounds. To account for this, we propose a cross-modal alignment task as a joint task with sound source localization to better learn the interaction between audio and visual modalities. Thereby, we achieve high localization performance with strong cross-modal semantic understanding. Our method outperforms the state-of-the-art approaches in both sound source localization and cross-modal retrieval. Our work suggests that jointly tackling both tasks is necessary to conquer genuine sound source localization.
## 1 Introduction
Humans can easily perceive where a sound comes from in a scene. We naturally attend to the sounding direction and associate incoming audio-visual signals to understand the event. To achieve human-level audio-visual perception, sound source localization in visual scenes has been extensively studied [50, 51, 4, 47, 8, 35, 31, 33, 53, 54, 52, 36, 39, 38, 20]. Motivated by the fact that humans learn from natural audio-visual correspondences without explicit supervision, most of the studies have been developed on the fundamental assumption that audio and visual signals are temporally correlated. With this assumption, losses for the sound source localization task are modeled with audio-visual correspondence as a self-supervision signal and are implemented by contrasting audio-visual pairs, i.e., contrastive learning.
While these approaches appear to be unsupervised methods, they strongly rely on partial supervision, e.g., using supervisedly pretrained vision networks [50, 51, 47, 53, 54, 20] and visual objectness estimators for post-processing [39, 38]. Without leveraging such strong initial representations, the performance degrades. Thus, the previous methods are not purely self-supervised approaches. Even further, recent studies [45, 39, 38] point out a visual objectness bias in existing sound source localization benchmarks and exploit the objectness prior to improve localization accuracy. They show that, even without interaction between visual and audio signals, a model may achieve strong localization accuracy by referring to visual signals alone, which is not the true intention of the sound source localization task. In short, the current evaluation and setting of sound source localization do not capture the true sound source localization performance.
In this work, we first complement the evaluation of sound source localization methods by introducing a cross-modal retrieval task as an auxiliary evaluation task. With this task, we can measure whether the learned representations have the capability to accurately interact between audio and visual modalities, i.e., the more fine-grained audio-visual correspondence that is essential for genuine sound source localization. This aspect has been missed in existing sound source localization benchmarks. Indeed, our experiments show that higher sound localization performance does not guarantee higher cross-modal retrieval performance.
Figure 1: **A conceptual difference between prior approaches and our alignment-based sound source localization.**
Second, given this additional criterion, we revisit the importance of semantic understanding shared across audio and visual modalities in both sound source localization and cross-modal retrieval. In previous methods [50, 51, 54, 47], cross-modal semantic alignment is induced by instance-level cross-modal contrastive learning, i.e., cross-modal instance discrimination between visual and audio features. However, they are aided by labels or supervisedly pretrained encoders2 to ease the challenging cross-modal feature alignment. Instead, our method learns from scratch, compensating for the lack of such guidance by incorporating multiple positive samples into cross-modal contrastive learning. Specifically, we construct a positive set for each modality using both multi-view [10] and conceptually similar samples [17]. Thereby, we enhance feature alignment and achieve high localization performance and strong cross-modal semantic understanding.
Footnote 2: Typically, an image encoder is pretrained on ImageNet [16] and an audio encoder is pretrained on AudioSet [25] in supervised ways.
We evaluate our method on the VGG-SS and SoundNet-Flickr benchmarks for sound source localization and cross-modal retrieval. As aforementioned, the sound source localization task is closely related to the cross-modal retrieval task, but our experiments show that existing works have a weak performance correlation between them. This implies that we need to evaluate both tasks for evaluating the genuine sound source localization. The proposed method performs favorably against the recent state-of-the-art approaches in both tasks.
We summarize the contributions of our work as follows:
* We analyze that sound source localization benchmarks are not capable of evaluating cross-modal semantic understanding, thereby sound source localization methods may perform poorly in cross-modal retrieval tasks.
* We propose semantic alignment to improve cross-modal semantic understanding of sound source localization models.
* We expand semantic alignment with multi-views and conceptually similar samples which leads to state-of-the-art performance on both sound source localization and cross-modal retrieval.
## 2 Related work
**Sound source localization.** Sound source localization in visual scenes has been investigated by exploiting correspondences between audio and visual modalities. The most widely used approach for sound source localization is cross-modal attention [50, 51, 57] with a contrastive loss [13, 29, 42]. Later, the attention-based method has been improved by intra-frame hard sample mining [8], iterative contrastive learning with pseudo labels [35], feature regularization [36], positive mining [52], negative-free learning [54] with a stop-gradient operation [12], or momentum encoders [38].
Some sound localization approaches exploit additional semantic labels [47, 33, 53] or object priors [39, 63]. Semantic labels are used to pretrain audio and vision encoders with a classification loss [33, 53] or to refine audio-visual feature alignment [47]. A more explicit way to refine the localization output is to use an object prior. EZ-VSL [39] proposes post-processing that combines the attention-based localization output with a pretrained visual feature activation map. Similarly, Xuan et al. [63] propose to combine off-the-shelf object proposals with attention-based sound localization results. However, post-processing with an object prior may generate false positive outputs, as it is based solely on vision without audio-visual interaction.
In addition to the localization, there has been an attempt to localize sounding objects and recover the separated sounds simultaneously, also known as the cocktail party problem [27, 37]. The separation of sound mixture is achieved by predicting masks of spectrogram guided by visual features [19, 1, 64, 23, 62, 21, 2, 65, 24, 58, 56]. Furthermore, a number of recent papers are presented on audio-visual navigation for a given sound source [7, 22].
**Self-supervised representation learning.** In a broader categorization, sound source localization belongs to self-supervised multimodal learning. Our work is also relevant to self-supervised audio-visual representation learning, and other multimodal learning studies.
Contrastive learning aims to learn robust representations from large-scale raw data without annotations. Recent representation learning approaches [60, 10, 28, 11] use instance discrimination by contrastive learning [13, 29, 42] as a pretext task, with notable advancements in visual recognition tasks. Recently, positive mining by nearest-neighbor search has been used to learn representations of images [17, 18, 61], videos [26], neural recordings [6], and text-image pairs [34]. In this work, we expand on the previous works by incorporating both multi-views and conceptually similar samples into the audio-visual modalities for cross-modal feature alignment.
A series of audio-visual representation learning studies have shown that the audio and visual contents in a video are correlated; therefore, a visual representation can be learned by sound prediction [44], or an audio representation can be distilled from a visual representation [5, 55]. Later, a variety of joint audio-visual representation learning methods were proposed under the assumption that there is a semantic [3, 30, 41, 40] or temporal [14, 43, 32, 15] correspondence between them. However, simply learning sound source localization by audio-visual correspondence with instance discrimination ignores the semantic similarity of audio-visual contents among samples, introducing false negatives or positives. In order to mitigate this issue, clustering [30], sampling [41], weighting [40], and hard mining [32] have been proposed. Similarly, in this work, we go beyond instance discrimination by using multiple positive samples
to enforce semantic understanding across modalities.
## 3 Method
### Preliminaries
**Contrastive learning** learns representations by contrasting positive and negative pairs. Given an encoded query sample \(q\), its encoded positive pair \(k^{+}\), and negative pairs \(k_{i}\), the loss can be defined as:
\[\mathcal{L}=-\mathrm{log}\frac{\mathrm{exp}(q\cdot k^{+}/\tau)}{\sum_{i} \mathrm{exp}(q\cdot k_{i}/\tau)} \tag{1}\]
where \(\tau\) is the temperature parameter.
**Cross-modal contrastive learning** extends contrastive learning across multiple modalities. In sound source localization, audio-visual correspondence is used to define positive and negative cross-modal pairs. With an audio-visual dataset \(\mathcal{D}=\{(v_{i},a_{i}):i=1,...,N\}\) and its encoded features \(\mathbf{v}_{i}=f_{v}(v_{i})\) and \(\mathbf{a}_{i}=f_{a}(a_{i})\), cross-modal contrastive learning loss is defined as:
\[\mathcal{L}_{i}=-\mathrm{log}\frac{\mathrm{exp}(s(\mathbf{v}_{i},\mathbf{a}_{ i})/\tau)}{\sum_{j}\mathrm{exp}(s(\mathbf{v}_{i},\mathbf{a}_{j})/\tau)} \tag{2}\]
where \(s\) is a cross-modal similarity function. The cross-modal contrastive loss in Eq. (2) can be extended to a symmetric form [48], as used in a few previous works [39, 38].
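A minimal PyTorch sketch of the symmetric cross-modal contrastive loss in Eq. (2): given a batch of paired visual/audio embeddings, matched pairs on the diagonal are positives and all other pairings are negatives. This is a generic InfoNCE implementation under those assumptions, not the paper's exact code.

```python
# Symmetric cross-modal InfoNCE over a batch of paired embeddings.
import torch
import torch.nn.functional as F

def cross_modal_nce(v, a, tau=0.07):
    """v, a: (B, D) visual and audio embeddings."""
    v = F.normalize(v, dim=1)
    a = F.normalize(a, dim=1)
    sim = v @ a.t() / tau                        # (B, B) similarity logits
    labels = torch.arange(v.size(0), device=v.device)
    # v->a and a->v directions, as in the symmetric form.
    return 0.5 * (F.cross_entropy(sim, labels) +
                  F.cross_entropy(sim.t(), labels))
```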
### Cross-Modal Feature Alignment
We consider both spatial localization and semantic feature alignment for sound source localization. To this end, we use two different similarity functions \(s_{L}\) and \(s_{A}\) for contrastive learning (Eq. (2)), \(s_{L}\) for localization and \(s_{A}\) for cross-modal feature alignment.
Recent studies rely on audio-visual spatial correspondence maps to learn sound source localization by contrasting them. Given a spatial visual feature \(\mathbf{v}\in\mathbb{R}^{c\times h\times w}\) and an audio feature \(\mathbf{a}\in\mathbb{R}^{c}\), the audio-visual similarity with a correspondence map can be calculated as follows:
\[s_{L}(\mathbf{v},\mathbf{a})=\sum_{xy\in M}\frac{1}{|M|}\frac{\mathbf{v}^{xy} \cdot\mathbf{a}}{\|\mathbf{v}^{xy}\|\|\mathbf{a}\|} \tag{3}\]
where \(\mathbf{v}^{xy}\) is a feature vector at location \((x,y)\), and \(M\) is an optional binary mask when an annotation or pseudo-mask [8, 36] is available. Since we assume no supervision for sound source localization, we do not use any mask, therefore, \(M=\mathbf{1}\).
The contrastive loss with the localization similarity \(s_{L}\) enforces location-dependent alignment, giving sparse but strong audio-visual correspondence, which enables localization. However, our empirical studies on cross-modal retrieval indicate that strong localization performance does not guarantee semantic understanding. To overcome the low semantic understanding in recent studies, we propose to add an instance-level contrastive loss. Instance-level contrasting encapsulates the whole context in a scene, enforcing better audio-visual semantic alignment. However, instance-level contrasting may smooth out the spatial discriminativeness learned by Eq. (3). Inspired by SimCLR [10], we adopt a projection layer to align audio-visual semantics in a projection space. The projection layer separates the latent spaces of localization and semantic alignment, thereby preventing the alignment loss from smoothing out the spatial discriminativeness. The similarity function for cross-modal feature alignment is defined as follows:
\[s_{A}(\mathbf{v},\mathbf{a})=\frac{p_{v}(\mathsf{avg}(\mathbf{v}))\cdot p_{a}(\mathbf{a})}{\|p_{v}(\mathsf{avg}(\mathbf{v}))\|\,\|p_{a}(\mathbf{a})\|} \tag{4}\]

where \(\mathsf{avg}(\cdot)\) is spatial average pooling, \(p_{v}\) is a projection layer for visual features, and \(p_{a}\) is a projection layer for audio features.

Figure 2: **Our sound source localization framework.** Our model constructs multiple positive pairs with augmentation and nearest neighbor search (conceptually similar samples). Using these newly constructed 9 pairs, our model employs spatial localization, \(s_{L}\), and semantic feature alignment, \(s_{A}\), for each pair to learn a better sound source localization ability.
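A sketch of how the two similarity functions could be computed in PyTorch: \(s_{L}\) averages the cosine correspondence map between each spatial visual feature and the audio vector (Eq. (3) with \(M=\mathbf{1}\)), while \(s_{A}\) compares projected global features (Eq. (4)). The projection heads are unspecified small networks in the text, so they are passed in as callables here.

```python
# Localization and alignment similarities for batched features.
import torch
import torch.nn.functional as F

def s_L(v, a):
    """v: (B, C, H, W) spatial visual features; a: (B, C) audio features."""
    v = F.normalize(v, dim=1)
    a = F.normalize(a, dim=1)
    corr = torch.einsum("bchw,bc->bhw", v, a)   # cosine correspondence map
    return corr.flatten(1).mean(dim=1)          # average over all locations

def s_A(v, a, p_v, p_a):
    """p_v, p_a: projection heads (e.g., small MLPs) for each modality."""
    g = p_v(v.mean(dim=(2, 3)))                 # spatial average pooling
    h = p_a(a)
    return F.cosine_similarity(g, h, dim=1)
```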
### Expanding with Multiple Positive Samples
Typically, contrastive learning contrasts between one positive pair and multiple negative pairs as shown in Eq. (1). In audio-visual learning, by an audio-visual correspondence assumption, an audio-image pair from the same clip is used as a positive pair while negative pairs are sampled from different clips. However, single-instance discrimination may not be sufficient to achieve strong cross-modal alignment. In this section, we expand contrastive learning beyond single instance discrimination by positive set construction and pairing them. To construct a positive set, we incorporate both hand-crafted positive and conceptual positive samples for each modality. Later, we adjust the contrastive learning to incorporate multiple positive pairs to enforce cross-modal alignment.
Obtaining hand-crafted positive samples.Using randomly augmented samples as positive multi-view pairs is widely adopted in self-supervised representation learning, _i.e_., instance discrimination. Similarly, we extend a single anchor audio-image pair to multiple positive pairs by applying simple augmentations to the image and audio samples separately. While we utilize common image transformations on images, we apply temporal shifting to audio. It is worth noting that the sound source localization task learns from the underlying semantic consistency rather than subtle time differences as in videos. Thus, a slight shift in the audio may not alter the contextual information significantly. As a result of hand-crafted multi-view positive pair generation, we obtain additional \(\mathbf{v}^{aug}\) and \(\mathbf{a}^{aug}\) samples.
Obtaining conceptual positive samples.Apart from manually created augmented views, we additionally expand our positive set with conceptually similar samples. The sampling strategy with nearest-neighbor search can be performed in various ways, such as on-the-fly sampling [17, 49, 61, 34], sampling with pretrained encoders [52], or guided sampling [26, 18] using another modality. For selecting our conceptually similar samples, we utilize pretrained encoders. Note that encoders pretrained with either supervised or self-supervised learning are effective for positive sample mining, as shown in the experiment section. Employing readily available image and audio encoders, we use \(k\)-nearest neighbor search to sample semantically similar samples in both modalities. In particular, given a pair of image and audio, we compute cosine similarity with all other samples and choose the top-\(k\) most similar samples in the training set for each modality. From this set of \(k\) samples, we randomly select one to obtain conceptually similar samples for each modality, \(\mathbf{v}^{conc}\) and \(\mathbf{a}^{conc}\). By utilizing the conceptually similar samples as positives, our model expands its semantic understanding.
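A minimal sketch of this mining step, assuming the feature matrix from a frozen pretrained encoder has been precomputed and L2-normalized; the top-\(k\) selection and random draw mirror the description above, but the implementation details are illustrative.

```python
# Sample one conceptually similar positive from the k nearest neighbors.
import numpy as np

def sample_conceptual_positive(feats, anchor_idx, k=1000, rng=None):
    """feats: (N, D) L2-normalized features from a pretrained encoder.
    Assumes N > k."""
    rng = rng or np.random.default_rng()
    sims = feats @ feats[anchor_idx]         # cosine similarity to the anchor
    sims[anchor_idx] = -np.inf               # exclude the anchor itself
    topk = np.argpartition(-sims, k)[:k]     # indices of the k most similar
    return int(rng.choice(topk))
```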
Pair Construction.Once we obtain the conceptual and hand-crafted positive samples for each modality, we proceed to create 9 distinct audio-visual pairs by pairing \(\mathbf{V}=\{\mathbf{v},\mathbf{v}^{aug},\mathbf{v}^{conc}\}\) and \(\mathbf{A}=\{\mathbf{a},\mathbf{a}^{aug},\mathbf{a}^{conc}\}\). This is done to ensure semantic alignment and consistency between them through contrastive learning. The negative pairs are randomly paired from the remaining samples in a training set. It is worth noting that some of these pairs are a combination of hand-crafted and conceptually similar samples, which further enhances the feature alignment of our model during training.
### Training
Our loss formulation incorporates both localization and instance-level similarity functions with multiple positive pairs constructed by augmentation and conceptually similar sample search. The final loss term is defined as follows:
\[\begin{split}\mathcal{L}_{i}=-\sum_{\mathbf{v}_{i}\in\mathbf{V} }\sum_{\mathbf{a}_{i}\in\mathbf{A}}\Bigg{[}\mathrm{log}\frac{\exp(s_{L}( \mathbf{v}_{i},\mathbf{a}_{i})/\tau)}{\sum_{j}\exp(s_{L}(\mathbf{v}_{i}, \mathbf{a}_{j})/\tau)}\\ +\mathrm{log}\frac{\exp(s_{A}(\mathbf{v}_{i},\mathbf{a}_{i})/ \tau)}{\sum_{j}\exp(s_{A}(\mathbf{v}_{i},\mathbf{a}_{j})/\tau)}\Bigg{]}\end{split} \tag{5}\]
where \(\mathbf{V}\) and \(\mathbf{A}\) indicate positive sample sets.
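A compact sketch of Eq. (5): every pairing of the visual positive set \(\{\mathbf{v},\mathbf{v}^{aug},\mathbf{v}^{conc}\}\) with the audio positive set \(\{\mathbf{a},\mathbf{a}^{aug},\mathbf{a}^{conc}\}\) is contrasted under both similarities. `sim_matrix_L` and `sim_matrix_A` are assumed helpers that return a (B, B) logit matrix where row \(i\) pairs visual sample \(i\) with every audio sample \(j\) in the batch.

```python
# Double contrastive loss over the 3 x 3 = 9 positive pairings of Eq. (5).
import torch
import torch.nn.functional as F

def multi_positive_loss(V, A, sim_matrix_L, sim_matrix_A, tau=0.07):
    """V, A: lists of 3 batched feature tensors per modality."""
    B = V[0].size(0)
    labels = torch.arange(B, device=V[0].device)
    loss = 0.0
    for v in V:
        for a in A:
            loss = loss + F.cross_entropy(sim_matrix_L(v, a) / tau, labels)
            loss = loss + F.cross_entropy(sim_matrix_A(v, a) / tau, labels)
    return loss / (len(V) * len(A))
```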
## 4 Experiments
Our proposed method for sound source localization is validated through experiments conducted on VGGSound [9] and SoundNet-Flickr [5]. First, we conduct a quantitative analysis to evaluate the accuracy of the localization, cross-modal retrieval, and the impact of various components of our model. Then, we visualize our sound source localization results across different categories of sounds.
### Experiment Setup
Datasets.Our method is trained using VGGSound [9] and SoundNet-Flickr-144K [50, 51]. VGGSound is an audio-visual dataset containing \(\sim\)200K videos. The SoundNet-Flickr-144K set is a subset of SoundNet-Flickr [5]. After training, we test sound localization performance on the VGG-SS [8] and SoundNet-Flickr-Test [50] datasets for the main experiments. These evaluation sets have bounding box annotations of sound sources for \(\sim\)5K and 250 samples, respectively. Moreover, we employ the AVSBench [66] and Extended VGGSound/SoundNet-Flickr [38] datasets for additional evaluations. The AVSBench dataset provides binary segmentation maps that show the audio-visually correspondent pixels for roughly 5k five-second videos belonging to 23 categories. Lastly, the Extended VGGSound/SoundNet-Flickr dataset, proposed by [38], is used to assess non-visible sound sources.
**Implementation details.** We use two ResNet18 models for audio and vision encoding. Unlike prior approaches, we do not initialize or fine-tune the visual encoder from ImageNet-pretrained weights. Instead, we train both the audio and vision encoders from scratch. We preprocess images and audio following previous works [52, 8]. To create multiple pairs, we utilize both NN search and generic augmentation approaches. For NN search, we experiment with two different setups to retrieve the \(k\) conceptually similar samples: (1) for supervisedly pretrained encoder experiments, we employ ResNet and VGGSound models pretrained on ImageNet and VGGSound, respectively; (2) for self-supervisedly pretrained encoder experiments, we utilize the CLIP [48] vision encoder and the Wav2CLIP [59] audio encoder. We use \(k\)=1000 for the experiments. For image augmentations, we follow the augmentations used in SimCLR [10]. For audio, we apply random time-window shifts along the time axis. The model is trained for 50 epochs with the Adam optimizer and a learning rate of 0.0001. \(\tau\) is set to 0.07 in contrastive learning.
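A minimal sketch of the audio augmentation described above, implemented as a random circular shift along the time axis; the paper does not specify the exact shift range or whether the shift wraps around, so both are assumptions here.

```python
# Random time-window shift augmentation for a batch of waveforms.
import torch

def random_time_shift(wav, max_shift=16000):
    """wav: (B, T) waveform batch; shift all clips by up to ±max_shift samples."""
    shift = int(torch.randint(-max_shift, max_shift + 1, (1,)))
    return torch.roll(wav, shifts=shift, dims=-1)  # circular shift in time
```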
### Quantitative Results
**Comparison with strong baselines.** In this section, we conduct a comparative analysis of our sound source localization method against existing approaches. We carry out our evaluations in two settings, following previous approaches. Firstly, we train our model on VGGSound-144K and evaluate it on VGG-SS and SoundNet-Flickr test sets. Secondly, we train our model on SoundNet-Flickr-144K and evaluate it on the SoundNet-Flickr test set. It is important to note that all the compared models are trained using the same amount of data. AVEL [57], AVObject [2], and LCBM [53] models rely on video input, and as such, they cannot be evaluated on the SoundNet-Flickr dataset, which contains static image and audio pairs. We present our results in Table 1 and Table 2.
Our proposed model achieves higher performance compared to prior approaches on both test sets. Specifically, it yields a +2.15\(\%\) cIoU and +0.6\(\%\) AUC improvement on VGG-SS, as well as a +3.7\(\%\) cIoU improvement on SoundNet-Flickr, compared to the state-of-the-art methods that use a pretrained vision encoder. It is worth highlighting that, unlike the majority of previous works, our proposed model does not utilize a vision encoder pretrained on ImageNet in the sound source localization backbone. This is because, as discussed in Mo _et al_. [38], using supervisedly pretrained vision encoders makes the sound source localization problem a weakly supervised one. However, it is worth noting that even without using a pretrained vision encoder, our method achieves state-of-the-art performance in both experiments presented in Table 1 and Table 2. We demonstrate the performance of our model with encoders learned through supervised learning (NN Search w/ Supervised Pre. Encoders) and with encoders pretrained through self-supervised learning (NN Search w/ Self-Supervised Pre. Encoders) in the NN Search module.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Method** & **Pre. Vision** & **cIoU**\(\uparrow\) & **AUC**\(\uparrow\) & **cIoU**\(\uparrow\) & **AUC**\(\uparrow\) \\ \hline Attention [50]\({}_{\text{CVPR18}}\) & ✓ & 18.50 & 30.20 & 66.00 & 55.80 \\ Concrete [47]\({}_{\text{ICCV20}}\) & ✓ & 29.10 & 34.80 & - & - \\ LCBM [53]\({}_{\text{CVPR21}}\) & ✓ & 32.20 & 36.60 & - & - \\ LVS [5]\({}_{\text{CVPR21}}\) & ✗ & 30.30 & 36.40 & 72.40 & 57.80 \\ LVS [5]\({}_{\text{CVPR21}}\) & ✗ & 34.40 & 38.20 & 71.90 & 58.20 \\ HardPos [52]\({}_{\text{ICASSP22}}\) & ✗ & 34.60 & 38.00 & 76.80 & 59.20 \\ SSPL (w/o PCM) [54]\({}_{\text{CVPR22}}\) & ✓ & 27.00 & 34.70 & 73.90 & 60.20 \\ SSPL (w/ PCM) [54]\({}_{\text{CVPR22}}\) & ✓ & 33.90 & 38.00 & 76.70 & 60.50 \\ EZ-VSL (w/o OGL) [39]\({}_{\text{ECCV22}}\) & ✓ & 35.96 & 38.20 & 78.31 & 61.74 \\ SSL-TIE [36]\({}_{\text{ACMMM22}}\) & ✓ & 38.63 & 39.75 & 96.50 & 61.20 \\ SLAVC (w/o OGL) [38]\({}_{\text{NeurIPS22}}\) & ✓ & 37.79 & 39.40 & **83.60** & - \\ \hline
**Ours** & & & & & \\ \(\backslash\) NN Search w/ Supervised Pre. Encoders & ✗ & **39.94** & **40.02** & **29.60** & **63.44** \\ \(\backslash\) NN Search w/ Self-Supervised Pre. Encoders & ✗ & 39.20 & 39.20 & 79.20 & 63.00 \\ \hline _with OGL:_ & & & & & \\ EZ-VSL (w/ OGL) [39]\({}_{\text{ECCV22}}\) & ✓ & 38.85 & 39.54 & 83.94 & 63.60 \\ SLAVC (w/ OGL) [38]\({}_{\text{NeurIPS22}}\) & ✓ & 39.80 & - & **86.00** & - \\ \hline
**Ours** & & & & & \\ \(\backslash\) NN Search w/ Supervised Pre. Encoders & ✗ & **42.64** & **41.48** & 82.40 & **64.40** \\ \(\backslash\) NN Search w/ Self-Supervised Pre. Encoders & ✗ & 42.47 & 41.42 & 82.80 & 64.48 \\ \hline _with Object Flow:_ & & & & & \\ HearTheFlow [20]\({}_{\text{WACV23}}\) & ✓ & 39.40 & 40.00 & 84.80 & 64.00 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Quantitative results on the VGG-SS and SoundNet-Flickr test sets.** All models are trained with 144K samples from VGG-Sound and tested on VGG-SS and SoundNet-Flickr. \(\dagger\) is the result of the model released on the official project page. SLAVC [38] does not provide AUC scores.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Method** & **Pre. Vision** & **cIoU**\(\uparrow\) & **AUC**\(\uparrow\) & **cIoU**\(\uparrow\) & **AUC**\(\uparrow\) \\ \hline Attention [50]\({}_{\text{CVPR18}}\) & ✓ & 18.50 & 30.20 & 66.00 & 55.80 \\ Concrete [47]\({}_{\text{ICCV20}}\) & ✓ & 29.10 & 34.80 & - & - \\ LCBM [53]\({}_{\text{CVPR21}}\) & ✓ & 32.20 & 36.60 & - & - \\ LVS [5]\({}_{\text{CVPR21}}\) & ✗ & 30.30 & 36.40 & 72.40 & 57.80 \\ LVS [5]\({}_{\text{CVPR21}}\) & ✗ & 34.40 & 38.20 & 71.90 & 58.20 \\ HardPos [52]\({}_{\text{ICASSP22}}\) & ✗ & 34.60 & 38.70 & 76.80 & 59.20 \\ SSPL (w/o PCM) [54]\({}_{\text{CVPR22}}\) & ✓ & 27.00 & 34.80 & 73.90 & 60.20 \\ SSPL (w/ PCM) [54]\({}_{\text{CVPR22}}\) & ✓ & 33.90 & 38.00 & 76.70 & 60.50 \\ EZ-VSL (w/o OGL) [39]\({}_{\text{ECCV22}}\) & ✓ & 35.96 & 38.20 & 78.31 & 61.74 \\ SSL-TIE [36]\({}_{\text{ACMMM22}}\) & ✓ & 38.63 & 39.75 & 96.50 & 61.20 \\ SLAVC (w/o OGL) [38]\({}_{\text{NeurIPS22}}\) & ✓ & 37.79 & 39.40 & **83.60** & - \\ \hline
**Ours** & & & & & \\ \(\backslash\) NN Search w/ Supervised Pre. Encoders & ✗ & **39.94** & **40.02** & **29.60** & **63.44** \\ \(\backslash\) NN Search w/ Self-Supervised Pre. Encoders & ✗ & 39.20 & 39.20 & 79.20 & 63.00 \\ \hline _with Object Flow:_ & & & & & \\ HearTheFlow [20]\({}_{\text{WACV23}}\) & ✓ & 38.65 & 39.50 & \multicolumn{1}{c}{} & \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Quantitative results on the SoundNet-Flickr test set.** All models are trained and tested on the SoundNet-Flickr 144K dataset. \(\dagger\) is the result of the model from the official project page. SLAVC [38] does not provide results with SoundNet-Flickr 144K.
As the results indicate, using self-supervised pretrained encoders in the NN Search module performs on par with supervised pretrained encoders. This shows that our model does not depend on supervised pretrained encoders for the NN search module and can utilize any type of pretrained encoder features for the nearest neighbor search. Note that these pretrained encoders are not used in the backbone networks of the sound source localization module but only in the NN Search module, as illustrated in Figure 2.
We also discuss the methods employed by previous studies, such as SSPL [54], which utilizes a sub-module called PCM to reduce the impact of background noise, HTF [20], which utilizes optical flow, and EZ-VSL [39], which refines its initial audio-visual localization outcomes through object guidance obtained from an ImageNet-pretrained visual encoder. Our model and its variants, on the other hand, do not require any task-specific modules or operations to achieve state-of-the-art (SOTA) results. This suggests that using additional semantic and multi-view correspondence, as well as feature alignment, provides more varied and robust supervision for better-aligned audio and visual features than task-specific approaches.
The quantitative results presented in Table 1 and Table 2 also showcase the performance of previous methods that utilize object guidance to evaluate their final sound source localizations. Our model outperforms all previous methods that employ object guidance on the VGG-SS test set and achieves comparable results on the SoundNet-Flickr test set, even though our model _does not use object guided refinement (OGL)_. Additionally, we acknowledge that the addition of OGL to our audio-visual localization results in improvement on the VGGSS test set, while degrading performance on the SoundNet-Flickr test set. In contrast, prior methods see modest improvements when utilizing OGL. This can be explained by the fact that our model is already accurately localizing the sounding objects, and object guidance can interfere with localization results by introducing visual regions that are not sounding (refer to Section 4.4 for visual results). Unlike prior methods, we do not use OGL in our architecture for the remainder of this paper, unless it is being directly compared with OGL-based methods.
Finally, in comparison to HearTheFlow, which utilizes an additional Optical Flow modality, our method outperforms it on the VGGSS test set, and achieves slightly lower performance on the SoundNet-Flickr test set without utilizing any additional modalities, but instead relying on better audio-visual correspondence and alignment.
Prior works draw different conclusions from these open set experiments. While some conclude that their models have strong generalization ability because their performance on unheard categories is higher than on heard categories [39, 38, 46], other works that cannot reproduce the same trend argue that this is expected since their models are dealing with unseen categories [36]. However, our results show that these conclusions are highly dependent on the chosen train/test splits. Our model performs better than existing works on both splits, but there is no uniform trend between the two splits. While our method performs better on unheard categories in the splits of [8, 39, 38, 46], it performs worse on unheard categories in the split of [36]. Therefore, we conclude that the observed trends are highly dependent on the randomly selected train/test splits.
**AVSBench [66].** To demonstrate the precise sound localization ability of our model, we conduct experiments on the AVSBench S4 dataset. The dataset's objective is to detect audio-visual correspondence and correlation at the pixel level. To make a fair comparison, we use some of the self-supervised sound source localization methods mentioned earlier. All models are trained on VGGSound-144K and directly assessed on the AVSBench S4 dataset without any further fine-tuning (zero-shot setting). Our results, which are presented in Table 5, indicate that our method achieves the highest performance, as in the previous experiments.
**Retrieval.** We evaluate sound localization models on the VGG-SS dataset for cross-modal retrieval. As shown in Table 6, our method clearly outperforms other state-of-the-art methods. One interesting observation is that EZ-VSL [39] performs notably better than SLAVC [38] on cross-modal retrieval, while SLAVC performs better on sound source localization in Table 1. This shows that, with the current benchmark evaluations, better sound localization performance does not guarantee better audio-visual semantic understanding; hence, we additionally need to evaluate sound source localization methods on cross-modal understanding tasks. Another observation is that the performance gap between our method and the strongest competitor, SSL-TIE [36], is notably larger on cross-modal retrieval than on sound source localization. This is due to the strong cross-modal feature alignment of our method, which is overlooked in the sound source localization benchmarks.
**Extended Flickr and VGG-SS datasets.** The prior study [38] points out that the current sound source localization benchmarks overlook false positive detection. This is because the evaluation samples always contain at least one sounding object in a scene and thus cannot capture false positive outputs, e.g., silent objects or off-screen sounds. To analyze false positive detection, Mo and Morgado [38] extended the benchmarks with non-audible, non-visible, and mismatched audio-visual samples. The expectation is that a sound source localization model should not localize any objects when audio-visual semantics do not match.
The experiment with the extended datasets in Table 7 shows that our method performs favorably against state-of-the-art competitors. Our method performs better than the competing methods in false positive detection measured by \(\mathbf{AP}\) and \(\mathbf{max}\)-\(\mathbf{F1}\), while SLAVC [38] achieves better localization performance on Extended Flickr-SoundNet. As both false positive detection and cross-modal retrieval require cross-modal interaction, our method shows strong performance on both tasks.
### Ablation Results
We conduct a series of experiments in order to verify our design choices and make further analysis. To save computational time and resources, we primarily perform ablation studies by training our model on VGGSound-144K with NN Search w/ Supervised Pre. Encoders setup and evaluating it on VGG-SS. Results are in Table 8.
**Impact of Semantic and Multi-View Invariance.** In order to understand the impact of each type of invariance (consistency), we analyze the performance of our model with different types of invariance in Table 8. As the results of (C _vs._ E) and (D _vs._ F) reveal, using semantically similar samples (semantic invariance) produces better performance (+0.45\(\%\) and +0.5\(\%\) cIoU, respectively) compared to augmented multi-view invariance. Moreover, as the results of (A _vs._ C) and (A _vs._ E) depict, the combination of these two different types of invariance is complementary and further enhances the model's performance.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline \multicolumn{1}{c}{} & \multicolumn{4}{c}{**Extended Flickr-SoundNet**} & \multicolumn{4}{c}{**Extended VGG-SS**} \\ \cline{3-8}
**Method** & **Pre. Vision** & \(\mathbf{AP}\) & \(\mathbf{max}\)-\(\mathbf{F1}\) & \(\mathbf{LocAcc}\) & \(\mathbf{AP}\) & \(\mathbf{max}\)-\(\mathbf{F1}\) & \(\mathbf{LocAcc}\) \\ \hline \hline Cross-modal [14] & \(\boldsymbol{\chi}\) & 0.00 & 83.50 & 47.20 & 0.00 & 19.00 & 21.93 \\ \hline \(\mathbf{S}\)[39] & \(\boldsymbol{\chi}\) & 0.00 & 17.00 & 19.60 & 8.15 & 6.90 & 10.43 \\ Attention[38] & \(\boldsymbol{\chi}\)[38] & \(\boldsymbol{\chi}\) & 15.98 & 24.50 & 54.16 & 6.50 & 13.30 & 14.04 \\ Text [39] & \(\boldsymbol{\chi}\) & 25.56 & 44.00 & 52.80 & 11.53 & 25.30 & 22.63 \\ DSDS[39] & \(\boldsymbol{\chi}\) & 38.22 & 40.40 & 72.91 & 16.58 & 25.60 & 26.27 \\ DSDS[39] & \(\boldsymbol{\chi}\) & 40.20 & 57.70 & 27.78 & 17.85 & 39.00 & 36.58 \\ DSDS[39] & \(\boldsymbol{\chi}\) & 46.30 & 54.60 & 64.00 & 24.55 & 30.90 & 31.58 \\ DSDS[39] & \(\boldsymbol{\chi}\) & 51.30 & 51.30 & 51.80 & 52.98 & 69.00 & 37.79 \\ \hline
**Ours** & & & & & & & \\ \(\backslash\) NN Search w/ Supervised Pre. Encoders & ✗ & **46.40** & **66.90** & **72.60** & **34.73** & **40.70** & **30.94** \\ \(\backslash\) NN Search w/ Self-Supervised Pre. Encoders & ✗ & **42.72** & **60.10** & **79.20** & **31.02** & **40.01** & **79.20** \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Quantitative results on the Extended VGG-SS and Extended SoundNet-Flickr sets**. All models are trained with 144K samples from VGG-Sound. The results of the prior approaches are obtained from [38].
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline & **Semantic** & **Multi-View** & **Feature Alignment** & **cIoU** & \(\mathbf{AUC}\) \\ \hline \hline (A) & ✓ & ✓ & ✓ & **39.94** & **40.02** \\ (B) & ✓ & ✓ & ✗ & 39.10 & 39.44 \\ (C) & ✓ & ✗ & ✓ & 38.75 & 39.34 \\ (D) & ✓ & ✗ & ✗ & 38.24 & 38.90 \\ (E) & ✗ & ✓ & ✓ & 38.30 & 39.38 \\ (F) & ✗ & ✓ & ✗ & 37.72 & 39.19 \\ (G) & ✗ & ✗ & ✓ & 34.93 & 37.94 \\ (H) & ✗ & ✗ & ✗ & 34.22 & 37.67 \\ \hline \hline \end{tabular}
\end{table}
Table 8: **Ablation studies on our proposed method to see the impact of each main component.**
Combining these two types of consistency provides additional supervision, invariance, and alignment, leading to a more robust representation space and improved sound localization performance.
**Impact of Feature Alignment.** We perform controlled experiments to verify the effect of the feature alignment strategy; the results are presented in Table 8. Comparing the performance of the proposed model with and without feature alignment (A _vs._ B) highlights the importance of this strategy for boosting performance. Further, examining the results of experiments (C _vs._ D) and (E _vs._ F) reveals that feature alignment provides additional gains irrespective of the consistency types. These findings indicate that global feature-based alignment helps the optimization of audio-visual correspondence.
**Impact of \(k\) in conceptually similar sample selection.** Selecting an appropriate \(k\) value for sampling nearest neighbors is crucial. If the value is set too high, it may result in noisy samples that disrupt the learning phase. Conversely, if the value is set too low, only samples very similar to the anchor will be provided, which limits semantic invariance. Nevertheless, compared to Table 8 (E), we observe a performance gain throughout the range of \(k\) used for the ablation study.
Table 9 shows an ablative evaluation of the effect of \(k\) value used to select neighborhood samples. The results indicate that an optimal choice is \(k\)=1000. This choice of \(k\) can be explained by the fact that it provides a balance between semantic similarity and sufficient diversity.
### Qualitative Results
In this section, we visualize and compare our sound localization results with recent prior works on standard benchmarks, namely VGG-SS and SoundNet-Flickr. The visualized samples in Figure 3 show that the localized regions of the proposed method are more compact and align more accurately with the sounding objects than those of the other methods. For instance, the small musical instrument in the top-right column is localized more accurately than by the recent methods.
We also compare our localization results with and without object-guided localization (OGL). As shown in Figure 4, OGL deteriorates our sound localization outputs. OGL captures objectness in a scene, thereby tending to attend to any distinctive object regardless of whether it is the sound source. Therefore, OGL can be helpful when localization fails completely because of the objectness bias in the benchmarks, but it is harmful when the localization is accurate, which is the case for the examples shown. This result is consistent with the quantitative result in Table 2, showing that our method with OGL performs worse.
Figure 4: **OGL degrades our sound localization results on SoundNet-Flickr.**
Figure 3: **Sound Localization Results on VGG-SS (top) and SoundNet-Flickr (bottom).**
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline \(k\) **in _k_-NN** & **10** & **30** & **100** & **500** & **1000** \\ \hline \hline cIoU \(\uparrow\) & 38.80 & 38.82 & 39.46 & 39.90 & **39.94** \\ AUC \(\uparrow\) & 39.51 & 39.67 & 39.93 & 40.00 & **40.02** \\ \hline \hline \end{tabular}
\end{table}
Table 9: **Varying k in conceptually similar sample selection.**
cross-modal semantic understanding. We demonstrate the cross-modal interactivity of our method in Figure 5. Genuine sound source localization should be able to localize objects that are correlated with the sound. To visualize cross-modal interaction, we synthetically pair the same image with different sounds of objects that are visible in the scene. The examples demonstrate that the proposed method can localize different objects depending on the context of the sounds, while the competing method cannot.
## 5 Conclusion
In this work, we investigate cross-modal semantic understanding, which has been overlooked in sound source localization studies. We observe that higher sound source localization performance on the current benchmarks does not necessarily translate into higher performance in cross-modal retrieval, despite their causal relevance in reality. To enforce a strong understanding of audio-visual semantic matching while maintaining localization capability, we propose semantic alignment with multi-views of audio-visual pairs in a simple yet effective way. The ablation study shows that strong semantic alignment is achieved when both the semantic alignment loss and enriched positive pairs are used. We extensively evaluate our method on sound source localization benchmarks, including cross-dataset and open-set settings. Moreover, our analyses on cross-modal retrieval and false positive detection verify that the proposed method has a strong capability in cross-modal interaction. Our study suggests that sound localization methods should be evaluated not only on localization benchmarks but also on cross-modal understanding tasks.
## 6 Acknowledgment
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00212845, Multimodal Speech Processing for Human-Computer Interaction). H. Pfister and J. Kim were partially supported by NIH grant R01HD104969. T.-H. Oh was partially supported by IITP grant funded by the Korea government (MSIT) (No. 2021-0-02068, Artificial Intelligence Innovation Hub; No. 2022-0-00290, Visual Intelligence for Space-Time Understanding and Generation based on Multi-layered Visual Common Sense).
|
2309.09275 | Breakdown in vehicular traffic: driver over-acceleration, not
over-reaction | Contrary to a wide-accepted assumption about the decisive role of driver
over-reaction for breakdown in vehicular traffic, we have shown that the cause
of the breakdown is driver over-acceleration, not driver over-reaction. To
reach this goal, we have introduced a mathematical approach for the description
of driver over-acceleration in a microscopic traffic flow model. The model, in
which no driver over-reaction occurs, explains all observed empirical
nucleation features of traffic breakdown. | Boris S. Kerner | 2023-09-17T13:45:07Z | http://arxiv.org/abs/2309.09275v1 | # Breakdown in vehicular traffic: driver over-acceleration, not over-reaction
###### Abstract
Contrary to a wide-accepted assumption about the decisive role of driver over-reaction for breakdown in vehicular traffic, we have shown that the cause of the breakdown is driver over-acceleration, not over-reaction. To reach this goal, we have introduced a mathematical approach for the description of driver over-acceleration in a microscopic traffic flow model. The model, in which no driver over-reaction occurs, explains all observed empirical nucleation features of traffic breakdown.
pacs: 89.40.-a, 47.54.-r, 64.60.Cn, 05.65.+b

Traffic breakdown is a transition from free flow to congested vehicular traffic occurring mostly at bottlenecks. In 1958-1961, Herman, Gazis, Montroll, Potts, Rothery, and Chandler [1] as well as Kometani and Sasaki [2] assumed that the cause of the breakdown is driver _over-reaction_ to the deceleration of the preceding vehicle: Due to a delayed deceleration of the vehicle resulting from the driver reaction time, the speed becomes less than the speed of the preceding vehicle. If this over-reaction is realized for all following drivers, then traffic instability occurs [1; 2; 3]. The instability leads to the formation of a wide moving jam (J) in free flow (F), called an F\(\rightarrow\)J transition [4]. This traffic instability is currently a theoretical basis of standard traffic theory (e.g., [3; 5; 6]).
However, rather than the F\(\rightarrow\)J transition, real field data show that traffic breakdown is a phase transition from free flow to synchronized flow (S) (F\(\rightarrow\)S transition) [7; 8]; the empirical traffic breakdown exhibits a nucleation nature (Fig. 1(a)) [15]. To explain the empirical nucleation nature of the F\(\rightarrow\)S transition, three-phase traffic theory was introduced [7], in which there are three phases: free flow (F), synchronized flow (S), and wide moving jam (J), where the phases S and J belong to congested traffic.
Driver over-reaction, which is supposed to explain traffic breakdown, can occur _only_ if space gaps between vehicles are small enough [1; 2; 3; 4; 5; 6]. At large enough gaps, rather than over-reaction, the vehicle speed does _not_ become less than the speed of the decelerating preceding vehicle, i.e., usual _speed adaptation_ to the speed of the preceding vehicle occurs, which causes _no_ instability.
* Contrary to standard theory [1; 2; 3; 4; 5; 6], it is assumed in three-phase traffic theory [7] that traffic breakdown is realized at larger gaps between vehicles when no driver over-reaction can still occur.
In three-phase traffic theory, the empirical nucleation nature of the F\(\rightarrow\)S transition is explained through a hypothesis about a discontinuity in the probability of vehicle acceleration when free flow transforms into synchronized flow (Fig. 1(b)) [10]: In free flow, vehicles can accelerate from car-following at a lower speed to a higher speed with a larger probability than in synchronized flow. Vehicle acceleration whose probability exhibits this discontinuity when free flow transforms into synchronized flow is called _over-acceleration_, to distinguish it from "usual" driver acceleration, which does not show a discontinuous character. The discontinuous character of over-acceleration is explained as follows: Due to smaller space gaps in synchronized flow, vehicles prevent each other from accelerating out of a local speed decrease; contrarily, due to larger space gaps in free flow at the same flow rate, vehicles can easily accelerate out of the local speed decrease. The discontinuous character of over-acceleration can lead to an S\(\rightarrow\)F instability in synchronized flow [7]. Contrary to the classical traffic instability, which is a growing wave of a local _decrease_ in the vehicle speed [1; 2; 3; 4; 5; 6], the S\(\rightarrow\)F instability is a growing wave of a local _increase_ in the speed [7]. Microscopic three-phase models [11] that simulate the nucleation nature of traffic breakdown (Fig. 1(a)) also show the classical traffic instability leading to wide moving jam emergence. In these complex traffic models [11], both driver over-acceleration and driver over-reaction are important. Thus, up to now there has been no mathematical proof that the cause of the nucleation nature of traffic breakdown is solely over-acceleration, without the influence of driver over-reaction.
In the paper, we introduce a mathematical approach for over-acceleration \(a_{\rm OA}\):
\[a_{\rm OA}=\alpha\Theta(v-v_{\rm syn}) \tag{1}\]
that satisfies the hypothesis about the discontinuous character of over-acceleration (Fig. 1(b)). In (1), \(v\) is the vehicle speed, where \(0\leq v\leq v_{\rm free}\), \(v_{\rm free}\) is a maximum speed; \(\alpha\) is a maximum over-acceleration; \(\Theta(z)=0\) at \(z<0\) and \(\Theta(z)=1\) at \(z\geq 0\); \(v_{\rm syn}\) is a given synchronized flow speed (\(v_{\rm syn}<v_{\rm free}\)).
Based on (1), we develop a microscopic traffic flow model, in which vehicle acceleration/deceleration \(a\) in a road lane is described by a system of equations:
\[a = K_{\Delta v}\Delta v+a_{\rm OA}\ {\rm at}\ g_{\rm safe}\leq g\leq G, \tag{2}\]
\[a = a_{\rm max}\ {\rm at}\ g>G, \tag{3}\]
\[a = a_{\rm safety}(g,v,v_{\ell})\ {\rm at}\ g<g_{\rm safe}, \tag{4}\]
where \(g\) is a space gap to the preceding vehicle, \(\Delta v=v_{\ell}-v\), \(v_{\ell}\) is the preceding vehicle speed, \(K_{\Delta v}\) is a positive coefficient, \(a_{\rm max}\) is a maximum acceleration, \(G\) is a synchronization space-gap, \(G=v\tau_{\rm G}\), \(\tau_{\rm G}\) is a synchronization time headway, \(g_{\rm safe}\) is a safe space-gap, \(g_{\rm safe}=v\tau_{\rm safe}\), \(\tau_{\rm safe}\) is a safe time headway, and \(a_{\rm safety}(g,v,v_{\ell})\) is a safety deceleration. The physics of model (2)-(4) is as follows:
(i) In Eq. (2), in addition to over-acceleration (1), there is the function \(K_{\Delta v}\Delta v\) [7; 11] that describes vehicle speed adaptation to the preceding vehicle speed \(v_{\ell}\), occurring independently of the gap \(g\) within the range \(g_{\rm safe}\leq g\leq G\). Thus, a decrease in \(v_{\ell}\) does not lead to a stronger decrease in the speed \(v\): No driver over-reaction occurs.
(ii) Eq. (3) describes acceleration at large gaps \(g>G\).
(iii) Contrary to over-acceleration \(a_{\rm OA}\) (1) applied in Eq. (2), function \(K_{\Delta v}\Delta v\) in Eq. (2) at \(\Delta v>0\) and Eq. (3) describe "usual" acceleration that does not show a discontinuous character.
(iv) Eq. (4) describes safety deceleration that should prevent vehicle collisions at small gaps \(g<g_{\rm safe}\); contrary to Eq. (2), safety deceleration \(a_{\rm safety}(g,v,v_{\ell})\) in Eq. (4) can lead to driver over-reaction. There are many concepts developed in standard models [1; 2; 3; 4; 5; 6] that can be used for safety deceleration \(a_{\rm safety}(g,v,v_{\ell})\). For simulations below, we use one of them described by Helly's function
\[a_{\rm safety}(g,v,v_{\ell})=K_{1}(g-g_{\rm safe})+K_{2}\Delta v, \tag{5}\]
where \(K_{1}\) and \(K_{2}\) are dynamic coefficients [16].
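For concreteness, the following Python sketch implements the acceleration rule of Eqs. (1)-(5) for a platoon on a single lane, with the parameter values quoted in the caption of Fig. 2. The explicit Euler update, the function names, and the convention that the leading vehicle keeps its speed are our simplifications; the paper integrates the equations of motion with a second-order Runge-Kutta method.

```python
import numpy as np

# Parameter values quoted in the caption of Fig. 2, converted to SI units.
TAU_SAFE, TAU_G = 1.0, 3.0           # safe / synchronization time headways [s]
A_MAX, ALPHA = 2.5, 1.0              # maximum acceleration / over-acceleration [m/s^2]
V_SYN, V_FREE = 80 / 3.6, 120 / 3.6  # synchronized-flow and free-flow speeds [m/s]
K_DV, K1, K2 = 0.8, 0.15, 0.95       # speed-adaptation and safety coefficients
D = 7.5                              # vehicle length [m]

def accel(g, v, v_lead):
    """Acceleration of one vehicle according to Eqs. (1)-(5)."""
    dv = v_lead - v
    g_safe, G = v * TAU_SAFE, v * TAU_G
    if g > G:                                # Eq. (3): free acceleration
        return A_MAX
    if g < g_safe:                           # Eqs. (4), (5): safety deceleration
        return K1 * (g - g_safe) + K2 * dv
    a_oa = ALPHA if v >= V_SYN else 0.0      # Eq. (1): discontinuous over-acceleration
    return K_DV * dv + a_oa                  # Eq. (2): speed adaptation

def step(x, v, dt=0.01):
    """One explicit Euler step for a platoon; x[0], v[0] belong to the
    leader, which keeps its speed in this simplified sketch."""
    a = np.zeros_like(v)
    for i in range(1, len(x)):
        a[i] = accel(x[i - 1] - x[i] - D, v[i], v[i - 1])
    v_new = np.clip(v + a * dt, 0.0, V_FREE)
    return x + v * dt, v_new
```

The discontinuity of Eq. (1) is visible in `accel`: a vehicle in the speed-adaptation regime gains the extra term \(\alpha\) only once its speed reaches \(v_{\rm syn}\).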
Obviously, through an appropriate parameter choice in standard models [1; 2; 3; 4; 5; 6], driver over-reaction is not realized even at the smallest possible gap \(g=g_{\rm safe}\) in initial steady states of traffic flow. However, in this case no nucleation of congestion can be simulated with the standard models.
Contrarily, if we choose coefficients \(K_{1}\) and \(K_{2}\) in (5) (Fig. 2) such that even at \(g\leq g_{\rm safe}\) no driver over-reaction
Figure 2: Simulations with model (2)–(5) of the nucleation nature of traffic breakdown (F\(\rightarrow\)S transition) on a single-lane road of length \(L=10\) km with two identical on-ramp bottlenecks B and B-down at road locations \(x=x_{\rm on,B}=6\) km and \(x=x_{\rm on,B-down}=9\) km, respectively: (a) Speed data presented in space and time as in Fig. 1(a). (b, c) Averaged (1-min) speeds at \(x=7\) km within the MSP (b) and at \(x=5.7\) km within the SP induced through MSP propagation at bottleneck B (c). Flow rate on the road at \(x=0\) is \(q_{\rm in}=2250\) vehicles/h. For each of the bottlenecks, whose model is the same as that in [14], there is a merging region of length \(L_{\rm m}=0.3\) km; vehicles merge at a middle location between vehicles on the road at the preceding vehicle speed \(v^{+}\) when \(g>g_{\rm safe}^{\rm(min)}=\lambda_{\rm b}v^{+}+d\) with \(\lambda_{\rm b}=0.3\) s; on-ramp inflow rates are \(q_{\rm on,B-down}=0\) and \(q_{\rm on,B}=685\) vehicles/h; to induce the MSP at bottleneck B-down, an impulse \(q_{\rm on,B-down}=400\) vehicles/h at \(t=20\) min during 2 min has been applied. All vehicles in traffic flow are identical, with the following model parameters: \(\tau_{\rm safe}=1\) s, \(\tau_{\rm G}=3\) s, \(a_{\rm max}=2.5\) m/s\({}^{2}\), \(\alpha=1\) m/s\({}^{2}\), \(v_{\rm syn}=80\) km/h, \(K_{\Delta v}=0.8\) s\({}^{-1}\), \(K_{1}=0.15\) s\({}^{-2}\), \(K_{2}=0.95\) s\({}^{-1}\), \(v_{\rm free}=120\) km/h, \(d=7.5\) m. Under conditions \(0\leq v\leq v_{\rm free}\), vehicle motion is found from the equations \(dv/dt=a\), \(dx/dt=v\), solved with the second-order Runge-Kutta method with time step \(10^{-2}\) s.
Figure 1: Empirical nucleation nature of traffic breakdown (F\(\rightarrow\)S transition) at bottlenecks (a) and hypothesis about the discontinuous character of over-acceleration (b, c) [10]. (a) Speed data presented in space and time with an averaging method, measured with road detectors installed along the road: A moving synchronized flow pattern (MSP) that has emerged at the downstream bottleneck (B-down) while propagating upstream induces an F\(\rightarrow\)S transition (induced traffic breakdown) leading to the emergence of a synchronized flow pattern (SP) at the upstream bottleneck (B); adapted from [7]. (b, c) Qualitative density-dependence of over-acceleration probability per time interval (b) and equivalent presentation of (b) as a discontinuous flow-rate dependence of the mean time delay in over-acceleration (c); F and S are states of free flow and synchronized flow, respectively.
occurs in model (2)-(5), then, nevertheless, this model shows all known empirical nucleation features of traffic breakdown (Fig. 1(a)): An MSP induced at downstream bottleneck B-down propagates upstream. While reaching upstream bottleneck B, the MSP induces F\(\rightarrow\)S transition at the bottleneck (Fig. 2).
Formula (1) for over-acceleration explains induced traffic breakdown as follows. Due to vehicle merging from the on-ramp, condition \(g<g_{\rm safe}\) can be satisfied, resulting in vehicle deceleration: A local speed decrease occurs at bottleneck B (Fig. 2(a)). The minimum speed \(v_{\rm min}^{\rm(dec)}\) within the local speed decrease satisfies the condition \(v_{\rm min}^{\rm(dec)}>v_{\rm syn}\). Therefore, according to (1), vehicles accelerate with over-acceleration \(a_{\rm OA}=\alpha\) out of the local speed decrease; this prevents congestion propagation upstream of bottleneck B. Contrarily, the minimum speed within the MSP satisfies the condition \(v_{\rm min}^{\rm(MSP)}<v_{\rm syn}\) (Fig. 2(b)). Then, according to (1), over-acceleration \(a_{\rm OA}=0\): When the MSP reaches bottleneck B, synchronized flow is induced. The emergent SP remains at bottleneck B because the speed within the SP is less than \(v_{\rm syn}\) in (1) (Fig. 2(c)) and, therefore, over-acceleration \(a_{\rm OA}=0\). These simulations, in which no driver over-reaction can occur under the chosen model parameters, support the statement of this paper:
* Traffic breakdown is caused by over-acceleration, not driver over-reaction.
Formula (1) for over-acceleration explains also the S\(\rightarrow\)F instability. We consider the time-development of a local speed increase in an initial steady synchronized flow state (Fig. 3). The cause of the local speed increase is a short-time acceleration of one of the vehicles (vehicle 1 in Figs. 3(a, b) or vehicle 8 in Figs. 3(c-e)); the vehicle must decelerate later to the speed of the preceding vehicle moving at the initial synchronized flow speed (\(v=70\) km/h, Fig. 3). There are two possibilities: (i) The increase in the speed of following vehicles (vehicles 2-7 in Figs. 3(a, b)) decays over time (Figs. 3 (a, b)); this occurs when the maximum speed of vehicle 2 (\(v_{\rm max}^{\rm(2)}=77.9\) km/h) is less than \(v_{\rm syn}\) in (1) and, therefore, over-acceleration \(a_{\rm OA}=0\). (ii) Contrarily, if vehicle 8 (Figs. 3(c, d)) accelerates only 0.5 s longer than vehicle 1 (Figs. 3(a, b)), the local speed increase initiated by vehicle 8 grows over time (vehicles 9-14 in Figs. 3(c, d)) leading to the S\(\rightarrow\)F instability (Figs. 3(c-e)); this occurs because the maximum speed of vehicle 9 (\(v_{\rm max}^{\rm(9)}=81.9\) km/h) is higher than \(v_{\rm syn}\) in (1) and, therefore, over-acceleration \(a_{\rm OA}=\alpha\) causes the S\(\rightarrow\)F instability.
We have found that in model (2)-(5) under the parameters
Figure 3: Nucleation character of S\(\rightarrow\)F instability simulated on single-lane road (8 km long) without bottlenecks with initial steady synchronized flow state at \(v=70\) km/h and \(g=27.5\) m: (a, b) No S\(\rightarrow\)F instability. (c–e) S\(\rightarrow\)F instability. In (a–d), time-development of speeds (a, c) and trajectories (b, d) of vehicles 1–7 (a, b) and 8–14 (c, d) caused by initial local speed increase of vehicle 1 (a, b) and vehicle 8 (c, d) simulated through vehicle short-time acceleration with \(a=0.5\) m/s\({}^{2}\) during 6.5 s in (a, b) and 7 s in (c, d). (e) Spatiotemporal development of speed during S\(\rightarrow\)F instability shown in (c, d). Other model parameters are the same as those in Fig. 2.
Figure 4: Absence of driver over-reaction in model (2)–(5) under the parameters used in Fig. 2. Simulations made on a single-lane road (8 km long) without bottlenecks with an initial steady state of synchronized flow with \(v=70\) km/h and \(g=g_{\rm safe}=19.5\) m: Time-development of vehicle trajectories (a), speed in space and time (b), and speeds of a sequence of vehicles 15–21 (c) caused by an initial local speed decrease of vehicle \(i\) in (a), simulated through deceleration of vehicle \(i\) with \(a=-0.5\) m/s\({}^{2}\) to the speed \(v=0\); vehicle \(i\) remains stationary for 1 s and then accelerates.
used in Fig. 2, there is no driver over-reaction to the deceleration of the preceding vehicle, even at the smallest possible space gap between vehicles \(g=g_{\rm safe}\) in an initial homogeneous steady state of traffic flow. In Fig. 4, under the condition \(g=g_{\rm safe}\) in an initial synchronized flow, vehicle \(i\) decelerates to a standstill, remains stationary for 1 s, and then accelerates. It turns out that none of the following vehicles decelerates to a standstill. The minimum speed of the following vehicles increases slowly over time (vehicles 15-21 in Fig. 4(c)). Finally, rather than a wide moving jam (J), a new state of synchronized flow with speed \(v\approx 15.5\) km/h results from the deceleration of vehicle \(i\).
Clearly, model parameters in (2)-(5) other than those used above (Figs. 2-4) can be chosen for which driver over-reaction occurs. In this case, simulations of the model show the usual results of three-phase traffic theory [7; 11]: (i) In free flow, the F\(\rightarrow\)S transition (traffic breakdown) occurs, whose features are qualitatively the same as those presented in Figs. 2 and 3. (ii) Contrary to Fig. 4, in synchronized flow with lower speeds the classical traffic instability occurs, leading to the S\(\rightarrow\)J transition. However, a detailed analysis of these results is out of the scope of the paper.
I thank Sergey Klenov for help in simulations and useful suggestions. I thank our partners for their support in the project "LUKAS - Lokales Umfeldmodell fur das Kooperative, Automatisierte Fahren in komplexen Verkehrssituationen" funded by the German Federal Ministry for Economic Affairs and Climate Action.
|
2309.06146 | On Wolfgang Lusky's paper "The Gurarij spaces are unique'' | This note surveys Wolfgang Lusky's proof of uniqueness of the Gurariy spaces
and mentions further developments. | Dirk Werner | 2023-09-12T11:34:55Z | http://arxiv.org/abs/2309.06146v1 | # On Wolfgang Lusky's paper
###### Abstract.
This note surveys Wolfgang Lusky's proof of uniqueness of the Gurariy spaces and mentions further developments.
Key words and phrases: Gurariy space; Banach spaces of almost universal disposition
2020 Mathematics Subject Classification: Primary 46B04; Secondary 46B10, 46B25
This piece has been commissioned by the editors of Archiv der Mathematik on the occasion of the 75th anniversary of the journal
## 1. Introduction
In 1966, V. I. Gurariy [11] defined the notion of a _Banach space of (almost) universal disposition_ by a certain extension property; see Definition 2.1. He proved the existence of (separable) such spaces and investigated some of their properties; henceforth, such spaces were called _Gurariy spaces_ (alternative spellings: Gurarii, Gurarij,...); we shall reserve this name for separable spaces of this kind. While it is not a daunting task to prove that any two Gurariy spaces are almost isometric in the sense that their Banach-Mazur distance is \(1\), it remained open to decide whether they are actually isometric. This was asked for instance by J. Lindenstrauss and his collaborators at various junctures ([20, Problem II.4.13], [17]).
The isometry problem was solved in 1976 by a fresh PhD from the (likewise rather freshly established) University of Paderborn, Wolfgang Lusky, in his first-ever published paper (the title says it all)
[L] The Gurarij spaces are unique. _Arch. Math._ 27, 627-635 (1976).
We shall refer to this paper, which is [23] in the bibliography, simply by [L].
The present note aims at surveying the background, Lusky's proof, and the ramifications of this result along with an outlook.
Interestingly, some 30 years later Gurariy and Lusky cooperated intensively on a rather different topic, the Müntz spaces, which led to their monograph [12].
The notation in this note is standard; \(B_{X}\) stands for the closed unit ball of \(X\) and \(\operatorname{ex}B_{X}\) for the set of its extreme points. We are considering only real Banach spaces.
## 2. Banach spaces of almost universal disposition
V. I. Gurariy (1935-2005) was a member of the Kharkiv school of Banach spaces led by M. I. Kadets (sometimes spelled Kadec), one of the strongest in Europe, which had its heyday from the late 1950ies till the collapse of the Soviet Union that produced a brain-drain in all fields of science. Gurariy himself emigrated to the United States in the early 1990ies. After 2000, the Kharkiv school was basically reduced to V. Kadets and his students. In 2022 the terror regime in Moscow set out to destroy the University of Kharkiv altogether [31], but, remembering a slogan from many years back: ¡No pasarán!
Here is the key definition of his paper [11].
**Definition 2.1**.: Let \(X\) be a Banach space with the following property.
* For finite-dimensional spaces \(E\) and \(F\), isometries \(T\colon E\to X\) and \(S\colon E\to F\), and for \(\varepsilon>0\), there exists an operator \(\widehat{T}\colon F\to X\) satisfying \(\widehat{T}S=T\) and \[(1+\varepsilon)^{-1}\|y\|\leq\|\widehat{T}y\|\leq(1+\varepsilon)\|y\|\qquad(y \in F)\] ("an \(\varepsilon\)-isometry").
Then \(X\) is called a Banach space of _almost universal disposition_. A separable such space will also be called a _Gurariy space_.
The epithet "almost" in this definition refers to the quantifier "for all \(\varepsilon>0\)"; if \(\varepsilon=0\) is permissible above, then the "almost" will be dropped. However, Gurariy proved in [11, Th. 10] that no separable space of universal disposition exists, but see Subsection 6.3 below.
If in the above definition, \(S\) is the identical inclusion, i.e., \(E\subset F\), then \(\widehat{T}\) is an extension of \(T\), which can likewise be considered as the identical inclusion.
To see that the condition of Definition 2.1 is quite restrictive, let us discuss two examples.
**Example 2.2**.: (a) \(c_{0}\) is not a space of almost universal disposition. Indeed, let \(E=\mathbb{R}\), \(T\colon E\to c_{0}\), \(T(r)=(r,0,0,\dots)\), \(F=\ell_{\infty}^{2}=\mathbb{R}^{2}\) with the max-norm, \(S\colon E\to F\), \(S(r)=(r,r)\). Assume that \(\widehat{T}\) has the properties of Definition 2.1, and let \(\widehat{T}(-1,1)=(x_{1},x_{2},\dots)\). Note that \(\widehat{T}(1,1)=(1,0,0,\dots)\) and therefore
\[\widehat{T}(0,1) =\Big{(}\frac{1+x_{1}}{2},\frac{x_{2}}{2},\dots\Big{)},\] \[\widehat{T}(1,0) =\Big{(}\frac{1-x_{1}}{2},\frac{-x_{2}}{2},\dots\Big{)}.\]
This shows that \(\widehat{T}\) cannot be an \(\varepsilon\)-isometry for small \(\varepsilon\). (If \(x\) is a real number close to \(1\) in modulus, then \(\frac{1\pm x}{2}\) cannot both be close to \(1\).)
(b) \(C[0,1]\) is not a space of almost universal disposition. Indeed, let \(E=\mathbb{R}\), \(T\colon E\to C[0,1]\), \(T(r)=r\mathbb{1}\) (the constant function), \(F=\ell_{2}^{2}=\mathbb{R}^{2}\) with
the \(\ell_{2}\)-norm, \(S\colon E\to F\), \(S(r)=(r,0)\). Assume that \(\widehat{T}\) has the properties of Definition 2.1, and let \(\widehat{T}(0,1)=f\). Note that \(\widehat{T}(1,0)=\mathbb{1}\) and therefore
\[\widehat{T}(1,1)=\frac{\mathbb{1}+f}{2},\]
which must have norm \(\sqrt{2}=\|(1,1)\|_{2}\) up to \(\varepsilon\). Since \((1+\varepsilon)^{-1}\leq\|f\|\leq 1+\varepsilon\), this is impossible for small \(\varepsilon\).
These examples indicate that positive results might not be very easy to come by. By a technical inductive argument, Gurariy shows in [11, Th. 2] the following existence theorem.
**Theorem 2.3**.: _There exists a separable Banach space of almost universal disposition._
As for uniqueness, he proves the following result. To formulate it succinctly, let us recall the _Banach-Mazur distance_ between (isomorphic) Banach spaces
\[d(X,Y)=\inf\{\|\Phi\|\|\Phi^{-1}\|\colon\ \Phi\colon X\to Y\ \text{is an isomorphism}\}\]
and call two Banach spaces _almost isometric_ if their Banach-Mazur distance equals \(1\).
Now for Theorem 5 of [11].
**Theorem 2.4**.: _Any two separable spaces of almost universal disposition are almost isometric._
A quick sketch of the proof can also be found in [20, p. 168].
## 3. The Lazar-Lindenstrauss approach
A key property of the Gurariy spaces (from now on we shall use this terminology) is that they are \(L_{1}\)-preduals. Recall that an \(L_{1}\)_-predual_ (a.k.a. a _Lindenstrauss space_) is a Banach space whose dual is isometrically isomorphic to a space \(L_{1}(\mu)\) of integrable functions on some measure space. This class of spaces is the subject of Lindenstrauss's epoch-making memoir [18].
**Proposition 3.1**.: _Every Gurariy space is an \(L_{1}\)-predual._
In the literature, especially from the previous century, there are only vague indications as to why this is so. Since a recent article [6] admits that this proposition is "not completely evident from the definition" and since it is instrumental for Lusky's proof, I'll sketch a proof. To begin with, we have to recall a characterisation of \(L_{1}\)-preduals from Lindenstrauss's memoir; see [20, Th. 6.1] in conjunction with [20, Lemma 4.2], or [16, SS21].
**Theorem 3.2**.: _A Banach space \(X\) is an \(L_{1}\)-predual if and only if any four open balls \(U(x_{i},r_{i})\) that intersect pairwise have a nonvoid intersection. It is enough to check this for balls of radius \(1\)._
Let us verify that a Gurariy space \(X\) has this property. So suppose \(U(x_{1},1),\ldots,U(x_{4},1)\) are four open balls in \(X\) with radius \(1\) that intersect pairwise, i.e., \(\|x_{i}-x_{j}\|<2\). Choose \(\varepsilon>0\) such that even \(\|x_{i}-x_{j}\|<2-4\varepsilon\). Let \(E\) be the span of \(x_{1},\ldots,x_{4}\). There are some \(N\in\mathbb{N}\) and a linear operator \(S_{1}\colon E\to\ell_{\infty}^{N}\) such that
\[\frac{1}{1+\varepsilon}\|S_{1}x\|_{\infty}\leq\|x\|\leq\|S_{1}x\|_{\infty} \qquad(x\in E).\]
Let us consider the balls \(U_{\ell_{\infty}^{N}}(S_{1}x_{i},1-\varepsilon)\) in \(\ell_{\infty}^{N}\). They intersect pairwise since
\[\|S_{1}x_{i}-S_{1}x_{j}\|_{\infty}\leq(1+\varepsilon)\|x_{i}-x_{j}\|<(1+ \varepsilon)(2-4\varepsilon)<2-2\varepsilon.\]
Being pairwise intersecting balls in \(\ell_{\infty}^{N}\), these balls have a point in common. This means that there exists some \(z\in\ell_{\infty}^{N}\) such that
\[\|z-S_{1}x_{i}\|_{\infty}<1-\varepsilon\qquad(i=1,\ldots,4).\]
Unfortunately, \(S_{1}\) is not an isometry and therefore is not eligible for being used in Definition 2.1. However, we can renorm \(\ell_{\infty}^{N}\) to make it an isometry: note that \(B_{\ell_{\infty}^{N}}\cap S_{1}(E)\subset S_{1}(B_{E})\), and we can renorm \(\ell_{\infty}^{N}\) by letting the new unit ball be the convex hull of \(S_{1}(B_{E})\) and \(B_{\ell_{\infty}^{N}}\). Call this renorming \(F\), and let \(S=S_{1}\) considered as an operator from \(E\) to \(F\); this is an isometry. We have
\[\frac{1}{1+\varepsilon}\|y\|_{\infty}\leq\|y\|_{F}\leq\|y\|_{\infty}\qquad(y \in F)\]
and thus
\[\|z-Sx_{i}\|_{F}\leq\|z-S_{1}x_{i}\|_{\infty}<1-\varepsilon.\]
Since \(X\) is a Gurariy space, there is an \(\varepsilon\)-isometry \(\widehat{T}\colon F\to X\) satisfying \(\widehat{T}Sx=x\) for \(x\in E\). Let \(x_{0}=\widehat{T}z\); then \(x_{0}\in\bigcap U(x_{i},1)\):
\[\|x_{0}-x_{i}\|=\|\widehat{T}z-\widehat{T}Sx_{i}\|\leq(1+\varepsilon)\|z-Sx_{ i}\|_{F}<(1+\varepsilon)(1-\varepsilon)<1.\]
In the more contemporary literature one can find explicit proofs of Proposition 3.1 based on another characterisation of \(L_{1}\)-preduals and a "pushout argument" [9, Th. 2.17], [5, Prop. 6.2.8].
Now let \(X\) be a separable \(L_{1}\)-predual. By the results of Michael and Pelczynski [26] and Lazar and Lindenstrauss [17] there is a chain of finite-dimensional subspaces \(E_{n}\) of \(X\) such that
* \(E_{1}\subset E_{2}\subset\ldots\) ;
* \(\dim E_{n}=n\), and \(E_{n}\) is isometrically isomorphic to \(\ell_{\infty}^{n}\),
* \(\bigcup E_{n}\) is dense in \(X\).
The inclusion \(E_{n}\subset E_{n+1}\) entails some degree of freedom, namely the choice of an isometry \(\psi_{n}\colon\ell_{\infty}^{n}\to\ell_{\infty}^{n+1}\). To study the structure of these \(\psi_{n}\), we need the ad-hoc notion of an admissible basis: if \(\delta_{1},\ldots,\delta_{n}\) denotes the canonical unit vector basis of \(\ell_{\infty}^{n}\) and \(\psi\colon\ell_{\infty}^{n}\to\ell_{\infty}^{n}\) is an isometry, then \(\psi(\delta_{1}),\ldots,\psi(\delta_{n})\) is called an _admissible basis_ for \(\ell_{\infty}^{n}\). Note that \(\psi\) takes a vector \((a_{1},\ldots,a_{n})\) to \((\vartheta_{1}a_{\pi(1)},\ldots,\vartheta_{n}a_{\pi(n)})\) for some permutation \(\pi\) and some signs \(\vartheta_{j}=\pm 1\)
Thus, an admissible basis is just a permutation of the unit vector basis up to signs, and the isometric image of an admissible basis is again an admissible basis.
Let us return to the isometric embedding \(\psi_{n}\colon\ell_{\infty}^{n}\to\ell_{\infty}^{n+1}\), and let \(e_{1,n},\dots\), \(e_{n,n}\) be an admissible basis for \(\ell_{\infty}^{n}\). We can develop the vectors \(f_{j}:=\psi_{n}(e_{j,n})\) into the unit vector basis of \(\ell_{\infty}^{n+1}\). Since \(\psi_{n}\) is an isometry, there is at least one coordinate \(i\) where \(|f_{j}(i)|=1\). Then, if \(k\neq j\), \(f_{k}(i)=0\): pick a sign \(\lambda\) such that
\[|f_{j}(i)+\lambda f_{k}(i)|=|f_{j}(i)|+|f_{k}(i)|=1+|f_{k}(i)|\]
and so
\[1=\|e_{j,n}+\lambda e_{k,n}\|=\|f_{j}+\lambda f_{k}\|\geq|f_{j}(i)+\lambda f_ {k}(i)|=1+|f_{k}(i)|,\]
hence the claim. Since \(\|\psi_{n}\|=1\), we also have
\[\Bigl{|}\sum_{j=1}^{n}f_{j}(i)\Bigr{|}=\Bigl{|}\psi_{n}\Bigl{(}\sum_{j=1}^{n} e_{j,n}\Bigr{)}(i)\Bigr{|}\leq\Bigl{\|}\sum_{j=1}^{n}e_{j,n}\Bigr{\|}=1.\]
Therefore, there is an admissible basis \(e_{1,n+1},\dots,e_{n+1,n+1}\) for \(\ell_{\infty}^{n+1}\) such that for some numbers \(a_{jn}\)
\[\psi_{n}(e_{j,n})=e_{j,n+1}+a_{jn}e_{n+1,n+1}\qquad(j=1,\dots,n)\]
and
\[\sum_{j=1}^{n}|a_{jn}|\leq 1.\]
We can rephrase these representations in terms of the \(E_{n}\) as follows.
**Proposition 3.3**.: _There exist admissible bases in each \(E_{n}\) and real numbers \(a_{jn}\) such that_
\[e_{j,n}=e_{j,n+1}+a_{jn}e_{n+1,n+1}\qquad(j=1,\dots,n;\ n=1,2, \dots)\] \[\sum_{j=1}^{n}|a_{jn}|\leq 1\qquad(n=1,2,\dots).\]
This proposition is due to Lazar and Lindenstrauss [17]. The triangular matrix \((a_{jn})_{j\leq n,n\in\mathbb{N}}\) is called a _representing matrix_ for the given \(L_{1}\)-predual \(X\). Conversely, the choice of admissible bases and of an array \((a_{jn})\) leads to an \(L_{1}\)-predual.
Lazar and Lindenstrauss use this approach to present another proof of the existence of Gurariy spaces. Let \(a_{n}=(a_{1n},\dots,a_{nn},0,0,\dots)\) be the \(n^{\text{th}}\) column of a matrix as in Proposition 3.3; then each \(a_{n}\) is in the unit ball of \(\ell_{1}\).
**Theorem 3.4**.: _If \(\{a_{1},a_{2},\dots\}\) is dense in the unit ball of \(\ell_{1}\), then the corresponding matrix is associated to a Gurariy space._
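As a small illustration (ours, not taken from [17]), the following sketch checks numerically that the embedding of Proposition 3.3 is isometric from \(\ell_{\infty}^{n}\) into \(\ell_{\infty}^{n+1}\) precisely because the column \((a_{1n},\dots,a_{nn})\) lies in the unit ball of \(\ell_{1}\).

```python
import numpy as np

def psi(x, a):
    """Embed l_inf^n into l_inf^{n+1} as in Proposition 3.3:
    x = (x_1, ..., x_n) is mapped to (x_1, ..., x_n, sum_j a_j x_j)."""
    return np.append(x, a @ x)

rng = np.random.default_rng(1)
n = 5
a = rng.normal(size=n)
a /= max(1.0, np.abs(a).sum())   # force the column into the unit ball of l_1
for _ in range(1000):
    x = rng.uniform(-1.0, 1.0, size=n)
    # |sum_j a_j x_j| <= ||x||_inf, so the max norm is preserved.
    assert np.isclose(np.abs(psi(x, a)).max(), np.abs(x).max())
```

If the \(\ell_{1}\)-constraint is violated, the added coordinate can dominate the maximum, and the embedding is no longer isometric.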
It should be noted that the representing matrix \(A\) of an \(L_{1}\)-predual \(X\) is not uniquely determined, and much work has been done to study the relation of \(A\) and \(X\) for certain classes of \(L_{1}\)-preduals; see e.g. Lusky's paper [24].
## 4. Lusky's uniqueness proof
Here is Lusky's uniqueness theorem.
**Theorem 4.1**.: _Any two Gurariy spaces are isometrically isomorphic._
Let us first remark that almost isometric spaces (cf. Theorem 2.4) need not be isometric. The following is a classical counterexample due to Pelczynski from [28]: Let \(X\) and \(Y\) be \(c_{0}\) equipped with the equivalent norms (\(x=(x_{n})\))
\[\|x\|_{X} =\|x\|_{\infty}+\Bigl{(}\sum_{n=1}^{\infty}\frac{|x_{n}|^{2}}{2^{ n}}\Bigr{)}^{1/2},\] \[\|x\|_{Y} =\|x\|_{\infty}+\Bigl{(}\sum_{n=1}^{\infty}\frac{|x_{n+1}|^{2}}{ 2^{n}}\Bigr{)}^{1/2}.\]
The operators \(\Phi_{n}\): \(X\to Y\), \(x\mapsto(x_{n},x_{1},\dots,x_{n-1},x_{n+1},\dots)\) are isomorphisms satisfying \(\lim_{n}\|\Phi_{n}\|\|\Phi_{n}^{-1}\|=1\) so that \(X\) and \(Y\) are almost isometric; but \(X\) is strictly convex while \(Y\) isn't, therefore \(X\) and \(Y\) are not isometric.
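To see concretely why \(Y\) fails to be strictly convex, here is a small numeric check; the witness vectors are ours and exploit the fact that the quadratic part of \(\|\cdot\|_{Y}\) ignores the first coordinate.

```python
import numpy as np

def norm_X(x):
    """||x||_inf + (sum_n |x_n|^2 / 2^n)^(1/2) for finitely supported x."""
    w = 0.5 ** np.arange(1, len(x) + 1)
    return np.abs(x).max() + np.sqrt((w * x**2).sum())

def norm_Y(x):
    """Same, but the quadratic part sums |x_{n+1}|^2 / 2^n and skips x_1."""
    w = 0.5 ** np.arange(1, len(x))
    return np.abs(x).max() + np.sqrt((w * x[1:] ** 2).sum())

u = np.array([1.0, 1.0, 0.0])            # differ only in the first coordinate
v = np.array([0.0, 1.0, 0.0])
m = (u + v) / 2
print(norm_Y(u), norm_Y(v), norm_Y(m))   # all three equal 1 + 1/sqrt(2)
print(norm_X(m) < (norm_X(u) + norm_X(v)) / 2)   # True: no such failure in X
```

Two distinct points of the same \(Y\)-sphere have their midpoint on that sphere, which after normalization contradicts strict convexity; in \(X\) the quadratic part sees every coordinate and the inequality is strict.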
Benyamini [3] has shown that such counterexamples also exist among \(L_{1}\)-preduals.
The proof of Theorem 4.1 consists of a delicate inductive construction of \(\ell_{\infty}^{n}\)-subspaces and admissible bases. The key problem to be solved here is this.
**Problem 4.2**.: Let \(X\) be a Gurariy space and \(E\subset F\) be finite-dimensional spaces with \(E\cong\ell_{\infty}^{n}\) and \(F\cong\ell_{\infty}^{n+1}\). Let \(T\): \(E\to X\) be an isometry. When does there exist an isometric extension \(\widehat{T}\): \(F\to X\)?
Lusky notes that this is not always the case [L, p. 630], and he gives the following useful criterion in terms of admissible bases. W.l.o.g. suppose that \(T\) is the identity. Let \(e_{1},\dots,e_{n}\) and \(f_{1},\dots,f_{n+1}\) be admissible bases for \(E\) resp. \(F\) such that
\[e_{i}=f_{i}+r_{i}f_{n+1},\qquad i=1,\dots,n.\]
**Lemma 4.3**.: _Problem 4.2 has a positive solution if \(\sum_{i=1}^{n}|r_{i}|<1\)._
This criterion is a little hidden in the proof of the Corollary [L, p. 630], where the extreme point condition \(\operatorname{ex}B_{E}\cap\operatorname{ex}B_{F}=\emptyset\) is spelled out to be sufficient; but the heart of the matter is Lemma 4.3.
Now let's take a quick glimpse at the proof of Theorem 4.1. Suppose that \(X\) and \(Y\) are Gurariy spaces coming with \(\ell_{\infty}^{n}\)-approximations \(\bigcup_{n}E_{n}\) and \(\bigcup F_{n}\), respectively. Comparing Proposition 3.3 with Lemma 4.3 one realises
that one has to perturb the given admissible bases so that Lemma 4.3 becomes applicable. The details of this process are quite technical [L, pp. 631-633] and lead to sequences of admissible bases. Ultimately one can pass to the limit and obtain admissible bases \(\{e_{i,n}\colon i\leq n,\,n\geq 1\}\) resp. \(\{f_{i,n}\colon i\leq n,\,n\geq 1\}\) spanning dense subspaces of \(X\) resp. \(Y\), and the operator \(e_{i,n}\mapsto f_{i,n}\) acts as a well-defined isometry.
In an addendum to [L], dated January 10, 1976, Lusky applies his methods to Mazur's rotation problem that asks whether a separable transitive space is isometric to a Hilbert space; a Banach space \(X\) is called _transitive_ if whenever \(\|x\|=\|y\|=1\), there is an isometric automorphism \(T\colon X\to X\) mapping \(x\) to \(y\), i.e., \(Tx=y\). This problem is open to this day, and recent papers on the subject include [4] and [6].
What Lusky proves in his addendum is that the Gurariy space (now that we know it's unique we may use the definite article) is transitive for smooth points. Recall that \(x_{0}\) is a smooth point of the unit ball \(B_{X}\) if \(\|x_{0}\|=1\) and there is exactly one \(x_{0}^{*}\in X^{*}\) such that \(\|x_{0}^{*}\|=x_{0}^{*}(x_{0})=1\); equivalently, the norm function \(x\mapsto\|x\|\) is Gâteaux differentiable at \(x_{0}\). It is a theorem of Mazur that smooth points are dense in the unit sphere of a separable Banach space.
**Theorem 4.4**.: _Let \(x\) and \(y\) be smooth points of the unit ball of the Gurariy space \(G\). Then there is an isometric automorphism \(T\colon G\to G\) mapping \(x\) to \(y\)._
Another result of [L] is a refined version of a theorem originally due to Wojtaszczyk [32] (see also [24]).
**Theorem 4.5**.: _Let \(X\) be a separable \(L_{1}\)-predual and \(G\) be the Gurariy space. Then there exist an isometry \(T\colon X\to G\) and a norm-\(1\) projection \(P\colon G\to G\) onto \(T(X)\); further \((\operatorname{Id}-P)(G)\) is isometrically isomorphic to \(G\)._
This indicates that the Gurariy space is "maximal" among the separable \(L_{1}\)-predual spaces; in particular it contains \(C[0,1]\) and is universal, a fact proved by other means by Gevorkyan in [10].
We close this section by mentioning another proof of Theorem 4.1, due to W. Kubis and S. Solecki [15]. Their proof avoids the Lazar-Lindenstrauss machinery and just depends on the defining properties of a Gurariy space. They also prove the universality of the Gurariy space from first principles, without relying on the universality of \(C[0,1]\). Still another proof is in Kubis's paper [14] in _Archiv der Mathematik_, which builds on a Banach-Mazur type game.
## 5. The Poulsen simplex
This note wouldn't be complete without mentioning the cousin of the Gurariy space in the world of compact convex sets, the _Poulsen simplex_. The traditional definition of a (compact) simplex is a compact convex subset \(S\) of a Hausdorff locally convex space \(E\) such that the cone generated by
\(S\times\{1\}\) in \(E\oplus\mathbb{R}\) is a lattice cone. Thus, a triangle in the plane is a simplex while a rectangle isn't. For our purposes it is important to note that the space \(A(S)\) of affine continuous functions on a compact convex set is an \(L_{1}\)-predual if and only if \(S\) is a simplex.
Poulsen [29] had proved the existence of a metrisable simplex, which now bears his name, whose set of extreme points is dense. It is a result due to Lindenstrauss, Olsen, and Sternfeld [19] that such a simplex is uniquely determined up to affine homeomorphism. They write:
We discovered the uniqueness of the Poulsen simplex after reading Lusky's paper [L] on the uniqueness of the Gurari space. Our proof of the uniqueness uses the same idea which Lusky used in [L].
The role of admissible bases is now played by peaked partitions of unity.
The authors mention a lot of similarities between the Poulsen simplex and the Gurariy space. For example, the counterpart of the defining property of the Poulsen simplex \(S_{P}\) is Lusky's theorem from [L] and [24] that a separable \(L_{1}\)-predual is a Gurariy space \(G\) if and only if \(\operatorname{ex}B_{G^{*}}\) is weak\({}^{*}\) dense in the unit ball \(B_{G^{*}}\). However, \(A(S_{P})\) is not the Gurariy space since for example the transitivity property of Theorem 4.4 fails. But, as shown by Lusky [25], one can salvage this by requiring a slightly more stringent assumption on \(x\) and \(y\), which are now supposed to be positive: in addition, \(1-x\) and \(1-y\) should be smooth points.
## 6. Outlook
### Fraisse theory
The Gurariy space is a very homogeneous object, for example [11, Th. 3]: If \(E\) and \(F\) are finite-dimensional subspaces of the same dimension of a Gurariy space \(G\), then for every \(\varepsilon>0\), every isometric isomorphism from \(E\) to \(F\) extends to an \(\varepsilon\)-isometric automorphism of \(G\). In recent years, such homogeneous structures were investigated by methods of model theory known as Fraisse theory ([8], [2], [13]). Fraisse theory associates a unique limit to certain substructures. This approach is at least implicit in the Kubis-Solecki uniqueness proof, and a detailed exposition involving the Gurariy space, the Poulsen simplex and a whole lot more can be found in M. Lupini's paper [22].
### Noncommutative Gurariy spaces
T. Oikhberg, in his _Archiv der Mathematik_ paper [27], proved the existence and uniqueness of a "noncommutative" Gurariy space, i.e., a Gurariy-like object in the setting of operator spaces à la Effros-Ruan. Again, this can also be viewed from the perspective of Fraisse theory [21].
### Nonseparable spaces
We have already mentioned in Section 2 Gurariy's result that no space of universal disposition can be separable. Since the definition of (almost) universal disposition makes perfect sense beyond the separable case, it was studied in several papers, e.g., [1], [7], [9]. It turns out that there are spaces of almost universal disposition of density character
\(\aleph_{1}\), but the uniqueness breaks down (Th. 3.6 and Th. 3.7 in [9]). Likewise, there are spaces of universal disposition of density \(\aleph_{1}\), and again, uniqueness fails ([1], [7]). Indeed, it should be noted that in these papers also the variant of being of (almost) universal disposition with respect to separable spaces, already considered by Gurariy, is studied: in Definition 2.1 one now allows \(E\) and \(F\) to be separable rather than finite-dimensional.
### Banach lattices
Recently, M. A. Tursi [30] proved the existence of a uniquely determined Gurariy-like Banach lattice. She exploits ideas of Fraisse theory.
|
2310.00454 | SimLVSeg: Simplifying Left Ventricular Segmentation in 2D+Time
Echocardiograms with Self- and Weakly-Supervised Learning | Echocardiography has become an indispensable clinical imaging modality for
general heart health assessment. From calculating biomarkers such as ejection
fraction to the probability of a patient's heart failure, accurate segmentation
of the heart structures allows doctors to assess the heart's condition and
devise treatments with greater precision and accuracy. However, achieving
accurate and reliable left ventricle segmentation is time-consuming and
challenging due to different reasons. Hence, clinicians often rely on
segmenting the left ventricular (LV) in two specific echocardiogram frames to
make a diagnosis. This limited coverage in manual LV segmentation poses a
challenge for developing automatic LV segmentation with high temporal
consistency, as the resulting dataset is typically annotated sparsely. In
response to this challenge, this work introduces SimLVSeg, a novel paradigm
that enables video-based networks for consistent LV segmentation from sparsely
annotated echocardiogram videos. SimLVSeg consists of self-supervised
pre-training with temporal masking, followed by weakly supervised learning
tailored for LV segmentation from sparse annotations. We demonstrate how
SimLVSeg outperforms the state-of-the-art solutions by achieving a 93.32%
(95%CI 93.21-93.43%) dice score on the largest 2D+time echocardiography dataset
(EchoNet-Dynamic) while being more efficient. SimLVSeg is compatible with two
types of video segmentation networks: 2D super image and 3D segmentation. To
show the effectiveness of our approach, we provide extensive ablation studies,
including pre-training settings and various deep learning backbones. We further
conduct an out-of-distribution test to showcase SimLVSeg's generalizability on
unseen distribution (CAMUS dataset). The code is publicly available at
https://github.com/fadamsyah/SimLVSeg. | Fadillah Maani, Asim Ukaye, Nada Saadi, Numan Saeed, Mohammad Yaqub | 2023-09-30T18:13:41Z | http://arxiv.org/abs/2310.00454v3 | UniLVSeg: Unified Left Ventricular Segmentation with Sparsely Annotated Echocardiogram Videos through Self-Supervised Temporal Masking and Weakly Supervised Training
###### Abstract
Echocardiography has become an indispensable clinical imaging modality for general heart health assessment. From calculating biomarkers such as ejection fraction to the probability of a patient's heart failure, accurate segmentation of the heart and its structures allows doctors to plan and execute treatments with greater precision and accuracy. However, achieving accurate and robust left ventricle segmentation is time-consuming and challenging due to different reasons. This work introduces a novel approach for consistent left ventricular (LV) segmentation from sparsely annotated echocardiogram videos. We achieve this through (1) self-supervised learning (SSL) using temporal masking followed by (2) weakly supervised training. We investigate two different segmentation approaches: 3D segmentation and a novel 2D superimage (SI). We demonstrate how our proposed method outperforms the state-of-the-art solutions by achieving a 93.32% (95%CI 93.21-93.43%) dice score on a large-scale dataset (EchoNet-Dynamic) while being more efficient. To show the effectiveness of our approach, we provide extensive ablation studies, including pre-training settings and various deep learning backbones. Additionally, we discuss how our proposed methodology achieves high data utility by incorporating unlabeled frames in the training process. To help support the AI in medicine community, the complete solution with the source code will be made publicly available upon acceptance.
Keywords:Left Ventricle Segmentation Sparse Video Segmentation 3D Segmentation Super Image Self-supervision Temporal Masking
## 1 Introduction
Echocardiograms are a crucial modality in cardiovascular imaging due to their safety, availability, and high temporal resolution [11]. In clinical practice, echocardiogram
information is used to diagnose heart conditions and understand the preoperative risks in patients with cardiovascular diseases [7]. By accurately segmenting the heart structures, especially in the end-diastole (ED) and end-systole (ES) frames, clinicians can assess the extent and location of the disease, determine the appropriate treatment approach, and monitor the patient's response to therapy [10].
The typical manual workflow of segmenting the LV is as follows: 1) a sonographer acquires an echocardiogram video using an ultrasound device and records the patient's heartbeat, 2) finds the ED and ES frames by locating candidate frames indicated by the recorded heartbeat signal and verifying them visually against the recorded echocardiogram video, and 3) draws key points to represent the LV region, as shown in Figure 1. This manual LV segmentation workflow is typically time-consuming and prone to intra- and inter-observer variability. The inherent speckle noise in echocardiograms makes LV segmentation more challenging, as LV boundaries are sometimes unclear. Hence, sonographers need to consider the temporal context to eliminate the ambiguity caused by unclear heart structures and to segment the LV precisely enough for accurate results. This adds further burden, since sonographers must go back and forth between echocardiogram frames to analyze the ambiguous boundaries properly. Automatic LV segmentation can help sonographers solve this arduous task more efficiently.
A wide range of work on medical image segmentation using supervised deep learning has been presented ([23], [14]). Earlier segmentation approaches on echocardiograms propose frame-by-frame (2D) image segmentation solutions ([25], [13], [16], [21], [2]). The image-based approaches, however, do not capitalize on the periodicity and temporal consistency of echocardiograms, which may lead to incoherence in the segmentation results from one frame to the next. This has motivated a recent body of video-based echocardiogram segmentation approaches.
Li et al. [19] use a Conv-LSTM to ensure spatiotemporal consistency between consecutive frames. Ahn et al. [1] use a multi-frame attention network to perform 3D segmentation. Wu et al. [31] demonstrated the effectiveness of semi-supervision using mean-teacher networks and spatiotemporal fusion on segmentation. Recently, Wei et al. [30] propose a two-stage training to enforce temporal consistency on a 3D U-Net by leveraging an echocardiogram ED & ES sequence constraint. Painchaud et al. [22] improve the average segmentation performance by enforcing temporal smoothness as a post-processing step on video segmentation outputs.
These video-based approaches show high temporal consistency and state-of-the-art performance. However, they pose certain limitations. Recurrent units in [19] incur a high computational cost. Multi-frame attention in [1] similarly has a computational cost correlated with the number of frames, and the authors are limited to using five frames. [31] limit the temporal context to three frames to obtain an optimal performance-compute trade-off. [30] leverage a constraint in their training pipeline where the segmented area changes monotonically, as the first input frame is ED and the last frame is ES in the same (_one_) heartbeat cycle, thus limiting the usage of the vast number of unannotated frames in other cycles. On the other hand, image-based networks are computationally cheaper and retain the advantage of being effectively pre-trained on a large corpus of annotated image datasets. Annotated video datasets are, in comparison, more scarce. Fan et al. [5] introduced the idea of super images by flattening videos into image grids and successfully performed video-based tasks such as action recognition using image classifiers. Sobirov et al. [26] employ this approach on medical images for atrial and head-and-neck cancer segmentation problems.
Moreover, publicly available echocardiogram datasets ([21], [17]) typically have only two annotated frames per video, i.e. the end-diastole (ED) and end-systole (ES) frames. In the case of the EchoNet-Dynamic dataset [21], this utilizes less than 1.2% of the available frames when training in a 2D supervised setting. Self-supervised learning (SSL) alleviates this problem. Saeed et al. [24] use contrastive pre-training to provide self-supervision on echocardiograms. Recently, He et al. [8] showed that masked autoencoders (MAE) for self-supervised pre-training enable accelerated training and improve accuracy on natural-image tasks. Feichtenhofer et al. [6] and Tong et al. [28] extend this idea to spatiotemporal masking and show promising results on action recognition.
The aforementioned works perform LV segmentation from echocardiogram videos **either by** 1) analyzing frames independently with simple 2D deep learning models **or** 2) performing 2D+time analysis and developing models using complex training schemes. In our proposed method, while achieving state-of-the-art performance, we aim to mimic clinical assessment where doctors assess multiple frames concurrently in a simplified approach. We introduce a novel self-supervised pre-training approach and a loss calculation method for video-based echocardiogram segmentation, specifically designed to handle sparsely annotated frames in the downstream task. Our key contributions are:
* We propose a self-supervised temporal masking approach that leverages vastly unannotated echocardiogram frames to provide a better network initialization for the downstream LV segmentation task by learning the periodic nature of echocardiograms.
* We propose a loss calculation mechanism that allows a video-based segmentation network to learn LV segmentation from sparsely annotated echocardiogram videos without any heartbeat cycle constraint.
* We show the compatibility of our approach with the 2D super image and 3D segmentation network with various encoder backbones.
* We demonstrate how our proposed approach outperforms the state-of-the-art in LV segmentation on Echonet-Dynamic in terms of performance and efficiency through extensive ablation studies.
## 2 Methodology
Our proposed method is demonstrated in Figure 2. A network utilizes unannotated frames for a pre-training stage and learns from annotated frames in a
weakly-supervised manner. The performance of the proposed method was evaluated with 3D segmentation and 2D super image (SI) segmentation [5] approach, as depicted in Figure 3. The details are described below.
**Self-Supervised Temporal Masking.** In the EchoNet-Dynamic [21] dataset, most of the frames are unannotated, so the ability to perform supervised training is limited. To benefit from the vast amount of unlabeled frames, we implement a self-supervised temporal masking algorithm to pre-train our model. As depicted in Figure 2, a clip of an echocardiogram video is retrieved, and a portion of the frames is masked. The model is then pre-trained to reconstruct the masked clip. Through this process, the model learns valuable latent information from the periodic nature of echocardiograms, e.g. the embedded temporal pattern or cardiac rhythm, that benefits the downstream LV segmentation task.
Figure 1: A sequence of an echocardiogram video [21]. The number of frames varies, yet only two are labeled, i.e. the end-diastole (_left-most_) and the end-systole (_right-most_) frame. Annotators draw key points to represent the left ventricular (LV) region. Then, LV segmentation labels are inferred from the given key points.
Figure 2: An illustration of our approach. A video segmentation network is developed to segment LV on every input echocardiogram frame. The network is pre-trained using a self-supervised temporal masking method, which is then fine-tuned on the LV segmentation task with sparse annotations.
More formally, suppose \(V\) is an echocardiogram video with \(H\times W\) frame size. From \(V\), we sample a clip \(v\in\mathbb{R}^{H\times W\times F\times 3}\) consisting of \(F\) number of consecutive frames with a stride or sampling period of \(T\). Then, we provide a masked clip \(v_{m}\in\mathbb{R}^{H\times W\times F\times 3}\) by randomly choosing \(f\) number of frames from \(v\) and adjusting their pixel values to 0. A video network \(\mathcal{G}\) is then pre-trained to reconstruct \(v\) from \(v_{m}\). The network \(\mathcal{G}\) is optimized by minimizing the mean-squared difference of pixel values between the reference clip \(v\) and the reconstructed clip \(\mathcal{G}(v_{m})\).
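A minimal PyTorch sketch of the masking and reconstruction step could look as follows; the tensor layout (B, C, F, H, W), the mask ratio of 0.6, and the function names are illustrative assumptions, not fixed choices of the paper.

```python
import torch
import torch.nn.functional as F_nn

def temporal_mask(v, mask_ratio=0.6):
    """Zero out a random subset of frames in a clip of shape (B, C, F, H, W);
    returns the masked clip and the per-frame visibility mask."""
    B, C, F, H, W = v.shape
    n_keep = F - int(mask_ratio * F)
    order = torch.rand(B, F, device=v.device).argsort(dim=1)
    keep = torch.zeros(B, F, device=v.device)
    keep.scatter_(1, order[:, :n_keep], 1.0)            # 1 marks a visible frame
    return v * keep.view(B, 1, F, 1, 1), keep

def pretrain_step(model, v, optimizer):
    """One self-supervised step: the network sees the masked clip and is
    trained to reproduce the full clip under an MSE objective."""
    v_m, _ = temporal_mask(v)
    loss = F_nn.mse_loss(model(v_m), v)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```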
**LV Segmentation with Sparse Annotation.** Sparsely annotated echocardiogram videos make LV segmentation challenging, as training a video segmentation model on EchoNet-Dynamic is not trivial. To tackle this issue, inspired by [3], we propose a training strategy to develop a video segmentation network specifically for the LV. As illustrated in Figure 2, the network takes in \(F\) frames and segments the LV on each frame. Then, the loss is calculated and backpropagated based only on the predictions for frames that have a segmentation label.
More formally, let \(\mathcal{G}\) be a video segmentation network with a set of parameters \(\Psi\) which takes in an input echocardiogram clip \(v\in\mathbb{R}^{H\times W\times F\times C}\) and predicts the LV segmentation \(\hat{\mathbf{y}}\in\mathbb{R}^{H\times W\times F}\), where \(F\), \(C\), and \(H\times W\) are the number of frames, the number of channels (here 3), and the frame size, respectively. Also, let \(\mathbf{y}=\{y_{f_{1}},y_{f_{2}},\ldots,y_{f_{n}}\}\) denote the segmentation label of the input clip, where \(y_{f_{i}}\in\mathbb{R}^{H\times W}\) is the \(f_{i}\)-th frame label (\(f_{i}\leq F\)) and \(n\leq F\) is the number of labeled frames. Thus, the total dice loss \(\mathcal{L}_{d}\) can be formulated as:
\[\mathcal{L}_{d}(\mathbf{y},\hat{\mathbf{y}})=\sum_{i=1}^{F}\ell_{d}\left(y_{i},\hat{y}_ {i}\right)=\underbrace{\sum_{j\in\mathcal{F}_{l}}\ell_{d}\left(y_{j},\hat{y_{j} }\right)}_{\text{labeled (annotated) frames}}+\underbrace{\sum_{k\in\{1,\ldots,F\} \setminus\mathcal{F}_{l}}\ell_{d}\left(y_{k},\hat{y}_{k}\right)}_{\text{ unlabeled frames}} \tag{1}\]
where \(\ell_{d}\) is the _frame-wise_ dice loss, \(\mathcal{F}_{l}=\{f_{1},\ldots,f_{n}\}\) is the labeled frames, and \(y_{k}\) is a dummy label if \(k\in\{1,\ldots,F\}\backslash\mathcal{F}_{l}\) (_unlabeled frames_). The gradient of \(\mathcal{L}_{d}\) w.r.t. a parameter \(\psi\in\Psi\) is given by:
\[\frac{\partial\mathcal{L}_{d}}{\partial\psi}(\mathbf{y},\hat{\mathbf{y}})=\sum_{j\in\mathcal{F}_{l}}\frac{\partial\ell_{d}}{\partial\psi}\left(y_{j},\hat{y}_{j}\right)+\sum_{k\in\{1,\ldots,F\}\backslash\mathcal{F}_{l}}\frac{\partial\ell_{d}}{\partial\psi}\left(y_{k},\hat{y}_{k}\right) \tag{2}\]
where \(\frac{\partial\ell_{d}}{\partial\psi}\left(y_{k},\hat{y}_{k}\right)\) is set to zero because the \(k\)-th frame is unlabeled. **Since (1)**\(\hat{y}_{j}\in\mathcal{G}\left(v;\,\Psi\right)\), and **(2)**\(\mathcal{G}\) typically consists of shared-weights operators (e.g. convolution and attention), **then**
\[\frac{\partial\ell_{d}}{\partial\psi}\left(y_{j},\hat{y}_{j}\right)\in\mathbb{R}\implies\sum_{j\in\mathcal{F}_{l}}\frac{\partial\ell_{d}}{\partial\psi}\left(y_{j},\hat{y}_{j}\right)\in\mathbb{R}\implies\frac{\partial\mathcal{L}_{d}}{\partial\psi}(\mathbf{y},\hat{\mathbf{y}})\in\mathbb{R} \tag{3}\]
for all parameters \(\psi\) in \(\Psi\). Thus, although a clip \(v\) is partially labeled and gradients do not come from unlabeled frames, this framework can facilitate training for all \(\mathcal{G}\) parameters.
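As a concrete illustration, a minimal PyTorch sketch of this frame-masked dice loss follows; the shapes and names are ours, and the paper's implementation may differ:

```python
import torch

def sparse_dice_loss(pred, target, labeled_mask, eps=1e-6):
    """Dice loss evaluated over labeled frames only.

    pred, target: (B, F, H, W) predicted probabilities / binary labels.
    labeled_mask: (B, F) boolean, True where a frame has an annotation.
    Unlabeled frames contribute nothing, so no gradient flows from them,
    yet the shared weights still receive gradients from the labeled frames.
    """
    p = pred[labeled_mask]      # (n, H, W) labeled predictions
    t = target[labeled_mask]    # (n, H, W) labeled ground truth
    inter = (p * t).sum(dim=(1, 2))
    denom = p.sum(dim=(1, 2)) + t.sum(dim=(1, 2))
    dice = (2 * inter + eps) / (denom + eps)
    return (1 - dice).mean()
```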
During training, a clip is randomly extracted around an annotated frame from every video with the specified number of frames \(F\) and sampling period \(T\), resulting in more variations and acting as a regularizer. In other words, there is only a segmentation mask for one frame on every clip. To reduce randomness during the evaluation step, a clip is extracted from each video where an annotated frame is at the center of the clip.
**3D Segmentation Approach.** Echocardiogram videos consist of stacked 2D images. Considering the time axis as the third dimension allows 3D models to segment the LV over an entire echocardiogram clip. Thus, the 3D U-Net [3] is utilized as the architecture. As depicted in Fig. 4, we use a CNN with residual units [15] as the encoder, which has 5 stages whose outputs are passed to the decoder. A residual unit comprises two convolutional layers, two instance-norm layers, two PReLU activation functions, and a skip connection.
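As an illustration, such a residual unit can be sketched in PyTorch as follows; the use of 3D convolutions, the channel counts, and the strides are our assumptions for a 3D U-Net encoder:

```python
import torch.nn as nn

class ResidualUnit3D(nn.Module):
    """Conv -> InstanceNorm -> PReLU, twice, plus a skip connection."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.PReLU(),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.InstanceNorm3d(out_ch),
            nn.PReLU(),
        )
        # 1x1x1 projection so the skip matches when channels/stride change
        self.skip = (
            nn.Identity()
            if in_ch == out_ch and stride == 1
            else nn.Conv3d(in_ch, out_ch, kernel_size=1, stride=stride)
        )

    def forward(self, x):
        return self.body(x) + self.skip(x)
```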
**2D Super Image Approach.** Unlike the 3D approach, the SI addresses the video segmentation problem in a 2D fashion [26]. An echocardiogram video \(v\in\mathbb{R}^{H\times W\times F\times C}\) is rearranged into a single big image \(x\in\mathbb{R}^{\hat{H}\times\hat{W}\times C}\), where \(\hat{H}\) and \(\hat{W}\) are the height and width of the SI respectively. Since the SI works best with a grid layout [5], we set the echocardiogram SI size to \(H\sqrt{F}\times W\sqrt{F}\). Hence, existing techniques for 2D image analysis can be well utilized to help solve the problem, e.g. state-of-the-art architectures, self-supervised methods, and strong pre-trained models.
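The rearrangement itself is a pure reshape; a possible PyTorch sketch, assuming \(F\) is a perfect square:

```python
import math
import torch

def to_super_image(clip: torch.Tensor) -> torch.Tensor:
    """Tile a clip (B, C, F, H, W) into a super image (B, C, H*sqrt(F), W*sqrt(F))."""
    b, c, f, h, w = clip.shape
    g = int(math.isqrt(f))
    assert g * g == f, "number of frames must be a perfect square"
    x = clip.reshape(b, c, g, g, h, w)   # frames laid out row-major on a grid
    x = x.permute(0, 1, 2, 4, 3, 5)      # (B, C, g, H, g, W)
    return x.reshape(b, c, g * h, g * w)
```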
The 2D U-Net [23] is used as the main architecture with the UniFormer-S [18] as the encoder. We select the UniFormer-S since 1) it leverages the strong properties of convolution and attention, and 2) it is the recent state-of-the-art on EchoNet-Dynamic ejection fraction estimation [20]. In short, the network consists of 4 stages, where the first two stages utilize convolution operators to
Figure 3: The 3D vs. 2D super image segmentation approach. The first approach utilizes a 3D segmentation network, while the second rearranges the echocardiogram clip as a super image and then utilizes a 2D network.
extract features, and the rest implement multi-head self-attention (MHSA) to learn global contexts. The inductive biases of convolution layers allow the model to learn efficiently and the MHSA has a large receptive field that is favorable for SI [5].
## 3 Experimental Setup
Experiments were performed on EchoNet-Dynamic [21], a large-scale echocardiography dataset, using an NVIDIA RTX 6000 GPU with CUDA 11.7 and PyTorch 1.12.
**Dataset.** We conducted our experiments on the EchoNet-Dynamic dataset [21]. EchoNet-Dynamic is the largest publicly available dataset of 2D+time echocardiogram videos of the apical four-chamber (A4C) view of the human heart. The dataset comprises approximately 10,030 echocardiogram videos with a fixed frame size of \(112\times 112\). Video length varies from 28 to 1002 frames, yet only two are annotated (the ED and ES frames). A sample echocardiogram sequence is given in Figure 1.
To ensure a fair comparison with reported state-of-the-art methods, we adhered strictly to the organizer's provided split, consisting of 7460 training videos, 1288 validation videos, and 1276 test videos.
**Implementation Details.** We pre-trained our video segmentation models for 100 epochs with self-supervision. Each echocardiogram video was randomly sampled on every epoch with a specified number of frames (\(F\)) and a stride or sampling period (\(T\)) to give more variations. We utilized the AdamW optimizer with a 3e-4 learning rate and a 1e-5 weight decay. A set of augmentations was applied to enrich the variation during training, consisting of color jitter, CLAHE, random rotation, and random padded cropping. Then, the model was fine-tuned on the LV segmentation task with sparse annotations in a weakly-supervised manner for 70 epochs. Every video was sampled twice on every epoch to accommodate the annotated ED and ES frames. Main hyper-parameters were set experimentally.
Figure 4: The 3D U-Net architecture. A residual unit [15] consists of convolutional layers, instance norm layers, PReLU, and a skip connection. Residual Unit [\(C\)] denotes a residual unit with \(C\) number of feature channels.
## 4 Results
**Comparison with the state-of-the-art.** Our method outperforms other approaches on the EchoNet test set, as shown in Table 1. The 3D U-Net achieves a 93.32% overall dice similarity coefficient (DSC), and the SI approach shows on-par performance. Confidence interval (CI) analysis further shows no overlap between the 95% CIs of our methods and those of other state-of-the-art solutions, indicating that our improvements are statistically significant over those methods with a p-value of less than 0.05. The 3D U-Net was trained with 32 consecutively sampled frames, while the SI was trained with 16 frames sampled at every 5\({}^{\text{th}}\) frame. This experiment shows that a video segmentation network trained in a weakly-supervised manner is capable of segmenting the LV with a 3.8x lower computational cost compared to [2].
**Number of Frames and Sampling Period.** The number of frames \(F\) and the sampling period \(T\) play important roles ([20], [31]). Large \(F\) allows a network
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{DSC (95\% CI)} & FLOPs & \# Params \\ & Overall & ES & ED & (G) & (M) \\ \hline EchoNet & 92.00 (91.87-92.13) & 90.68 (90.55-90.86) & 92.78 (92.61-92.94) & 7.84 & 39.64 \\ nnU-Net [14] & 92.86 (92.74-92.98) & 91.63 (91.43-91.83) & 93.62 (93.48-93.76) & 2.30 & **7.37** \\ SepXception [2] & 92.90 (-) & 91.73 (91.54-91.92) & 93.64 (93.50-93.78) & 4.28 & 55.83 \\ Ours (SI) & 93.31 (93.19-93.43) & 92.26 (92.08-92.44) & **93.95** (93.81-94.09) & (\({}^{\star}\)) 2.17 & 24.83 \\ Ours (3D) & **93.32** (93.21-93.43) & **92.29** (92.11-92.47) & **93.95** (93.81-94.09) & (\({}^{\star}\)) **1.13** & 18.83 \\ \hline \end{tabular}
\end{table}
Table 1: Dice similarity coefficient (DSC) on the EchoNet-Dynamic test set. Our approach shows state-of-the-art performance with fewer FLOPs and relatively few parameters. fvcore was utilized to count the FLOPs; values marked (\({}^{\star}\)) are per-frame computational costs, which we use when comparing the computational cost to [2].
Figure 5: A comparison with other state-of-the-art solutions. Our methods achieve higher DSC on the EchoNet-Dynamic test set while being more efficient. The bubble size represents the number of parameters.
to retrieve rich temporal information, while increasing \(T\) reduces redundancy between frames. We studied combinations of (\(F\), \(T\)) to find the optimum pair, as provided in Table 2. The (16, 5) combination results in the highest DSC of 93.21% for SI, while (32, 1) gives the best performance for the 3D approach, resulting in 93.31% DSC. Additionally, all (\(F\), \(T\)) pairs result in better performance compared to [2].
**SSL Temporal Masking.** We conducted an ablation study (Table 3) to find the optimum value of the masking ratio and obtained the best results with 60% masking. We find that SSL pre-training helps maintain better temporal consistency and improves robustness (Fig. 6).
**Different backbones.** An ablation study was performed on different encoders of the segmentation architecture to see how well our approach adapts to model complexity. We implemented ResNet-18 [9], MobileNet-V3 [12], and ViT-B/16 [4] as the encoder of the SI approach. We also tested with a smaller version of 3D U-Net (Fig. 4), which consists of two residual units on every stage (3D U-Net-S).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multirow{2}{*}{Approach} & \multirow{2}{*}{\# Frames} & \multicolumn{4}{c|}{Sampling Period} \\ \cline{3-6} & & 1 & 2 & 3 & 5 \\ \hline \multirow{4}{*}{2D Super Image (UniFormer-S)} & 4 & 93.06 & 93.12 & 93.17 & 93.09 \\ & 9 & 93.11 & 93.14 & 93.15 & 93.13 \\ \cline{1-1} & 16 & 93.17 & 93.13 & 93.18 & **93.21** \\ \cline{1-1} & 25 & 93.16 & 93.11 & 93.20 & 93.12 \\ \hline \multirow{2}{*}{3D U-Net} & 16 & 93.23 & 93.25 & 93.11 & 93.04 \\ & 32 & **93.31** & 93.14 & 93.06 & 92.90 \\ \hline \end{tabular}
\end{table}
Table 2: An ablation study on the number of frames and the sampling period. During this experiment, the UniFormer-S was pre-trained on ImageNet, and the 3D U-Net was trained from scratch. All reported values are overall DSC (%) scores.
\begin{table}
\begin{tabular}{r|r} \hline Masking ratio & Overall DSC (\%) \\ \hline N/A & 93.19 \\
0.3 & 93.25 \\
0.4 & 93.23 \\
0.5 & 93.21 \\
0.6 & **93.31** \\
0.7 & 93.23 \\ \hline \end{tabular}
\end{table}
Table 3: An ablation study on masking ratio during pre-training with the 2D SI approach. N/A denotes without pre-training. The optimum masking ratio is 60%.
As provided in Table 4, the experiment shows that the performance is robust to encoder backbones.
## 5 Discussions
Table 1 shows that our video segmentation networks, while being more efficient, surpass the highest DSC previously reported on the EchoNet-Dynamic test set. Our networks aggregate both spatial and temporal information by analyzing multiple echocardiogram frames in a single pass. The networks predict an LV segmentation trace for every input frame at once, thus eliminating the redundancy of analyzing the same frames multiple times as in [27] and [31]. In addition, our training pipeline is simple yet effective, easy to implement, and scalable, as it does not require pseudo labels ([29], [30]) or temporal regularization [22]. Compared to ([29], [30]), our proposed approach does not depend on a specific heart stage, thus eliminating the burden of locating the ED and ES frames when creating training data. This also allows us to easily exploit non-ED and -ES frames for supervision if their corresponding segmentation labels are available. Table 2 highlights the robustness of our approach to the sampling hyperparameters. This allows for a broader design space to meet hardware limitations such as memory and compute power (FLOPs) while still achieving satisfactory segmentation performance.
We observed that randomly masking a significant portion (60%) of an echocardiogram clip during SSL pre-training results in the best performance. The masking SSL improves the overall DSC of the SI approach from 93.19% to 93.31%, as reported in Table 3. Further, as shown in Fig. 6, we observe that self-supervision with temporal masking enables the network to maintain better temporal consistency across predictions in a given echocardiogram clip. We hypothesize that the pre-training stage helps the 3D U-Net model to better learn the semantic features that are useful for estimating human heart structures in the A4C view, resulting in a more robust prediction. This finding indicates that pre-training with self-supervision remarkably benefits the downstream LV segmentation task. Hence, self-supervised learning with vast echocardiogram videos can be a promising solution to provide strong pre-trained models that can generalize well in downstream echocardiography-related clinical tasks.
\begin{table}
\begin{tabular}{l|l|r|r|r|r} \hline \multirow{2}{*}{Approach} & \multirow{2}{*}{Backbone} & \% DSC & Params & \multicolumn{2}{c}{FLOPs (G)} \\ & & (Overall) & (M) & Single pass & One frame \\ \hline Super Image (SI) & MobileNetV3 & 93.16 & **6.69** & **12.46** & **0.78** \\ (16, 5) & ResNet-18 & 93.23 & 14.33 & 21.75 & 1.36 \\ & ViT-B/16 & 92.98 & 89.10 & 120.20 & 7.51 \\ \hline 3D (32, 1) & 3D U-Net-S & **93.27** & 11.26 & 27.34 & 0.85 \\ \hline \end{tabular}
\end{table}
Table 4: An ablation study on various encoder backbones. Our approach is robust to the selection of backbone complexity. The SI backbones were pre-trained on the ImageNet dataset, while the 3D U-Net-S was trained from scratch.
We have shown that both the SI and 3D methods trained using sparse annotations are capable of accurately segmenting the left ventricle in echocardiogram videos. The 3D U-Net performs slightly better than the SI network with the UniFormer-S backbone. However, designing a backbone for the 3D U-Net is not straightforward, since it requires tedious hyperparameter tuning. On the other hand, there are plenty of optimized models that can be utilized as a backbone for the SI approach. For instance, MobileNetV3, with only 6.69 M parameters, gives on-par performance with a 93.16% overall DSC, as can be seen in Table 4. Models pre-trained on ImageNet can also help generalize better when only a small dataset is available. Moreover, many self-supervised learning algorithms for 2D images can also be employed to further improve performance.
## 6 Conclusion
We propose a novel approach to tackle the LV segmentation task on echocardiogram videos. Our method outperforms other works on the EchoNet-Dynamic test set. The method utilizes a video segmentation network that efficiently combines both spatial and temporal information. The network is pre-trained on a reconstruction task and then trained with sparse annotations to predict the LV. Extensive experiments were performed to show the superiority of the proposed approach both quantitatively and qualitatively. We expect this work to motivate researchers to further explore video segmentation approaches for the LV instead of frame-by-frame prediction.
We limit our experiments in this work to self-supervision using temporal masking only. However, there remains scope to improve the self-supervision by identifying the optimum masking scheme between temporal, random spatiotemporal, space-wise, and block-wise masking. Further, we aim to validate the cross-dataset generalizability of our approach using other publicly available echocardiogram datasets.
|
2309.04751 | Optical microcavities as platforms for entangled photon spectroscopy | Optical microcavities are often proposed as platforms for spectroscopy in the
single- and few-photon regime due to strong light-matter coupling. For
classical-light spectroscopies, an empty microcavity simply acts as an optical
filter. However, we find that in the single- or few-photon regime treating the
empty microcavity as an optical filter does not capture the full effect on the
quantum state of the transmitted photons. Focusing on the case of entangled
photon-pair spectroscopy, we consider how the propagation of one photon through
an optical microcavity changes the joint spectrum of a frequency-entangled
photon pair. Using the input-output treatment of a Dicke model, we find that
propagation through a strongly coupled microcavity above a certain coupling
threshold enhances the entanglement entropy between the signal and idler
photons. These results show that optical microcavities are not neutral
platforms for quantum-light spectroscopies and their effects must be carefully
considered when using change in entanglement entropy as an observable. | Ravyn Malatesta, Lorenzo Uboldi, Evan J. Kumar, Esteban Rojas-Gatjens, Luca Moretti, Andy Cruz, Vinod Menon, Giulio Cerullo, Ajay Ram Srimath Kandada | 2023-09-09T10:45:23Z | http://arxiv.org/abs/2309.04751v1 | # Optical microcavities as platforms for entangled photon spectroscopy
###### Abstract
Optical microcavities are often proposed as platforms for spectroscopy in the single- and few-photon regime due to strong light-matter coupling. For classical-light spectroscopies, an empty microcavity simply acts as an optical filter. However, we find that in the single- or few-photon regime treating the empty microcavity as an optical filter does not capture the full effect on the quantum state of the transmitted photons. Focusing on the case of entangled photon-pair spectroscopy, we consider how the propagation of one photon through an optical microcavity changes the joint spectrum of a frequency-entangled photon pair. Using the input-output treatment of a Dicke model, we find that propagation through a strongly coupled microcavity above a certain coupling threshold enhances the entanglement entropy between the signal and idler photons. These results show that optical microcavities are not neutral platforms for quantum-light spectroscopies and their effects must be carefully considered when using change in entanglement entropy as an observable.
## I Introduction
Due to spectacular advances within the field of quantum optics, experimentalists can now control non-classical states of light with high levels of precision in optics laboratories. Combined with advances in single-photon detection, these innovations lay the groundwork for the growing field of quantum-light spectroscopy[1]. There are many advantages to using quantum light for spectroscopy, including access to information otherwise inaccessible using classical spectroscopies[2][3][4] and, importantly, a superior signal-to-noise ratio that can enable spectroscopy at extremely low excitation fluence.
Quantum light refers to any state of light that cannot be described classically, such as single photons, squeezed light, or entangled photon pairs. Entangled photon pairs exhibit non-classical correlations that provide an advantage for both linear and nonlinear spectroscopies[5][6][7][8][9]. In the single- or few-photon regime, classical spectroscopic signals are swamped with noise, but entanglement-enhanced spectroscopies can surpass the shot-noise limit by taking advantage of quantum correlations[10][11]. Similarly, entangled light can enhance signal-to-noise ratios of nonlinear spectroscopies, resulting in sharper spectroscopic features and greater simultaneous time-frequency resolution[12]. Furthermore, entangled-photon pairs provide direct access to nonlinear processes even at low-level excitation, facilitating the study of nonlinear processes in photo-sensitive systems that might bleach or otherwise be destroyed at higher excitation powers[13]. Hao Li _et al._ describe theoretically how the entanglement entropy of biphoton states (photon pair states) can be used as a probe of many-body correlations that are often elusive or obscured in classical nonlinear spectroscopic measurements[14].
In the single- or few-photon regime, a challenge arises for spectroscopists because of the low probability of light-matter interactions. One popular method to address this problem is to use an optical microcavity to couple to optical excitations in materials and thus enhance the processes of interest[15][16][17][18]. Optical microcavities are extremely controllable platforms for light-matter interaction; they are used to manipulate molecular states, enhance spontaneous emission, and drive chemical reactions[19]. For all of their uses, microcavities are an extremely versatile platform for quantum spectroscopy, but they are not neutral platforms and cannot be treated as such.
To demonstrate this, we consider how the joint spectrum and entanglement entropy of a frequency-entangled biphoton state changes after one photon (the idler) propagates through an optical microcavity. We first briefly describe the modeling of the biphoton joint spectrum and its transformation using input-output theory. We then consider an empty microcavity. Although for classical light an empty microcavity behaves as a simple optical filter, we find experimentally that treating the microcavity as an optical filter does not capture the full effect on the transmitted biphoton state. With simple input-output theory, we can model the filtering effect of the empty microcavity but cannot explain the full joint spectral transformation. We next move on to a simple model
of a microcavity coupled to \(N\) two-level systems and consider how strong-coupling transforms the joint spectrum and entanglement entropy of the biphoton state after the idler passes through an active microcavity. We find that above a certain coupling strength, passing through the microcavity system alone, regardless of detuning, enhances the entanglement entropy even without including many-body interactions in the model. These results confirm that optical microcavities are not neutral platforms for quantum-light spectroscopies and their effects must be carefully considered when using change in entanglement entropy as a spectroscopic observable.
## II Biphoton state transformation
### Joint spectrum and entanglement
Sources of entangled photons that are based on spontaneous parametric downconversion (SPDC) generate two daughter photons, historically called the _signal_ and _idler_, from a single pump photon according to energy- and momentum-conservation. Following the development of Zielnicki _et al._[20], a generic biphoton state of a signal and idler pair can be written as
\[\left|\psi_{s,i}\right\rangle=\int\int d\omega_{s}d\omega_{i}\mathcal{F}\left( \omega_{s},\omega_{i}\right)a_{1}^{\dagger}(\omega_{s})a_{2}^{\dagger}(\omega _{i})\left|0\right\rangle, \tag{1}\]
where the creation operators \(a_{1}^{\dagger}(\omega_{s})\) and \(a_{2}^{\dagger}(\omega_{i})\) operate on the vacuum state to create photons at frequency \(\omega_{s}\) and \(\omega_{i}\), respectively. The joint spectral amplitude, \(\mathcal{F}\left(\omega_{s},\omega_{i}\right)\), describes the frequency-correlations between the signal and idler photons.
Experimentally, we typically measure the joint spectral intensity (JSI),
\[\left|\mathcal{F}\left(\omega_{s},\omega_{i}\right)\right|^{2}=\left|A\left( \omega_{s},\omega_{i}\right)\right|^{2}\!\left|\Phi\left(\omega_{s},\omega_{i }\right)\right|^{2}, \tag{2}\]
where \(A\left(\omega_{s},\omega_{i}\right)\) is based on the spectral amplitude of the pump beam and \(\Phi\left(\omega_{s},\omega_{i}\right)\) is determined by the _phase-matching conditions_ and _spatial profile_ of the pump.
To quantify the entanglement between the signal and idler photons, we compute the von Neumann entanglement entropy, \(S\). We first normalize the joint spectral amplitude, and then use singular value decomposition to find the Schmidt coefficients \(\lambda_{j}\) which satisfy the normalization condition \(\sum_{j}\lambda_{j}^{2}=1\). We then calculate the entanglement entropy as
\[S=-\sum_{j}\lambda_{j}^{2}\ln\left(\lambda_{j}^{2}\right). \tag{3}\]
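Numerically, once the JSA is discretized on a (signal, idler) frequency grid, the Schmidt decomposition reduces to a singular value decomposition; a minimal numpy sketch (function names are ours):

```python
import numpy as np

def entanglement_entropy(jsa: np.ndarray) -> float:
    """Von Neumann entropy of a JSA sampled on a (signal, idler) grid."""
    f = jsa / np.linalg.norm(jsa)            # normalize so sum |F|^2 = 1
    sv = np.linalg.svd(f, compute_uv=False)  # Schmidt coefficients lambda_j
    p = sv**2                                # satisfies sum lambda_j^2 = 1
    p = p[p > 1e-15]                         # drop zeros before the log
    return float(-np.sum(p * np.log(p)))
```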
### Application of input-output theory
In their seminal work in 1984 [21], Collett and Gardiner develop a general input-output theory that relates output operators to input operators via internal dynamics of a cavity system governed by quantum Langevin equations. For a single photon mode, if we express the input-output transformation as a frequency-dependent function \(C(\omega_{i})\), we simply write the output creation operator in terms of the input as
\[\tilde{a}^{\dagger}(\omega_{i})=C\left(\omega_{i}\right)a^{\dagger}(\omega_{ i}). \tag{4}\]
Now, considering the case of a biphoton state where we allow only the idler photon to propagate through a microcavity system, we replace the original idler creation operator \(a_{2}^{\dagger}(\omega_{i})\rightarrow\tilde{a}_{2}^{\dagger}(\omega_{i})\) to get the transformed biphoton state
\[\left|\Psi\right\rangle=\int\int d\omega_{s}d\omega_{i}\mathcal{F}\left(\omega _{s},\omega_{i}\right)C\left(\omega_{i}\right)a_{1}^{\dagger}(\omega_{s})a_{2 }^{\dagger}(\omega_{i})\left|0\right\rangle. \tag{5}\]
In this simple approach, the transformed JSA is the product \(\mathcal{F}\left(\omega_{s},\omega_{i}\right)\cdot C\left(\omega_{i}\right)\). The transformed state bears similarity to the expression developed by Kalashnikov _et al._ before they act with a beamsplitter to see how interaction with a resonant medium changes the quantum interference pattern in Hong-Ou-Mandel interferometry[22].
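Numerically, Eq. (5) amounts to an elementwise multiplication of the discretized JSA by \(C(\omega_{i})\) along the idler axis; a short numpy sketch (the array-shape convention is ours):

```python
import numpy as np

def transform_jsa(jsa: np.ndarray, c_idler: np.ndarray) -> np.ndarray:
    """Eq. (5): multiply F(w_s, w_i) by C(w_i) along the idler axis.

    jsa: (n_signal, n_idler) complex array; c_idler: (n_idler,) complex."""
    return jsa * c_idler[np.newaxis, :]
```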
## III Propagation through an empty microcavity
### Theory
Following the input-output formalism of Collett and Gardiner[21], we consider an empty cavity confining a
Figure 1: Transmission function, joint spectral intensity, and applied phase shift of a (a) one-sided and (b) two-sided empty microcavity following Gardiner and Collett.
single optical mode. Starting with a one-sided empty microcavity, i.e. a microcavity with substantial loss through a single mirror, the transformation of the output photon creation operator in terms of the input is
\[\tilde{a}^{\dagger}\left(\omega\right)=\frac{\frac{1}{2}\gamma-i\left(\omega- \omega_{0}\right)}{\frac{1}{2}\gamma+i\left(\omega-\omega_{0}\right)}a^{ \dagger}\left(\omega\right), \tag{6}\]
where \(\gamma\) is the coupling strength of the cavity photons to input(output) photons and \(\omega_{0}\) is the frequency of the cavity mode. The coupling strength \(\gamma\) is directly related to the cavity photon lifetime, \(\tau=1/\gamma\). For all our simulations, we choose \(\gamma\) such that the cavity photon lifetime \(\tau\) is \(150\,\mathrm{fs}\).
As noted by Collett and Gardiner, the one-sided cavity imposes a frequency-dependent relative phase shift, but does not change the JSI. Therefore the JSI shown in Fig. 1(a) is identical to that of the input biphoton state. For all simulations shown here, we use the same input state, assuming a Gaussian pump with a central down-converted wavelength of \(685\,\mathrm{nm}\) for both signal and idler photons. To replicate experimental conditions, we apply detection filters to both the signal and idler. For the filter shape, we choose a Gaussian squared, centered at \(685\,\mathrm{nm}\) with an \(8\,\mathrm{nm}\) bandwidth. Until we consider the effect of the pump bandwidth on entanglement entropy, the pump bandwidth is set at \(6\,\mathrm{nm}\).
Next, we move on to a two-sided empty microcavity, i.e. a cavity with leaky mirrors on both sides, and so with two input and two output modes. We assume the coupling to be same for both mirrors, \(\gamma_{1}=\gamma_{2}=\gamma\), and a single input mode. Thus in transmission, the output photon creation operator is
\[\tilde{a}^{\dagger}\left(\omega\right)=\frac{\gamma}{\gamma+i\left(\omega- \omega_{0}\right)}a^{\dagger}\left(\omega\right). \tag{7}\]
Now we see the filtering effect of the empty microcavity acting on the idler photon, as shown in Fig. 1(b). The center and bandwidth of the transmission function are determined by the cavity mode frequency and the cavity photon lifetime, respectively. This filtering slightly reduces the entanglement entropy, from \(S=0.395\) for the input state to \(S=0.359\).
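For reference, the two empty-cavity responses of Eqs. (6) and (7) are straightforward to evaluate numerically; the units and grids below are illustrative:

```python
import numpy as np

def c_one_sided(w, w0, gamma):
    """Eq. (6): all-pass (unit-magnitude) response of a one-sided cavity."""
    return (0.5 * gamma - 1j * (w - w0)) / (0.5 * gamma + 1j * (w - w0))

def c_two_sided(w, w0, gamma):
    """Eq. (7): Lorentzian transmission of a symmetric two-sided cavity."""
    return gamma / (gamma + 1j * (w - w0))

# A 150 fs cavity photon lifetime corresponds to gamma = 1/150 fs^-1; these
# responses can be fed to transform_jsa() and entanglement_entropy() above.
```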
### Experiment
To further test the formalism developed in the previous section, we experimentally measure the JSI of a biphoton state with one of the photons transmitted through an empty microcavity. The spectrally entangled state is generated in a Type-I \(\beta\)-Barium Borate (BBO) crystal phase-matched for SPDC close to the degeneracy at the pump wavelength of \(343\,\mathrm{nm}\). The pump beam here is the third harmonic of a femtosecond laser oscillator (Pharos, Light Conversion) output operating at \(1030\,\mathrm{nm}\) and \(75\,\mathrm{MHz}\). The photons are spatially separated and transmitted through a translating-wedge-based identical pulses encoding system (GEMINI, Nireos srl) and a co-incidence detection system (Hydraharp, Picoquant), which enable measurement of spectral correlations between the photons. More details of the measurement system can be found in Ref. [23]. The JSI spectrum of the as-prepared biphoton state is shown in Fig. 2(a), in which the spectral correlation between the signal and idler photons is evident through the diagonal feature.
We transmit the idler photon of this state through a planar optical microcavity, which is built on distributed Bragg reflectors (DBRs) and has an optical resonance at \(691\,\mathrm{nm}\) with a full width at half maximum of \(8\,\mathrm{nm}\) at normal incidence. The JSI spectrum of the transmitted biphoton state is shown in Fig. 2(b). We observe clear spectral filtering of the biphoton state, with the peak of the JSI map at the peak resonance of the cavity. On closer inspection, we observe a reduction in the degree of spectral correlation in the transmitted state, with the previously extended diagonal feature flattening along the idler axis close to the cavity resonance. To reproduce this behavior, we consider a biphoton state whose JSI spectrum follows Eq. 2, as shown in Fig. 2(c). Based on the formalism developed in the previous section, we estimate the JSI spectrum of the biphoton state whose idler photon is transmitted through a microcavity. By setting the cavity resonance to \(690\,\mathrm{nm}\) in our simulation, we can approximate the experimentally measured transmission function of the empty microcavity. While the filtering effect is reproduced, we miss the effects of the
Figure 2: Joint spectral intensity before and after propagating through an empty microcavity from (a, b) experimental measurement and (c, d) input-output theory.
microcavity that depend on the joint spectrum; that is, we miss effects that depend simultaneously on the signal and idler frequencies, even though only the idler propagated through the microcavity.
## IV Propagation through a strongly-coupled microcavity
Having already established that an empty microcavity has a non-trivial effect on the entanglement of frequency-entangled photon pairs, we now consider a simple model of a strongly-coupled microcavity system. Taking inspiration from Li _et al._ [14], we use a Dicke model of \(N\) identical 2-level emitters coupled to an optical cavity, described by the following Hamiltonian:
\[\hat{H} = \sum_{j}\frac{\hbar\omega_{e}}{2}\hat{\sigma}_{z,j}+\sum_{k}\hbar (\omega_{k}-i\gamma)\hat{\psi}_{k}^{\dagger}\hat{\psi}_{k} \tag{8}\] \[+\sum_{k,j}\frac{\hbar\lambda_{kj}}{\sqrt{N}}(\hat{\psi}_{k}^{ \dagger}+\hat{\psi}_{k})(\hat{\sigma}_{j}^{+}+\hat{\sigma}_{j}^{-}),\]
where \(\omega_{e}\) is the frequency of the emitter, \(\{\hat{\sigma}_{z,j},\hat{\sigma}_{j}^{\pm}\}\) are the corresponding spin-1/2 operators for site \(j\), \(\hat{\psi}_{k}^{\dagger}\) is the cavity photon creation operator, and \(\lambda_{kj}\) is the coupling between a cavity photon and a molecular excitation at site \(j\). As before, \(\gamma\) is the coupling of the cavity photon mode to an external photon mode. For simplicity, we consider only the normal cavity mode, \(k=0\), with frequency \(\omega_{0}\). We also constrain ourselves to the strong coupling regime, \(\lambda>\gamma/2\), but stay well below the critical point \(\lambda_{c}\) where the system undergoes a quantum phase transition.
Using an input-output treatment of this model, Li _et al._ develop an analytical expression for the response function of the strongly-coupled system [14], which we use to define the transformation function \(C(\omega_{i})\) for an idler photon propagating through the strongly-coupled microcavity. The transformation function depends on several parameters: the frequency of the molecular emitter \(\omega_{e}\), the frequency of the cavity mode \(\omega_{0}\), the cavity lifetime \(1/\gamma\), and the strength of the coupling \(\lambda\).
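For a qualitative numerical check, the response can be approximated by a minimal coupled-oscillator transmission function; we stress this is a stand-in under our own assumptions, not the exact expression of Ref. [14]:

```python
import numpy as np

def c_polariton(w, w0, we, gamma, lam, gamma_e=1e-4):
    """Coupled-oscillator stand-in for the response of [14]: the cavity line
    of Eq. (7) acquires a self-energy from the collective emitter mode, which
    splits the transmission into two polariton peaks near w0 +/- lam."""
    self_energy = lam**2 / (gamma_e + 1j * (we - w))
    return gamma / (gamma + 1j * (w0 - w) + self_energy)

gamma = 1.0 / 150.0                    # fs^-1, i.e. a 150 fs cavity lifetime
w = np.linspace(-0.05, 0.05, 2001)     # frequency grid around the resonance
t = c_polariton(w, 0.0, 0.0, gamma, lam=gamma)   # the lambda = gamma case
```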
Applying this transformation to the same input biphoton state as before (Fig. 2(c)), we immediately find markedly different behavior than for transmission through an empty microcavity. We analyze transmission through a strongly-coupled microcavity with both the molecular and cavity resonance at 685 nm, a 150 fs cavity photon lifetime, and equal coupling to the molecular excitation and external photons (\(\gamma=\lambda\)). The resulting transmission spectrum and the JSI map shown in Fig. 3(a) are composed of two peaks associated with the lower and upper polariton states. While this is an expected result, two intriguing details emerge on deeper analysis. Firstly, we see a sharp discontinuity in the applied phase shift at the molecular resonance. As the strength of the light-matter coupling increases with respect to the coupling to external photons, the applied phase shift begins to resemble a step function, as shown in Fig. 3(b).
Secondly, the entanglement entropy of the transformed state is \(S=0.437\), a higher value than the entanglement entropy of the input state, \(S=0.395\). But the increase in the entropy is curiously not monotonically related to the strength of light-matter coupling. The transformation of the biphoton state due to propagation of the idler photon through the strongly-coupled microcavity system _increases_ the entanglement entropy only above a certain coupling-strength threshold; see Fig. 3(c). Below this threshold, the entropy is substantially reduced, possibly due to the spectral filtering of the idler photons by the dominant molecular transition. Of course, the exact coupling strength at which the enhanced entanglement entropy surpasses that of the input depends on the specific state and microcavity system parameters, including the molecular resonance, cavity photon lifetime, and cavity detuning. In general, across several cavity detunings, shown in Fig. 3(c), at the lower end of the strong-coupling limit, when coupling to external photons out-competes coupling to the molecular excitation (\(\gamma>\lambda>\gamma/2\)), propagation through the coupled microcavity system suppresses the entanglement entropy even below that of propagation through an empty microcavity. As the coupling strength increases, we reach a regime where the microcavity system improves the entanglement entropy past that of the input state for a wide range of cavity-detuning values, until the dependence of the entanglement entropy on cavity detuning plateaus.
We find a similar ebb and flow of entropy improvement when considering how the entanglement entropy changes with the bandwidth of the Gaussian pump generating the input biphoton state, seen in Fig. 4. For very narrow pump bandwidths, the frequencies of the signal and idler are strongly anti-correlated and the input state thus has a relatively high entanglement entropy. Within this limit, propagation of the idler through a sufficiently strongly coupled microcavity system still improves the entanglement entropy. Beyond \(\lambda=1.35\gamma\), the benefit weakens but the entanglement entropy remains firmly above that for an empty microcavity.
## V Conclusion
In summary, we show with experiment and simple input-output theory that propagation through empty optical microcavities exerts a non-trivial effect on the state of frequency-entangled biphotons. We also theoretically consider the case of strongly coupled microcavities and identify peculiar transformations of the spectral correlations of the output biphoton state. From our experimental measurements of an empty microcavity we expect there to be further correlated effects in these systems that our simple input-output approach does not
capture, but understanding the interplay of the modeled and unmodeled changes requires further theoretical development and experimentation. Nevertheless, we show even with a simple theoretical treatment that the microcavity platform has notable effects on entanglement entropy of biphoton states.
Notably, the experimental configuration and the model we consider here simply correspond to the _linear_ response of microcavities. Previous works propose to use the entanglement entropy of the biphoton state transmitted through the cavity as a probe of many-body processes, including polariton-polariton interactions. While these treatments show the biphoton entanglement entropy is sensitive to such many-body interactions, we have to also consider the non-trivial entropy changes identified in this work that can manifest even in the absence of any correlating mechanisms. While optical microcavities can indeed be excellent platforms that enable spectroscopy with entangled photons, care must be taken to design systems with light-matter coupling strengths that minimize the linear-response induced variations in the JSI, so that the transformation of the biphoton state can be directly correlated with many-body dynamics.
Figure 4: Entanglement entropy by pump bandwidth for increasing coupling strength compared to an empty microcavity (dashed black).
Figure 3: Changes to the biphoton state due to idler propagation through a strongly-coupled microcavity. (a) Transmission function, joint spectral intensity, and applied phase shift of a strongly-coupled (\(\lambda=\gamma\)) microcavity with a 150 fs lifetime with zero detuning, (b) dependence of the microcavity induced phase shift on coupling strength, and (c) change in entanglement entropy with coupling strength for variable cavity detuning for a molecular resonance at 685 nm, compared to input state entropy (dotted gray).
###### Acknowledgements.
A.R.S.K. acknowledges the start-up funds provided by Wake Forest University and funding from the Center for Functional Materials and the Office of Research and Sponsored Programs at WFU. The authors thank Prof Carlos Silva, Prof Eric Bittner and Dr Andrei Piyatinski for insightful discussions. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-2039655. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.
|
2309.03758 | Hybrid of representation learning and reinforcement learning for dynamic
and complex robotic motion planning | Motion planning is the soul of robot decision making. Classical planning
algorithms like graph search and reaction-based algorithms face challenges in
cases of dense and dynamic obstacles. Deep learning algorithms generate
suboptimal one-step predictions that cause many collisions. Reinforcement
learning algorithms generate optimal or near-optimal time-sequential
predictions. However, they suffer from slow convergence, suboptimal converged
results, and overfittings. This paper introduces a hybrid algorithm for robotic
motion planning: long short-term memory (LSTM) pooling and skip connection for
attention-based discrete soft actor critic (LSA-DSAC). First, graph network
(relational graph) and attention network (attention weight) interpret the
environmental state for the learning of the discrete soft actor critic
algorithm. The expressive power of attention network outperforms that of graph
in our task by difference analysis of these two representation methods.
However, attention based DSAC faces the overfitting problem in training.
Second, the skip connection method is integrated into attention based DSAC to
mitigate overfitting and improve convergence speed. Third, LSTM pooling is
taken to replace the sum operator of attention weight and eliminate overfitting
by slightly sacrificing convergence speed at early-stage training. Experiments
show that LSA-DSAC outperforms the state-of-the-art in training and most
evaluations. The physical robot is also implemented and tested in the real
world. | Chengmin Zhou, Xin Lu, Jiapeng Dai, Bingding Huang, Xiaoxu Liu, Pasi Fränti | 2023-09-07T15:00:49Z | http://arxiv.org/abs/2309.03758v1 | # Hybrid of representation learning and reinforcement learning for dynamic and complex
###### Abstract
Motion planning is the soul of robot decision making. Classical planning algorithms like graph search and reaction-based algorithms face challenges in cases of dense and dynamic obstacles. Deep learning algorithms generate suboptimal one-step predictions that cause many collisions. Reinforcement learning algorithms generate optimal or near-optimal time-sequential predictions. However, they suffer from slow convergence, suboptimal converged results, and overfittings. This paper introduces a hybrid algorithm for robotic motion planning: _long short-term memory_ (LSTM) pooling and skip connection for attention-based discrete soft actor critic (LSA-DSAC). First, graph network (relational graph) and attention network (attention weight) interpret the environmental state for the learning of the discrete soft actor critic algorithm. The expressive power of the attention network outperforms that of the graph in our task, as shown by a difference analysis of these two representation methods. However, attention based DSAC faces the overfitting problem in training. Second, the skip connection method is integrated into attention based DSAC to mitigate overfitting and improve convergence speed. Third, LSTM pooling is taken to replace the sum operator of the attention weight and eliminate overfitting by slightly sacrificing convergence speed at early-stage training. Experiments show that LSA-DSAC outperforms the state-of-the-art in training and most evaluations. The physical robot is also implemented and tested in the real world.
Motion Planning, Navigation, Reinforcement Learning, Representation Learning, Intelligent Robot
## I Introduction
Intelligent robots play an important role in our daily life. For example, autonomous robot has been applied to hotel guidance [1], parcel delivery [2][3], and robotic arms in manufacturing [4][5]. Motion planning or path planning is the soul of robotic decision making. It enables robots to reach the goal and finish the tasks.
Classical planning algorithms like graph search (e.g., A* [6]) enable robots to navigate in static environment. However, they cause many collisions in the environment with dense and dynamic obstacles because of the huge burden in updating the environmental map in real time.
Classical reaction-based algorithms like _dynamic window approach_ (DWA) [7] and _optimal reciprocal collision avoidance_ (ORCA) [8] reduce collisions in environments with dense and dynamic obstacles because they compute the robot's motion by considering only the obstacles' geometry and speed. This requires fewer information updates compared to the map update. However, high collision rates still occur with reaction-based algorithms when the robot avoids obstacles at high speed, because of the increased burden of the information update.
_Deep learning algorithms_ (DL) like _convolutional neural network_ (CNN) [9] avoid the information update problem of the reaction-based algorithms by training a model that generates robot's decisions or actions in real time. However, decisions from DL are based on one-step predictions which result in suboptimal trajectories when the robot moves toward its goals.
_Reinforcement learning_ (RL) algorithms like _deep Q network_ (DQN) [10] and _advantage actor critic_ (A2C) or _asynchronous advantage actor critic_ (A3C) [11] improve on the one-step predictions of CNN by training the models on multi-step time-sequential predictions. These multi-step time-sequential predictions are better than one-step predictions because the training of the RL model considers the time-sequential information of the goals and obstacles. RL, however, may suffer from slow convergence and suboptimal converged results when the input (the environmental state) is of low quality with limited expressive power.
**Progress of representation learning and RL.** Currently, representation learning methods like LSTM pooling [12], graph network [13][14], and attention network [15][16] improve the expressive power of input. RL algorithms are also improved by introducing new architectures like double Q networks [17], dueling architecture [18], and deterministic architecture [19][20]. The fused architecture of the double Q networks and the actor-critic network is proved to be one of the most efficient architectures in RL [21]. This architecture is also applied to many RL variants like _Twin delayed deep deterministic policy gradient_ (TD3) [22] and _soft actor critic_ (SAC) [23][24][25]. The combination of representation learning and RL is a promising
direction for better motion planning performance, because RL is fed with the input with high expressive power. This improves the overall convergence of RL algorithms.
**Technical difficulties of existing works.** The combination of representation learning and RL is promising for improving robotic motion planning performance. However, current works in this direction are still not good enough for challenging commercial tasks. Existing works about the combination of the representation learning and RL include the _relational graph_ (RG) [13], _proximal policy optimization_ (PPO) with multiple robots [26], CADRL [27], LSTM-A2C [28][29], LSTM-RL [15] and SARL [15].
RG is the combination of relational graph and DQN. Relational graph describes the relationship of all agents (the robot and obstacles), instead of focusing on the robot-obstacle relationship. Relational graph partly and indirectly represents the robot-obstacle relationship, therefore its expressive power is limited. DQN faces over-estimation problems which cause the slow and suboptimal convergence of overall networks.
PPO with multiple robots faces problems of data quality because it learns obstacle features from entire source environmental state without precisely and explicitly analyzing the relationship between the robot and obstacles. Moreover, the entire source environmental state is interpreted by the CNN in the PPO with multiple robots. Background noise is also included in this interpretation process, resulting in poor quality of interpreted environmental state.
CADRL learns the pairwise feature of the robot and one obstacle by DQN. Then, a trained model is applied to the multiple-obstacle case. CADRL is myopic because it does not consider the relationship between the robot and obstacle. The closest obstacle feature is just used for training instead of all obstacle features. DQN in CADRL also brings high bias and variance.
In LSTM-A2C and LSTM-RL, LSTM encodes obstacle features by distance-based order which partly represents robot-obstacle relationship, resulting in limited expressive power of interpreted environmental state. A2C/A3C lack efficient data replay strategies, resulting in a slow convergence. A2C/A3C and DQN in LSTM-A2C and LSTM-RL bring high bias and variance, resulting in a slow convergence.
SARL consists of an attention network and DQN where attention network interprets robot-obstacle features to the attention weight that better describes the relationship between the robot and obstacles, resulting in the improvement of expressive power. However, attention network still faces the overfitting problem if overall architecture has deep and complex networks. Moreover, DQN brings high bias and variance. These two reasons cause the slow and suboptimal convergence of SARL.
**Optimizations and contributions.** For better motion planning performance of the robot among dense and dynamic obstacles,
1) we first implemented the _discrete action for soft actor critic_ (DSAC), which is the soft actor critic algorithm in the setting of a discrete action space and one of the most efficient RL algorithms currently. DSAC is then combined with the _relational graph_ (RG) [13], resulting in the _relational graph based DSAC_ (RG-DSAC), which achieves satisfactory performance in motion planning. However, we found in the experiments that the expressive power of the relational graph is limited. The relational graph only partly describes the relationship between the robot and obstacles, because it establishes relationships among all agents without precisely focusing on the robot-obstacle relationship. This may result in the limited expressive power of the interpreted environmental state.
2) The expressive power of the interpreted environmental state is improved by replacing the relational graph with the _attention weight_ (AW) [15], which precisely and explicitly analyzes and describes the relationship between the robot and obstacles. This results in the _attention weight based DSAC_ (AW-DSAC), which outperformed RG-DSAC in early-stage training but suffered from overfittings.
3) After analysis, we concluded that the _feature loss_ and _pooling method_ in the attention network may cause the overfitting. Hence, we optimized the attention network by integrating the skip connection method and LSTM pooling into its architecture, resulting in the _skip connection for attention-based DSAC_ (SA-DSAC) and LSA-DSAC. SA-DSAC _mitigated_ the overfitting problem in training in cases with fewer dynamic obstacles. LSA-DSAC _eliminated_ overfittings by slightly sacrificing the convergence speed at early-stage training.
Overall, the workflow of our motion planning task is shown in Fig. 1. Main contributions of this paper include
1) the implementation of RG-DSAC and AW-DSAC,
2) the LSA-DSAC which is the optimized version of AW-DSAC by integrating the skip connection method and LSTM pooling into the architecture of the attention network of AW-DSAC,
3) extensive evaluations of our algorithms against the state-of-the-art, and
4) physical implementation and testing of the robot in the real world.
Fig. 1: Workflow of our LSA-DSAC. The training data is collected from the circle-crossing simulator. The relational graph based DSAC (RG-DSAC) is implemented and selected as the trainable baseline algorithm for comparisons. The relational graph is then replaced by the attention weight (or attention network), resulting in the attention weight based DSAC (AW-DSAC). The skip connection is applied to the attention network to improve the convergence, resulting in the skip connection for attention weight based DSAC
(SA-DSAC). Finally, LSTM is applied to the SA-DSAC to further improve the convergence, resulting in the LSTM encoding and skip connection for attention weight based DSAC (LSA-DSAC). The training curves demonstrate a good convergence of LSA-DSAC over the other algorithms. This paper also includes the physical implementation, which demonstrates how to transplant our algorithm into the real world. Our test code is available at [https://github.com/CHUENGMINCHOU/LSA-DSAC](https://github.com/CHUENGMINCHOU/LSA-DSAC)
This paper is arranged as follows: Section II presents the state-of-the-art, problem formulation and preliminary of RL and DSAC. Section III presents RG-DSAC, AW-DSAC, SA-DSAC and LSA-DSAC. Section IV presents network framework of LSA-DSAC, model trainings, model evaluations and physical implementation.
## II Research Background
This section first presents the state-of-the-art for dynamic robotic motion planning tasks. The state-of-the-art includes classical algorithm ORCA and trainable algorithms CADRL, LSTML, LSTM-A2C/A3C, PPO with multiple robots, SARL, and RG-DQN. Then, the problem formulation of motion planning tasks are given by mathematic descriptions. Finally, the preliminary of RL and DSAC are presented. They are fundamental concepts for following further algorithm implementations and optimizations.
### _State-of-the-art for dynamic motion planning_
This part summarizes the state-of-the-art motion planning algorithms: ORCA [8], CADRL [27], LSTM-RL [15][10], LSTM-A2C/A3C [11][29][28], PPO with multiple robots [30][26], SARL [15], and RG-DQN [13]. The reaction-based ORCA relies on the positions and velocities of robots and obstacles to compute feasible robot velocities. CADRL is based on DQN and learns pairwise features of the robot and one obstacle. The trained model is then applied to multiple-obstacle cases. LSTM-RL and LSTM-A2C/A3C are based on DQN and A2C/A3C, learning obstacle features that are pooled into hidden features by LSTM. PPO with multiple robots is based on CNN and PPO and learns from the entire source environmental state, which includes features of the robot and obstacles as well as potential background noise. SARL is based on DQN, where the attention network pools pairwise robot-obstacle features into attention features (attention weight). RG-DQN is also based on DQN, where the relation matrix and message-passing process interpret source features into graph features. We implemented ORCA, CADRL, LSTM-RL, LSTM-A2C, and SARL as baseline algorithms for comparison.
### _Problem formulation_
All algorithms in this paper are trained and tested in simulators (Fig. 2) provided by ORCA [8]. The simulators include _circle-crossing_ and _square-crossing_ simulators that add _predictable complexity_ to the motion planning tasks. Let \(a\) and \(v\) represent the action and velocity of the robot, where \(a=v=\left[v_{x},v_{y}\right]\). Let \(p=\left[p_{x},p_{y}\right]\) represent the robot position. Let \(s_{t}\) represent the robot state at time step \(t\). \(s_{t}\) consists of observable and hidden parts \(s_{t}=\left[s_{t}^{obs},s_{t}^{h}\right]\), \(s_{t}\in R^{9}\). The observable part refers to factors that can be measured or observed by others. It consists of the position, velocity, and radius \(s^{obs}=\left[p_{x},p_{y},v_{x},v_{y},r\right],~{}s^{obs}\in R^{5}\). The hidden part refers to factors that cannot be seen by others. It consists of the planned goal position, preferred speed, and heading angle \(s^{h}=\left[p_{gx},p_{gy},v_{pref},\theta\right],s^{h}\in R^{4}\). The state, position, and radius of the obstacles are described by \(\hat{s}\), \(\hat{p}\), and \(\hat{r}\), respectively.
We first introduce the one-robot one-obstacle case, and then the one-robot multi-obstacle case. The robot plans its motion by the policy \(\pi\): \((s_{0:t},\hat{s}^{obs}_{0:t})\to a_{t}\), where \(s_{0:t}\) and \(\hat{s}^{obs}_{0:t}\) are the robot states and observable obstacle states from time step \(0\) to time step \(t\), while the obstacle plans its motion by \(\hat{\pi}\): \((\hat{s}_{0:t},s^{obs}_{0:t})\to\hat{a}_{t}\), where \(\hat{s}_{0:t}\) and \(s^{obs}_{0:t}\) are the obstacle states and observable robot states from time step \(0\) to time step \(t\). The robot's objective is to minimize the expectation (average) of the time to its goal \(E\left[t_{g}\right]\) (1) under the policy \(\pi\) without collisions with the obstacle. The constraints of the robot's motion planning are formulated via (2-5), which represent the _collision avoidance constraint_, _goal constraint_, _kinematics of the robot_, and _kinematics of the obstacle_, respectively. The collision avoidance constraint denotes that the distance between the robot and the obstacle \(\left\|p_{t}-\hat{p}_{t}\right\|_{2}\) should be greater than or equal to the sum of the radii of the robot and the obstacle \(r+\hat{r}\). The goal constraint denotes that the position of the robot \(p_{t_{g}}\) should be equal to the goal position \(p_{g}\) when the robot reaches its goal. The kinematics of the robot denotes that the robot position \(p_{t}\) equals the sum of the robot position \(p_{t-1}\) and the change of the robot position \(\Delta t\cdot\pi\): \((s_{0:t},\hat{s}^{obs}_{0:t})\). The robot policy \(\pi\): \((s_{0:t},\hat{s}^{obs}_{0:t})\) outputs a velocity decided by learning from the historical robot and obstacle states. The kinematics of the obstacle is the same as that of the robot; \(\hat{\pi}\): \((\hat{s}_{0:t},s^{obs}_{0:t})\) outputs a velocity decided by the obstacle policy \(\hat{\pi}\), e.g., ORCA.
\[\text{minimize }E\left[t_{g}|s_{0:t},\hat{s}^{obs}_{0:t},\pi,\hat{\pi}\right]\quad s.t. \tag{1}\]
\[\left\|p_{t}-\hat{p}_{t}\right\|_{2}\geq r+\hat{r}~{}\forall t \tag{2}\]
\[p_{tg}=p_{g} \tag{3}\]
\[p_{t}=p_{t-1}+\Delta t\cdot\pi\colon(s_{0:t},\hat{s}^{obs}_{0:t}) \tag{4}\]
\[\hat{p}_{t}=\hat{p}_{t-1}+\Delta t\cdot\hat{\pi}\colon(\hat{s}_{0:t},s^{obs}_{0:t}) \tag{5}\]
In the one-robot \(N\)-obstacle case, the objective is replaced by \(\text{minimize }E\left[t_{g}|s_{0:t},\{\hat{s}^{obs}_{0:t},\ldots,\hat{s}^{obs}_{N-1:t}\},\pi,\hat{\pi}\right]\), where we assume that all obstacles use the same policy \(\hat{\pi}\). The collision avoidance constraint is replaced by
Fig. 2: Circle-crossing and square-crossing simulators. Obstacles are randomly generated near the brink of the circle in a circle-crossing environment. Then they move toward their opposite side. In the square-crossing environment, obstacles are randomly generated on the left side or right side and then they move toward random positions on their opposite side.
\[\left\{\begin{array}{l}\|p_{t}-\hat{p}_{0:t}\|_{2}\geq r+\hat{r}\\ \|p_{t}-\hat{p}_{1:t}\|_{2}\geq r+\hat{r}\\...\\ \|p_{t}-\hat{p}_{N-1:t}\|_{2}\geq r+\hat{r}\end{array}\right.\forall t \tag{6}\]
assuming that all obstacles have the same radius \(\hat{r}\). \(\hat{p}_{N-1:t}\) denotes the position of the \(N\)-th obstacle at time step \(t\). The kinematics of the robot is replaced by \(p_{t}=p_{t-1}+\Delta t\cdot\pi\): \((s_{0:t},\hat{s}_{0:t}^{obs},\hat{s}_{1:t}^{obs},\ldots,\hat{s}_{N-1:t}^{obs})\), where the historical states of all obstacles \(\{\hat{s}_{0:t}^{obs},\ldots,\hat{s}_{N-1:t}^{obs}\}\) are considered for generating the robot policy. The kinematics of the obstacles is replaced by
\[\left\{\begin{array}{l}\hat{p}_{0,t}=\hat{p}_{0,t-1}+\Delta t\cdot\hat{\pi}(\hat{s}_{0,0:t},s^{obs}_{0:t})\\ \hat{p}_{1,t}=\hat{p}_{1,t-1}+\Delta t\cdot\hat{\pi}(\hat{s}_{1,0:t},s^{obs}_{0:t})\\ \ldots\\ \hat{p}_{N-1,t}=\hat{p}_{N-1,t-1}+\Delta t\cdot\hat{\pi}(\hat{s}_{N-1,0:t},s^{obs}_{0:t})\end{array}\right. \tag{7}\]
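To make the formulation concrete, the following minimal Python sketch applies the kinematics of Eqs. (4), (5), and (7) and checks the collision avoidance constraints of Eqs. (2) and (6). The time step \(\Delta t=0.25\) s and the NumPy interface are illustrative assumptions, not values fixed by the formulation.

```python
import numpy as np

def step_kinematics(p, v, dt=0.25):
    """One kinematic update of Eqs. (4), (5), and (7):
    position += dt * velocity, where the velocity is the output of the
    robot policy pi or an obstacle policy pi_hat (e.g., ORCA).
    dt = 0.25 s is an assumed control interval."""
    return np.asarray(p, dtype=float) + dt * np.asarray(v, dtype=float)

def collision_free(p_robot, p_obstacles, r, r_hat):
    """Collision avoidance constraints of Eqs. (2) and (6): every
    robot-obstacle distance must be at least r + r_hat."""
    d = np.linalg.norm(np.atleast_2d(p_obstacles) - np.asarray(p_robot), axis=1)
    return bool(np.all(d >= r + r_hat))
```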
_Preliminary_. A _Markov decision process_ (MDP) is a sequential decision process based on a Markov chain [31]. A Markov chain is defined by a variable set \(\textbf{{X}}=\{X_{n}\colon n>0\}\) in which the probability satisfies \(p(X_{t+1}|X_{t},...,X_{1})=p(X_{t+1}|X_{t})\). This means the state and action of the next step only depend on the state and action of the current step. An MDP is described as a tuple \(<S,A,P,R>\). \(S\) denotes the state; here it refers to the state of the robot and obstacles. \(A\) denotes an action taken by the robot. An action \(A=[\theta,v]\) is selected from the _action space_, where the heading directions are \(\theta\in\{0,\frac{\pi}{8},\ldots,\frac{15\pi}{8}\}\) (16 directions) and the speed in each direction is \(v\in\{0.2,0.4,\ldots,1\}\) (5 speeds). Hence, the action space consists of 81 actions, including a stop action. \(P\) denotes the probability of transitioning from one state to the next. \(R\) denotes the reward or punishment received by the robot after executing an action. The reward function in this paper is defined by
\[R(s,a)=\left\{\begin{array}{cl}1&\text{if }p_{current}=p_{g}\\ -0.1+\frac{d_{min}}{2}&\text{if }0<d_{min}<0.2\\ -0.25&\text{if }d_{min}<0\\ \frac{d_{start\_to\_goal}-\left\|p_{g}-p_{current}\right\|_{2}}{d_{start\_to\_goal}}\cdot 0.5&\text{if }t=t_{max}\text{ and }p_{t}\neq p_{g}\\ 0&\text{otherwise}\end{array}\right. \tag{8}\]
where \(p_{current}\) denotes the current position of the robot and \(p_{g}\) denotes the goal position. \(d_{min}\) denotes the minimum distance between the robot and the obstacles during the motion planning process. \(d_{start\_to\_goal}\) denotes the distance from the start to the goal. \(t_{max}\) is the maximum time allowed for any episode of motion planning. Our reward function (8) is modified from [15], which cannot work without imitation learning. (8) accelerates convergence by attaching a reward to _the final position of the robot_ at timeout, which encourages the robot to approach the goal.
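A minimal Python sketch of this reward is given below. The branch thresholds follow Eq. (8); the exact-equality goal test is replaced by a small tolerance, and the function signature is an illustrative assumption.

```python
import numpy as np

def reward(p_current, p_g, d_min, d_start_to_goal, t, t_max):
    """Sketch of the shaped reward of Eq. (8). The thresholds 0.2,
    -0.25, and the 0.5 timeout scale follow the equation; the epsilon
    goal tolerance is an assumption."""
    p_current, p_g = np.asarray(p_current), np.asarray(p_g)
    if np.linalg.norm(p_current - p_g) < 1e-6:   # p_current == p_g
        return 1.0
    if d_min < 0:                                # collision
        return -0.25
    if 0 < d_min < 0.2:                          # discomfort zone
        return -0.1 + d_min / 2.0
    if t == t_max:                               # timeout: reward progress made
        return (d_start_to_goal - np.linalg.norm(p_g - p_current)) \
               / d_start_to_goal * 0.5
    return 0.0
```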
Other crucial terms of RL include the _value_, _policy_, _value function_, and _policy function_. The value denotes _how good one state is, or how good one action is in one state_. It comprises the _state value_ (\(V\) value) and the _state-action value_ (\(Q\) value), defined as the expectation of accumulated rewards: \(V(s)=\mathbb{E}[R_{t+1}+\gamma R_{t+2}+\cdots+\gamma^{T-t-1}R_{T}|s_{t}]\) and \(Q(s,a)=\mathbb{E}[R_{t+1}+\gamma R_{t+2}+\cdots+\gamma^{T-t-1}R_{T}|(s_{t},a_{t})]\), where \(\gamma\) is a discount factor. The policy denotes the way actions are selected; in the function approximation case, the policy is represented by a neural network. A value function in the deep RL scope is represented by neural networks that estimate the value of the environmental state via function approximation [32]. A policy function is also represented by neural networks. Actions are selected either indirectly (e.g., \(a\leftarrow\arg\max_{a}Q(s,a;\theta)\) in DQN [10][33]) or directly (e.g., \(\pi_{\theta}:s\to a\) in actor-critic algorithms [34]).
**Discrete soft actor critic.** The policy of a classical RL algorithm is obtained by maximizing the objective \(\sum_{t=0}^{T}\mathbb{E}_{(s_{t},a_{t})\sim p_{\pi}}[r(s_{t},a_{t})]\). The objective of SAC is the maximum entropy objective, which considers the reward and the entropy simultaneously:
\[J(\pi)=\sum_{t=0}^{T}\mathbb{E}_{(s_{t},a_{t})\sim p_{\pi}}[r(s_{t},a_{t})+\alpha\mathcal{H}\big{(}\pi(\cdot|s_{t})\big{)}],\quad\mathcal{H}\big{(}\pi(\cdot|s_{t})\big{)}=\mathbb{E}_{a_{t}\sim\pi}[-\log\pi(a_{t}|s_{t})] \tag{9}\]
where \(\mathcal{H}\big{(}\pi(\cdot|s_{t})\big{)}\) denotes the entropy and \(\alpha\) is the temperature parameter. Under objective maximization, the SAC policy provably converges to the optimal policy by soft policy iteration, which consists of _policy evaluation_ and _policy improvement_; the optimal policy is obtained by their repeated application. Policy evaluation [24] proves that if \(Q^{k+1}=\mathcal{T}^{\pi}(Q^{k})\), then \(Q^{k}\) converges to the soft Q value of \(\pi\) as \(k\rightarrow\infty\). \(\mathcal{T}^{\pi}(\cdot)\) is a modified Bellman backup operator given by
\[\mathcal{T}^{\pi}(Q)(s_{t},a_{t})\triangleq r(s_{t},a_{t})+\gamma\mathbb{E}_{s _{t+1}\sim p}[V(s_{t+1})] \tag{10}\]
where
\[V(s_{t})=\mathbb{E}_{a_{t}\sim\pi}[Q(s_{t},a_{t})-\log\pi(a_{t}|s_{t})]. \tag{11}\]
Applying \(\mathcal{T}^{\pi}(\cdot)\) to a Q value brings it _closer_ to \(Q^{\pi}\), in the sense that \(Q(s_{t},a_{t})\leq\mathcal{T}^{\pi}(Q)(s_{t},a_{t})\leq Q^{\pi}(s_{t},a_{t})\). Policy improvement [24] proves that \(Q^{\pi_{new}}\geq Q^{\pi_{old}}\) under objective maximization, where \(\pi_{new}\) is defined by
\[\pi_{new}=\arg\min_{\pi^{\prime}\in\Pi}D_{KL}\left(\pi^{\prime}(\cdot|s_{t})\,\middle\|\,\frac{\exp\left(Q^{\pi_{old}}(s_{t},\cdot)\right)}{Z^{\pi_{old}}(s_{t})}\right) \tag{12}\]
where \(Z^{\pi_{old}}(s_{t})\) is the partition function that normalizes the distribution. It can be ignored because it does not contribute to the gradient of the new policy. \(Q^{\pi_{old}}\) guides the policy update to ensure an improved new policy. The new policy is constrained to a parameterized family of distributions \(\pi^{\prime}\in\Pi\), such as Gaussians, to keep it tractable and optimal. Given the repeated application of policy evaluation and improvement, the policy \(\pi\) eventually converges to the optimal policy \(\pi^{*}\), with \(Q^{\pi^{*}}\geq Q^{\pi}\;\forall\pi\in\Pi\).
SAC is the combination of _soft policy iteration_ and _function approximation_. In (9), the temperature \(\alpha\) is either a fixed or an adaptive value. In function approximation, networks \(\theta\) and \(\phi\) are used to approximate the action value and the policy, respectively. The action value objective and its gradient are given by
\[\left\{\begin{array}{l}J(\theta)=\mathbb{E}_{(s_{t},a_{t})\sim p_{\pi}}\left[\frac{1}{2}\big{(}Q(s_{t},a_{t};\theta)-\bar{Q}(s_{t},a_{t})\big{)}^{2}\right]\\ \bar{Q}(s_{t},a_{t})=r(s_{t},a_{t})+\gamma\mathbb{E}_{s_{t+1}\sim p}[V(s_{t+1};\bar{\theta})]\\ \nabla_{\theta}J(\theta)=\nabla_{\theta}Q(s_{t},a_{t};\theta)\cdot\big{(}Q(s_{t},a_{t};\theta)-r(s_{t},a_{t})\\ \qquad-\gamma Q(s_{t+1},a_{t+1};\bar{\theta})+\gamma\alpha\log\pi_{\phi}(a_{t+1}|s_{t+1})\big{)}\end{array}\right. \tag{13}\]
where state value is approximated by \(V(s_{t+1};\bar{\theta})\). \(\bar{\theta}\) is the target action value network. \(\gamma\) is a discount factor. The policy objective and its gradient are obtained by
\[\left\{\begin{array}{l}J(\phi)=\mathbb{E}_{s_{t}\sim\mathcal{D}}\left[D_{KL}\left(\pi_{\phi}(\cdot\mid s_{t})\,\middle\|\,\frac{\exp(Q(s_{t},\cdot;\theta))}{Z_{\theta}(s_{t})}\right)\right]\\ \qquad=\mathbb{E}_{s_{t}\sim\mathcal{D}}[\mathbb{E}_{a_{t}\sim\pi_{\phi}}[\alpha\log\pi_{\phi}(a_{t}|s_{t})-Q(s_{t},a_{t};\theta)]]\\ \nabla_{\phi}J(\phi)=\nabla_{\phi}\alpha\log\pi_{\phi}(a_{t}|s_{t})\\ \qquad+\nabla_{\phi}f_{\phi}(\epsilon_{t};s_{t})\cdot\big{(}\nabla_{a_{t}}\alpha\log\pi_{\phi}(a_{t}|s_{t})-\nabla_{a_{t}}Q(s_{t},a_{t})\big{)}\\ a_{t}=f_{\phi}(\epsilon_{t};s_{t})\end{array}\right. \tag{14}\]
where \(f_{\phi}(\epsilon_{t};s_{t})\) is the network transformation and \(\epsilon_{t}\) is an input noise vector sampled from a fixed distribution, such as a spherical Gaussian. The temperature objective is defined by
\[J(\alpha)=\mathbb{E}_{a_{t}\sim\pi_{t}}[-\alpha\log\pi_{t}\left(a_{t}|s_{t} \right)-\alpha\mathcal{\bar{H}}] \tag{15}\]
where \(\mathcal{\bar{H}}\) is the target entropy. The temperature objective gradient is obtained by approximating dual gradient descent [35]. Eventually, the networks and the temperature are updated by
\[\left\{\begin{array}{l}\theta\leftarrow\theta-\gamma_{\theta}\nabla_{\theta}J(\theta)\\ \phi\leftarrow\phi-\gamma_{\phi}\nabla_{\phi}J(\phi)\\ \alpha\leftarrow\alpha-\gamma_{\alpha}\nabla_{\alpha}J(\alpha)\\ \bar{\theta}\leftarrow\tau\theta+(1-\tau)\bar{\theta}\end{array}\right. \tag{16}\]
SAC is used in tasks with a continuous action space. However, the action space in this paper is discrete, so SAC must be modified to suit our task. The required modifications [25] are summarized as follows:
1) The \(Q\) function should be changed from \(Q\colon S\times A\rightarrow\mathbb{R}\) to
\[Q\colon S\times A\rightarrow\mathbb{R}^{|A|}. \tag{17}\]
This means the \(Q\) values of all possible actions are output, instead of the \(Q\) value of the single action taken by the robot.
2) The outputted policy should be the action distribution
\[\pi\colon S\rightarrow[0,1]^{|A|} \tag{18}\]
instead of the _mean_ and _covariance_ of the action distribution in continuous SAC, \(\pi\colon S\rightarrow\mathbb{R}^{2|A|}\).
3) In the temperature objective (15), the expectation \(\mathbb{E}_{a_{t}\sim\pi_{t}}[\cdot]\) is obtained by Monte-Carlo estimation, which samples from the action distribution [25]. In the discrete action space case, the expectation can instead be calculated directly. Hence, the temperature objective changes into
\[J(\alpha)=\pi(s_{t})^{T}[-\alpha\log\pi(s_{t})-\alpha\mathcal{\bar{H}}] \tag{19}\]
where \(\mathcal{\bar{H}}\) is the target entropy. Similarly, the policy objective changes into
\[J(\phi)=\mathbb{E}_{s_{t}\sim\mathcal{D}}[\pi(s_{t})^{T}[\alpha\log\pi_{\phi}( s_{t})-Q(s_{t};\theta)]] \tag{20}\]
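The following PyTorch sketch shows how the discrete-SAC modifications (17)-(20) look in code. The twin-critic minimum and the use of a learnable `log_alpha` are implementation assumptions rather than details fixed by the equations.

```python
import torch

def discrete_sac_losses(probs, log_probs, q1, q2, log_alpha, target_entropy):
    """Sketch of the discrete-SAC objectives of Eqs. (19)-(20).
    probs/log_probs: [B, |A|] action distribution from the policy head
    (Eq. 18); q1, q2: [B, |A|] Q values from the twin critics (Eq. 17)."""
    alpha = log_alpha.exp()
    q_min = torch.min(q1, q2)  # twin-critic minimum (an implementation choice)
    # Policy objective (20): the expectation over actions is exact.
    policy_loss = (probs * (alpha.detach() * log_probs - q_min)).sum(dim=1).mean()
    # Temperature objective (19), also an exact expectation; the policy
    # terms are detached so only log_alpha receives gradient.
    entropy = -(probs * log_probs).sum(dim=1).detach()
    alpha_loss = (alpha * (entropy - target_entropy)).mean()
    return policy_loss, alpha_loss
```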
## III Method
This section presents the implementations and optimizations of our motion planning algorithms. We first present the implementation of the relational graph based DSAC. Then, the relational graph based DSAC is improved by introducing the attention weight based DSAC. Finally, the attention weight based DSAC is further improved by integrating the skip connection method and LSTM pooling into the attention network architecture.
### _Relational graph based DSAC (RG-DSAC)_
The mechanism of the relational graph [13] is shown in Fig. 3a. The source input collected from the environment consists of the robot feature \(s_{r}\) and \(N\) obstacle features \((o_{i},i\in\{1,2,\ldots,N\})\). These features have different dimensions, so they are first mapped to the same dimension to meet the input requirement of the graph via
\[s_{r,emb\_1}=MLP(s_{r}) \tag{21}\]
\[o_{i,emb\_1}=MLP(o_{i}) \tag{22}\]
These embeddings are computed by _multi-layer perceptrons_ (MLPs). \(s_{r,emb\_1}\) and \(o_{i,emb\_1}\) denote the embeddings of the robot and the \(i\)-th obstacle feature. All obstacle embeddings are concatenated to form the embedded obstacle state \(s_{o,emb\_1}\). The embedded robot state and embedded obstacle state are concatenated to form the initial
Fig. 3: Mechanisms of relational graph (a), attention weight (b), skip connection method for attention weight (c), and LSTM pooling and skip connection method for attention weight (LSA) (d). In the LSA encoder, the skip connection is for reducing the feature loss, while LSTM replaces the sum operator of the attention network to make the final interpreted features (environmental state) injective.
feature matrix (environmental representation) \(X\) by
\[X=concat(s_{r,emb\_1},s_{o,emb\_1}) \tag{23}\]
The first row of \(X\) (\(X[0,:]\)) denotes the robot feature, and the remaining rows (\(X[i,:],i\in\{1,2,\ldots,N\}\)) denote the obstacle features.
Given the feature matrix \(X\), the relation matrix \(A\), which represents the _robot-obstacle and obstacle-obstacle_ relationships, is computed by a _similarity function_. This is achieved by recursively concatenating each feature \(X[i,:],i\in\{0,1,\ldots,N\}\) with all features \(X[j,:],j\in\{0,1,\ldots,N\}\) to form the _pairwise features of the robot-obstacle and obstacle-obstacle_. Then, the relation feature \(A[i,:]\) is obtained by an MLP that maps these pairwise features to a fixed dimension (the same dimension as that of \(X[i,:]\)) via
\[A[i,:]=MLP\big{(}concat(X[i,:],X[:,:])\big{)},\quad i\in\{0,1,\ldots,N\} \tag{24}\]
Given the feature matrix and the relation matrix, the interaction feature matrix \(H\) is obtained by the _message passing rule_ of the _graph convolutional network_ (GCN) via
\[H^{l+1}=\sigma(AH^{l}W^{l})+H^{l},\quad H^{0}=X,\quad l\in\{0,1,\ldots\} \tag{25}\]
where \(\sigma\) is the activation function, \(W^{l}\) is the layer-specific trainable weight matrix, and \(l\) denotes the index of the neural network layer. The _difference_ between the initial feature matrix \(X\) and the interaction feature matrix \(H\) is that \(H\) includes both initial features and relation features, while \(X\) only includes initial features. The interaction feature matrix \(H^{l+1}\) therefore outperforms the initial feature matrix \(X\) in _expressive power_. This is achieved via the relation matrix \(A\) and message passing, and it is confirmed by the training and evaluation performance in simulated and real-world motion planning. Moreover, the expressive power of the interaction feature can be further improved by LSTM pooling, which maps the obstacle interaction features \(H^{l+1}[1:,:]\) to sequential hidden features. Hence, the final output of the relational graph (the environmental state) that feeds DSAC consists of the robot interaction feature and the obtained sequential hidden features via
\[S^{rg}=[H^{l+1}[0,:],LSTM(H^{l+1}[1:,:])] \tag{26}\]
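A compact PyTorch sketch of this encoder is given below. It is a simplification under stated assumptions: the relation MLP of Eq. (24) is reduced to scalar pairwise scores so that \(A\) is a square matrix usable in the message-passing product of Eq. (25), a single graph layer is shown, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class RelationalGraphEncoder(nn.Module):
    """Sketch of the RG encoder of Eqs. (21)-(26); sizes are assumptions."""
    def __init__(self, robot_dim=9, obs_dim=5, emb=50, hidden=50):
        super().__init__()
        self.embed_r = nn.Linear(robot_dim, emb)   # Eq. (21)
        self.embed_o = nn.Linear(obs_dim, emb)     # Eq. (22)
        self.relation = nn.Linear(2 * emb, 1)      # scalar pairwise score, cf. Eq. (24)
        self.w = nn.Linear(emb, emb, bias=False)   # W^l in Eq. (25)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)

    def forward(self, s_r, o):                     # s_r: [9], o: [N, 5]
        x = torch.cat([self.embed_r(s_r).unsqueeze(0),
                       self.embed_o(o)], dim=0)    # Eq. (23), [N+1, emb]
        n = x.size(0)
        # Pairwise concatenation -> relation matrix A, cf. Eq. (24).
        pair = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                          x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        a = torch.softmax(self.relation(pair).squeeze(-1), dim=-1)  # [N+1, N+1]
        h = torch.relu(a @ self.w(x)) + x          # message passing, Eq. (25)
        # Robot row plus LSTM-pooled obstacle rows, Eq. (26).
        _, (h_obs, _) = self.lstm(h[1:].unsqueeze(0))
        return torch.cat([h[0], h_obs[0, 0]], dim=-1)
```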
### _Attention weight based DSAC (AW-DSAC)_
Given the relational graph, it is clear that the relation matrix \(A\) plays an essential role in improving the expressive power of the output features. The relation matrix includes both the robot-obstacle and obstacle-obstacle relations. Recalling our task, robotic motion planning among dense obstacles, the robot-obstacle relation is clearly what matters. The obstacle-obstacle relation has little _direct importance_ for generating features with high expressive power, although it has _marginal importance_ for predicting future obstacle trajectories, which slightly improves motion planning performance [13]. To further improve motion planning performance, more attention should therefore be paid to making the best of the robot-obstacle relation. Moreover, the importance of each obstacle varies across time steps; it is reflected in the robot speed, the moving directions, and the distance between the robot and the obstacle.
The recent attention weight mechanism [15] focuses on the pairwise robot-obstacle features. It computes an attention score that weighs the importance of dynamic obstacles and makes the expressive power of the interpreted environmental state interpretable. Hence, we apply the attention weight in place of the relational graph to obtain high and interpretable expressive power of the output features.
As with the relational graph, in the attention weight case (Fig. 3b) the environmental state fed to DSAC, \(S^{aw}\), is defined by the combination of the robot and obstacle features via
\[S^{aw}=[s_{r},S^{aw}_{o}] \tag{27}\]
where \(S^{aw}_{o}\) denotes the weighted obstacle feature, and it is defined by
\[S^{aw}_{o}=\sum_{i=1}^{N}[softmax(\alpha_{i})]\cdot h_{i} \tag{28}\]
where \(\alpha_{i}\) and \(h_{i}\) denote the _attention score_ and the _interaction feature_ of the robot and obstacle \(o_{i}\), respectively. The interaction feature is a high-level feature that better outlines a robot-obstacle relation than the shallow feature \(e_{i}\). The interaction feature is defined by
\[h_{i}=f_{h}(e_{i};w_{h}) \tag{29}\]
where \(f_{h}(\cdot)\) and \(w_{h}\) denote the MLP and its weight. \(e_{i}\) denotes the _embedded shallow feature_ obtained from the pairwise robot-obstacle feature \([s_{r},o_{i}]\). The attention score is defined by
\[\alpha_{i}=f_{\alpha}(e_{i};w_{a}) \tag{30}\]
where \(f_{\alpha}(\cdot)\) and \(w_{a}\) denote the MLP and its weight. The embedded shallow feature is defined by
\[e_{i}=f_{e}([s_{r},o_{i}];w_{e}),\quad i\in\{1,2,\ldots,N\} \tag{31}\]
where \(f_{e}(\cdot)\) and \(w_{e}\) denote the MLP and its weight.
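The following PyTorch sketch assembles Eqs. (27)-(31) into one module; the layer sizes and the single-linear-layer choices for \(f_{h}\) and \(f_{\alpha}\) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionWeightEncoder(nn.Module):
    """Sketch of the attention encoder of Eqs. (27)-(31)."""
    def __init__(self, robot_dim=9, obs_dim=5, emb=100, hid=50):
        super().__init__()
        self.f_e = nn.Sequential(nn.Linear(robot_dim + obs_dim, emb), nn.ReLU())
        self.f_h = nn.Linear(emb, hid)   # interaction feature, Eq. (29)
        self.f_a = nn.Linear(emb, 1)     # attention score, Eq. (30)

    def forward(self, s_r, o):           # s_r: [9], o: [N, 5]
        pair = torch.cat([s_r.unsqueeze(0).expand(o.size(0), -1), o], dim=-1)
        e = self.f_e(pair)               # embedded shallow features, Eq. (31)
        h = self.f_h(e)                  # [N, hid]
        w = torch.softmax(self.f_a(e), dim=0)      # softmax over obstacles
        s_o = (w * h).sum(dim=0)         # weighted sum pooling, Eq. (28)
        return torch.cat([s_r, s_o], dim=-1)       # Eq. (27)
```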
### _Skip connection for attention weight based DSAC (SA-DSAC)_
Recent progress in supervised DL [36][37][38] shows that low-level (shallow) and high-level (deep) features play different roles in the learning of neural networks. Low-level features provide more details of the source environmental state, while high-level features outline its overall structure. Both contribute to expressive power. The attention weight mechanism, however, only uses high-level features to form the interaction feature \(h_{i}\) (see Fig. 3b). This loses details of the environmental state, and low expressive power of the final feature \(S^{aw}\) follows. To improve the expressive power of the environmental state interpreted by the attention weight, we introduce SA-DSAC, which integrates the skip connection method (Fig. 3c) into the attention network architecture to generate an optimized interaction feature by
\[h_{i}=f_{h}(concat(e_{i},[s_{r},o_{i}]);w_{h}) \tag{32}\]
where \(f_{h}(\cdot)\) and \(w_{h}\) denote the MLP and its weight. \(e_{i}\) denotes the _embedded shallow feature_ obtained from the pairwise robot-obstacle feature \([s_{r},o_{i}]\).
### _LSTM pooling and skip connection for attention weight based DSAC (LSA-DSAC)_
Given the attention weight mechanism, we can see that the weighted obstacle features \(S^{aw}_{o}\) are pooled by summing all weighted interaction features. Recent research [39] shows that the sum operation outperforms the _mean_ and _max_ operations in pooling features to generate new features with high expressive power. However, this does not mean that the sum operation is absolutely _injective_[39]. The more injective a feature is, the more distinguishable it is from other features.
Hence, high injectivity of a feature means high expressive power [39]. The sum operation only outlines the overall structure of the pooled features, and some features pooled by summation lack injectivity, i.e., they are indistinguishable. For instance, \(sum(3,1)\) and \(sum(2,2)\) are statistically equal, but the features \([3,1]\) and \([2,2]\) are obviously different. We believe that _keeping some source features in the feature pooling process_ contributes to the injectivity of pooled features.
LSTM pooling is expected to achieve this goal, since the source features are only mapped into sequential hidden features. In this process, the structural information and part of the feature details of the source features are kept, instead of only their statistical summary as in the sum operation. Hence, we introduce LSA-DSAC, which uses an LSTM to replace the sum operation in pooling the weighted obstacle feature \(S_{o}^{aw}\). The LSTM maps the weighted interaction features \(softmax(\alpha_{i})\cdot h_{i}\) to sequential features (Fig. 3d), which better preserves each weighted interaction feature. This is achieved by
\[S_{o}^{aw}=LSTM\big{(}[softmax(\alpha_{i})\cdot h_{i}]\big{)},\quad i\in\{1,2,\ldots,N\} \tag{33}\]
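A minimal sketch of the two LSA modifications is shown below: the skip connection of Eq. (32) concatenates the raw pairwise feature \([s_{r},o_{i}]\) with \(e_{i}\) before \(f_{h}\), and the LSTM of Eq. (33) replaces the sum pooling. The dimension `pair_dim=14` (a 9-dimensional robot state plus a 5-dimensional observable obstacle state) and the other sizes are assumptions.

```python
import torch
import torch.nn as nn

class LSAPooling(nn.Module):
    """Sketch of the LSA modifications of Eqs. (32)-(33)."""
    def __init__(self, pair_dim=14, emb=100, hid=50):
        super().__init__()
        # Eq. (32): f_h takes concat(e_i, [s_r, o_i]) via the skip connection.
        self.f_h = nn.Linear(emb + pair_dim, hid)
        self.lstm = nn.LSTM(hid, hid, batch_first=True)

    def forward(self, e, pair, scores):    # e: [N, emb], pair: [N, 14], scores: [N, 1]
        h = self.f_h(torch.cat([e, pair], dim=-1))       # Eq. (32)
        weighted = torch.softmax(scores, dim=0) * h      # [N, hid]
        # Eq. (33): LSTM pooling replaces the sum operator of Eq. (28).
        _, (s_o, _) = self.lstm(weighted.unsqueeze(0))
        return s_o[0, 0]
```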
Once the environmental state \(S^{aw}=[s_{r},S_{o}^{aw}]\) generated by LSA is prepared, it feeds DSAC to train models for the motion planning of the robot. In the implementation, separate attention networks are used to form the _critic_ and the _policy_ of DSAC by
\[critic=[\theta_{att_{c}},\theta_{c1},\theta_{c2}] \tag{34}\]
\[policy=[\theta_{att\_p},\theta_{p}] \tag{35}\]
where a double-network architecture (networks \(\theta_{c1},\theta_{c2}\)) is used in the critic to reduce the overestimation of the Q value, while the policy has a single prediction network \(\theta_{p}\). The attention network connects with the prediction network to form the critic or policy of DSAC (Alg. 1). The training process of LSA-DSAC is shown in Fig. 4. The episodic data \(<\!s,a,r,s^{\prime}\!>\) of each time step is obtained by executing the policy of DSAC:
\[<s_{t},a_{t},r_{t},s_{t+1}>\sim\pi(a_{t}|s_{t};[\theta_{att\_p},\theta_{p}]) \tag{36}\]
The episodic data is stored in the replay buffer \(\mathcal{D}\) at the end of each episode by

\[\mathcal{D}\leftarrow\mathcal{D}\cup\mathcal{E},\quad\mathcal{E}=\mathcal{E}\cup\{<s_{t},a_{t},r_{t},s_{t+1}>\} \tag{37}\]
The networks are trained in each step of an episode (Alg. 2). In the forward propagation process, the critic loss \(\mathcal{Loss}_{Q}\) and the policy loss \(\mathcal{Loss}_{p}\) are obtained by
\[\mathcal{Loss}_{Q}=MSE\big{(}Q(a|s)_{c1},Q_{next\_dis}\big{)}+MSE(Q(a|s)_{c2}, Q_{next\_dis}) \tag{38}\]
\[\mathcal{Loss}_{p}=-mean\big{[}\mathbb{E}_{s\sim p}[Q(s)]+\alpha\cdot\mathcal{H}\big{(}\pi(\cdot\mid s)\big{)}\big{]} \tag{39}\]
where \(Q_{next\_dis}\) and \(Q(a|s)_{ci},i\in\{1,2\}\) denote the discounted next state value and the current action values, respectively. \(\mathbb{E}_{s\sim p}[Q(s)]\) and \(\mathcal{H}\big{(}\pi(\cdot\mid s)\big{)}\) denote the expectation of the current state value and the current policy entropy, respectively. \(\alpha\) denotes the temperature parameter.
**Compute discounted next state value.**\(Q_{next\_dis}\) is computed by
\[Q_{next\_dis}=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim p}[Q(s^{\prime})] \tag{40}\]
where \(r(s,a)\) denotes the reward from the environment after executing action \(a\) in state \(s\), \(\mathbb{E}_{s^{\prime}\sim p}[Q(s^{\prime})]\) denotes the expectation of the next state value, and \(\gamma\) denotes the discount factor. \(\mathbb{E}_{s^{\prime}\sim p}[Q(s^{\prime})]\) is computed by
\[\mathbb{E}_{s^{\prime}\sim p}[Q(s^{\prime})]=\sum\big{[}p^{\prime}\cdot\big{(}\min(Q(s^{\prime})_{c1},Q(s^{\prime})_{c2})-\alpha\cdot\log p^{\prime}\big{)}\big{]} \tag{41}\]
where \(Q(s^{\prime})_{c1}\) and \(Q(s^{\prime})_{c2}\) denote the next state values computed by the target critic via the algorithm _Forward-propagation-critic_. \(p^{\prime}\) and \(\log p^{\prime}\) denote the next policy distribution and its logit value. They are computed by _Forward-propagation-policy_.
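In code, steps (40)-(41) reduce to a few tensor operations; the sketch below assumes batched `[B, |A|]` tensors and the discount \(\gamma=0.95\) from TABLE I.

```python
import torch

@torch.no_grad()
def discounted_next_state_value(r, next_q1, next_q2, next_probs,
                                next_log_probs, alpha, gamma=0.95):
    """Sketch of Eqs. (40)-(41): the exact expectation of the next state
    value under the target critics and the current policy, then discounting."""
    q_min = torch.min(next_q1, next_q2)                                  # [B, |A|]
    v_next = (next_probs * (q_min - alpha * next_log_probs)).sum(dim=1)  # Eq. (41)
    return r + gamma * v_next                                            # Eq. (40)
```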
The forward propagation of the critic and that of the policy (Alg. 3-4) are almost the same. The difference is that the critic uses two networks to compute two Q values by
\[Q(s)_{ci}\leftarrow f_{\theta_{ci}}(S^{aw}),\quad i\in\{1,2\} \tag{42}\]
Then, the two Q values are aggregated (via the minimum in Eqs. (41) and (45)), which reduces the bias (overestimation) of the Q value. The policy uses a single network to compute the policy distribution and its logit value by
\[\log p,p\gets f_{\theta_{p}}(S^{aw}) \tag{43}\]
**Compute current action values.** To obtain \(Q(a|s)_{ci},i\in\{1,2\}\), the current state values \(Q(s)_{ci},i\in\{1,2\}\) are computed first by the algorithm _Forward-propagation-critic_. Then, the current action values are computed by gathering the state values at the taken action \(a\) via
\[Q(a|s)_{c1}=Q(s)_{c1}.gather(a),Q(a|s)_{c2}=Q(s)_{c2}.gather(a) \tag{44}\]
**Compute expectation of current state value.** The computation of the expectation of the current state value differs from that of the next state value. It is computed by
\[\mathbb{E}_{s\sim p}[Q(s)]=\sum\big{[}\min(Q(s)_{c1},Q(s)_{c2})\cdot p\big{]} \tag{45}\]
where \(Q(s)_{ci},i\in\{1,2\}\) and \(p\) are computed by _Forward-propagation-critic_ and _Forward-propagation-policy_, respectively.
Figure 4: Training process of our LSA-DSAC. LSA-DSAC starts by collecting data from the environment using the initialized models. Collected data is saved in the replay buffer from which data is sampled. The policy and critic are updated based on sampled data until convergence, resulting in trained models which are saved for evaluations in the motion planning tasks.
**Compute policy entropy.**\(\mathcal{H}\big{(}\pi(\cdot\mid s)\big{)}\) is computed by
\[\mathcal{H}\big{(}\pi(\cdot\mid s)\big{)}=\mathbb{E}[-\log\pi(\cdot\mid s)]=-\sum p\cdot\log p \tag{46}\]
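Assembled from Eqs. (45)-(46), the policy loss of Eq. (39) can be sketched as follows; it is numerically the same objective as Eq. (20), written the way this section computes it.

```python
import torch

def policy_loss(probs, log_probs, q1, q2, alpha):
    """Sketch of Eq. (39) assembled from Eqs. (45)-(46);
    probs/log_probs/q1/q2 are batched [B, |A|] tensors."""
    expected_q = (probs * torch.min(q1, q2)).sum(dim=1)   # Eq. (45)
    entropy = -(probs * log_probs).sum(dim=1)             # Eq. (46)
    return -(expected_q + alpha * entropy).mean()         # Eq. (39)
```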
Before the back-propagation process, the temperature loss \(\mathcal{L}_{\alpha}\) is also required for the network update. \(\mathcal{L}_{\alpha}\) is computed by
\[\mathcal{L}_{\alpha}=-\min\ [\log\alpha\cdot(\mathcal{\bar{H}}-\mathcal{H})] \tag{47}\]
where \(\mathcal{\bar{H}}\) is the target entropy. Then, the temperature and all networks are updated by gradient descent via
\[[\theta_{att\_c},\theta_{ci}]\leftarrow[\theta_{att\_c},\theta_{ci}]-\gamma\nabla_{[\theta_{att\_c},\theta_{ci}]}\mathcal{Loss}_{Q},\quad i\in\{1,2\} \tag{48}\]
\[[\theta_{att\_p},\theta_{p}]\leftarrow[\theta_{att\_p},\theta_{p}]-\gamma\nabla_{[\theta_{att\_p},\theta_{p}]}\mathcal{Loss}_{p} \tag{49}\]
\[\log\alpha\leftarrow\log\alpha-\gamma\nabla_{\log\alpha}\mathcal{L}_{\alpha},\quad\alpha\leftarrow e^{\log\alpha} \tag{50}\]
Finally, target critic is also updated for a new training round via
\[\bar{\theta}_{att\_c}\leftarrow\tau\theta_{att\_c}+(1-\tau)\bar{\theta}_{att\_c},\quad\bar{\theta}_{c1}\leftarrow\tau\theta_{c1}+(1-\tau)\bar{\theta}_{c1},\quad\bar{\theta}_{c2}\leftarrow\tau\theta_{c2}+(1-\tau)\bar{\theta}_{c2} \tag{51}\]
**Algorithm 1: LSA-DSAC**
1. Initialize the replay buffer \(\mathcal{D}\)
2. Initialize the attention net of the critic \(\theta_{att\_c}\), the attention net of the policy \(\theta_{att\_p}\), the prediction nets of the critic \(\theta_{c1}\) and \(\theta_{c2}\), and the prediction net of the policy \(\theta_{p}\), where
\(critic=[\theta_{att\_c},\theta_{c1},\theta_{c2}]\), \(policy=[\theta_{att\_p},\theta_{p}]\)
3. Initialize the target critic \([\bar{\theta}_{att\_c},\bar{\theta}_{c1},\bar{\theta}_{c2}]\):
\(\bar{\theta}_{att\_c}\leftarrow\theta_{att\_c},\ \bar{\theta}_{c1}\leftarrow\theta_{c1},\ \bar{\theta}_{c2}\leftarrow\theta_{c2}\)
4. **For** episode \(i<N\) **do**
5. **For** \(t\neq T_{terminal}\) in episode \(i\) **do**
6. Execute action:
\(<s_{t},a_{t},r_{t},s_{t+1}>\sim\pi(a_{t}|s_{t};[\theta_{att\_p},\theta_{p}])\)
7. **Train** if length(\(\mathcal{D}\)) \(\geq\) batch size \(l\)
8. Store the data of this episode:
\(\mathcal{E}=\mathcal{E}\cup\{<s_{t},a_{t},r_{t},s_{t+1}>\}\)
9. Update the replay buffer: \(\mathcal{D}\leftarrow\mathcal{D}\cup\mathcal{E}\)
10. \(i=i+1\)
11. Save the models: \(\theta_{att\_c}\), \(\theta_{att\_p}\), \(\theta_{c1}\), \(\theta_{c2}\), and \(\theta_{p}\)
**Algorithm 2: Train**
1. Sample \(K\)-batch experiences randomly from the replay buffer \(\mathcal{D}\)
**//Prepare discounted next state value \(Q_{next\_dis}\)**
2. Compute the next policy distribution \(p^{\prime}\) and its logit value \(\log p^{\prime}\):
**Forward-propagation-policy**
3. Compute the next state values \(Q(s^{\prime})_{c1}\), \(Q(s^{\prime})_{c2}\) by the target critic:
**Forward-propagation-critic**
4. Compute the expectation of the next state value:
\(\mathbb{E}_{s^{\prime}\sim p}[Q(s^{\prime})]=\sum\big{[}p^{\prime}\cdot\big{(}\min(Q(s^{\prime})_{c1},Q(s^{\prime})_{c2})-\alpha\cdot\log p^{\prime}\big{)}\big{]}\)
5. Compute the discounted next state value:
\(Q_{next\_dis}=r(s,a)+\gamma\mathbb{E}_{s^{\prime}\sim p}[Q(s^{\prime})]\)
**//Prepare current action values \(Q(a|s)_{c1}\), \(Q(a|s)_{c2}\)**
6. Compute the current state values \(Q(s)_{c1}\), \(Q(s)_{c2}\):
**Forward-propagation-critic**
7. Compute the current action values \(Q(a|s)_{c1}\), \(Q(a|s)_{c2}\):
\(Q(a|s)_{c1}=Q(s)_{c1}.gather(a)\)
\(Q(a|s)_{c2}=Q(s)_{c2}.gather(a)\)
**//Prepare Q value loss (critic loss)**
8. Compute the Q value loss via (38):
\(\mathcal{Loss}_{Q}=MSE\big{(}Q(a|s)_{c1},Q_{next\_dis}\big{)}+MSE\big{(}Q(a|s)_{c2},Q_{next\_dis}\big{)}\)
9. Compute the policy loss via (39) and the temperature loss via (47)
10. Update the networks, the temperature, and the target critic via (48)-(51)
## IV Experiments
This section presents the implementation details of our algorithms. First, the network framework of our LSA-DSAC is given. Second, the model training details of our algorithms and the state-of-the-art are presented. Third, the model evaluations are conducted, including the converged reward evaluation, interpretability evaluation, qualitative evaluation, quantitative evaluation, time complexity evaluation, transferability evaluation, and robustness evaluation. Finally, the physical implementation is presented.
### _Network framework_
In the implementation, the network framework of our LSA-DSAC (Fig. 5) adopts an architecture with separate attention networks. The prediction networks of the critic and the policy each connect with a different attention network to form the critic and policy of DSAC. This contributes to overall convergence, compared with an architecture with a shared attention network. The prediction network of the critic consists of two linear networks, each with three linear layers. The prediction network of the policy has one linear network, also with three linear layers.
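A minimal sketch of one such prediction network is shown below; the sizes follow TABLE I (input \(6+50\), hidden 128, output 81 actions), while the ReLU activations between the linear layers are an assumption.

```python
import torch.nn as nn

def make_prediction_net(in_dim=56, hidden=128, out_dim=81):
    """One three-layer linear prediction network; sizes follow TABLE I
    (input 6+50, hidden 128, output 81 actions). ReLU is assumed."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

critic_nets = (make_prediction_net(), make_prediction_net())  # theta_c1, theta_c2
policy_net = make_prediction_net()                            # theta_p
```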
### _Model training_
We first implemented RG-DSAC for motion planning in the circle-crossing simulator. The training result (Fig. 6a) demonstrates that RG-DSAC converges faster than DSAC with the source environmental state, and its converged result also outperforms that of DSAC with the source environmental state.
The output of the relational graph is the matrix \(H^{l+1}\) (layer number \(l=2\)). The final interpreted feature fed to DSAC, \(S^{rg}\), is the combination of the robot feature \(H^{l+1}[0,:]\) and the pooled obstacle features \(LSTM(H^{l+1}[1:,:])\), where the obstacle features are pooled by an LSTM. To prove the efficacy and efficiency of our method of preparing the final features for training (_rob+lstm(obs)_), we compared it with other potential features in ablation experiments, where the other features include:
1) features based on the feature concatenation of the robot and obstacles (_rob+obs_),
2) source robot feature (_rob_),
3) features from summing concatenated feature of the robot and obstacle (_sum(rob+obs)_),
4) features from concatenating the robot feature and obstacle features pooled by MLP (_rob+mlp(obs)_),
5) pooled features of the robot and obstacles by MLP (_mlp(rob+obs)_), and
6) pooled features of the robot and obstacles by LSTM (_lstm(rob+obs)_).
The ablation experiments (Fig. 6b) demonstrate that the features interpreted by our method outperform the other potential features for training. The experiments also indicate that the robot feature should be kept separate from the obstacle features pooled by the LSTM or MLP, resulting in the features _rob+lstm(obs)_ and _rob+mlp(obs)_. This contributes to the expressive power of the interpreted features for training. The interpreted feature _rob+mlp(obs)_ contributes marginally to the convergence, while the interpreted feature _rob+lstm(obs)_ of our method improves the convergence dramatically.
We noticed that a separate architecture (two relation networks and two LSTMs) outperforms a shared architecture (one shared relation network with shared and separate LSTMs) in the implementation of RG-DSAC (Fig. 6c). The experiment shows that separate LSTM encoding contributes dramatically to the convergence (yellow and green curves), while a separate relation network also contributes to the convergence (blue and yellow curves).
Fig. 5: Network framework of our LSA-DSAC. The framework of LSA-DSAC consists of the critic network and the policy network, both of which receive the same environment features from the circle-crossing simulator for training. Each consists of a feature interpretation part and a prediction part. The feature interpretation part is our LSA encoder, which translates the features of the robot and obstacles into the attention-based environment feature \(S^{aw}\), the combination of the robot feature \(s_{r}\) and the attention-based obstacle features \(S^{aw}_{o}\). The attention-based environment feature \(S^{aw}\) is then fed into the prediction networks, which are linear neural layers. Double prediction networks are used in the critic network to reduce the overestimation of the Q value, while a single prediction network is used in the policy network.
In Fig. 6d, the convergence improves after the attention network replaces the relational graph for feature interpretation, and it is further improved by integrating the skip connection method and LSTM pooling into the attention network. The experiment also shows that AW-DSAC and SA-DSAC _overfit_ in training because the sum operation lacks robustness and injectivity. LSA-DSAC outperforms the remaining algorithms in the converged result at the cost of convergence speed in early-stage training. The experiment is extended to 10-obstacle cases (Fig. 6e), where our LSA-DSAC still outperforms the remaining algorithms in overall convergence speed and converged result. LSA-DSAC is also trained in cases with 1, 2, 3, and 4 obstacles (Fig. 6f). The experiment shows that an increase in environmental complexity (the number of dynamic obstacles) results in a decrease in convergence.
Finally, LSA-DSAC is compared with RG-DSAC and the state-of-the-art, which includes CADRL, LSTM-A2C, LSTM-RL, and SARL. Note that ORCA is not trainable, so it is not included in the training comparisons. LSA-DSAC is compared with CADRL in the 1-obstacle case because CADRL only supports single
Fig. 6: The training curves of our algorithms and the state-of-the-art. (a) shows that the environment state interpreted by the relational graph contributes more to the convergence of DSAC than the source environment state. (b) shows that, after acquiring the robot and obstacle features interpreted by the relational graph, the obstacle features should be encoded by the LSTM; the robot features are then concatenated with the LSTM-encoded obstacle features to form the new features for the learning of DSAC. This method of preparing new features outperforms the rest of the methods in improving the convergence of DSAC. (c) shows that the network architecture with a separate relational graph and separate LSTM (Fig. 5) contributes more to the convergence of DSAC than the architecture with a shared relational graph or shared LSTM. (d-e) show that the feature interpretation based on the relational graph only partly represents the relationship between the robot and the obstacles, resulting in slow convergence of DSAC, while the attention weight (attention network) focuses on and precisely describes this relationship, resulting in fast convergence. However, the attention weight overfits in training, and the overfitting problem can be mitigated by applying the skip connection and LSTM pooling methods. (f) shows that the convergence of LSA-DSAC slows as the number of dynamic obstacles in the environment increases. (g) shows the comparison of LSA-DSAC and CADRL in convergence; CADRL does not support multi-agent training, so this comparison is presented in a separate figure. (h-i) show the comparisons of LSA-DSAC, RG-DSAC, and the state-of-the-art algorithms that support multi-agent training.
obstacle training (Fig. 6g). The rest of the state-of-the-art algorithms support multi-obstacle training, and they are trained in cases with 5 and 10 obstacles (Fig. 6h-i). The experiments show that our LSA-DSAC is superior to the state-of-the-art in both convergence speed and converged result. The training parameters of our LSA-DSAC are shown in TABLE I.
### _Model evaluation_
The trained models of all algorithms are comprehensively evaluated in the 5-obstacle case in the circle-crossing simulator from seven perspectives: the _converged reward evaluation_, _interpretability (explainable ability) evaluation_, _qualitative evaluation_, _quantitative evaluation_, _time complexity evaluation_, _transferability evaluation_, and _robustness evaluation_. The models are evaluated with 500 test sets (episodes).
**Converged reward evaluation.** The converged reward indicates the overall performance of the trained models on a small test set during training and provides a quick impression of how good a model is. Our LSA-DSAC outperforms the state-of-the-art in converged reward, while SARL performs best among the state-of-the-art (TABLE II). CADRL only supports training with one obstacle; therefore, its model is not included in the comparison.
**Interpretability (explainable ability) evaluation.** Interpretability (explainable ability) here is defined as the ability to decide _directly, explicitly, and uniformly_ how good the motion planning performance is. The attention mechanism (attention network) provides the attention score (a post-training indicator) to evaluate the importance of obstacles, thereby justifying the robot's policy or actions. The motion planning strategy is then generated based on the attention score. The motion planning strategy of our LSA-DSAC (Fig. 7) indicates that the attention score is an overall evaluation that considers the moving direction, moving speed, and distance between the robot and the obstacles (e.g., humans). The distance between the robot and an obstacle sometimes contributes little to the attention score (e.g., human 2 in Fig. 7, who has the minimum distance to the robot). Interpretability comparisons of the models are shown in TABLE III. Our LSA-DSAC and SARL are interpretable thanks to the attention score, while the motion planning performance of the remaining algorithms cannot be justified directly, explicitly, and uniformly.
TABLE II: Converged reward of the models in training with five dynamic obstacles. ORCA is not trainable; therefore, it is not included in the comparisons. CADRL does not support multi-agent training, and its converged result is from training with one dynamic obstacle.

| Algorithms | Converged reward |
| --- | --- |
| ORCA [8] | — |
| CADRL [27] | 0.61 (training with one obstacle) |
| LSTM-RL [15][10] | 0.49 |
| SARL [15] | 0.55 |
| LSTM-A2C [29][11][28] | 0.30 |
| Our RG-DSAC | 0.50 |
| Our LSA-DSAC | **0.57** |
TABLE I: Parameters and hyper-parameters of LSA-DSAC

| Parameters/hyper-parameters | Values |
| --- | --- |
| LSTM hidden size | 50 |
| Number of MLP layers | 3 |
| ReLU layer after MLP | Yes (first MLP layer) |
| MLP input/output size (interaction and embedded layers) | [150, 100]-[100, 50] |
| MLP input/output size (attention layer) | [100, 100]-[100, 1] |
| Reward | Source reward |
| Gamma | 0.95 |
| Tau | 0.005 |
| Learning rate | 3e-4 |
| Alpha | 0.2 |
| Frequency of network update | Per step |
| Automatic entropy tuning | True |
| Batch size | 128 |
| Input layer size (DSAC network) | 6+50 |
| Hidden layer size (DSAC network) | 128 |
| Output size of policy network (DSAC network) | 81 |
TABLE III: Interpretability (explainable ability) evaluation. Interpretability here is measured by whether post-training indicators are generated to justify the actions or policies.

| Algorithm | Interpretability (explainable ability) |
| --- | --- |
| ORCA | — |
| CADRL | No |
| LSTMRL | No |
| SARL | Yes |
| LSTM-A2C | No |
| RG-DSAC | No |
| LSA-DSAC | **Yes** |
Fig. 7: Examples of obstacles with attention scores in LSA-DSAC. The humans here denote the dynamic obstacles. The distance between the robot and an obstacle is sometimes not decisive for the attention score, such as human (obstacle) 2 in (a) and (b). Obstacles heading toward the robot, such as human 0, are expected to have a higher attention score, but the direction of the obstacle alone does not decide the attention score. The attention score is an overall evaluation that considers the direction of motion, speed, and distance between the robot and the obstacles. Hence, in (b), human 0 has a high attention score, but it is slightly smaller than those of human 1 and human 3.
TABLE V: Qualitative evaluation. Quality here is measured according to the efficiency or the properties of the learned motion planning strategies.

| Algorithm | Learnt motion planning strategy |
| --- | --- |
| ORCA | Cross |
| CADRL | Cross/Follow-pass |
| LSTMRL | Partly-bypass/Follow-pass |
| SARL | Full-bypass/Partly-bypass |
| LSTM-A2C | Back-pass/Wait-pass |
| RG-DSAC | Partly-bypass/Follow-pass |
| LSA-DSAC | **Full-bypass/Partly-bypass** |
Figure 8: Six learned motion planning strategies. The numbers along the trajectories represent the time step of each robot or obstacle. The full-bypass and partly-bypass are the most efficient motion planning strategies. The performance of wait-pass and follow-pass strategies is acceptable. The back-pass is the most time-consuming motion planning strategy. The cross strategy is efficient in the motion planning sometimes, but it causes many collisions.
TABLE IV: Features of the six motion planning strategies. The strategies are defined by humans according to human experience.

| Strategy | Description | Speed | Collision |
| --- | --- | --- | --- |
| Full-bypass | Bypass all obstacles | Fast | Less |
| Partly-bypass | Bypass most obstacles | Fast | Less |
| Follow-pass | Follow front obstacles and pass | Medium | Less |
| Wait-pass | Wait until obstacles move away and pass | Slow | Less |
| Back-pass | Move back until obstacles move away and pass | Slow | Less |
| Cross | Cross dense obstacles | High/Medium/Slow | More |
**Qualitative evaluation.** Quality here refers to the trajectory quality of the robot in an episode. In the 500 tests, the robots based on our algorithms and the (trainable) state-of-the-art learned some of six motion planning strategies: _full-bypass_, _partly-bypass_, _follow-pass_, _wait-pass_, _back-pass_, and _cross_. Their features and examples are shown in TABLE IV and Fig. 8. For each algorithm, we sampled 50 trajectories from the 500 tests and found that most sampled trajectories of SARL and our LSA-DSAC followed the high-quality full-bypass and partly-bypass strategies (TABLE V), while the trajectories of CADRL, LSTMRL, LSTM-A2C, and RG-DSAC followed the medium-quality follow-pass, wait-pass, and back-pass strategies. CADRL and ORCA adopted the low-quality cross strategy, which caused more collisions, although it sometimes led to fast speed. Fig. 9 presents examples that indicate the superiority of our LSA-DSAC in trajectory quality compared with the state-of-the-art.
**Quantitative evaluation.** Quantity here refers to the statistical motion planning results over the 500 tests of each algorithm in terms of the _success rate_, _time to goal_, _collision rate_, _timeout rate_ (allowed time 25 s), _mean distance between robot and obstacles_, and _mean reward_. The statistics (TABLE VI) show that our LSA-DSAC outperforms the state-of-the-art in all respects except the time to goal, where LSA-DSAC still maintains high performance (2nd place).
Fig. 9: Superiority of LSA-DSAC in trajectory quality. This example demonstrates the good performance of our LSA-DSAC in the time cost to reach the goal.
TABLE VI: Statistical results of the quantitative evaluation.

| Algorithms | Success rate | Time to goal | Collision rate | Timeout rate | Mean distance | Mean reward |
| --- | --- | --- | --- | --- | --- | --- |
| ORCA | 0.43 | 10.86 | 0.564 | 0.006 | 0.08 | — |
| CADRL | 0.89 | 11.30 | 0.106 | 0.004 | 0.16 | 0.47 |
| LSTMRL | 0.96 | 12.10 | 0.02 | 0.01 | 0.16 | 0.49 |
| SARL | 0.99 | 10.96 | 0.01 | 0.00 | 0.18 | 0.56 |
| LSTM-A2C | 0.88 | 17.04 | 0.05 | 0.07 | 0.12 | 0.36 |
| RG-DSAC | 0.94 | 11.37 | 0.06 | 0.00 | 0.14 | 0.52 |
| LSA-DSAC | **0.996** | 10.94 | **0.004** | **0.00** | **0.15** | **0.57** |
TABLE VII: Time complexity evaluation. Time complexity here is measured by the training time cost of each algorithm.

| Algorithms | Time cost (hour/10K epi.) |
| --- | --- |
| ORCA | — |
| CADRL | 7.4 (train with one obstacle) |
| LSTMRL | 16.08 |
| SARL | 14.72 |
| LSTM-A2C | 0.42 |
| RG-DSAC | 4.38 |
| LSA-DSAC | 4.56 |
TABLE VIII: Transferability evaluation. Transferability is measured by the performance of the trained models (trained in the circle-crossing simulator) in a new environment (the square-crossing simulator), using the same metrics as the quantitative evaluation.

| Algorithms | Success rate | Time to goal | Collision rate | Timeout rate | Mean distance | Mean reward |
| --- | --- | --- | --- | --- | --- | --- |
| ORCA | 0.74 | 9.12 | 0.256 | 0.004 | 0.08 | — |
| CADRL | 0.88 | 11.19 | 0.01 | 0.11 | 0.17 | 0.48 |
| LSTMRL | 0.91 | 10.54 | 0.03 | 0.06 | 0.12 | 0.49 |
| SARL | 0.92 | 10.96 | 0.02 | 0.06 | 0.17 | 0.51 |
| LSTM-A2C | 0.45 | 15.61 | 0.41 | 0.14 | 0.10 | 0.12 |
| RG-DSAC | 0.40 | 11.09 | 0.59 | 0.01 | 0.11 | 0.10 |
| LSA-DSAC | **0.93** | 10.95 | 0.05 | 0.02 | **0.14** | **0.51** |
TABLE IX: Robustness evaluation: value changes of the statistical results from the circle-crossing simulator to the square-crossing simulator.

| Algorithms | Success rate | Time to goal | Collision rate | Timeout rate | Mean distance | Mean rewards |
| --- | --- | --- | --- | --- | --- | --- |
| ORCA | 0.310 | 1.560 | 0.308 | 0.002 | 0.000 | — |
| CADRL | 0.010 | 2.110 | 0.096 | 0.106 | 0.010 | 0.02 |
| LSTMRL | 0.078 | 1.160 | 0.008 | 0.050 | 0.020 | 0.04 |
| SARL | 0.070 | 0.000 | 0.010 | 0.060 | 0.010 | 0.05 |
| LSTM-A2C | 0.430 | 1.430 | 0.360 | 0.070 | 0.020 | 0.24 |
| RG-DSAC | 0.540 | 0.280 | 0.530 | 0.010 | 0.030 | 0.42 |
| LSA-DSAC | 0.066 | 0.010 | 0.046 | 0.020 | 0.010 | 0.06 |
Figure 10: An example of 500 tests in the square-crossing simulator. (a) presents the simulator, while (b) presents the trajectories of the robot and obstacles at the end of an episode. The square-crossing simulator is only used in transferability evaluations.
**Time complexity evaluation.** Time complexity here is measured by the _training time cost_ of each algorithm. The online learning algorithm LSTM-A2C learns from online data and hence takes only 0.42 h to train (TABLE VII), while the off-policy algorithms, including our RG-DSAC and LSA-DSAC, take much longer. However, our LSA-DSAC and RG-DSAC still perform well (around 4 h) compared with the other off-policy algorithms.
**Transferability evaluation.** Transferability here refers to the performance of a model trained in the circle-crossing simulator in a new environment (the square-crossing simulator, Fig. 10). The test results (TABLE VIII) show that our LSA-DSAC performs best among all trained models in success rate, mean distance between robot and obstacles, and mean reward. For the time to goal, collision rate, and timeout rate, our LSA-DSAC still performs well (3rd, 4th, and 3rd place, respectively).
**Robustness evaluation.** Robustness here denotes the _stability_ of a trained model in a new environment, described as the _value changes_ (the changes of the statistical results of the quantitative evaluation). TABLE IX presents the value changes of the models from the circle-crossing simulator to the square-crossing simulator. Although our LSA-DSAC does not perform best among all trained models, it still performs well (2nd, 2nd, 3rd, 3rd, 2nd, and 4th place, respectively).
### _Physical implementation_
This paper provides a demonstration of a physical implementation. Its motivation is to provide a possible way to implement a physical robot and enable it to navigate in dense and dynamic scenarios as in the simulator. This paper emphasizes the evaluation of motion planning algorithms in simulators instead of the real world, because simulators can provide as many tests as needed to evaluate the performance of motion planning algorithms extensively, and the errors introduced by simulators are predictable. In the real world, unexpected errors may create an unfair environment for evaluating motion planning performance. For example, false operations by humans and measurement errors from the sensors may make real-world evaluation results differ from those in simulators under the same settings. This paper attempts to create a real-world environment with the same settings as the simulators to demonstrate the motion planning performance of the algorithms; unexpected real-world errors are not considered in the physical implementation. Such errors are expected to be addressed by integrating model-based methods or Bayesian inference into model-free RL. However, model-based methods may bring new problems, such as expensive computation [40]. This paper does not extend this topic further, but model-based methods may be considered in future work on motion planning in dense and dynamic scenarios.
In the physical implementation, the models of the trainable motion planning algorithms are trained with data from the simulator, while the testing data is collected by the robot sensors from the real world. The mechanism of the physical implementation is as follows. A local area network (LAN) is established in the Robot Operating System (ROS) [41]. The agents in the experimental area are connected to the LAN and are equipped with marker points that can be captured by cameras to create rigid body models in the motion capture system. The workflow of the physical implementation is shown in Figure 11a. First, the cameras capture the agents' location information (observations) via the marker points. Second, the observations are sent to the host to compute the agents' actions by the RL model and ORCA. Third, the actions are broadcast to the agents over WiFi. Finally, the actions are executed by the agents. Once the robot reaches its goal, the task is finished.
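A minimal sketch of the broadcast step is shown below; the topic name, the holonomic use of `linear.x`/`linear.y`, and the node name are assumptions for illustration.

```python
import rospy
from geometry_msgs.msg import Twist

def broadcast_action(pub, vx, vy):
    """Publish the planner's velocity command to an agent's velocity
    topic. The /cmd_vel topic name and holonomic linear.x/linear.y
    usage are assumptions, not details fixed by the paper."""
    cmd = Twist()
    cmd.linear.x, cmd.linear.y = vx, vy
    pub.publish(cmd)

# Hypothetical usage on the host:
# rospy.init_node('lsa_dsac_host')
# pub = rospy.Publisher('/robot1/cmd_vel', Twist, queue_size=1)
# broadcast_action(pub, 0.4, 0.2)
```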
### _Workflow of physical implementation_
In the action execution (Figure 11b), a Jetson Nano converts digital actions to ROS messages. An STM32 then computes the wheel speeds from the ROS messages and converts them into Pulse-Width Modulation (PWM) signals that can be recognized and executed by the motors. The motion capture system is based on optical tracking technology [42][43][44] to localize the agents and consists of eight 3D cameras. Finally, our physical implementation is tested in both a ROS Gazebo environment and the real world (Figure 11c). Videos of the ROS Gazebo test and the real-world tests are available as follows:
Fig. 11: Details of physical implementation. (a) presents the detailed steps of physical implementation. (b) presents the hardware of the robot and obstacles for the action execution. (c) presents the motion capture system and an example of the tests in the gazebo and the real world. The motion capture system localizes the robot and obstacles in real-time to compute the positions and velocities of the robot and obstacles.
Figure 14: Real-world test in dense and dynamic scenarios. Like the Gazebo test and the real-world test in a static environment, the real-world test in dense and dynamic scenarios uses the model of LSA-DSAC, trained for 1k episodes in the circle-crossing simulator. Two robots use the same LSA-DSAC model, and each robot treats the other as an obstacle in the motion planning task. The obstacles spread along the robots' routes to their destinations and walk randomly and continuously. The robots reached their destinations while avoiding all dynamic obstacles. Note that this paper omits the real-world test in dense and dynamic scenarios with one robot because each robot treats the other robot as an obstacle in the motion planning.
Figure 12: The Gazebo test. The same trained models are evaluated in the Gazebo environment and the circle-crossing simulator simultaneously. The robot and obstacles in these two environments are controlled by LSA-DSAC and ORCA, respectively. The model of LSA-DSAC is trained for 1k episodes in the circle-crossing simulator. The motion planning results in the two environments were almost the same, despite a few differences in the smoothness of the trajectories, as shown in the video results.
Figure 13: The real-world test in the static environment. As in the Gazebo test, the real-world test in a static environment uses the model of LSA-DSAC, trained for 1k episodes in the circle-crossing simulator. The obstacles spread along the robot's route to the destination. The robot reached its destination while avoiding all obstacles.
1) Gazebo test (Figure 12). The video is available at [https://youtu.be/A-GdHGoWwCk](https://youtu.be/A-GdHGoWwCk). The purpose of the Gazebo test is to compare the motion planning differences between the Gazebo environment and the simulator. The same trained models are evaluated in the Gazebo environment and the circle-crossing simulator. The experiment demonstrates that the motion planning performance in the two environments is almost the same under the same settings. The difference is that the trajectories of the robot and the obstacles in the Gazebo environment are not as smooth as those in the circle-crossing simulator, as the video demonstrates: the sensor errors of the Gazebo environment cause positioning drift, reducing the smoothness of the trajectories. However, the robot can still reach the destination safely and efficiently by learning an efficient motion planning policy and keeping a safe distance from the obstacles.
2) Real-world test in the static environment (Figure 13). The video is available at [https://www.youtube.com/watch?v=b1SFbA14AqE](https://www.youtube.com/watch?v=b1SFbA14AqE). The model tested in the Gazebo environment is then tested in the static real-world environment. The test shows that the robot can reach the destination while avoiding all obstacles.
3) Real-world test in dense and dynamic scenarios (Figure 14). The video is available at [https://youtu.be/UB6aC3XoZ6c](https://youtu.be/UB6aC3XoZ6c). The real-world test in dense and dynamic scenarios uses the same model as the above two tests. The robots reached their destinations while avoiding all dynamic obstacles. Like the Gazebo test, the real-world tests in static and dynamic scenarios suffer from positioning drift caused by sensor errors, which reduces the smoothness of the trajectories, as the videos demonstrate. However, the robots can still reach their destinations safely and efficiently by learning an efficient motion planning policy and keeping a safe distance from the obstacles.
## V Conclusion
This paper combines representation learning with reinforcement learning for robotic motion planning in environments with dense and dynamic obstacles. First, the relational graph is combined with DSAC to form RG-DSAC, which achieves satisfactory motion planning performance. Second, the expressive power of the interpreted features is improved by replacing the relational graph with the attention weight (attention network) in the feature interpretation, which improves network convergence. Third, the attention network is optimized with the skip connection method and LSTM pooling to eliminate overfitting in training, further improving the convergence speed and converged result. Extensive experiments (training and evaluations) of our algorithms and the state-of-the-art were conducted, and the results demonstrate that our LSA-DSAC outperforms the state-of-the-art in training and most evaluations. The details of the physical implementation of the robot and dynamic obstacles are also given to provide a possible method for transplanting the simulation into the real world. Motion planning experiments were conducted in indoor scenarios (a ROS Gazebo environment and the real world), which further demonstrates the credibility of our motion planning algorithm and physical implementation method in the real world.
Future research may focus on designing independent objectives for the attention network to further improve convergence and interpretability. We will also try tree models and Bayesian model-based methods to infer the hidden features of the robot and obstacles. This contributes to better interpretability and fewer unexpected errors once the robot works in the real world, thereby improving network convergence and reducing the sim2real gap.
## Acknowledgment
The physical implementation is supported in part by the National Natural Science Foundation of China under Grant _62003218_, Guangdong Basic and Applied Basic Research Foundation under Grant _2019A1515110234_, and Shenzhen Science and Technology Program under Grant _RCBS20200714114921371_.
|
2310.00332 | MFL Data Preprocessing and CNN-based Oil Pipeline Defects Detection | Recently, the application of computer vision for anomaly detection has been
under attention in several industrial fields. An important example is oil
pipeline defect detection. Failure of one oil pipeline can interrupt the
operation of the entire transportation system or cause a far-reaching failure.
The automated defect detection could significantly decrease the inspection time
and the related costs. However, there is a gap in the related literature when
it comes to dealing with this task. The existing studies do not sufficiently
cover the research of the Magnetic Flux Leakage data and the preprocessing
techniques that allow overcoming the limitations set by the available data.
This work focuses on alleviating these issues. Moreover, in doing so, we
exploited the recent convolutional neural network structures and proposed
robust approaches, aiming to acquire high performance considering the related
metrics. The proposed approaches and their applicability were verified using
real-world data. | Iurii Katser, Vyacheslav Kozitsin, Igor Mozolin | 2023-09-30T10:37:12Z | http://arxiv.org/abs/2310.00332v1 | # MFL Data Preprocessing and CNN-based Oil Pipeline Defects Detection
###### Abstract
Recently, the application of computer vision for anomaly detection has been under attention in several industrial fields. An important example is oil pipeline defect detection. Failure of one oil pipeline can interrupt the operation of the entire transportation system or cause a far-reaching failure. The automated defect detection could significantly decrease the inspection time and the related costs. However, there is a gap in the related literature when it comes to dealing with this task. The existing studies do not sufficiently cover the research of the Magnetic Flux Leakage data and the preprocessing techniques that allow overcoming the limitations set by the available data. This work focuses on alleviating these issues. Moreover, in doing so, we exploited the recent convolutional neural network structures and proposed robust approaches, aiming to acquire high performance considering the related metrics. The proposed approaches and their applicability were verified using real-world data.
Deep learning · Computer vision · Convolutional neural networks · Anomaly detection · Fault detection · Oil pipelines · Magnetic Flux Leakage data · Defect · Technical diagnostics.
## 1 Introduction
Anomaly detection problems are of great importance in industrial applications because anomalies usually represent faults, failures, or their emergence Chandola et al. (2009). To detect them automatically, advanced analytic algorithms, including machine learning- and deep learning-based ones, can be applied. In this work, we investigated whether a deep neural network would perform well enough to provide insight for oil pipeline diagnostics. An oil pipeline system spans thousands of kilometers, which makes manual inspection very costly and sometimes impossible. Damage to pipelines that transport oil and gas products leads to severe environmental problems, and eliminating leakages and their consequences is expensive.
To avoid accidents, it is recommended to improve the efficiency of diagnostics and to increase the frequency of in-line-inspection (ILI) tool deployment (Fig. 1). ILI tools, also referred to as pipeline inspection gauges, use the Hall effect to measure localized Magnetic Flux Leakage (MFL) intensity along the pipe wall. While moving along the pipe, the gauge inspects the wall and detects magnetic field leaks. The MFL technique is nowadays the most common approach for nondestructive testing of oil and gas pipelines Loskutov et al. (2006).
The data collected during the inspection can be further analyzed and used to solve the main diagnostics problems Katser et al. (2022): detection of damages and defects, their localization, and diagnosis or defect classification. Such analysis results are useful for asset management and repair prioritization. This data analysis step is partly automated, but a lot of manual work is still done here, which makes it expensive and time consuming. Data analysis and machine learning techniques are very useful for making these processes more efficient in terms of both time and money. An improved diagnostic process thus allows running the whole ILI procedure more often and gaining more knowledge about pipeline health, resulting in better safety and fewer financial losses due to leakages.
The objective of this research is to assess the effectiveness of data engineering and computer vision (CV) techniques in oil pipeline diagnostics.
## 2 Literature Review and Problem Statement
The MFL technique is the most common approach for nondestructive testing of oil and gas pipelines. The data obtained during pipeline inspection has primarily been analyzed by expert- and heuristics-based methods and, more recently, by classical machine learning (ML) methods. A comparison of performance among different ML methods for the defect identification problem is presented in Khodayari-Rostamabad et al. (2009). The main challenge for the ML approach is creating informative and important features that can be used as input for ML methods. Usually, these diagnostic features are generated using expert knowledge and manually created heuristics. So, on the one hand, ML methods extend expert-based approaches and improve their quality. On the other hand, using manually generated features limits the quality of ML-based defect detection and fails to fully automate the diagnostic process. A variety of the most successful features is presented and analyzed in detail in Slesarev (2017).
To overcome the limitations of the expert-based and ML-based approaches, one can resort to Deep Learning (DL) techniques, which have shown significant progress and achieved remarkable results in numerous applications in just the past few years. The image classification problem is one of the most successful applications of DL, and of Convolutional Neural Networks (CNNs) in particular. CNNs can also be used to automate the process of feature generation in MFL data analysis. As an advantage, they can solve defect detection, weld strength detection, classification, and segmentation problems at the same time. In the literature there are examples of applying CNNs to defect detection Feng et al. (2017), weld defect detection Shang et al. (2020), weld and defect classification Yang et al. (2020), and defect size estimation Lu et al. (2019). For all the mentioned applications, CNNs outperformed traditional approaches.
Nevertheless, there are still few works dedicated to MFL data analysis using DL, and the existing DL approaches do not always achieve the quality required to fully automate the diagnostic process. A number of particular problems that can be solved using this novel approach have not been covered yet. For instance, we could not find any works applying CNNs to the defect segmentation task, despite the importance of this problem according to Feng et al. (2017); this can be an extension of the current research. This work seeks to address three different problems:
1. Defect detection with the DL techniques,
2. Welds strength detection with the DL techniques,
3. MFL data preprocessing.
To solve the first two problems, it is proposed to apply CNNs of different architectures and compare their results with the existing state-of-the-art approaches. We formulate defect detection as an image classification problem in ML terms, because that is the formulation the applied DL techniques are designed to solve. For the first problem, we state a binary classification problem (healthy pipe or defected pipe). For the second problem, we state a multiclass classification problem (healthy pipe, defected pipe, or defected weld), which covers the first two problems simultaneously.

Figure 1: In-line-inspection tool.
This research also addresses different preprocessing techniques for dealing with typical issues in MFL data, comparing the results of various preprocessing approaches used with various CNNs. The goal is to construct the preprocessing approach that improves the results of the defect detection problem the most.
## 3 Dataset Description
There are three main classes of data that are attended to by diagnostic personnel; they are presented in Fig. 2. Some other classes of data (concerning pipe tees and bends) are out of the scope of this work, as are the different classes of defects and the different classes of welds (healthy and defected).
Although MFL data looks quite similar for different pipes and ILI tools, it can differ significantly. The data mainly depends on pipe size, wall width, sensor geometry, and other geometric characteristics. Moreover, ILI tools differ a lot between pipe sizes. Therefore, the repeatability of the results on different datasets should be investigated separately. Below we provide the dataset characteristics, which are also presented in Table 1. The data was collected from a pipe 219 mm in diameter. The MFL dataset provides information about a single inspection tool run. The dataset has 64 features collected from 64 sensors installed at a constant step (10.75 mm) around the perimeter of the ILI tool. The data is collected as an array of shape 1x64 at a constant step (3.37 mm) along the ILI tool's movement inside the pipe. The dataset has 4,470,704 samples (steps along the pipe) that represent a 15,162.85 m section of the pipeline. The sample values vary from 0 to 4,095 units. It contains 745 defects of different types and 1,462 welds, 34 of which were found to be defected. Figure 2 shows examples of healthy data, data with a weld, and data with a defect. A technical report attached to the dataset contains information about the location of welds and defects, defect types, sizes, and other related characteristics. The report is prepared manually by a domain expert, so it contains some inaccuracies and needs additional preprocessing, as does the data itself.
## 4 Preprocessing Procedures
The raw data has several issues that make it unusable for CV problems without proper preprocessing. The issues are:
1. Sensor malfunctions (zeroed values cause bold horizontal line in Fig. 2),
2. Displaced origins between data and report coordinates,
3. Inaccurate annotations, e.g., missed defects, wrong defect location, etc.,
4. No annotated data for the segmentation task.

Figure 2: Image classes distinguished in this work.
The preprocessing stages and procedures that resolve these and other issues are given in this section.
Transforming the initial dataset into separate imagesThe initial dataset represents a long table indexed by the coordinate along the pipe. To state and solve the image classification problem, we first decompose this long table into smaller square 64x64 subtables. This can also be interpreted as a sliding window that runs over the coordinate (index) of the initial dataset and clips the dataset into non-overlapping subsets (Figure 3). Each subset can be shown as an image of a pipe section. As a result, we obtained a dataset of 11,690 images of the healthy class, 711 images of the defected class, and 1,412 images with welds. The characteristics of the pipeline defect dataset are described in Table 3. These classes were assigned to the images according to the markup from the technical report, where the coordinates of the welds and defects are noted. Thus, an image covering a range of coordinates that contains a defect is labeled with the defected class. From now on, we refer to this transformed dataset of 13,813 64x64 images. A minimal sketch of this clipping step is given below.
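To make the decomposition concrete, here is a minimal NumPy sketch of the clipping and labeling described above; the function names and the handling of the incomplete tail window are our own assumptions, not taken from the paper.

```python
import numpy as np

def to_images(data, win=64):
    """Clip the long (n_samples, 64) MFL table into non-overlapping
    64x64 images via a sliding window over the row index."""
    n = (data.shape[0] // win) * win              # drop the incomplete tail
    return data[:n].reshape(-1, win, data.shape[1])

def label_images(n_images, win, step_mm, defect_coords_mm):
    """Assign the defect class to every image whose coordinate range
    contains a reported defect (3.37 mm per sample along the pipe)."""
    labels = np.zeros(n_images, dtype=int)
    for coord in defect_coords_mm:
        idx = int(coord / step_mm) // win         # image covering this coord
        if idx < n_images:
            labels[idx] = 1
    return labels

images = to_images(raw_table)                     # raw_table: (4470704, 64) array
```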
Sensor malfunctions problemTo deal with sensor malfunctions, we propose to fill the gaps (zeroed values) with values calculated by different methods. Additionally, following the experts, we consider values below 2,000 to be abnormal in this domain and replace them with zeroes during preprocessing.
1. Abnormal values are equal to 0. Then Min-Max scaling to \([0.5:1]\) range.
2. Abnormal values are equal to the mean of normal values from one picture. Then Min-Max scaling.
3. Abnormal values are equal to the mean of normal values over the column. Then Min-Max scaling.
4. Abnormal values are equal to the mean of neighboring sensors over the column. Then Min-Max scaling.
5. Abnormal values are equal to the interpolation results over the column. Then Min-Max scaling.
The results of all the applied methods are presented in Fig. 4. The Min-Max scaling can be applied using the whole dataset or just one image; both approaches can be compared experimentally. A sketch of these filling strategies follows.
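The sketch below illustrates filling strategies 1-3 and 5 on a single image (method 4, averaging the neighboring sensors, is analogous). The [0.5, 1] scaling range is stated for method 1 and is assumed here for the others.

```python
import numpy as np

def minmax(x):
    """Min-Max scale to [0.5, 1]; the bounds could equally come from the
    whole dataset rather than a single image."""
    lo, hi = x.min(), x.max()
    return 0.5 + 0.5 * (x - lo) / (hi - lo + 1e-12)

def fill_abnormal(img, method=1, threshold=2000.0):
    """Fill abnormal readings (below 2,000 units, per the experts) in one
    64x64 image whose columns correspond to the 64 sensors."""
    img = img.astype(float).copy()
    bad = img < threshold
    if method == 1:                        # zero abnormal values, scale the rest
        out = np.zeros_like(img)
        out[~bad] = minmax(img[~bad])
        return out
    if method == 2:                        # image-wide mean of normal values
        img[bad] = img[~bad].mean()
    else:                                  # per-sensor (column-wise) filling
        for j in range(img.shape[1]):
            col, b = img[:, j], bad[:, j]  # col is a view, writes go through
            if not b.any() or b.all():
                continue
            if method == 3:                # column mean of normal values
                col[b] = col[~b].mean()
            elif method == 5:              # column-wise linear interpolation
                col[b] = np.interp(np.flatnonzero(b),
                                   np.flatnonzero(~b), col[~b])
    return minmax(img)
```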
Since the ILI tool location data did not match the defect location data from the report, it was necessary to align the two. The key observation is that the signal values from the magnetic flux sensors grow at weld sites. Hence the solution was to find the locations of the maxima of the sensor data values and then match them with the weld coordinates, as sketched below.
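A sketch of this peak-based alignment using SciPy's `find_peaks`; the height percentile and the minimum weld spacing are illustrative assumptions, since the actual settings are not given in the paper.

```python
import numpy as np
from scipy.signal import find_peaks

def locate_welds(data, step_mm=3.37, min_gap_m=5.0):
    """Welds show up as maxima of the total MFL signal, so peaks of the
    per-step row sum give candidate weld coordinates along the pipe."""
    profile = data.sum(axis=1)                         # total signal per step
    min_dist = int(min_gap_m * 1000 / step_mm)         # samples between welds
    peaks, _ = find_peaks(profile, distance=min_dist,
                          height=np.percentile(profile, 99))
    return peaks * step_mm / 1000.0                    # coordinates in meters

# The constant offset between the ILI and report coordinate systems can then
# be estimated, e.g., as the median difference between each reported weld and
# the nearest detected peak, and applied to all defect coordinates.
```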
\begin{table}
\begin{tabular}{l l}
\hline
Parameter & Value \\
\hline
Pipeline diameter, mm & 219 \\
Pipeline length, m & 15162.85 \\
Number of samples & 4470704 \\
Number of features & 64 \\
Min value & 0 \\
Max value & 4095 \\
Number of defects & 745 \\
Number of welds (with defects) & 1462 (34) \\
\hline
\end{tabular}
\end{table}
Table 1: Dataset characteristics
Figure 3: Non-overlapping sliding window scheme for data preprocessing.
Inaccurate annotations problemThis problem is common in nondestructive testing of oil and gas pipelines Khodayari-Rostamabad et al. (2009), as it is for manual labelling in general. There appear to be many missed defects, which affects the quality of the solution. Besides, some defect types and locations are wrong. To eliminate the wrong-location issue, we additionally searched for extrema around the provided location and selected the defects or welds using the updated coordinates.
AugmentationAlthough we had a lot of data, we had a small number of defects and welds in comparison with healthy pipe wall instances. An augmentation procedure was used to balance the image classes and improve model quality by increasing the number of images in the small classes (defects, welds). The Albumentations library Buslaev et al. (2020) was selected as the augmentation tool. All the augmentations applied to welds and defects are presented in Table 2. Based on domain knowledge, not all selected augmentations were applied to images with welds. The details of the applied augmentations are presented in Buslaev et al. (2020) and references therein. The characteristics of the augmented dataset used for the research are described in Table 3. Examples of augmentations are shown in Fig. 5, and a sketch of such a pipeline is given below.
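As an illustration, an Albumentations pipeline along these lines might look as follows; the exact transforms and probabilities are listed in Table 2, so the ones below are only a plausible guess.

```python
import albumentations as A

# Defects have no preferred orientation, so flips and small rotations apply.
defect_aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.Rotate(limit=15, p=0.5),
    A.RandomBrightnessContrast(p=0.3),
])

# Welds appear as bands with a fixed orientation in the image, so rotations
# are excluded for them (the domain-knowledge restriction mentioned above).
weld_aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
])

augmented = defect_aug(image=img)["image"]   # img: one 64x64 array
```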
## 5 Defect Detection Methods
Pipeline defect detection is composed of two problems: first, the defect should be detected, and second, it should be evaluated using segmentation results. We propose here a novel CNN architecture for solving the first problem. Additionally, we present the existing architectures that achieve the best results in the MFL and X-ray defect detection problems.
### CNN Preliminaries
A CNN is a special type of neural network that has proven effective in computer vision applications. State-of-the-art results can be achieved in segmentation and classification tasks Sainath et al. (2013). Compared to computer vision algorithms that do not take advantage of CNNs, much less pre-processing is required. More importantly, such networks are able to learn characteristics from data that would otherwise have to be accounted for individually Huet et al. (2018).
Even though CNNs have been proposed in different architectures - to increase their efficiency for specific tasks and/or datasets - only three types of layers are used without exception, each with a specific purpose: convolutional, pooling, and fully connected (linear) layers. The convolutional layers extract feature maps of the input images by applying filters over different regions of the images. For instance, with \(k\) filters, each filter having weight and bias \(w_{i}\) and \(b_{i}\), respectively, the convolution of an image patch \(x_{n}\) can be written as follows:
\[f_{i,n}=\sigma(w_{i}x_{n}+b_{i}), \tag{1}\]
where \(\sigma\) is the activation function. Besides the rectified linear unit (ReLU), sigmoid, or softmax activation functions, a multitude of other options exist, each with its individual advantages. These are applied to a layer's output neurons (e.g., after a convolutional layer). After a number of convolutional layers, pooling layers are commonly applied in prominent network architectures to reduce the size of particular dimensions; max-pooling and average-pooling are two examples. Besides reducing dimension sizes, pooling layers perform denoising when applied to images. The fully connected layers are generally the last layers of CNNs and have a structure similar to traditional neural networks Mingoti and Lima (2006).
### Existing CNNs
We implemented the CNN from Feng et al. (2017) with only one difference: we used square images (64x64 pixels) as input, so we omitted the Normalization layer. The interested reader can find all details and overall architecture parameters in Feng et al. (2017). From now on, this CNN is denoted CNN-2, after its number of convolutional layers. We also implemented the CNN from Shang et al. (2020), which showed better results than the pretrained and fine-tuned OverFeatNet, VGGNet, and GoogleNet networks. Since our input size was smaller than in the original paper, we used a smaller kernel size (3x3 instead of 7x7). All the details and CNN parameters are presented in Shang et al. (2020). From now on, this CNN is denoted RayNet.
### Proposed CNN-5 model
The proposed model in Fig. 6 consists of five convolutional layers overall. Each convolutional layer is followed sequentially by BN and Dropout (not shown in Fig. 6). All convolutional layers have the same kernel size, 5x5. All MaxPooling layers have kernel size 2x2 and stride 2. From now on, this CNN is denoted CNN-5. A sketch of this architecture is given below.
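A PyTorch sketch of CNN-5 follows; the channel widths, the ReLU activation, and the placement of the four pooling stages are our assumptions, since Fig. 6 is not reproduced here.

```python
import torch.nn as nn

class CNN5(nn.Module):
    """Sketch of CNN-5: five 5x5 convolutions, each followed by BatchNorm
    and Dropout; 2x2 MaxPooling with stride 2 between stages."""
    def __init__(self, n_classes=1, p_drop=0.33):
        super().__init__()
        chans = [1, 16, 32, 64, 64, 128]               # assumed widths
        layers = []
        for i in range(5):
            layers += [nn.Conv2d(chans[i], chans[i + 1], 5, padding=2),
                       nn.BatchNorm2d(chans[i + 1]),
                       nn.ReLU(),
                       nn.Dropout2d(p_drop)]
            if i < 4:                                   # 64 -> 32 -> 16 -> 8 -> 4
                layers.append(nn.MaxPool2d(2, 2))
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(128 * 4 * 4, n_classes)

    def forward(self, x):                               # x: (batch, 1, 64, 64)
        return self.head(self.features(x).flatten(1))
```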
\begin{table}
\begin{tabular}{l c c c}
\hline\hline
Data & Healthy & Defect & Weld \\
\hline
\multicolumn{4}{c}{Before augmentation} \\
\hline
Train & 11106 & 569 & 1130 \\
Validation & 584 & 142 & 282 \\
\hline
\multicolumn{4}{c}{After augmentation} \\
\hline
Train & 11106 & 8535 & 11300 \\
Validation & 584 & 142 & 282 \\
\hline\hline
\end{tabular}
\end{table}
Table 3: Dataset size before and after augmentation
### Performance metric
For each class, the binary classification problem is evaluated according to the one-versus-all principle. Recall is used as the binary classification metric for each class and is defined by the formula from Olson and Delen (2008):
\[Recall=\frac{TP}{TP+FN} \tag{2}\]
where \(TP\) is the number of samples for which the model correctly identified the considered class, and \(FN\) is the number of samples for which the model did not identify the considered class. The sketch below computes this metric per class.
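A direct implementation of this one-versus-all recall (returning percentages, as reported in Tables 4 and 5) could read:

```python
import numpy as np

def recall_per_class(y_true, y_pred, classes=(0, 1, 2)):
    """One-versus-all recall of Eq. (2), TP / (TP + FN), in percent."""
    scores = {}
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fn = np.sum((y_pred != c) & (y_true == c))
        scores[c] = 100.0 * tp / (tp + fn)
    return scores
```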
### Loss functions
Binary Cross-Entropy is used as the loss function of CNN-5:
\[BCE=-\frac{1}{N}\sum_{i=1}^{N}\left[y_{i}\cdot\log\left(p\left(\hat{y}_{i}\right)\right)+\left(1-y_{i}\right)\cdot\log\left(1-p\left(\hat{y}_{i}\right)\right)\right] \tag{3}\]
## 6 Results
Table 4 presents the comparison of different preprocessing and feature engineering techniques and different CNN architectures for the binary classification problem (normal pipe wall vs. defect/weld). Table 5 shows the multiclass classification problem (normal pipe wall, defect, or weld).
The batch size was 64, so the input to the network had shape (64, 1, 64, 64). In the experiments we used the Adam optimizer with initial learning rate 0.001 and a learning rate scheduler with parameters threshold = 0.0001, factor = 0.5, min lr = 0.0001, and patience = 484. For all experiments, the number of epochs was 12 and the dropout rate was 0.33. A minimal training setup along these lines is sketched below.
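Putting the stated hyperparameters together, a minimal PyTorch training loop could look as follows; `train_loader`, `val_loader`, and `evaluate` are placeholders, and we assume the scheduler is PyTorch's `ReduceLROnPlateau`, which matches the listed parameters.

```python
import torch

model = CNN5(n_classes=1)                      # binary head, from the sketch above
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
sched = torch.optim.lr_scheduler.ReduceLROnPlateau(
    opt, factor=0.5, patience=484, threshold=1e-4, min_lr=1e-4)
loss_fn = torch.nn.BCEWithLogitsLoss()         # Eq. (3), applied to raw logits

for epoch in range(12):
    model.train()
    for x, y in train_loader:                  # batches shaped (64, 1, 64, 64)
        opt.zero_grad()
        loss = loss_fn(model(x).squeeze(1), y.float())
        loss.backward()
        opt.step()
    val_loss = evaluate(model, val_loader)     # placeholder validation pass
    sched.step(val_loss)
```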
The filling methods were studied for the binary classification problem. Centering means using a peak (extremum) search procedure to define the weld or defect coordinates correctly; the centering procedure was studied for both the binary and multiclass classification problems. Moreover, Min-Max normalization using either a single image or the whole dataset was investigated. Finally, CNN-2 and CNN-5 were compared on centered images with the first filling method using single-image Min-Max normalization.
\begin{table}
\begin{tabular}{l c c c}
\hline
Method & \(\hat{y}=y=0\) & \(\hat{y}=y=1\) & Average \\
\hline
CNN-2 & 95.55 & 82.08 & 89.88 \\
RayNet & 96.92 & 80.42 & 89.81 \\
CNN-5 & 97.95 & **91.51** & **95.24** \\
CNN-5+LRN & **98.29** & 89.86 & 94.74 \\
\hline
\multicolumn{4}{c}{Filling techniques comparison} \\
\hline
CNN-5 (filling 1) & 97.95 & **91.51** & **95.24** \\
CNN-5 (filling 2) & 97.95 & 84.20 & 92.16 \\
CNN-5 (filling 3) & 97.26 & 83.02 & 91.27 \\
CNN-5 (filling 4) & **98.63** & 81.13 & 91.27 \\
CNN-5 (filling 5) & 98.12 & 81.84 & 91.27 \\
\hline
\end{tabular}
\end{table}
Table 4: Comparison of performance using the Recall metric among different classification methods for the binary classification problem. \(y=0\): healthy; \(y=1\): defect/weld
Figure 6: Proposed CNN architecture.
## 7 Conclusion
Today, manual analysis of magnetographic images is a bottleneck in pipeline diagnostics, since it is costly and limited by human resources. This study shows that this process can be fully automated, which is likely to make the analysis more reliable, faster, and cheaper.
The CNN-5 network, which outperformed the CNNs currently used for pipeline defect detection, was proposed. Moreover, the results of the experiments show that proper preprocessing procedures, including missing-value filling techniques and normalization strategies, help significantly improve the results and achieve high-quality oil pipeline diagnostics.
Finally, there can be several project development options:
1. To increase sizes of the datasets,
2. To improve the preprocessing procedures, including manual picture selection,
3. To try multiclass defects classification,
4. To try defected and healthy welds classification,
5. To apply defect depth evaluation,
6. To investigate the repeatability of the results for similar datasets or transfer learning possibility.
|
2308.00178 | Transparent conductive oxides and low loss nitride-rich silicon
waveguides as building blocks for neuromorphic photonics | Fully CMOS-compatible photonic memory holding devices hold a potential in a
development of ultrafast artificial neural networks. Leveraging the benefits of
photonics such as high-bandwidth, low latencies, low-energy interconnect and
high speed they can overcome the existing limits of the electronic processing.
To satisfy all these requirements a new photonic platform is proposed that
combines low-loss nitride-rich silicon as a guide and low-loss transparent
conductive oxides as an active material that can provide high nonlinearity and
bistability under both electrical and optical signals. | Jacek Gosciniak, Jacob B. Khurgin | 2023-07-31T22:20:11Z | http://arxiv.org/abs/2308.00178v1 | Transparent conductive oxides and low loss nitride-rich silicon waveguides as building blocks for neuromorphic photonics
###### Abstract
Fully CMOS-compatible photonic memory holding devices hold a potential in a development of ultrafast artificial neural networks. Leveraging the benefits of photonics such as high-bandwidth, low latencies, low-energy interconnect and high speed they can overcome the existing limits of the electronic processing. To satisfy all these requirements a new photonic platform is proposed that combines low-loss nitride-rich silicon as a guide and low-loss transparent conductive oxides as an active material that can provide high nonlinearity and bistability under both electrical and optical signals.
## Introduction
Neuromorphic computing refers to signal processing that tries to mimic how the brain processes signals [1]. In comparison to traditional computers, which are based on the von Neumann architecture with separate memory and processing units and operate sequentially [2], the brain processes signals in parallel [3, 4]. This provides huge benefits in terms of speed and energy efficiency, as data transfer is responsible for a large part of the power consumption. One way to overcome some of those limitations is to develop new algorithms that can improve signal processing [5, 6]; however, this still requires data transfer between memory and processor, which limits its efficiency. To deal with these limitations, much effort has been put in recent years into the development of artificial neurons and synapses that can be implemented in the network [1].
Neuromorphic computing based on photonics, _i.e._, neuromorphic photonics, uses photons as signal carriers to transfer information between different parts of the network [7-12]. Thanks to almost unlimited bandwidth, compatibility with standard CMOS technology, and almost zero power consumption for carrying out basic matrix multiplication, it can offer a huge improvement over neuromorphic electronics. Full parallelism can be achieved by busing multiple signals on a single waveguide at the speed of light. Simultaneously, optical weights can offer low computational latency. By combining these advantages, an improvement of at least a few orders of magnitude over electronic counterparts is expected. However, realizing such a demanding task requires a new material platform and low-loss architecture that are still missing.
Silicon nitride (SiN) is a ubiquitous material for photonic integrated circuit (PIC) technologies, since it is compatible with standard CMOS processes [13, 14]. It allows for cost-effective construction of devices and co-integration of electronic and photonic components on a single chip. Furthermore, photonic devices based on the SiN platform are characterized by higher tolerance to temperature drifts compared to other materials, lower optical losses and broad wavelength range operation, wider wavelength transparency, and improved crosstalk values [14]. SiN has already proved to be a proper material platform for the realization of neural networks, showing an increased degree of freedom in designing linear neurons [8, 9]. Thus, the SiN platform can play a key role as a routing layer in neuromorphic photonics [9].
Among the many active materials available for implementation in neuromorphic networks [1], transparent conductive oxides (TCOs) seem to be the material of choice for such tasks, as they provide nonlinearity and bistability under both an electrical signal and optical power coupled to the waveguide [15, 16]. Thus, they can provide dual-mode operation and bring a lot of flexibility in terms of operating conditions [17]. As has already been shown [15, 16], a TCO exhibits two stable states that depend on the history of the system; thus, it can act as a memristor [18-21]. TCOs belong to the epsilon-near-zero (ENZ) materials, which show large permittivity tunability under an applied voltage and/or light illumination [17, 22-24]. They are characterized by a fast switching time and a low switching voltage when operating under an electrical switching mechanism [17, 22-24], which is a huge benefit for the realization of efficient neuromorphic networks. And, similarly to the SiN platform, TCOs are CMOS-compatible and operate under low optical loss. Thus, a combination of SiN and TCOs can provide an ideal material platform for the realization of low-loss, CMOS-compatible, and extremely fast neuromorphic systems able to process information in place and under low operating power.
To process all information in place, a system has to possess some type of bistability, _i.e._, an analogue of the special activity that takes place in biological neurons, where neurons can switch between active and non-active states under the action of neuromodulating substances [3]. Bistability is thus a property of a system that exhibits two stable steady states, with the system resting in one of those states depending on its history [15, 16, 18, 25-32]. It can refer to the two opposite magnetizations of a magnet, the low or high resistance of an electronic device, a low or high signal transmitted through a device, etc. The two states represent the two values of a binary digit, _i.e._, a bit. To meet the demands of modern systems, bistable devices should operate at high speed, under low power consumption, and over a wide operational bandwidth. However, up to now, most of the proposed bistable devices suffer either from high power consumption, incompatibility with standard processing technology, narrow bandwidth, or a complicated design combining a nonlinear material with a resonant cavity [31]. Reliable bistable all-optical devices can bring progress in many fields, especially all-optical neural networks; thus, the search for such devices has intensified in the last few years [15, 16, 31, 32].
### Switching mechanism
Photonic devices with TCO materials can operate in dual mode, electrical and/or optical, thanks to the unique properties of TCO materials, whose real permittivity disperses under an applied electric field or optical pump, thus either generating or exciting free carriers [17, 22]. Depending on the requirements and working conditions, either process can be implemented in the proposed device.
### Electrical switching
Under an applied voltage the electrons accumulate at the TCO, which increases the local density of electrons and reduces the permittivity according to the Drude dispersion formula:
\[\varepsilon(\omega)=\varepsilon_{\infty}-\frac{N_{c}e^{2}}{\varepsilon_{0}(\omega^{2}+i\nu\omega)m^{*}(E)}\]
where \(\varepsilon_{\infty}\) is the permittivity due to the bound electrons, \(N_{c}\) is the carrier density, \(e\) is the electron charge, \(\omega\) is the working frequency, \(m^{*}(E)\) is the energy-dependent effective mass, and \(\nu\) is the scattering rate. As has been previously shown [33], even a unity-order permittivity change can be obtained under a reasonable voltage. The increased carrier concentration decreases the permittivity and shifts the material into the ENZ region, which increases the absorption losses of the device as the mode becomes more confined to the TCO material. Once the voltage is removed, the electrons flow away from the TCO and the TCO returns to its initial low-loss state. It should be emphasized that the switching process under electrical modulation is limited by RC delays that scale with device size [34].
### All-optical switching
In comparison, all-optical switching in TCOs operates via two mechanisms: interband absorption or intraband absorption of light. For interband absorption, the energy of the optical pump has to be greater than the bandgap of the TCO to excite photocarriers from the valence band to the conduction band [17, 23]. As in the case of electrical switching, the photoexcited carriers lower the permittivity of the TCO via Drude dispersion and move the TCO closer to the ENZ region. On the other hand, intraband absorption, with a pump energy lower than the bandgap, heats up electrons in the conduction band and moves them toward higher energies. Due to the non-parabolic nature of the conduction bands in TCOs, these excited electrons have a greater effective mass. From the Drude formula it can be seen that as the effective mass of the electrons increases, the plasma frequency decreases, which in consequence increases the TCO permittivity. When the optical pump is off, the electrons cool down on a sub-picosecond time scale. Thus, all-optical switching is a very promising mechanism for the realization of active photonic components operating on the femtosecond time scale. Furthermore, when operating under intraband absorption, the same light source can be used as both pump and signal, which reduces the complexity of the system.
### Design
Here we examine the concept of bistability in a SiN rib photonic waveguide arrangement, with the TCO placed between the SiN rib and ridge, utilizing intraband absorption of light. Compared to our previous papers [15, 16], in which we utilized plasmonic slot waveguides to enhance the electric field in the TCO and thus enhance the interaction of light with the TCO, here we focus on an all-dielectric device. It may provide lower electric field enhancement inside the TCO, but it facilitates integration with the photonic platform, as it does not require any additional fabrication steps. Furthermore, the coupling efficiency between a photonic waveguide and a plasmonic slot waveguide usually does not exceed 50 %. In comparison, the proposed all-dielectric device can be easily integrated with the SiN photonic platform with an extremely high coupling efficiency exceeding 95 %.
Here, the concept of bistability was investigated using 2D finite element method (FEM) simulations at the telecom wavelength of 1550 nm using the commercial software COMSOL and Lumerical. The thickness of the TCO was chosen as 10 nm and the thickness of the SiN rib as 200 nm. The thickness and width of the SiN ridge were taken as \(h\)=300 nm and \(w\)=500 nm. The refractive index of SiN is assumed to be \(n\)=1.9963. For all TCOs considered here, the calculations were performed for a thermalization time \(\tau\) = 500 fs. The ITO properties were taken as \(\omega_{p}\)=2.52\(\cdot\)10\({}^{15}\) (rad/s), \(\nu\)=1.8\(\cdot\)10\({}^{14}\) (rad/s), \(\varepsilon_{\infty}\)=3.9 [11], where \(\omega_{p}\) is the plasma frequency, \(\nu\) is the scattering rate, and \(\varepsilon_{\infty}\) is the permittivity due to the bound electrons. Similarly, the 6% Ga:ZnO (GZO) properties were taken as \(\omega_{p}\)=2.93\(\cdot\)10\({}^{15}\) (rad/s), \(\nu\)=1.78\(\cdot\)10\({}^{14}\) (rad/s), \(\varepsilon_{\infty}\)=2.475 [35], the 10 % Al:ZnO (AZO) properties as \(\omega_{p}\)=1.137\(\cdot\)10\({}^{15}\) (rad/s), \(\nu\)=1.27\(\cdot\)10\({}^{14}\) (rad/s), \(\varepsilon_{\infty}\)=3.8825 [36], and those of In-doped CdO as \(\omega_{p}\)=2.41\(\cdot\)10\({}^{15}\) (rad/s), \(\nu\)=3.06\(\cdot\)10\({}^{13}\) (rad/s), \(\varepsilon_{\infty}\)=5.5 [37-39].
The TCO parameters presented above allow one to calculate the wavelength- and plasma-frequency-dependent complex permittivity of all the TCO materials examined in this paper, as illustrated in the sketch below.
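As a cross-check, the Drude parameters above reproduce the ENZ wavelengths quoted in the next paragraph; a short NumPy sketch (the wavelength grid and printout format are ours):

```python
import numpy as np

c = 2.998e8                                    # speed of light, m/s
tcos = {                                       # (omega_p, nu, eps_inf) from the text
    "ITO":    (2.52e15, 1.80e14, 3.9),
    "GZO":    (2.93e15, 1.78e14, 2.475),
    "AZO":    (1.137e15, 1.27e14, 3.8825),
    "In:CdO": (2.41e15, 3.06e13, 5.5),
}

lam = np.linspace(0.8e-6, 3.6e-6, 2000)        # wavelength sweep, m
w = 2 * np.pi * c / lam                        # angular frequency, rad/s

for name, (wp, nu, eps_inf) in tcos.items():
    eps = eps_inf - wp**2 / (w**2 + 1j * nu * w)    # Drude dispersion
    i_enz = np.argmin(np.abs(eps.real))             # Re(eps) zero crossing
    print(f"{name}: ENZ near {lam[i_enz] * 1e6:.2f} um, "
          f"Im(eps) there = {eps.imag[i_enz]:.2f}")
```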
In our previous papers we focused on ITO [15, 16], as it is currently the most popular TCO material in the literature [17, 22-24, 32-34]. However, the family of TCO materials is very broad, and, depending on the application and operating wavelength range, a suitable TCO material can be identified. In this paper we first examine four TCO materials - AZO, GZO, ITO, and In-doped CdO - which represent a wide spectrum of ENZ wavelengths, ranging from \(\lambda\)=1.0 \(\upmu\)m for 6 % Ga:ZnO (GZO), through \(\lambda\)=1.5 \(\upmu\)m for ITO and \(\lambda\)=1.82 \(\upmu\)m for In:CdO, to \(\lambda\)=3.34 \(\upmu\)m for 10 % Al:ZnO (AZO). As observed, AZO and In:CdO are characterized by the lowest imaginary permittivity, and thus the lowest losses, while the imaginary part of the permittivity of ITO at the ENZ wavelength is rather high (Fig. 2a). As we are interested here in telecom wavelengths, in the rest of the paper we focus on GZO, ITO, and In:CdO (Fig. 2b). GZO shows the lowest plasma frequency at the telecom wavelength of 1550 nm, while the plasma frequency of In:CdO is the highest. However, as in the previous case (Fig. 2a), the imaginary part of the permittivity is lowest for In:CdO (Fig. 2b). Furthermore, it should be remembered that In:CdO is characterized by an order of magnitude higher mobility than any other TCO, which strongly influences its scattering rate (damping factor) \(\nu\) (\(\nu\)=\(e/\mu m_{eff}\)), where \(\mu\) is the material mobility [37, 38].
Figure 1: Geometry of the proposed photonic bistable device.
By comparing the real part of the permittivity as a function of wavelength, it can be observed that the change of the real part of the permittivity close to the ENZ wavelength is larger for In-doped CdO and GZO than for ITO and AZO. Similarly, AZO is characterized by the smoothest transition close to the ENZ wavelength. From our previous papers [15, 16] we can deduce that the steeper the slope of the permittivity close to the ENZ point, the narrower the absorption curve of the device, which means that less power is required to switch between the two transmission levels of a bistable device.
As observed from **Fig. 3**, close to the ENZ region of the TCO the electric field is confined mostly in the TCO, while the electric field outside the TCO decreases (blue curve in **Fig. 3**).
Figure 3: Electric field distribution in the SiN waveguide with ITO for different values of the ITO permittivity.
Figure 2: (a, b) Dispersion of real and imaginary parts of dielectric permittivity of ITO, GZO, AZO and In:CdO as a function of wavelength and plasma frequency.
In consequence, the mode power attenuation is highest in the ENZ region, as observed from **Fig. 4a** for ITO, and decreases quickly when moving away from the ENZ point; the absorption curve resembles the well-known bell shape. Depending on the power coupled to the SiN rib waveguide and the carrier concentration of the ITO that forms part of the waveguide, the device can operate in a bistable region with two different stable levels of transmitted output power for the same input (**Fig. 4b-e**) [15, 16]. Thus, it can serve as a memristor that mimics the biological synaptic response and allows processing and storage to be co-located. Memristors have opened new doors for integrated circuits, as they allow active modulation of electrical or optical signals and can hold memory states comparable to synaptic activity in the brain [18-21].
As observed from **Fig. 4**, the optical power required to move into the bistable region for the SiN photonic waveguide with ITO is rather high, in the range of a few watts. A higher carrier concentration provides a wider bistable region, but at the cost of an increased input optical power. At the same time, a longer device does not influence the extent of the bistability region; however, it strongly influences the output power contrast between the low and high transmission levels, and with longer devices the absorption rises for both transmission levels. For the higher carrier concentration the bistability region ranges from 3.25 W to 5.3 W, while for the lower carrier concentration it ranges from 2.55 W to 3.15 W. For a device length of _l_=500 nm, the output power in the bistable region ranges from 3 W to 4.8 W for the high transmission level and from 2.2 W to 3.9 W for the low transmission level. In comparison, for a longer device of _l_=4000 nm it ranges from 1.9 W to 2.35 W and from 0.13 W to 0.46 W for the high and low transmission levels, respectively. For the shorter device the difference is around 1.8 W, while for the longer device it is around 0.35-0.45 W. As observed, changing the input optical power from 3.25 W to 5.3 W causes only a small change in the output power for the longer device - both transmission lines flatten out.

Figure 4: (a) Illustration of bistability and switching for different optical powers in the waveguide and under different carrier concentrations of ITO \(-\)\(N_{c}\)=0.93\(\cdot\)10\({}^{27}\) m\({}^{-3}\) and \(N_{c}\)=1.0\(\cdot\)10\({}^{27}\) m\({}^{-3}\). The mechanism of bistability was explained in detail in refs. **15**, **16**. (b) Absorptive loss as a function of the propagating power, exhibiting hysteresis and manifesting all-optical bistability. (c, d, e) Input–output characteristics of the photonic bistable device of (c) 500 nm, (d) 1000 nm and (e) 4000 nm length for different carrier concentrations of 0.93\(\cdot\)10\({}^{27}\) m\({}^{-3}\) and 1.0\(\cdot\)10\({}^{27}\) m\({}^{-3}\).
The operating conditions of the proposed photonic device change when ITO is replaced by another TCO material (**Fig. 5**). For the same plasma frequency, the power required to operate in the bistable region drops from 3.25-5.30 W for the structure with ITO (**Fig. 4**) to only 0.18-0.37 W for the structure with In:CdO (**Fig. 6**) - an over 18-fold reduction in the input power required to reach the bistability region. Even GZO can help reduce the power when working at a lower plasma frequency (**Fig. 5**); that is, even with a lower carrier concentration in the GZO, the power can be reduced severalfold compared to a device with ITO.
Furthermore, the absorption curve of the proposed device should be as narrow and steep as possible to ensure low power consumption (**Fig. 5**). This is directly related to the TCO material properties mentioned previously: the steeper the real part of the permittivity close to the ENZ region, the narrower the absorption curve. A lower imaginary part of the permittivity translates into a higher absorption contrast, as the absorption rises strongly only when the TCO material works close to the ENZ region (**Fig. 2**). From **Fig. 5** and **Fig. 2** it can be deduced that ultra-low-loss TCO materials with sharp index dispersion in the ENZ range are preferred, as they offer a sharp and narrow device absorption curve, which reduces the power required for switching.
For a device with _l_=1000 nm long In:CdO, the output power in the bistable region changes between the low and high transmission levels from 10 mW to 135 mW for an input power of 180 mW, and from 63 mW to 220 mW for an input power of 370 mW. Higher contrast is possible with longer devices, but at the cost of the output power, which drops for longer devices. Here we operate at the telecom wavelength of 1550 nm, so highly doped In:CdO is required; however, the doping level can be reduced when working close to the ENZ wavelength of 1820 nm (Fig. 2a).
The proposed photonic bistable device can serve as a building block for complex photonic neural networks. A proper choice of TCO material and operating wavelength defines the operating conditions of the device, while the design enhances the interaction of light with the TCO material. To imitate the performance of the brain, such devices should be arranged into more complex architectures that can serve neuromorphic computing on a photonic platform.
#### Dual-mode operation
By playing simultaneously with both electrical and optical switching [40, 41], or with all-optical switching alone but under both interband and intraband absorption (two light sources at different wavelengths), we can take full advantage of the switching possibilities of TCO materials. As observed from **Fig. 7**, even when we change the carrier concentration in the TCO and the effective mass (through coupling light to the TCO) simultaneously or step by step, we can still stay at the same value of permittivity. In this case, the performance of the device does not change (points E and F and the solid line in **Fig. 7a**). However, when we increase the effective mass of the TCO by coupling a short pulse of light to the device and simultaneously increase the carrier concentration in the TCO through either an intraband pump or an electric voltage, then, when the light pulse is off, the device transfers from a high-loss regime, \(\varepsilon\)=0, to a low-loss regime, \(\varepsilon\)\(\sim\)-1.8 (points A and B and the dotted line in **Fig. 7a**). On the contrary, by working in a different parameter range, we can transfer from a low-loss regime, \(\varepsilon_{r}\)=-2.0 (point C), to a high-loss regime, \(\varepsilon_{r}\)=0 (point D), by coupling light to the device and simultaneously either applying a short electrical pulse to the TCO or coupling a short optical pulse to the device (dashed line in **Fig. 7a**).
Figure 6: (a) Absorptive loss as a function of the propagating power for In-doped CdO and for carrier concentration _N_=1.02\(\cdot\)10\({}^{27}\) m\({}^{-3}\). (b, c) Input–output characteristics of the photonic bistable device of (b) 100 nm and (c) 1000 nm length for a carrier concentration of 1.02\(\cdot\)10\({}^{27}\) m\({}^{-3}\).
Thus, transparent conductive oxides (TCOs) open new possibilities in both photonic integrated circuits (PICs) and neuromorphic photonics: they can provide a lot of freedom in design and can bring a network to the next operational level.
### Biological brain
As the goal of neuromorphic computing is to mimic the behavior of a biological brain, we should first recall how a signal is processed in biological systems [3, 4]. In the brain, two types of synaptic integration take place, and both of them are essential for signal processing. First, spatial summation: the process in which synaptic potentials generated at many different synapses on a dendrite of the same neuron are added together at the soma region. Second, temporal summation: the process in which many synaptic potentials generated at the same synapse are added together if they occur in rapid succession; it thus requires high-frequency presynaptic activity to summate all the postsynaptic responses. Going into more detail: in the absence of any signals in the neuron, the membrane of the individual neuron stays at the so-called resting potential. To generate an action potential, the membrane potential must be driven past threshold, which is called depolarization. As depolarization enhances a cell's ability to generate an action potential, it is excitatory. It has already been mentioned that to achieve the necessary depolarization the synapses must be stimulated at high frequencies. Furthermore, to achieve significant spatial summation, enough synapses must be active simultaneously. This second requirement is called cooperativity in biology, as many coactive synapses must cooperate to produce enough depolarization to cause long-term potentiation, _i.e._, activity. To achieve sufficient temporal summation, the individual presynaptic potential must persist long enough to maintain the depolarization, and even deepen it, before the next presynaptic potential arrives. This defines the membrane time constant, which determines the time course of the synaptic potential and thus controls temporal summation. In the human brain, the time constant is in the range of 1-15 ms. In consequence, neurons with a larger membrane time constant have a greater capacity
Figure 7: (a) Real and (b) imaginary part of permittivity map for different carrier concentration and effective mass.
for temporal summation, as there is a higher probability that two consecutive signals from a presynaptic neuron will summate and bring the membrane to the threshold for an action potential.
**Device performance in neural networks**
For the proposed structure, the optical signal is the equivalent of the action potential in biological neurons, while the thermalization time of electrons in the TCO corresponds to the membrane time constant. The membrane time constant defines how long the depolarization is maintained by the neuron, while the thermalization time of electrons defines the time needed for excited electrons to return to their initial unexcited state. Consequently, just as depolarization drives the membrane potential toward threshold, its equivalent in the proposed device is the lower output optical power for a given input optical power under the all-optical switching mechanism, or the higher output optical power for the same carrier concentration under the electrical switching mechanism. Similarly to its biological counterpart, temporal summation in the proposed TCO-based device requires high-frequency input optical pulses to summate all the signals. The interval between consecutive optical pulses should be shorter than the thermalization time of electrons in the TCO, so that the energy provided to the electrons by the next optical pulse further increases the energy of the electron gas. For an interval between consecutive optical pulses longer than the electron-lattice relaxation time of electrons in the TCO, the electrons excited by the first optical pulse return to their initial unexcited state before the next optical pulse arrives.
When the consecutive pulses are high enough and are combined in the integration area within a time shorter than the thermalization time of electrons in the TCO, each pulse slightly heats up the electrons and moves them higher in the conduction band. The output power follows the red curve shown in **Fig. 8**. However, when the combined optical power exceeds the threshold, the optical transmission drops and now follows the blue curve. When the optical pulses delivered to the device decrease, or if the interval between consecutive pulses exceeds the thermalization time of electrons in the TCO, the electrons thermalize and return to their initial energy level; in consequence, the effective mass decreases and the transmission drops to a lower level for the same input power (points X and Y in **Fig. 8**). A further decrease of the optical input power, and thus of the electron temperature, resets the device and moves it back to its
Figure 8: (a) An optical pulse train from the waveguide before and after the proposed device. (b) Absorbed pump energy \(U_{1}\),..., \(U_{n}\) under consecutive pulses increases the electron energy and, thus, the electron effective mass \(m^{*}(E)\), and (c) operation principles.
initial state indicated by point A. In this arrangement, the integration of pulses can occur in both the spatial and temporal domains, where pulses from other neurons can be combined into a single waveguide using wavelength division multiplexing (WDM). In this case, as the switching of the TCO occurs only above a certain threshold value, the neuron stays at low output power only if the weighted sum of the input optical power exceeds this threshold. Thus, the system naturally emulates the basic integrate-and-fire functionality of a biological neuron, but in an inverse scheme: the system stays at low power only when the threshold is reached. This artificial neuron can integrate over optical power and over time, which makes it very similar to a biological neuron. The toy model below illustrates this summation dynamics.
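A toy model of this inverse integrate-and-fire behavior, assuming a simple exponential relaxation of the electron-gas energy with the \(\tau\) = 500 fs thermalization time used above; the pulse energies, spacing, and threshold are illustrative only.

```python
import numpy as np

tau = 500e-15                                  # electron thermalization time, s
dt = 10e-15
t = np.arange(0.0, 10e-12, dt)

# Unit-energy pulses every 300 fs (shorter than tau, so they summate).
drive = np.zeros_like(t)
for t0 in np.arange(1e-12, 4e-12, 300e-15):
    drive[int(round(t0 / dt))] = 1.0

energy = np.zeros_like(t)                      # excess electron-gas energy
decay = np.exp(-dt / tau)
for i in range(1, len(t)):
    energy[i] = energy[i - 1] * decay + drive[i]

threshold = 2.0                                # illustrative switching level
fired = energy > threshold                     # True -> low-transmission state
print("threshold first crossed at t =",
      t[np.argmax(fired)] if fired.any() else None)
```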
## Conclusion
For the first time, we have examined a bistable device on the low-loss nitride-rich silicon platform with TCO active materials arranged in a photonic rib waveguide for application in artificial neural networks. Different TCO materials were examined, showing that a significant reduction in optical power can be achieved with a proper choice of material. The proposed photonic device can serve both as a linear weight for a single photonic signal and as a simultaneous spatial and temporal summation unit integrating many photonic signals. Furthermore, depending on the overall summed signal value, the proposed device can keep a record of its previous state and thus serve as a memristor, which brings it closer to the brain. The proposed device can be easily integrated with photonic SiN waveguides serving as interconnects, with a coupling efficiency exceeding 95 %. Furthermore, both materials, _i.e._, silicon nitride and transparent conductive oxides, are CMOS-compatible and are characterized by very low losses, which opens new possibilities for the further development of neural networks.
## Acknowledgements
J.G. thanks the "ENSEMBLE3 - Centre of Excellence for Nanophotonics, advanced materials and novel crystal growth-based technologies" project (GA No. MAB/2020/14) carried out within the International Research Agendas program of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund and the European Union's Horizon 2020 research and innovation program Teaming for Excellence (Grant Agreement No. 857543) for support of this work.
|
2309.10042 | Covariant operator bases for continuous variables | Coherent-state representations are a standard tool to deal with
continuous-variable systems, as they allow one to efficiently visualize quantum
states in phase space. Here, we work out an alternative basis consisting of
monomials on the basic observables, with the crucial property of behaving well
under symplectic transformations. This basis is the analogue of the irreducible
tensors widely used in the context of SU(2) symmetry. Given the density matrix
of a state, the expansion coefficients in that basis constitute the multipoles,
which describe the state in a canonically covariant form that is both concise
and explicit. We use these quantities to assess properties such as quantumness
or Gaussianity and to furnish direct connections between tomographic
measurements and quasiprobability distribution reconstructions. | A. Z. Goldberg, A. B. Klimov, G. Leuchs, L. L. Sanchez-Soto | 2023-09-18T18:00:15Z | http://arxiv.org/abs/2309.10042v2 | # Covariant operator bases for continuous variables
###### Abstract
Coherent-state representations are a standard tool to deal with continuous-variable systems, as they allow one to efficiently visualize quantum states in phase space. Here, we work out an alternative basis consisting of monomials on the basic observables, with the crucial property of behaving well under symplectic transformations. This basis is the analogue of the irreducible tensors widely used in the context of SU(2) symmetry. Given the density matrix of a state, the corresponding expansion coefficients in that basis constitute the state multipoles, which describe the state in a canonically covariant form that is both concise and explicit. We use these quantities to assess properties such as quantumness or Gaussianity.
## 1 Introduction
The notion of observable plays a central role in quantum physics [1]. The term was first used by Heisenberg [2] (_beobachtbare Grosse_) to refer to quantities involved in physical measurements and thus having an operational meaning. They give us information about the state of a physical system and may be predicted by the theory. According to the conventional formulation, observables are represented by selfadjoint operators acting on the Hilbert space associated with the system [3, 4].
Given an abstract observable, one has to find its practical implementation. For discrete degrees of freedom, the associated Hilbert space is finite dimensional and the observable is then represented by a matrix whose explicit form depends on the basis. Choosing this basis such that it possesses specific properties can be tricky [5, 6, 7, 8]. Especially, when the system has an intrinsic symmetry, the basis should have the suitable transformation properties under the action of that symmetry. This idea is the rationale behind the construction of irreducible tensorial sets [9], which are crucial for the description of rotationally invariant systems [10] and can be generalized to other invariances [11].
Things get more complicated in the continuous-variable setting, when the Hilbert space has infinite dimensions. The paradigmatic example is that of a single bosonic mode, where the Weyl-Heisenberg group emerges as a hallmark of noncommutativity [12]. As Fock and coherent states are frequently regarded as the most and least quantum states, respectively, they are typically used as bases in quantum optics. Coherent states constitute an overcomplete basis which is at the realm of the phase-space formulation of quantum theory [13, 14, 15, 16, 17, 18, 19, 20, 21, 22] where observables become \(c\)-number functions (the _symbols_ of the operators). This is the most convenient construct for visualizing quantum states and processes for continuous variables (CV).
In this phase-space approach the operator bases used are recognised to be simple ordered exponentials of the dynamical variables. However, our physical intuition seems to require an explicit invariance under symplectic transformations (i.e., linear canonical transformations), which
is not apparent at first sight [23]. This seems to call for proper tensorial sets for CV. In Ref. [24] it was suggested that for a single mode, the monomials
\[\hat{T}_{Kq}=\hat{a}^{\dagger K+q}\hat{a}^{K-q} \tag{1}\]
with \(K=0,1/2,1,\ldots\) and \(q=-K,\ldots,+K\) behave as proper tensor operators for the problem at hand. Here \(\hat{a}\) and \(\hat{a}^{\dagger}\) are the bosonic annihilation and creation operators for the mode. In this work, we examine the properties of these monomials and derive their inverses, which can then be used to directly expand any quantum operator. These operators can then be added to the quantum optician's toolbox and used by anyone working in CV.
When the density matrix is expanded in the basis (1), its expansion coefficients are the moments, dubbed state multipoles, which convey complete information. For CV, moments have been considered for studying quantumness [25, 26]. Here, we inspect how the multipoles characterize the state. Drawing inspiration from SU(2), we compare states that hide their information in the large-\(K\) coefficients to those whose information is mostly contained in the smallest-\(K\) multipoles. The result is an intriguing counterplay between the extremal states in the other representations, including Fock states, coherent states, and states with maximal off-diagonal coefficients in the Fock basis.
There are many avenues to explore with the monomials representation. After a brief review of the basic concepts required in Sec. 2, we examine the properties of the basis (1) and its inverse in Sec. 3. The corresponding multipoles appear as the expansion coefficients of the density matrix in that basis. The covariance under symplectic transformations tells us how the different parts of a state are interconverted through standard operations. Note that we are considering only normally ordered polynomials, but everything can be extended to antinormally and symmetrically ordered monomials. In Sec. 4 we introduce the concept of the cumulative multipole distribution and its inverse, find the extremal states for those quantities, and determine in this way which states are the most and least quantum. Our conclusions are finally summarized in Sec. 5.
## 2 Background
We provide here a self-contained background that is familiar to quantum opticians. The reader can find more details in the previously quoted literature [13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. A single bosonic mode has creation and annihilation operators satisfying the commutation relations
\[[\hat{a},\hat{a}^{\dagger}]=\mathbb{1}. \tag{2}\]
These can be used to define the Fock states as excitations
\[\ket{n}=\frac{\hat{a}^{\dagger n}}{\sqrt{n!}}\ket{\mathrm{vac}} \tag{3}\]
of the vacuum \(\ket{\mathrm{vac}}\) annihilated as \(\hat{a}\ket{\mathrm{vac}}=0\), as well as the canonical coherent states
\[\ket{\alpha}=\mathrm{e}^{-\frac{|\alpha|^{2}}{2}}\sum_{n=0}^{\infty}\frac{\alpha^{n}}{\sqrt{n!}}\ket{n}. \tag{4}\]
These can both be used to resolve the identity:
\[\mathbb{1}=\sum_{n=0}^{\infty}\ket{n}\bra{n}=\frac{1}{\pi}\int d^{2}\alpha\ket {\alpha}\bra{\alpha}. \tag{5}\]
The coherent states can also be defined as displaced versions of the vacuum state \(\left|\alpha\right\rangle=\hat{D}(\alpha)\left|\mathrm{vac}\right\rangle\) via the displacement operators that take numerous useful forms
\[\hat{D}(\alpha)=e^{\alpha\hat{a}^{\dagger}-\alpha^{*}\hat{a}}=e^{-\frac{\left| \alpha\right|^{2}}{2}}\,e^{\alpha\hat{a}^{\dagger}}e^{-\alpha^{*}\hat{a}}=e^{ \frac{\left|\alpha\right|^{2}}{2}}\,e^{-\alpha^{*}\hat{a}}e^{\alpha\hat{a}^{ \dagger}}\,. \tag{6}\]
These obey the composition law
\[\hat{D}(\alpha)\hat{D}(\beta)=e^{i\,\mathrm{Im}(\alpha\beta^{*})}\hat{D}( \alpha+\beta) \tag{7}\]
and the trace-orthogonality condition
\[\mathrm{Tr}[\hat{D}(\alpha)\hat{D}(-\beta)]=\pi\delta^{2}(\alpha-\beta)\,. \tag{8}\]
Their matrix elements in the coherent-state basis can be found from the composition law and in the Fock-state basis are given by [27]
\[\left\langle m\right|\hat{D}(\alpha)\left|n\right\rangle=\begin{cases} \sqrt{\frac{n!}{m!}}e^{-\frac{\left|\alpha\right|^{2}}{2}}\alpha^{m-n}\,L_{n} ^{(m-n)}(|\alpha|^{2}),&m\leq n,\\ \\ \sqrt{\frac{m!}{n!}}e^{-\frac{\left|\alpha\right|^{2}}{2}}(-\alpha^{*})^{n-m}L _{m}^{(n-m)}(|\alpha|^{2}),&n\leq m,\end{cases} \tag{9}\]
where \(L_{n}^{(\alpha)}(\cdot)\) denotes the generalized Laguerre polynomial [28].
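The matrix elements (9) lend themselves to a quick numerical cross-check (our addition): exponentiate \(\alpha\hat{a}^{\dagger}-\alpha^{*}\hat{a}\) in a truncated Fock space and compare with the Laguerre formula for \(m\geq n\), obtaining the other branch from \(\langle m|\hat{D}(\alpha)|n\rangle=\langle n|\hat{D}(-\alpha)|m\rangle^{*}\). The cutoff and test amplitude below are illustrative assumptions.

```python
import numpy as np
from math import factorial, sqrt
from scipy.linalg import expm
from scipy.special import eval_genlaguerre

N, alpha = 60, 0.7 + 0.3j            # assumed cutoff and test amplitude
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
D = expm(alpha * a.conj().T - np.conj(alpha) * a)

def D_formula(m, n, alpha):
    # Eq. (9) in its m >= n form; the other case follows from D(alpha)^dag = D(-alpha)
    if m < n:
        return np.conj(D_formula(n, m, -alpha))
    x = abs(alpha) ** 2
    return (sqrt(factorial(n) / factorial(m)) * np.exp(-x / 2)
            * alpha ** (m - n) * eval_genlaguerre(n, m - n, x))

print(all(np.isclose(D[m, n], D_formula(m, n, alpha))
          for m in range(6) for n in range(6)))  # True up to truncation error
```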
Given any operator \(\hat{F}\), it can be expressed in the Fock basis as
\[\hat{F}=\sum_{m,n}F_{m,n}\left|m\right\rangle\left\langle n\right|,\qquad F_{m,n}=\left\langle m\right|\hat{F}\left|n\right\rangle \tag{10}\]
and in the coherent-state basis as
\[\hat{F}=\frac{1}{\pi^{2}}\int d^{2}\alpha d^{2}\beta F(\alpha,\beta)\left| \alpha\right\rangle\left\langle\beta\right|,\qquad F\left(\alpha,\beta\right) =\left\langle\alpha\right|\hat{F}\left|\beta\right\rangle. \tag{11}\]
However, it is always possible to express this coherent-state representation in a diagonal form. For the particular case of the density operator \(\hat{\varrho}\) this yields the Glauber-Sudarshan \(P\)-function [29, 30]
\[\hat{\varrho}=\int d^{2}\alpha\,P(\alpha)\left|\alpha\right\rangle\left\langle \alpha\right|\,, \tag{12}\]
with [31]
\[P(\alpha)=\frac{e^{\left|\alpha\right|^{2}}}{\pi^{2}}\int d^{2}\beta\left\langle -\beta\right|\hat{\varrho}\left|\beta\right\rangle e^{\left|\beta\right|^{2} +2i\,\mathrm{Im}(\alpha\beta^{*})}. \tag{13}\]
The same holds true for any operator \(\hat{F}\) for which \(\left\langle-\beta\right|\hat{F}\left|\beta\right\rangle e^{\left|\beta \right|^{2}}\) is square-integrable.
One identity that often shows up in this realm is an expression for the vacuum in terms of normally ordered polynomials:
\[\left|\mathrm{vac}\right\rangle\left\langle\mathrm{vac}\right|=:e^{-\hat{a}^{ \dagger}\hat{a}}:\,. \tag{14}\]
This allows us to express any unit-rank operator from the Fock basis as
\[\left|m\right\rangle\left\langle n\right|=\frac{1}{\sqrt{m!n!}}:\hat{a}^{ \dagger}{}^{m}\mathrm{e}^{-\hat{a}^{\dagger}\hat{a}}\hat{a}^{n}:\,. \tag{15}\]
This directly guarantees that a normally ordered expression will always exist for any operator.
## 3 State multipoles
As heralded in the Introduction, the monomials (1) are the components of finite-dimensional tensor operators with respect to the symplectic group Sp(2, \(\mathbb{R}\)). Their transformation properties are examined in the Appendix A. For completeness, we have to seek operators \(\hat{\mathfrak{T}}_{Kq}\) satisfying the proper orthonormality conditions to be inverses of the monomials:
\[\operatorname{Tr}(\hat{\mathfrak{T}}_{Kq}\hat{T}_{K^{\prime}q^{\prime}})=\delta_{KK^{\prime}}\delta_{qq^{\prime}}. \tag{16}\]
Using the trace-orthogonality conditions of the displacement operators, we can rewrite this condition as
\[\operatorname{Tr}(\hat{\mathfrak{T}}_{Kq}\hat{T}_{K^{\prime}q^{\prime}}) =\frac{1}{\pi}\int d^{2}\beta\ \operatorname{Tr}[D(\beta)\hat{T}_{K^{\prime}q^{\prime}}]\operatorname{Tr}[D(-\beta)\hat{\mathfrak{T}}_{Kq}]\] \[=\frac{1}{\pi^{2}}\int d^{2}\alpha d^{2}\beta\,e^{\frac{|\beta|^{2}}{2}}e^{\beta\alpha^{*}-\beta^{*}\alpha}\alpha^{*K^{\prime}+q^{\prime}}\alpha^{K^{\prime}-q^{\prime}}\operatorname{Tr}[D(-\beta)\hat{\mathfrak{T}}_{Kq}]\,. \tag{17}\]
Now, by inspection, we attain orthonormality when
\[e^{\frac{|\beta|^{2}}{2}}\operatorname{Tr}[D(-\beta)\hat{\mathfrak{T}}_{Kq}]=(-1)^{2K}\frac{\beta^{K+q}(-\beta^{*})^{K-q}}{(K+q)!(K-q)!}. \tag{18}\]
In consequence, we have
\[\hat{\mathfrak{T}}_{Kq}=\frac{(-1)^{K+q}}{(K+q)!\,(K-q)!}\frac{1}{\pi}\int d^{2}\beta\ e^{-\frac{|\beta|^{2}}{2}}\hat{D}(\beta)\ \beta^{K+q}\beta^{*K-q}. \tag{19}\]
Interestingly, they appear as moments of the operators introduced in the pioneering work by Agarwal and Wolf [32]. This inversion process can be repeated with other ordered polynomials and we find the inverse operators to again appear as moments of the other operators introduced therein. In Appendix B we sketch the procedure for the case of symmetric order. Once they are known, it is easy to expand any operator, such as a density matrix \(\hat{\varrho}\), through
\[\hat{\varrho}=\sum_{Kq}\langle\hat{\mathfrak{T}}_{Kq}\rangle\ \hat{T}_{Kq}\,, \tag{20}\]
where \(\langle\hat{\mathfrak{T}}_{Kq}\rangle=\operatorname{Tr}(\hat{\varrho}\hat{\mathfrak{T}}_{Kq})\), following the standard notation for SU(2) [10], will be called the state multipoles. They correspond to moments of the basic variables, properly arranged.
Conversely, we can expand operators in the basis of the inverse operators,
\[\hat{\varrho}=\sum_{Kq}\langle\hat{T}_{Kq}\rangle\ \hat{\mathfrak{T}}_{Kq}\,, \tag{21}\]
with \(\langle\hat{T}_{Kq}\rangle=\operatorname{Tr}(\hat{\varrho}\hat{T}_{Kq})\) now being the inverse multipoles.
Since inverse operators inherit the Hermitian conjugation properties of the monomials,
\[\hat{T}_{Kq}^{\dagger}=\hat{T}_{K\,-q},\qquad\qquad\hat{\mathfrak{T}}_{Kq}^{\dagger}=\hat{\mathfrak{T}}_{K\,-q}, \tag{22}\]
the multipoles and inverse multipoles simply transform as \(q\leftrightarrow-q\) under complex conjugation.
The purity of a state has a simple expression in terms of the multipoles
\[\operatorname{Tr}(\hat{\varrho}^{2})=\sum_{Kq}\langle\hat{\mathfrak{T}}_{Kq}\rangle\langle\hat{T}_{Kq}\rangle\,. \tag{23}\]
It is more challenging to express the trace of a state in terms of the multipoles because the operators \(\hat{T}_{Kq}\) are not trace-class; however, by formally writing \(\mathrm{Tr}[\hat{D}(\beta)]=\pi\delta^{2}(\beta)\exp(-|\beta|^{2}/2)\), we can compute
\[\mathrm{Tr}(\hat{\mathfrak{T}}_{Kq})=\delta_{K0}\delta_{q0} \tag{24}\]
such that normalization dictates that the inverse multipoles satisfy \(1=\mathrm{Tr}(\hat{\varrho})=\langle\hat{T}_{00}\rangle\).
In principle, the complete characterization of a CV state requires the knowledge of infinite multipoles. For a Gaussian state, only moments up until \(K=1\) are needed. This suggests that either the inverse multipoles \(\langle\hat{T}_{Kq}\rangle\) for larger values of \(K\) or the multipoles \(\langle\hat{\mathfrak{T}}_{Kq}\rangle\) characterize the non-Gaussianity of a state.
In consequence, we have to calculate the multipoles of arbitrary states. Before that, we consider the simplest cases of coherent and Fock states, for which the calculations are straightforward. Starting with coherent states, using (19) and recalling the Rodrigues formula for the generalized Laguerre polynomials [28], we get
\[\langle\alpha|\hat{\mathfrak{T}}_{Kq}|\alpha\rangle=\frac{(-1)^{K+q}}{(K-q)! }\frac{e^{-|\alpha|^{2}}}{\alpha^{*2q}}L_{K+q}^{(-2q)}(|\alpha|^{2})\,. \tag{25}\]
The magnitudes of these multipole moments versus \(|\alpha|^{2}\) for various values of \(K\) and \(q\) are plotted in Fig. 1. As we can appreciate, they decrease rapidly with \(K\) and large \(|\alpha|\).
As for Fock states, we use the matrix elements of the displacement operator \(\langle n|\,\hat{D}(\beta)\,|n\rangle=\exp(-|\beta|^{2}/2)L_{n}(|\beta|^{2})\). Since these only depend on \(|\beta|\) and not its phase, the \(q\neq 0\) terms all vanish, leaving us with
\[\langle n|\hat{\mathfrak{T}}_{Kq}|n\rangle=\delta_{q0}\frac{(-1)^{K}}{K!^{2}} \int_{0}^{\infty}rdr\,2\mathrm{e}^{-r^{2}}r^{2K}L_{n}(r^{2})=\delta_{q0}\frac {(-1)^{K+n}}{n!(K-n)!}. \tag{26}\]
The inverse multipoles are trivial in both cases, with
\[\langle\alpha|\hat{T}_{Kq}|\alpha\rangle=\alpha^{*K+q}\alpha^{K-q}\,,\qquad \qquad\langle n|\hat{T}_{Kq}|n\rangle=\delta_{q0}K!\binom{n}{K}\,. \tag{27}\]
Note that the multipoles that vanish for Fock states have \(n>K\) and the inverse multipoles that vanish for Fock states have \(K>n\).
For arbitrary states, we note that, since any state can be expressed in terms of its \(P\)-function, we can write
\[\langle\hat{\mathfrak{T}}_{Kq}\rangle=\int d^{2}\alpha\ P(\alpha)\langle\alpha|\hat{\mathfrak{T}}_{Kq}|\alpha\rangle=\int d^{2}\alpha\ P(\alpha)\,\frac{(-1)^{K+q}}{(K-q)!}\frac{e^{-|\alpha|^{2}}}{\alpha^{*2q}}L_{K+q}^{(-2q)}(|\alpha|^{2}). \tag{28}\]
To get more of a handle on these multipoles, especially when \(P\) is not a well-behaved function, it is more convenient to have an expression in terms of the matrix elements \(\varrho_{mn}=\langle m|\,\hat{\varrho}\,|n\rangle\). This can be provided by expressing \(P(\alpha)\) in terms of matrix elements of the state in the Fock basis and derivatives of delta functions. More directly, we can compute (\(m\leq n\))
\[\langle m|\hat{\mathfrak{T}}_{Kq}|n\rangle =\frac{(-1)^{K+q}}{(K+q)!\,(K-q)!}\frac{1}{\pi}\int d^{2}\beta e^{-|\beta|^{2}}\sqrt{\frac{n!}{m!}}\beta^{m-n}L_{n}^{(m-n)}(|\beta|^{2})\beta^{K+q}\beta^{*K-q}\] \[=\delta_{n-m,2q}\,(-1)^{K+q+n}\,\sqrt{\frac{n!}{(n-2q)!}}{K+q\choose n}\frac{1}{(K+q)!}. \tag{29}\]
These give the matrix elements of the inverse operators \(\hat{\mathfrak{T}}_{Kq}\) in the Fock basis and show that \(\hat{\mathfrak{T}}_{Kq}\) can only have nonnull eigenstates when \(q=0\). Putting these together for an arbitrary state, we find
\[\langle\hat{\mathfrak{T}}_{Kq}\rangle=\begin{cases}\sum_{n\geq m}\varrho_{nm}\delta_{n-m,2q}\,(-1)^{K+q+n}\,\sqrt{\frac{n!}{(n-2q)!}}{K+q\choose n}\frac{1}{(K+q)!},&q\geq 0\,,\\ \\ \sum_{m\geq n}\varrho_{nm}^{*}\delta_{n-m,-2q}\,(-1)^{K-q+n}\,\sqrt{\frac{n!}{(n+2q)!}}{K-q\choose n}\frac{1}{(K-q)!},&q\leq 0\,.\end{cases} \tag{30}\]
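Equation (30) is straightforward to transcribe into code. The following sketch (our addition; the dimension and test state are arbitrary) evaluates the multipoles from a numerically stored density matrix and, as a check, reproduces the Fock-state result (26).

```python
import numpy as np
from math import factorial, comb, sqrt

def multipole(rho, K, q):
    # state multipole from the Fock matrix elements, Eq. (30); only the
    # q >= 0 branch is coded, the q < 0 one follows from the property (22)
    if q < 0:
        return np.conj(multipole(rho, K, -q))
    p, twoq = round(K + q), round(2 * q)
    tot = 0.0
    for n in range(twoq, min(p, rho.shape[0] - 1) + 1):
        tot += (rho[n, n - twoq] * (-1) ** (p + n)
                * sqrt(factorial(n) / factorial(n - twoq))
                * comb(p, n) / factorial(p))
    return tot

# Fock state |2>: only q = 0 survives and Eq. (26) is reproduced
dim, n0 = 12, 2
rho = np.zeros((dim, dim)); rho[n0, n0] = 1.0
print(np.isclose(multipole(rho, 3, 0),
                 (-1) ** (3 + n0) / (factorial(n0) * factorial(3 - n0))))  # True
print(multipole(rho, 1.5, 0.5))  # 0: no coherences in this state
```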
Figure 2: Parts of the state in the Fock state basis coupled to by a particular inverse operator \(\hat{\mathfrak{T}}_{Kq}\). Each value of \(q\) labels the off-diagonal stripe of the matrix that affects the value of \(\langle\hat{\mathfrak{T}}_{Kq}\rangle\). Each value of \(K\) labels the maximal antidiagonal row that contributes to the value of \(\langle\hat{\mathfrak{T}}_{Kq}\rangle\). This antidiagonal row is characterized by the row and column number summing to \(2K\).
In this way, we get a simple expression for the inverse monomials in the Fock basis:
\[\mathfrak{\hat{T}}_{Kq}=\begin{cases}\sum_{n=2q}^{K+q}\frac{(-1)^{K+q+n}}{\sqrt{n!(n-2q)!}(K+q-n)!}\,\left|n-2q\right\rangle\left\langle n\right|,&q\geq 0\,,\\ \\ \sum_{n=-2q}^{K-q}\frac{(-1)^{K-q+n}}{\sqrt{n!(n+2q)!}\,(K-q-n)!}\,\left|n\right\rangle\left\langle n+2q\right|,&q\leq 0\,,\end{cases} \tag{31}\]
whose orthonormality with the operators \(\hat{T}_{Kq}\) can be directly verified. This expression equally serves to furnish a representation of the moments of the displacement operator in the Fock basis.
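That orthonormality, as well as the purity formula (23), can be verified numerically from the Fock-basis form (31). The sketch below is our addition; the cutoff is an assumption, but it is harmless here because each \(\hat{\mathfrak{T}}_{Kq}\) has finite support.

```python
import numpy as np
from math import factorial, sqrt

N = 40  # assumed Fock cutoff

def ann(N):
    return np.diag(np.sqrt(np.arange(1, N)), k=1)

def T(K, q):
    a = ann(N)
    return (np.linalg.matrix_power(a.conj().T, round(K + q))
            @ np.linalg.matrix_power(a, round(K - q)))

def T_inv(K, q):
    # Eq. (31); the q < 0 branch via the conjugation property (22)
    if q < 0:
        return T_inv(K, -q).conj().T
    M = np.zeros((N, N))
    p, twoq = round(K + q), round(2 * q)
    for n in range(twoq, p + 1):
        M[n - twoq, n] = (-1) ** (p + n) / (sqrt(factorial(n) * factorial(n - twoq))
                                            * factorial(p - n))
    return M

labels = [(k / 2, l / 2) for k in range(6) for l in range(-k, k + 1, 2)]
print(all(np.isclose(np.trace(T_inv(K, q) @ T(Kp, qp)),
                     float((K, q) == (Kp, qp)))
          for K, q in labels for Kp, qp in labels))           # True

# purity check, Eq. (23), for a pure state supported on n <= 2; the inverse
# multipole factor <T_{Kq}> vanishes beyond K = 2, so the finite sum is exact
psi = np.zeros(N, complex); psi[:3] = [0.6, 0.48j, 0.64]
rho = np.outer(psi, psi.conj())
purity = sum(np.trace(T_inv(K, q) @ rho) * np.trace(T(K, q) @ rho)
             for K, q in labels)
print(np.isclose(purity, np.trace(rho @ rho)))                # True
```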
To understand this result, we plot in Fig. 2 a representation of the nonzero parts of different operators \(\mathfrak{\hat{T}}_{Kq}\) in the Fock basis, which equivalently represents which elements of a density matrix \(\varrho_{mn}\) contribute to a given multipole \(\langle\mathfrak{\hat{T}}_{Kq}\rangle\). The contributing elements are all on the \(2q\)th diagonal, ranging over the first \(2K+1\) antidiagonals. The inverse multipoles \(\langle\hat{T}_{Kq}\rangle\) depend on the \(-2q\)th diagonal and all of the antidiagonals starting from the \(2K\)th antidiagonal. This picture makes clear a number of properties that will become useful for our purposes.
To conclude, it is common to find operators of a generic form \(f(\hat{a},\hat{a}^{\dagger})\). Quite often, it is necessary to find their normally ordered form \(:\!f(\hat{a},\hat{a}^{\dagger})\!:\), where \(:\!:\) denotes normal ordering. Such is necessary, for example, in photodetection theory [33]. Although algebraic techniques are available [34], the multipolar expansion that we have developed makes this computation quite tractable. We first compute
\[\operatorname{Tr}[\hat{D}(\beta)\,:\!f(\hat{a},\hat{a}^{\dagger})\!:]=e^{\frac {\left|\beta\right|^{2}}{2}}\,\operatorname{Tr}[:\!e^{\beta\hat{a}^{\dagger}} \,f(\hat{a},\hat{a}^{\dagger})\,e^{-\beta^{*}\hat{a}}\!:]=\frac{e^{\frac{ \left|\beta\right|^{2}}{2}}}{\pi}\int d^{2}\alpha\,f(\alpha,\alpha^{*})\,e^{ \beta\alpha^{*}-\beta^{*}\alpha}\,. \tag{32}\]
The integral is nothing but the Fourier transform of the function \(f(\alpha,\alpha^{*})\) with respect to both of its arguments. If we call \(F(\beta,\beta^{*})\) this transform, the multipole moments of \(:f(\hat{a},\hat{a}^{\dagger}):\), denoted by \(F_{Kq}\), become
\[F_{Kq}=\frac{(-1)^{K+q}}{\pi(K+q)!(K-q)!}\int d^{2}\beta\,F(\beta,\beta^{*})\, \beta^{K+q}\beta^{*K-q}\,. \tag{33}\]
In other words, the moments of the Fourier transform of \(f(\alpha,\alpha^{*})\) give the expansion coefficients of \(:f(\hat{a},\hat{a}^{\dagger}):\) in the \(\hat{T}_{Kq}\) basis.
## 4 Extremal states
### Cumulative multipolar distribution
We now turn our attention to the cumulative multipole distribution; that is,
\[\mathfrak{A}_{M}(\hat{\varrho})=\sum_{K=0}^{M}\mathfrak{T}_{K}^{2}(\hat{ \varrho}) \tag{34}\]
with \(M=0,1/2,1,\ldots\) and where
\[\mathfrak{T}_{K}^{2}(\hat{\varrho})=\sum_{q=-K}^{K}|\operatorname{Tr}(\mathfrak{ \hat{T}}_{Kq}\hat{\varrho})|^{2} \tag{35}\]
is the Euclidean norm of the \(K\)th multipole. The quantities \(\mathfrak{A}_{M}(\hat{\varrho})\) can be used to furnish a generalized uncertainty principle [24] and they are a good indicator of quantumness [35, 36]. For
spin variables, it has been shown that \(\mathfrak{A}_{M}(\hat{\varrho})\) are maximized to all orders \(M\) by SU(2)-coherent states, which are the least quantum states in this context, and vanish for the most quantum states, which are called the Kings of Quantumness, the furthest in some sense from coherent states [37, 38, 39].
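For CV, the closed forms (25) and (26) already allow a numerical exploration of \(\mathfrak{A}_{M}\). In the sketch below (our addition; the order \(M\) and test amplitudes are arbitrary choices), the negative Laguerre superscript in (25) is first removed with the identity \(L_{K+q}^{(-2q)}(x)=(-x)^{2q}\frac{(K-q)!}{(K+q)!}L_{K-q}^{(2q)}(x)\) so that the special-function calls stay well behaved.

```python
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre, iv

def A_fock(M, n):
    # cumulative sum (34) for |n>, via Eq. (26): integer K >= n, q = 0 only
    return sum(1 / (factorial(n) * factorial(K - n)) ** 2
               for K in range(n, int(M) + 1))

def A_coh(M, alpha):
    # cumulative sum (34) for |alpha>, via Eq. (25) with the negative
    # Laguerre superscript removed; q and -q give equal moduli
    x, tot = abs(alpha) ** 2, 0.0
    for twoK in range(int(2 * M) + 1):
        for twoq in range(twoK % 2, twoK + 1, 2):
            w = (np.exp(-2 * x) * x ** twoq
                 * eval_genlaguerre((twoK - twoq) // 2, twoq, x) ** 2
                 / factorial((twoK + twoq) // 2) ** 2)
            tot += w if twoq == 0 else 2 * w
    return tot

print(A_fock(20, 0), float(iv(0, 2)))   # vacuum saturates I_0(2) ~ 2.27959
print(A_fock(20, 3))                    # ~ I_0(2)/3!^2: large Fock states are quantum
print(A_coh(20, 1.0), A_coh(20, 2.0))   # decreases with increasing energy
```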
What states maximize and minimize these cumulative variables for CV? Let us begin by examining a few of the lowest orders.
\(\boldsymbol{M=0}\): For an arbitrary state, we can write \(\mathfrak{A}_{0}\) in terms of the Fock-state coefficients as
\[\mathfrak{A}_{0}=\left|\sum_{n}(-1)^{n}\varrho_{nn}{0\choose n}\right|^{2}=| \varrho_{00}|^{2}. \tag{36}\]
This is uniquely maximized by the vacuum state \(|{\rm vac}\rangle\), with \(\varrho_{00}=1\), which is a minimal-energy coherent state and can be considered the least quantum state in this context. The quantity \(\mathfrak{A}_{0}\), on the other hand, is minimized by any state with \(\varrho_{00}=0\), which causes \(\mathfrak{A}_{0}\) to vanish. This is easily attained by Fock states \(|n\rangle\) with \(n>0\). In this sense, all Fock states that are not the vacuum are the most quantum. States become more quantum as they gain more energy and their vacuum component \(\varrho_{00}\) diminishes in magnitude.
\(\boldsymbol{M=1/2}\): For \(K=1/2\), we can readily compute
\[\mathfrak{T}_{1/2}^{2}=|\varrho_{01}|^{2}+|\varrho_{10}|^{2}=2|\varrho_{01}|^{2}. \tag{37}\]
This is minimized by any state with no coherences in the Fock basis (such as, e.g., number states). On the other hand, it is maximized by states with maximal coherence in the smallest-energy section of the Fock basis: \(|\psi_{+}\rangle=\frac{1}{\sqrt{2}}(|0\rangle+e^{i\varphi}\,|1\rangle)\), with \(\varphi\in\mathbb{R}\). Together, \(\mathfrak{A}_{1/2}\) is minimized by any state with \(\varrho_{00}=0\), because that forces \(\varrho_{01}\) to vanish by positivity of the density matrix, and it is still uniquely maximized by the vacuum state, again because of the positivity constraint \(|\varrho_{01}|\leq\sqrt{\varrho_{00}(1-\varrho_{00})}\).
\(\boldsymbol{M=1}\): Now, we find
\[\mathfrak{T}_{1}^{2}=|\varrho_{00}-\varrho_{11}|^{2}+\tfrac{1}{2}|\varrho_{02}|^{2}+\tfrac{1}{2}|\varrho_{20}|^{2}=(\varrho_{00}-\varrho_{11})^{2}+|\varrho_{02}|^{2}. \tag{38}\]
This is minimized by all states with \(\varrho_{00}=\varrho_{11}=0\), again including Fock states but now with more than one excitation, but it is also _minimized_ by the state \(|\psi_{+}\rangle\) that _maximized_ \(\mathfrak{T}_{1/2}^{2}\). It is again maximized by the vacuum state with \(\varrho_{00}=1\), but it is also maximized by the single-photon state with \(\varrho_{11}=1\). The cumulative distribution is again the more sensible quantity: \(\mathfrak{A}_{1}\) is minimized by states with vanishing components in the zero- and single-excitation subspaces, of which the Fock state \(|2\rangle\) has the lowest energy, and is uniquely maximized by the vacuum (coherent) state.
\(\boldsymbol{M=3/2}\): We find
\[\mathfrak{T}_{3/2}^{2}=\tfrac{2}{3!}|\varrho_{30}|^{2}+2\left|\varrho_{10}-\tfrac{1}{\sqrt{2}}\varrho_{21}\right|^{2}. \tag{39}\]
As usual this is minimized by any Fock state and by any state with no probability in photon-number sectors up until \(n=3\), while it is maximized by pure states of the form \(|\psi\rangle=e^{i\varphi}\tfrac{1}{\sqrt{3}}\,|0\rangle+\tfrac{1}{\sqrt{2}}\,|1\rangle-e^{-i\varphi}\tfrac{1}{\sqrt{6}}\,|2\rangle\). The cumulative \(\mathfrak{A}_{3/2}\) is again uniquely maximized by the vacuum state and vanishes only for states with no probability in the photon-number sectors up until \(n=3\), of which the Fock states \(|n\rangle\) with \(n\geq 3\) are examples.
\(\boldsymbol{M>3/2}\): The consistent conclusion is that different Euclidean norms of the multipoles for different orders \(K\) can be maximized by different states, but that the cumulative distribution is always maximized by the vacuum state. All of the orders of multipoles and their cumulative distribution vanish for sufficiently large Fock states, cementing Fock states as maximally quantum according to this condition. We as of yet have only a circuitous proof that \(\mathfrak{A}_{M}(\hat{\varrho})\) is uniquely maximized by \(|{\rm vac}\rangle\) for arbitrarily large \(M\): in Appendix C, we provide joint analytical and numerical arguments that this pattern continues for all \(M\), such that the vacuum state may be considered minimally quantum according to this condition.
We can compute this maximal cumulative multipole moment, that of the vacuum, at any order:
\[\mathfrak{A}_{M}(|\mathrm{vac}\rangle)=\sum_{K=0}^{M}\frac{1}{K!^{2}}=I_{0}(2)-\,_{1}\tilde{F}_{2}(1;\lfloor M\rfloor+2,\lfloor M\rfloor+2;1), \tag{40}\]
with a Bessel function [28] and a regularized hypergeometric function [40]. This approaches \(I_{0}(2)\approx 2.27959\) in the limit of large \(M\). Moreover, by computing \(\mathfrak{A}_{\infty}(|n\rangle)=I_{0}(2)/n!^{2}\), we realize why only \(|0\rangle\) and \(|1\rangle\) behave so similarly in the large-\(M\) limit.
Finally, note that the cumulative multipole operators also take the intriguing form
\[\mathfrak{\hat{A}}_{M}=\frac{1}{\pi^{2}}\int d^{2}\alpha d^{2}\beta\,e^{-\frac {|\alpha|^{2}+|\beta|^{2}}{2}}\hat{D}(-\alpha)\otimes\hat{D}(\beta)\sum_{K}^{ M}\frac{(\alpha\beta^{*}-\alpha^{*}\beta)^{2K}}{(2K)!^{2}}P_{2K}\left(\frac{ \alpha\beta^{*}+\alpha^{*}\beta}{\alpha^{*}\beta-\alpha\beta^{*}}\right) \tag{41}\]
\[\mathfrak{\hat{A}}_{\infty}=\frac{1}{\pi^{2}}\int d^{2}\alpha d^{2}\beta\,e^{ -\frac{|\alpha|^{2}+|\beta|^{2}}{2}}\hat{D}(-\alpha)\otimes\hat{D}(\beta)\left| I_{0}(2\sqrt{\alpha\beta^{*}})\right|^{2},\]
where \(P_{n}(\alpha)=\mathrm{e}^{-|\alpha|^{2}/2}\alpha^{n}/\sqrt{n!}\) is the Poissonian amplitude.
### Inverse multipole distribution
An important question arises: how does one measure a state's multipole moments? Homodyne detection provides one immediate answer. By interfering a given state \(\hat{\varrho}\) with a coherent state \(|\alpha\rangle\) on a balanced beamsplitter and measuring the difference of the photocurrents of detectors placed at both output ports, one collects a signal proportional to \(x(\theta)=\left\langle\hat{a}e^{-i\theta}+\hat{a}^{\dagger}e^{i\theta}\right\rangle\), where \(\theta\) can be varied by changing the phase \(\arg\alpha\) of the reference beam. Collecting statistics of the quadrature \(x(\theta)\) up to its \(2K\)th-order moments for a variety of phases \(\theta\) allows one to read off the moments \(\left\langle\hat{T}_{Kq}\right\rangle=\left\langle\hat{a}^{\dagger\,K+q}\hat{a}^{K-q}\right\rangle\). This provokes the question: what states maximize and minimize the cumulative multipole moments in the inverse basis?
We start by defining, in analogy to Eq. (34), the cumulative distribution
\[A_{M}(\hat{\varrho})=\sum_{K=0}^{M}\sum_{q=-K}^{K}\left|\langle\hat{T}_{Kq}\rangle\right|^{2}\,. \tag{42}\]
This quantity directly depends on the energy of the state, vanishing if and only if the state is the vacuum. As for the maximization, it is clear that coherent states with more energy cause the cumulative sum \(A_{M}\) to increase, so we must fix the average energy \(\bar{n}=\left\langle\hat{a}^{\dagger}\hat{a}\right\rangle\) when comparing which states maximize and minimize the sum.
Maximizing \(A_{M}\) for a fixed average energy is straightforward because each inverse multipole satisfies
\[|\langle\hat{T}_{Kq}\rangle|^{2}\leq\left\langle\hat{a}^{\dagger K+q}\hat{a}^{ K+q}\right\rangle\left\langle\hat{a}^{\dagger K-q}\hat{a}^{K-q}\right\rangle. \tag{43}\]
The inequality is saturated if and only if \(\hat{a}^{K+q}\left|\psi\right\rangle\propto\hat{a}^{K-q}\left|\psi\right\rangle\); that is, \(\hat{a}^{2q}\left|\psi\right\rangle\propto\left|\psi\right\rangle\), which, for \(q\neq 0\), requires coherent states or superpositions of coherent states with particular phase relationships akin to higher-order cat states [41, 42, 43]:
\[\left|\psi^{(q)}\right\rangle=\sum_{l=0}^{2q-1}\psi_{l}\left|\alpha e^{\frac{2 \pi il}{2q}}\right\rangle. \tag{44}\]
Each of these states provides the same value \(|\langle\hat{T}_{Kq}\rangle|^{2}=|\alpha|^{4K}\). Then, since saturating the inequality for all \(q\) simultaneously forces all but one of the amplitudes \(\psi_{l}\) to vanish, only a coherent state maximizes the cumulative sum \(A_{M}\) for any fixed energy \(\bar{n}=|\alpha|^{2}\).
We already know that \(\left|\mathrm{vac}\right\rangle\) minimizes \(A_{M}\). For a given, fixed \(\bar{n}>0\), one can ask what state minimizes the cumulative multipoles. All of the multipoles with \(q\neq 0\) vanish for Fock states; this is because they vanish for any state that is unchanged after undergoing a rotation by \(\pi/2q\) about the origin in phase space. The \(q=0\) multipoles, on the other hand, depend only on the diagonal coefficients of the density matrix in the Fock basis, which can be minimized in parallel.
To minimize a multipole moment
\[\left|\langle\widehat{T}_{K0}\rangle\right|=K!\sum_{n\geq K}\binom{n}{K}\varrho _{nn}, \tag{45}\]
there are two cases to consider: \(\bar{n}<K\) and \(\bar{n}\geq K\). If \(\bar{n}<K\), the multipole vanishes by simply partitioning all of the probability among the Fock states with fewer than \(K\) photons and arranging those states in a convex combination with no coherences in the Fock basis. If \(\bar{n}\geq K\), the sum is ideally minimized by setting \(\varrho_{\bar{n}\bar{n}}=1\), by convexity properties of the binomial coefficients (they grow by a larger amount when \(n\) increases than the amount that they shrink when \(n\) decreases). For noninteger \(\bar{n}\), the minimum is achieved by setting
\[\varrho_{\lceil\bar{n}\rceil\lceil\bar{n}\rceil}=1-(\lceil\bar{n}\rceil- \bar{n}),\qquad\qquad\varrho_{\lceil\bar{n}\rceil-1\,\lceil\bar{n}\rceil-1}= \lceil\bar{n}\rceil-\bar{n} \tag{46}\]
with no coherences between these two Fock states. Here, \(\lceil x\rceil\) is the ceiling function that gives the smallest integer value that is bigger than or equal to \(x\). Since this minimization does not depend on \(K\), we have thus found the unique state that minimizes \(A_{M}\) for all \(M\) with arbitrary \(\bar{n}\):
\[\arg\min A_{M}(\hat{\varrho}|\bar{n})=(\lceil\bar{n}\rceil-\bar{n})\left|\lceil\bar{n}\rceil-1\right\rangle\left\langle\lceil\bar{n}\rceil-1\right|+(1+\bar{n}-\lceil\bar{n}\rceil)\left|\lceil\bar{n}\rceil\right\rangle\left\langle\lceil\bar{n}\rceil\right|. \tag{47}\]
It is intriguing that coherent states and Fock states respectively maximize and minimize this sum for integer-valued energies, while a convex combination of the nearest-integer Fock states minimizes this sum for a noninteger energy. These results should be compared against those for the sum \(\mathfrak{A}_{M}\), which was uniquely maximized by the vacuum state that minimizes the sums here and for which the states that made it vanish were Fock states with large energies. Both sums are minimized for some Fock states and both sums are maximized by some coherent states, but the scalings with energy are opposite, where smaller energy leads to larger \(\mathfrak{A}_{M}\) and smaller \(A_{M}\) while larger energy leads to smaller \(\mathfrak{A}_{M}\) and larger \(A_{M}\); it just so happens that the state with smallest energy is both a Fock state and a coherent state.
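These extremal claims are easy to probe numerically with closed-form moments for coherent and diagonal states (a sketch of ours; the order \(M\) and the energy \(\bar{n}\) are illustrative choices):

```python
import numpy as np
from math import comb, factorial, ceil

def A_coh(M, nbar):
    # for |alpha|^2 = nbar: each half-integer K contributes (2K+1)|alpha|^{4K}
    return sum((twoK + 1) * nbar ** twoK for twoK in range(int(2 * M) + 1))

def A_diag(M, probs):
    # for a diagonal state {n: p_n}: only q = 0 survives and
    # <T_{K0}> = K! * sum_n C(n,K) p_n, cf. Eq. (45)
    return sum((factorial(K) * sum(comb(n, K) * p for n, p in probs.items())) ** 2
               for K in range(int(M) + 1))

M, nbar = 8, 1.5
c = ceil(nbar)
minimizer = {c - 1: c - nbar, c: 1 - (c - nbar)}     # the state (47)
print(A_coh(M, nbar))                # coherent state: the largest value
print(A_diag(M, {0: 0.5, 3: 0.5}))   # same nbar, larger than the minimizer
print(A_diag(M, minimizer))          # the smallest value at this energy
```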
## 5 Concluding remarks
Expanding the density operator in a conveniently chosen operator set has considerable advantages. By using explicitly the algebraic properties of the basis operators the calculations are often greatly simplified. But the usefulness of the method depends on the choice of the basis operator set. The idea of irreducible tensor operators is to provide a well-developed and efficient way of using the inherent symmetry of the system.
However, the irreducible-tensor machinery was missing for CV, in spite of the importance of these systems in modern quantum science and technology. We have provided a complete account of the use of such bases, which should constitute an invaluable tool for quantum optics.
## Acknowledgments
We thank H. de Guise and U. Seyfarth for discussions. This work received funding from the European Union's Horizon 2020 research and innovation programme project STORMYTUNE
under grant Agreement No. 899587. AZG acknowledges that the NRC headquarters is located on the traditional unceded territory of the Algonquin Anishinaabe and Mohawk people, as well as support from the NSERC PDF program. LLSS acknowledges support from Ministerio de Ciencia e Innovación (Grant PID2021-127781NB-I00).
## Appendix A Transformation properties of the operators
We present in this appendix some properties of the composition law of two tensor operators. Writing the inverse operators \(\hat{\mathfrak{T}}_{Kq}\) in the basis of monomial operators \(\hat{T}_{Kq}\) is as simple as reading off coefficients using Fig. 2. We have already identified that each inverse operator \(\hat{\mathfrak{T}}_{Kq}\) has contributions from a finite stripe with \(K-|q|+1\) elements along the \(2q\)th diagonal. The monomials, on the other hand, have contributions on the \(-2q\)th stripe, starting from the \((K-|q|)\)th element and going to infinity. The expansion is thus given by a sum of monomials \(\hat{T}_{K\,-q}\) for all possible values of \(K\) up until infinity, whose expansion coefficients can be found iteratively. The coefficient with the lowest value of \(K\) is just given by the coefficient of the top-left element of \(\hat{\mathfrak{T}}_{Kq}\) in Fig. 2. The coefficient with the next-lowest value of \(K\) can be found iteratively by canceling the contribution from the monomial that begins at the top-left corner and adding the contribution from the monomial that begins after the top-left corner. The iteration must continue to infinity in order to make sure all of the contributions after the \((2K+1)\)th antidiagonal vanish.
Another method of finding these expansion coefficients considers the quantity \(\operatorname{Tr}(\hat{\mathfrak{T}}_{Kq}\hat{\mathfrak{T}}_{K^{\prime}q^{\prime}})\). We already know by inspection that this will vanish unless \(q=-q^{\prime}\). We can directly compute these overlaps by summing terms from Eq. (31):
\[\hat{\mathfrak{T}}_{Kq} =\sum_{K^{\prime}q^{\prime}}\hat{T}_{K^{\prime}q^{\prime}}\operatorname{Tr}(\hat{\mathfrak{T}}_{Kq}\hat{\mathfrak{T}}_{K^{\prime}q^{\prime}}) \tag{48}\] \[\operatorname{Tr}(\hat{\mathfrak{T}}_{Kq}\hat{\mathfrak{T}}_{K^{\prime}q^{\prime}}) =\delta_{q,-q^{\prime}}\frac{(-1)^{K+K^{\prime}+2|q|}\,_{2}F_{1}(|q|-K,|q|-K^{\prime};2|q|+1;1)}{(2|q|)!(K-|q|)!(K^{\prime}-|q|)!},\]
which provides a useful alternative formula for the integrals
\[\operatorname{Tr}(\hat{\mathfrak{T}}_{Kq}\hat{\mathfrak{T}}_{K^{\prime}q^{\prime}}) =\frac{(-1)^{K+K^{\prime}+q+q^{\prime}}}{(K+q)!\,(K-q)!\,(K^{\prime}+q^{\prime})!\,(K^{\prime}-q^{\prime})!} \tag{49}\] \[\times\frac{1}{\pi^{3}}\int d^{2}\alpha d^{2}\beta d^{2}\gamma e^{-\frac{|\beta|^{2}+|\gamma|^{2}}{2}}\left\langle\alpha\right|\hat{D}\left(\beta\right)\hat{D}\left(\gamma\right)\left|\alpha\right\rangle\beta^{K+q}\beta^{*K-q}\gamma^{K^{\prime}+q^{\prime}}\gamma^{*K^{\prime}-q^{\prime}}.\]
Just because a particular product \(\hat{\mathfrak{T}}_{Kq}\hat{\mathfrak{T}}_{K^{\prime}q^{\prime}}\) with \(q^{\prime}\neq-q\) is traceless does not mean that it necessarily vanishes. In fact, we can directly compute the product of two such operators to find their structure constants. Each inverse operator \(\hat{\mathfrak{T}}_{Kq}\) serves to decrease the number of photons in a state by \(2q\), so the product of two inverse operators must be a finite sum of inverse operators whose second index satisfies \(q^{\prime\prime}=q+q^{\prime}\).
We start by writing
\[\hat{\mathfrak{T}}_{Kq}\hat{\mathfrak{T}}_{K^{\prime}q^{\prime}}=\sum_{K^{\prime\prime}}f_{K^{\prime\prime}}(K,K^{\prime},q,q^{\prime})\hat{\mathfrak{T}}_{K^{\prime\prime},q+q^{\prime}}. \tag{50}\]
In theory, the coefficients \(f_{K^{\prime\prime}}\) are formally given by \(\operatorname{Tr}(\hat{\mathfrak{T}}_{Kq}\hat{\mathfrak{T}}_{K^{\prime}q^{\prime}}\hat{T}_{K^{\prime\prime},q+q^{\prime}})\). Inspecting Eq. (31), we find some interesting, immediate results: for example, when \(q,q^{\prime}\geq 0\) and \(2q>K^{\prime}-q^{\prime}\), all of the structure constants \(f_{K^{\prime\prime}}\) vanish and we have \(\hat{\mathfrak{T}}_{Kq}\hat{\mathfrak{T}}_{K^{\prime}q^{\prime}}=0\). Similar vanishing segments can be found for any combination of the signs of \(q\) and \(q^{\prime}\), which is not readily apparent from multiplications of displacement operators from Eq. (19).
The nonzero structure constants can be found via iteration, using Fig. 2 as a guide. Taking, for example, \(q,q^{\prime}\geq 0\), we find products of the form
\[\hat{\mathfrak{T}}_{Kq}\hat{\mathfrak{T}}_{K^{\prime}q^{\prime}}=\sum_{n=2q+2q^{\prime}}^{\min(K^{\prime}+q^{\prime},K+q+2q^{\prime})}\frac{(-1)^{K+K^{\prime}+q-q^{\prime}}}{\sqrt{n!(n-2q-2q^{\prime})!}\,(n-2q^{\prime})!\,(K+q+2q^{\prime}-n)!\,(K^{\prime}+q^{\prime}-n)!}\left|n-2q-2q^{\prime}\right\rangle\left\langle n\right|; \tag{51}\]
the nonzero structure constants obey \(K^{\prime\prime}\leq K^{\prime\prime}_{\max}=\min(K+q^{\prime},K^{\prime}-q)\). The one with the largest \(K^{\prime\prime}\) is the only one that has the term \(\left|K^{\prime\prime}_{\max}-q-q^{\prime}\right\rangle\left\langle K^{\prime\prime}_{\max}+q+q^{\prime}\right|\), so its structure constant must balance the unique contribution to that term from \(\hat{\mathfrak{T}}_{K^{\prime\prime}_{\max},\,q+q^{\prime}}\). This means that
\[f_{K^{\prime\prime}_{\max}}(K,K^{\prime},q,q^{\prime})=\frac{(-1)^{K+K^{\prime }+q-q^{\prime}}}{(K^{\prime\prime}_{\max}+q-q^{\prime})!(K^{\prime}-q-K^{ \prime\prime}_{\max})!(K+q^{\prime}-K^{\prime\prime}_{\max})!}, \tag{52}\]
where one of the final two terms in the denominator will simply be \(0!=1\). Then, by iteration, one can balance the contribution of \(\hat{\mathfrak{T}}_{K^{\prime\prime}_{\max}-k,\,q+q^{\prime}}\) in order to find the structure constants \(f_{K^{\prime\prime}_{\max}-k}(K,K^{\prime},q,q^{\prime})\).
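The product formula (51), including the vanishing cases noted above, can be checked numerically by multiplying the Fock-basis representations (31); the following sketch (ours, with an assumed cutoff and a few sample index choices) does exactly that.

```python
import numpy as np
from math import factorial, sqrt

N = 30  # assumed cutoff; exact here since all supports are finite

def T_inv(K, q):   # Eq. (31), q >= 0 suffices in this check
    M = np.zeros((N, N))
    p, twoq = round(K + q), round(2 * q)
    for n in range(twoq, p + 1):
        M[n - twoq, n] = (-1) ** (p + n) / (sqrt(factorial(n) * factorial(n - twoq))
                                            * factorial(p - n))
    return M

def product_rhs(K, q, Kp, qp):   # right-hand side of Eq. (51), q, q' >= 0
    M = np.zeros((N, N))
    lo = round(2 * q + 2 * qp)
    hi = round(min(Kp + qp, K + q + 2 * qp))
    for n in range(lo, hi + 1):
        M[n - lo, n] = ((-1) ** round(K + Kp + q - qp)
                        / (sqrt(factorial(n) * factorial(n - lo))
                           * factorial(n - round(2 * qp))
                           * factorial(round(K + q + 2 * qp) - n)
                           * factorial(round(Kp + qp) - n)))
    return M

cases = [(1, 0, 2, 1), (1.5, 0.5, 2.5, 0.5), (2, 0, 3, 1), (1, 1, 2, 1)]
print(all(np.allclose(T_inv(K, q) @ T_inv(Kp, qp), product_rhs(K, q, Kp, qp))
          for K, q, Kp, qp in cases))   # True; the last case is identically zero
```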
The structure constants for the monomial operators are already known. One can compute [44]
\[\hat{T}_{Kq}\hat{T}_{K^{\prime}q^{\prime}}=\sum_{n}c_{n}\hat{a}^{\dagger K+q+K^ {\prime}+q^{\prime}-n}\hat{a}^{K+K^{\prime}-q-q^{\prime}-n} \tag{53}\]
from normal ordering.
The inverse operators transform nicely under displacements:
\[\hat{D}(\alpha)\hat{\mathfrak{T}}_{Kq}\hat{D}(\alpha)^{\dagger} =\frac{(-1)^{K+q}}{\pi(K+q)!(K-q)!}\int d^{2}\beta e^{-|\beta|^{2}/2}e^{\alpha\beta^{*}-\alpha^{*}\beta}\hat{D}(\beta)\beta^{K+q}\beta^{*K-q}\] \[=\sum_{S=0,1/2}^{\infty}\sum_{l=-S}^{S}\alpha^{S-l}\alpha^{*S+l}\binom{K+S+q+l}{K+q}\binom{K+S-q-l}{K-q}\hat{\mathfrak{T}}_{K+S,q+l}. \tag{54}\]
These displaced operators are inverse to the displaced monomials
\[\hat{D}(\alpha)\hat{T}_{Kq}\hat{D}(\alpha)^{\dagger}=\sum_{S=0,1/2}^{K}\sum_{ l=-S}^{S}\binom{K+q}{S+l}\binom{K-q}{S-l}(-\alpha^{*})^{K+q-S-l}(-\alpha)^{K-q-S+l} \hat{T}_{Sl}. \tag{55}\]
It is interesting to note that the displaced inverse operators are given by an infinite sum of inverse operators and the displaced monomials by a finite sum of monomials, in contrast to the number of terms \(\left|m\right\rangle\left\langle n\right|\) required to expand the original operators in the Fock basis.
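Identity (55) is just the binomial expansion of \((\hat{a}^{\dagger}-\alpha^{*})^{K+q}(\hat{a}-\alpha)^{K-q}\) with \(j=S+l\) and \(k=S-l\); a truncated-space check (ours; cutoff, amplitude, and compared block are assumptions chosen to stay clear of truncation effects) reads:

```python
import numpy as np
from math import comb
from scipy.linalg import expm

N, alpha = 60, 0.4 - 0.2j          # assumed cutoff and displacement
K, q = 1.5, 0.5
a = np.diag(np.sqrt(np.arange(1, N)), k=1); ad = a.conj().T
D = expm(alpha * ad - np.conj(alpha) * a)

p, m = round(K + q), round(K - q)
lhs = D @ np.linalg.matrix_power(ad, p) @ np.linalg.matrix_power(a, m) @ D.conj().T
rhs = sum(comb(p, j) * comb(m, k)
          * (-np.conj(alpha)) ** (p - j) * (-alpha) ** (m - k)
          * np.linalg.matrix_power(ad, j) @ np.linalg.matrix_power(a, k)
          for j in range(p + 1) for k in range(m + 1))
print(np.allclose(lhs[:8, :8], rhs[:8, :8]))   # True up to truncation error
```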
## Appendix B Symmetrically ordered monomials
We briefly consider here the example of symmetrically ordered monomials \(\hat{T}^{W}_{Kq}\). We can write them explicitly in terms of the normally ordered polynomials as
\[\hat{T}^{W}_{Kq}=\{\hat{a}^{\dagger K+q}\hat{a}^{K-q}\}_{\mathrm{sym}}=\sum_{n =0}^{\min(K+q,K-q)}\frac{(K+q)!(K-q)!}{2^{n}n!(K+q-n)!(K-q-n)!}\hat{T}_{K-n,q}\,, \tag{56}\]
where \(\{\cdot\}_{\mathrm{sym}}\) denotes the symmetric (or Weyl) ordering of operators [44]. An important expression for the symmetrically ordered polynomials is
\[\hat{T}^{W}_{Kq}=\frac{\partial^{2K}}{\partial\beta^{K+q}\partial(-\beta^{*})^{ K-q}}\hat{D}(\beta)\bigg{|}_{\beta=0}. \tag{57}\]
We thus look for inverse operators through
\[\operatorname{Tr}(\hat{\mathfrak{T}}_{Kq}^{W}\hat{T}_{K^{\prime}q^{\prime}}^{W}) =\frac{\partial^{2K^{\prime}}}{\partial\beta^{K^{\prime}+q^{\prime}}\partial(-\beta^{*})^{K^{\prime}-q^{\prime}}}\operatorname{Tr}[\hat{\mathfrak{T}}_{Kq}^{W}\hat{D}(\beta)]\bigg{|}_{\beta=0}\] \[=\frac{1}{\pi}\int d^{2}\beta\operatorname{Tr}[\hat{D}(-\beta)\hat{\mathfrak{T}}_{Kq}^{W}]\operatorname{Tr}\left[\hat{D}(\beta)\frac{\partial^{2K^{\prime}}}{\partial\alpha^{K^{\prime}+q^{\prime}}\partial(-\alpha^{*})^{K^{\prime}-q^{\prime}}}\hat{D}(\alpha)\bigg{|}_{\alpha=0}\right]\] \[=\int d^{2}\beta\operatorname{Tr}[\hat{D}(-\beta)\hat{\mathfrak{T}}_{Kq}^{W}]\left.\frac{\partial^{2K^{\prime}}}{\partial\alpha^{K^{\prime}+q^{\prime}}\partial(-\alpha^{*})^{K^{\prime}-q^{\prime}}}\delta^{2}(\alpha+\beta)\right|_{\alpha=0}. \tag{58}\]
By inspection, we attain orthonormality when
\[\operatorname{Tr}[\hat{\mathfrak{T}}_{Kq}^{W}\hat{D}(\beta)]=\frac{\beta^{K+q}(-\beta^{*})^{K-q}}{(K+q)!(K-q)!}, \tag{59}\]
which corresponds to
\[\hat{\mathfrak{T}}_{Kq}^{W}=\frac{1}{\pi}\int d^{2}\beta\hat{D}(-\beta)\frac{\beta^{K+q}(-\beta^{*})^{K-q}}{(K+q)!(K-q)!}=\frac{(-1)^{K+q}}{\pi(K+q)!(K-q)!}\int d^{2}\beta\hat{D}(\beta)\beta^{K+q}\beta^{*K-q}, \tag{60}\]
simply differing from the expression (19) for \(\hat{\mathfrak{T}}_{Kq}\) by removing the factor of \(\exp(-|\beta|^{2}/2)\).
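Expansion (56) can also be confirmed by brute force for small orders: average the operator products over all interleavings of \(K+q\) creation and \(K-q\) annihilation operators and compare with the normally ordered expansion. The sketch below is ours; the cutoff and the compared block are chosen to stay clear of truncation effects.

```python
import numpy as np
from math import factorial
from itertools import permutations

N = 25
a = np.diag(np.sqrt(np.arange(1, N)), k=1); ad = a.conj().T

def weyl(p, m):
    # symmetric ordering: average over all orderings of p a^dag's and m a's
    ops, out = [ad] * p + [a] * m, np.zeros((N, N))
    for perm in permutations(ops):
        term = np.eye(N)
        for op in perm:
            term = term @ op
        out += term
    return out / factorial(p + m)

def rhs(K, q):
    # Eq. (56): expansion in the normally ordered monomials T_{K-n, q}
    p, m = round(K + q), round(K - q)
    out = np.zeros((N, N))
    for n in range(min(p, m) + 1):
        c = (factorial(p) * factorial(m)
             / (2 ** n * factorial(n) * factorial(p - n) * factorial(m - n)))
        out += c * (np.linalg.matrix_power(ad, p - n)
                    @ np.linalg.matrix_power(a, m - n))
    return out

print(np.allclose(weyl(2, 1)[:15, :15], rhs(1.5, 0.5)[:15, :15]))  # True
print(np.allclose(weyl(2, 2)[:15, :15], rhs(2, 0)[:15, :15]))      # True
```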
We can find the multipoles for specific states. We simply quote the results
\[\langle\alpha|\hat{\mathfrak{T}}_{Kq}^{W}|\alpha\rangle=\frac{2^{K-q+1}(-1)^{K+q}}{(K-q)!}\frac{\text{e}^{-2|\alpha|^{2}}}{\alpha^{*2q}}L_{K+q}^{(-2q)}(2|\alpha|^{2}) \tag{61}\]
and
\[\langle n|\hat{\mathfrak{T}}_{Kq}^{W}|n\rangle=\delta_{q0}\frac{(-1)^{K}}{K!^{2}}2\int_{0}^{\infty}r^{2K+1}\text{e}^{-r^{2}/2}L_{n}(r^{2})\,dr=\delta_{q0}\frac{(-1)^{K}2^{K+1}\,_{2}F_{1}(K+1,-n;1;2)}{K!}. \tag{62}\]
For arbitrary states, we can follow the same procedure as we used for normal order; the final result is (\(m\leq n\))
\[\langle m|\hat{\mathfrak{T}}_{Kq}^{W}|n\rangle =\frac{(-1)^{K+q}}{(K+q)!\,(K-q)!}\int\frac{d^{2}\beta}{\pi}\text{e}^{-|\beta|^{2}/2}\sqrt{\frac{n!}{m!}}\beta^{m-n}L_{n}^{(m-n)}(|\beta|^{2})\beta^{K+q}\beta^{*K-q}\] \[=\delta_{n-m,2q}\frac{(-1)^{K+3q}}{(K+q)!}\sqrt{\frac{n!}{(n-2q)!}}2^{K+q+1}\,_{2}\tilde{F}_{1}(K+q+1,2q-n;2q+1;2). \tag{63}\]
Finally, it is direct to check that the tensors \(\hat{T}_{Kq}^{W}\) are covariant under symplectic transformations [24].
## Appendix C Vacuum state as maximizing the cumulative multipolar distribution
We here provide analytical and numerical evidence that the vacuum state uniquely maximizes the cumulative multipolar distribution to arbitrary orders \(M>3/2\).
First, we note by convexity that the multipole moments are all largest for pure states. We next ask how to maximize a single multipole moment \(|\langle\hat{\mathfrak{T}}_{Kq}\rangle|\). The phases can be arranged such that \(\varrho_{nm}(-1)^{n}>0\) for all \(n\) and \(m\) in Eq. (30), while each term is bounded as \(|\varrho_{nm}|\leq\sqrt{\varrho_{mm}\varrho_{nn}}\). It is tempting to use a Cauchy-Schwarz inequality to say that this expression is maximized by states with the relationship \(\varrho_{nn}=\lambda n!\) for some normalization constant \(\lambda\). This fails, however,
for two related reasons: one cannot simultaneously saturate the inequality \(|\varrho_{nm}|\leq\sqrt{\varrho_{mm}\varrho_{nn}}\) for all \(m\) and \(n\) while retaining a positive density operator \(\hat{\varrho}\); similarly, the trace of \(\hat{\varrho}\) is bounded, which the Cauchy-Schwarz inequality does not take into consideration. One can outperform this Cauchy-Schwarz bound by concentrating all of the probability in the term with the largest value of \(1/\sqrt{n!(n-2q)!(K+q-n)!^{2}}\). Taking
\[\tilde{n}=\arg\max_{n}\frac{1}{\sqrt{n!(n-2q)!}\,(K+q-n)!}, \tag{64}\]
\(|\langle\hat{\mathfrak{T}}_{Kq}\rangle|\) is maximized by any pure state with \(\varrho_{\tilde{n}\tilde{n}}=\varrho_{\tilde{n}-2q,\tilde{n}-2q}=1/2\):
\[\max|\langle\hat{\mathfrak{T}}_{Kq}\rangle|^{2}=\frac{1}{4\,\tilde{n}!\,(\tilde{n}-2q)!\,(K+q-\tilde{n})!^{2}}. \tag{65}\]
This condition changes with \(K\) and \(q\), so there will always be a competition between which terms \(|\langle\hat{\mathfrak{T}}_{Kq}\rangle|^{2}\) are maximized in the cumulative sum.
The contributions to \(\mathfrak{A}_{M}\) by the various terms \(|\langle\hat{\mathfrak{T}}_{Kq}\rangle|^{2}\) diminish with increasing \(K\), which can be seen through the following argument. As \(M\) increases by \(1/2\), the number of new terms contributing to the sum increases quadratically: there are \(2M+1\) new multipoles to consider and each multipole is a sum of at most \(M+1\) terms. From the preceding discussion, each multipole is individually maximized when it is made from only a single term, so the cumulative multipole moment \(\mathfrak{A}_{M}\) can only increase by the addition of \(\mathcal{O}(M)\) (competing) terms. In contrast, the magnitudes of each of the multipole moments decay exponentially with increasing \(M\), due to the factorials in the denominator of Eq. (65), stemming from Eq. (30). One can, therefore, guarantee that a state maximizing \(\mathfrak{A}_{M}\) for sufficiently large \(M\) will continue to maximize \(\mathfrak{A}_{M}\) for all larger values of \(M\), at least approximately.
We can also inspect the inverse operators directly to understand the maximization properties. The multipoles being summed as an indicator of quantumness, \(|\langle\hat{\mathfrak{T}}_{Kq}\rangle|^{2}\), can be expressed as expectation values of the duplicated operator \(\hat{\mathfrak{T}}_{Kq}\otimes\hat{\mathfrak{T}}_{Kq}^{\dagger}=\hat{ \mathfrak{T}}_{Kq}\otimes\hat{\mathfrak{T}}_{K,-q}\) with respect to the duplicated states \(\hat{\varrho}\otimes\hat{\varrho}\). The vacuum state \(|0\rangle\otimes|0\rangle\) is the only duplicated state that is an eigenstate of all of the duplicated operators for all \(K\) and \(q\), albeit with different eigenvalues for each operator. These operators act on Fock states as
\[(\hat{\mathfrak{T}}_{Kq}\otimes\hat{\mathfrak{T}}_{Kq}^{\dagger})\ket{n} \otimes\ket{n}\propto|n-2q\rangle\otimes|n+2q\rangle \tag{66}\]
and have nonzero matrix elements given by Kronecker products of the stripes found in Fig. 2 (some combinations of \(K\), \(q\), and \(n\) cause the proportionality constant to be zero). These can be used to help finding the eigenstates and eigenvalues of the summed joint operators

\[\hat{\mathfrak{A}}_{M}=\sum_{K=0}^{M}\sum_{q=-K}^{K}\hat{\mathfrak{T}}_{Kq}\otimes\hat{\mathfrak{T}}_{Kq}^{\dagger}. \tag{67}\]

Figure 3: Eigenvalues of \(\hat{\mathfrak{A}}_{M}\) with the eight largest magnitudes up until \(M=10\). The negative eigenvalue with the largest magnitude corresponds to the entangled state \(|0\rangle\otimes|1\rangle-|1\rangle\otimes|0\rangle\), the positive eigenvalue with the largest magnitude is \(|0\rangle\otimes|2\rangle-c\ket{1}\otimes|1\rangle+|2\rangle\otimes|0\rangle\) for some positive constant \(c>1\), and the positive eigenvalue with the second largest magnitude is \(|0\rangle\otimes|0\rangle\). These dictate that the symmetric state \(|\psi\rangle\otimes|\psi\rangle\) for which the expectation value of \(\hat{\mathfrak{A}}_{M}\) is largest must be confined to the sector spanned by \(|0\rangle\), \(|1\rangle\), and \(|2\rangle\).
As mentioned previously, each individual operator \(\hat{\mathfrak{T}}_{Kq}\) only has null eigenstates, unless \(q=0\); this can be seen from the striped pattern in Fig. 2. The same is true of the joint operators \(\hat{\mathfrak{T}}_{Kq}\otimes\hat{\mathfrak{T}}_{Kq}^{\dagger}\), but is not true of the summed joint operators \(\hat{\mathfrak{A}}_{M}\). The latter are represented in the Fock basis by sparse antitriangular matrices, which can be visualized by Kronecker products of pairs of matrices from Fig. 2. The eigenstates and eigenvalues can thus be found directly for any \(M\). For example, the joint Fock state with maximal eigenvalue is the joint vacuum state \(\ket{0}\otimes\ket{0}\).
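A compact numerical rendering of this diagonalization (our addition; the single-mode cutoff is chosen so that all operators up to \(K=M\) are represented exactly) is:

```python
import numpy as np
from math import factorial, sqrt

N, M_ord = 18, 8   # assumed cutoff; exactness needs N > 2*M_ord

def T_inv(K, q):
    # inverse operator in the Fock basis, Eq. (31)
    if q < 0:
        return T_inv(K, -q).T          # real coefficients: dagger = transpose
    M = np.zeros((N, N))
    p, twoq = round(K + q), round(2 * q)
    for n in range(twoq, p + 1):
        M[n - twoq, n] = (-1) ** (p + n) / (sqrt(factorial(n) * factorial(n - twoq))
                                            * factorial(p - n))
    return M

A = np.zeros((N * N, N * N))
for twoK in range(2 * M_ord + 1):
    for twoq in range(-twoK, twoK + 1, 2):
        Tkq = T_inv(twoK / 2, twoq / 2)
        A += np.kron(Tkq, Tkq.T)       # T_inv ⊗ T_inv^dag, Eq. (67)

vals = np.linalg.eigvalsh(A)
print(vals[0], vals[-3:])   # most negative and three largest eigenvalues
print(A[0, 0])              # <0,0|A_M|0,0> ~ I_0(2) ≈ 2.2796
```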
The cumulative operators \(\mathfrak{\hat{A}}_{M}\) have positive expectation values when taken with respect to any duplicated state \(\hat{\varrho}\otimes\hat{\varrho}\). However, \(\mathfrak{\hat{A}}_{M}\) may have negative eigenvalues, because some of the eigenstates may not be of the form \(\hat{\varrho}\otimes\hat{\varrho}\). For example, the eigenstate whose eigenvalue has the largest magnitude is always found to be the maximally entangled state \((\ket{0}\otimes\ket{1}-\ket{1}\otimes\ket{0})/\sqrt{2}\), with a large, negative eigenvalue. This is orthogonal to any duplicated state \(\hat{\varrho}\otimes\hat{\varrho}\) because the latter is permutation symmetric, not antisymmetric, so we can readily ignore all contributions to \(\mathfrak{\hat{A}}_{M}\) from this part of its spectrum.
Another entangled state is the eigenstate with the next largest eigenvalue: \((\ket{0}\otimes\ket{2}-c\ket{1}\otimes\ket{1}+\ket{2}\otimes\ket{0})/{\cal N}\) for some positive constants \(c\) and \({\cal N}=\sqrt{2+c^{2}}\). This eigenstate obeys permutation symmetry, so it will contribute to the multipole moments. The maximum contribution will come from a state of the form
\[\ket{\psi}=\sqrt{p_{0}}\ket{0}+\sqrt{p_{1}}\mathrm{e}^{\mathrm{i}\psi}\ket{1}- \sqrt{1-p_{0}-p_{1}}\ket{2}, \tag{68}\]
specifically with \(p_{0}=1-p_{0}-p_{1}\). Since \(c>1\), the contribution is uniquely maximized by \(p_{0}=0\) and \(p_{1}=1\), so again we need only consider the joint Fock states in the analysis. The overlap of \(\ket{1}\otimes\ket{1}\) with this eigenstate is \(c^{2}/{\cal N}^{2}\approx 0.621\).
The eigenstate with the third largest-magnitude eigenvalue is the joint vacuum state \(\ket{0}\otimes\ket{0}\). The ratio of its eigenvalue to that with the second largest magnitude approaches \(\approx 0.647>c^{2}/{\cal N}^{2}\) as \(M\) increases. This is enough to ensure that the joint vacuum state uniquely maximizes the cumulative multipole moments for all \(M\). We stress that these optima have not been found through
a numerical optimization, but rather through an exact diagonalization of the operators \(\hat{\mathfrak{A}}_{M}\), which means our analysis does not have to worry about local minima or other numerical optimization hazards.

Figure 4: Coefficients of the cumulative multipole sum for the different weights in the optimal state \(\ket{\psi_{\mathrm{opt}}}\). The coefficients rapidly converge for moderate \(M\), with those of \(p_{0}^{2}\) and \(p_{1}^{2}\) rapidly approaching each other.
How can this be made more rigorous? The eigenvalues and eigenstates can be found exactly for any value of \(M\) by diagonalizing the sparse matrix \(\hat{\mathfrak{A}}_{M}\). By \(M=9/2\), the largest eigenvalues have already converged to three significant digits and \(c^{2}/\mathcal{N}^{2}\) to four; by \(M=7\), they have all converged to six significant digits. The contributions from a new, larger value of \(K=M\) strictly reduce the magnitude of each expansion coefficient in the sum of Eq. (31) by a multiplicative factor, ranging from \(1/(M+q)\) for the term with the smallest \(n\) that has appeared the most times in the cumulative multipole to \(1\) for the term with the largest \(n\) that has only appeared once previously. There is also the addition of an extra term for \(\left|M-q\right\rangle\left\langle M+q\right|\), normalized by the large factor \(1/\sqrt{(M+q)!(M-q)!}\). Each term gets divided by an increasingly large factor as \(M\) increases; the factor that decreases the slowest has already started out with a tiny magnitude due to the normalization factor \(1/\sqrt{(M+q)!(M-q)!}\). The magnitudes of the expansion coefficients in the cumulative sums decrease at least exponentially in \(\hat{\mathfrak{A}}_{M}-\hat{\mathfrak{A}}_{M-1/2}\), so the largest eigenvalues and eigenstates of \(\hat{\mathfrak{A}}_{M}\) are fixed once they are known for moderate \(M\) (see visualization in Fig. 3).
The above demonstrates that the state maximizing the cumulative multipole moments for any value of \(M\) must take the form (\(p_{0}+p_{1}+p_{2}=1\))
\[\left|\psi_{\mathrm{opt}}\right\rangle=\sqrt{p_{0}}\left|0\right\rangle+\sqrt{ p_{1}}\mathrm{e}^{\mathrm{i}\psi}\left|1\right\rangle+\sqrt{p_{2}}\mathrm{e}^{ \mathrm{i}\phi}\left|2\right\rangle, \tag{69}\]
because such a state concentrates maximal probability in the subspace with the largest eigenvalues of \(\hat{\mathfrak{A}}_{M}\). We can compute the cumulative multipole moments for such a state, which equal
\[\mathfrak{A}_{M}(\left|\psi_{\mathrm{opt}}\right\rangle) =\sum_{K\in\mathbb{Z}}^{M}\left|\frac{\varrho_{00}}{K!}-\frac{ \varrho_{11}}{(K-1)!}+\frac{\varrho_{22}}{2!(K-2)!}\right|^{2}+2\frac{\left| \varrho_{20}\right|^{2}}{2(K-1)!^{2}}\] \[+\sum_{K\in\mathbb{Z}+\frac{1}{2}}^{M}2\left|\frac{\varrho_{10}} {(K+\frac{1}{2})!}-\frac{\varrho_{21}}{\sqrt{2}(K-\frac{3}{2})!}\right|^{2}. \tag{70}\]
Figure 5: Cumulative multipole sum for optimal state \(\left|\psi_{\mathrm{opt}}\right\rangle\) as a function of the two independent probabilities \(p_{0}\) and \(p_{1}\). The multipoles to order \(M=100\) are included, by which point they have converged well beyond machine precision. It is clear that the maximum is obtained by setting all of the probability to go to either \(p_{0}\) or \(p_{1}\) with no shared probability between the two.
The relative phases that maximize this sum satisfy \(2\psi-\phi=\pi\), so we can set \(\mathrm{e}^{\mathrm{i}\psi}=1\) and \(\mathrm{e}^{\mathrm{i}\phi}=-1\) without loss of generality. There are now only two constants to optimize over in the sum
\[\mathfrak{A}_{M}(|\psi_{\mathrm{opt}}\rangle) =\sum_{K\in\mathds{Z}}^{M}\left|\frac{p_{0}}{K!}-\frac{p_{1}}{(K -1)!}+\frac{p_{2}}{2!(K-2)!}\right|^{2}+\frac{p_{0}p_{2}}{(K-1)!^{2}}\] \[+\sum_{K\in\mathds{Z}+\frac{1}{2}}^{M}2\left|\frac{\sqrt{p_{0}p_{ 1}}}{(K+\frac{1}{2})!}+\frac{\sqrt{p_{1}p_{2}}}{\sqrt{2}(K-\frac{3}{2})!} \right|^{2}\,. \tag{71}\]
All of the terms decay at least exponentially with \(K\), so it is again evident that optimizing the sum for moderate \(M\) will approximately optimize the sum for all larger \(M\). Computing the contributions to \(\mathfrak{A}_{M}\), we find
\[\mathfrak{A}_{M}(|\psi_{\mathrm{opt}}\rangle) \approx 2.27959p_{0}^{2}+2.27959p_{1}^{2}+0.569896p_{2}^{2}\] \[-0.622103p_{0}p_{1}+2.96853p_{0}p_{2}+0.688948p_{1}p_{2}+1.94864p _{1}\sqrt{p_{0}p_{2}}, \tag{72}\]
which converges to this value by \(M=7\) (see Fig. 4) and we have verified that these digits remain unchanged beyond \(M=100\). This means that the sum will be maximized by either \(p_{0}=1\) or \(p_{1}=1\) (visualization in Fig. 5). We can directly compute \(\mathfrak{A}_{M}(|0\rangle)-\mathfrak{A}_{M}(|1\rangle)=1/\lfloor M\rfloor!^{2}\), where \(\lfloor x\rfloor\) is the floor function that gives the greatest integer less than or equal to \(x\). This means that the vacuum state is the unique state with the maximal cumulative multipole moment for all \(M\), while its supremacy diminishes exponentially with \(M\).
|
2301.13369 | Free boundary problem with a nonlocal kernel | In this paper, we propose a new nonlocal model for the two-phase Stefan problem,
where the nonlocal version of the one-phase Stefan problem arises naturally as
a special case. Among other things, we obtain the optimal condition for the
pointwise convergence between local and nonlocal one-phase Stefan problems and
an equivalent characterization of this optimal condition. Moreover, we provide
some sufficient criteria for the continuous expansion of free boundaries, and
when the sufficient conditions are violated, we construct examples to
demonstrate that the jumping phenomena could happen on the free boundaries. The
jumping phenomena are essentially induced by the nonlocal diffusion and thus
do not appear in the classical Stefan problem. | Xinfu Chen, Fang Li, Maolin Zhou | 2023-01-31T02:29:55Z | http://arxiv.org/abs/2301.13369v1 | # Free boundary problem with a nonlocal kernel
###### Abstract
In this paper, we propose a new nonlocal model for the two-phase Stefan problem, where the nonlocal version of the one-phase Stefan problem arises naturally as a special case. Among other things, we obtain the optimal condition for the pointwise convergence between the local and nonlocal one-phase Stefan problems and an equivalent characterization of this optimal condition. Moreover, we provide some sufficient criteria for the continuous expansion of free boundaries, and when the sufficient conditions are violated, we construct examples to demonstrate that jumping phenomena can happen on the free boundaries. The jumping phenomena are essentially induced by the nonlocal diffusion and thus do not appear in the classical Stefan problem.
**Keywords**: nonlocal Stefan problem, free boundary, jumping phenomena
**MSC (2020)**: 35K57, 45K05, 35R35
## 1 Introduction
The _classical Stefan problem_ is well known to describe the evolution of the interface between two phases of a substance undergoing a phase change, for example the melting of a solid, such as ice to water. _Latent heat_, defined as the heat or energy that is absorbed or released during a
phase change of a substance, acts as an energy source or sink at a moving solid-liquid interface, and the resulting boundary condition is known as the _Stefan boundary condition_.
In this paper, we propose and study _the nonlocal version of two-phase Stefan problem_
\[\begin{cases}\gamma_{t}(t,x)=a\displaystyle\int_{\{\gamma>0\}}k(x-y)\gamma(t,y)\,dy-a\gamma(t,x)\chi_{\{\gamma>0\}}\\ \qquad\qquad+b\displaystyle\int_{\{\gamma<-\ell_{0}\}}\eta(x-y)\big(\gamma(t,y)+\ell_{0}\big)\,dy-b\big(\gamma(t,x)+\ell_{0}\big)\chi_{\{\gamma<-\ell_{0}\}}&t>0,\ x\in\mathbb{R}^{n},\\ \gamma(0,x)=\gamma_{0}(x)&x\in\mathbb{R}^{n},\end{cases} \tag{1.1}\]
On the basis of latent heat, _the nonlocal version of the one-phase Stefan problem_ is proposed as follows
\[\begin{cases}\gamma_{t}(t,x)=d\int_{\{\gamma>0\}}\!\!\!k(x-y)\gamma(t,y)dy-d \gamma(t,x)\chi_{\{\gamma>0\}}&t>0,\ x\in\mathbb{R}^{n},\\ \gamma(0,x)=\gamma_{0}(x)&x\in\mathbb{R}^{n},\end{cases} \tag{1.6}\]
where the kernel function \(k\) satisfies **(K)**, and for the initial data, we assume that
\[\gamma_{0}(x)\in L^{\infty}(\mathbb{R}^{n}),\ \gamma_{0}(x)=-\ell_{0}\ \ \text{for}\ x\in \mathbb{R}^{n}\setminus\bar{\Omega}_{0},\ \gamma_{0}|_{\bar{\Omega}_{0}}\geq 0,\ \gamma_{0}|_{\bar{\Omega}_{0}}\not \equiv 0. \tag{1.7}\]
The essence of the nonlocal Stefan problem (1.6) is that, at time \(t\),
* if \(x\in\{x\in\mathbb{R}^{n}\,|\,\gamma(t,x)\leq 0\}\), then it can only absorb energy from outside;
* if \(x\in\{x\in\mathbb{R}^{n}\,|\,\gamma(t,x)>0\}\), then it can not only absorb energy from outside, but also transfer its energy to the outside.
Here, the value \(\ell_{0}\) plays the role of the latent heat: \(\gamma=-\ell_{0}\) corresponds to the status of ice at zero degrees centigrade, and \(\gamma\) reaching zero indicates that sufficient energy has already accumulated there for the phase change.
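A minimal explicit-in-time discretization of (1.6) in one space dimension illustrates this mechanism and the expansion of the water region \(\{\gamma>0\}\); the kernel, grid, time step, and parameter values in the sketch below are illustrative choices of ours and are not taken from the paper.

```python
import numpy as np

# Explicit Euler scheme for the 1D one-phase problem (1.6);
# all parameters below are assumptions made for illustration only.
L, nx, dt, nsteps = 10.0, 801, 2e-3, 1000
d, ell0 = 1.0, 0.2
x = np.linspace(-L, L, nx); h = x[1] - x[0]

k = np.where(np.abs(x) <= 1.0, 0.75 * (1.0 - x**2), 0.0)   # unit-mass kernel
gamma = np.where(np.abs(x) <= 1.0, 1.0, -ell0)             # data of type (1.7)

for _ in range(nsteps):
    pos = gamma > 0
    # h * convolve approximates the integral over {gamma > 0} of k(x-y) gamma(t,y) dy
    influx = h * np.convolve(k, gamma * pos, mode='same')
    gamma += dt * d * (influx - gamma * pos)

water = x[gamma > 0]
print('water region at T=%.1f: [%.3f, %.3f]' % (nsteps * dt, water.min(), water.max()))
```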
The nonlocal version of the two-phase Stefan problem (1.1) is proposed in the same spirit. The phase change happens at either \(\gamma\) reaching zero or \(\gamma\) reaching \(-\ell_{0}\), and the initial data \(\gamma=-\alpha_{0}\) in \(\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\), where \(\alpha_{0}\in(0,\ell_{0})\), corresponds to a mixture of water and ice at zero degrees centigrade. Different from the one-phase case, the initial data \(\gamma_{0}|_{\bar{\Omega}_{0}}\) could change sign, and in particular, when both \(\{x\in\mathbb{R}^{n}\,|\,\gamma(t,x)<-\ell_{0}\}\) and \(\{x\in\mathbb{R}^{n}\,|\,\gamma(t,x)>0\}\) are nonempty, the energy in the set \(\{x\in\mathbb{R}^{n}\,|\,-\ell_{0}\leq\gamma(t,x)\leq 0\}\) could be absorbed and released simultaneously.
We point out that a nonlocal version of the one-phase Stefan problem was also proposed and studied in [5]. We will include discussions wherever the results obtained in this paper are related to those derived in [5]. Moreover, the fractional two-phase Stefan problem was treated in [2], and more generally, the two-phase Stefan problem with anomalous diffusion was investigated in [3].
_The main purpose of this paper is to study effects of nonlocal diffusion operators on the evolution of free boundaries and explore connections and discrepancies between the local and nonlocal Stefan problems._
First of all, we establish results about local existence and global existence for the nonlocal Stefan problems.
**Theorem 1.1**.: _Assume that in the problem (1.1), the kernel functions satisfy the assumption **(K)**, the condition (1.2) is valid and the initial data satisfies (1.3). Then the problem (1.1) admits a unique classical solution \(\gamma(t,\cdot)\in L^{\infty}(\mathbb{R}^{n})\) defined for all \(t>0\), and \(\gamma\) satisfies the estimate_
\[\operatorname*{ess}\inf_{\mathbb{R}^{n}}\gamma_{0}\leq\gamma(t,x)\leq \operatorname*{ess}\sup_{\mathbb{R}^{n}}\gamma_{0}\quad\text{for}\ t>0,\,x\in \mathbb{R}^{n}. \tag{1.8}\]
_Moreover, if \(\gamma_{0}|_{\bar{\Omega}_{0}}\in C(\bar{\Omega}_{0})\), then \(\gamma(t,\cdot)\) is continuous in \(\bar{\Omega}_{0}\) and \(\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\) for any \(t>0\)._
Next, we investigate the convergence relations between local and nonlocal Stefan problems. For simplicity, for \(\epsilon>0\), denote
\[k_{\epsilon}(x)=\frac{1}{\epsilon^{n}}k(\frac{x}{\epsilon}),\ \eta_{\epsilon}(x)= \frac{1}{\epsilon^{n}}\eta(\frac{x}{\epsilon}).\]
Before we present the main results, we briefly explain what should be the natural and optimal assumptions on the nonlocal kernel functions in the studies of convergence relations between models with local and nonlocal diffusions. Define the Fourier transform of the kernel function \(k\) as follows
\[\hat{k}(\xi)=\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}k(x)dx.\]
Based on the properties of the Fourier transform, one observes that for \(\phi\in L^{1}(\mathbb{R}^{n})\bigcap C^{2}(\mathbb{R}^{n})\)
\[\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}\left(\frac{1}{\epsilon^{2}}\int_{ \mathbb{R}^{n}}k_{\epsilon}(x-y)\phi(y)dy-\frac{1}{\epsilon^{2}}\phi(x)\right) dx=\frac{1}{\epsilon^{2}}\left(\hat{k}(\epsilon\xi)-1\right)\hat{\phi}(\xi),\]
\[\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}\Delta\phi(x)dx=-|\xi|^{2}\hat{\phi}(\xi),\]
and for fixed \(\xi\),
\[\lim_{\epsilon\to 0}\frac{1}{\epsilon^{2}}\left(\hat{k}(\epsilon\xi)-1 \right)\hat{\phi}(\xi)=-A|\xi|^{2}\hat{\phi}(\xi)\]
under the condition
\[\hat{k}(\xi)=1-A|\xi|^{2}+o(|\xi|^{2})\quad\text{as $\xi\to 0$}, \tag{1.9}\]
where \(A>0\) is a constant. This observation indicates that the condition (1.9) is optimal in the study of nonlocal approximations of the Laplacian operator. Indeed, the nonlocal approximation of the heat equation is verified under this condition. See [1] for details.
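As a numerical sanity check of this observation, the following sketch applies the rescaled operator \(\frac{1}{\epsilon^{2}}(k_{\epsilon}*\phi-\phi)\) to a smooth test function for the 1-D standard Gaussian kernel, for which \(A=1/2\); the grid, quadrature, and test function are our own illustrative choices.

```python
import numpy as np

# Check that (1/eps^2)(k_eps * phi - phi) -> A * phi'' as eps -> 0 for the
# 1-D Gaussian kernel k(x) = exp(-x^2/2)/sqrt(2*pi), for which A = 1/2.
x = np.linspace(-15.0, 15.0, 6001)
dx = x[1] - x[0]
phi = np.exp(-x**2)                          # smooth, rapidly decaying test function
phi_xx = (4.0 * x**2 - 2.0) * np.exp(-x**2)  # exact second derivative

for eps in [1.0, 0.5, 0.25]:
    k_eps = np.exp(-(x / eps)**2 / 2) / (np.sqrt(2 * np.pi) * eps)  # (1/eps) k(x/eps)
    conv = np.convolve(k_eps, phi, mode="same") * dx
    err = np.abs((conv - phi) / eps**2 - 0.5 * phi_xx).max()
    print("eps = %.2f: max deviation from A*phi'' = %.4f" % (eps, err))
```

The deviation shrinks at the rate \(O(\epsilon^{2})\), in line with the Taylor expansion behind (1.9).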
We establish an important equivalent characterization of the condition (1.9).
**Proposition 1.2**.: _Assume that \(k\) satisfies the assumption **(K)**. Then the following two statements are equivalent._
* _For_ \(1\leq j,h\leq n,\ j\neq h\)_,_ \(\int_{\mathbb{R}^{n}}x_{j}k(x)dx=0,\ \int_{\mathbb{R}^{n}}x_{j}x_{h}k(x)dx=0,\ \int_{\mathbb{R}^{n}}x_{j}^{2}k(x)dx= \frac{1}{n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx<+\infty.\)__
* _The Fourier transform of_ \(k\) _satisfies the assumption (_1.9_)._
_Moreover, \(\frac{1}{2n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx=A.\)_
In order not to interrupt the main theme of this paper, we leave the proof of this proposition in the appendix.
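Although the proof is deferred, Proposition 1.2 can be checked numerically on a concrete kernel. The sketch below uses the 1-D standard Gaussian (an illustrative choice; here \(n=1\) and \(\hat{k}(\xi)=e^{-\xi^{2}/2}\)), so both characterizations should produce \(A=1/2\).

```python
import numpy as np

# Numerical check of Proposition 1.2 for k(x) = exp(-x^2/2)/sqrt(2*pi), n = 1,
# whose Fourier transform is k_hat(xi) = exp(-xi^2/2) = 1 - xi^2/2 + o(xi^2).
x = np.linspace(-20.0, 20.0, 100001)
dx = x[1] - x[0]
k = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Statement (i): A from the second moment, A = (1/2n) * int |x|^2 k(x) dx.
A_moment = (x**2 * k).sum() * dx / 2
print("A from moments:", A_moment)                 # ~0.5

# Statement (ii): A from k_hat(xi) = 1 - A xi^2 + o(xi^2) as xi -> 0.
for xi in [0.2, 0.1, 0.05]:
    k_hat = (np.cos(xi * x) * k).sum() * dx        # k even => real transform
    print("xi = %.2f:" % xi, (1 - k_hat) / xi**2)  # -> 0.5 as xi -> 0
```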
We first establish the convergence result about the two-phase Stefan problem. Let \(\gamma_{\epsilon}\) be the solution of the following problem
\[\begin{cases}(\gamma_{\epsilon})_{t}(t,x)=\frac{a}{\epsilon^{2}}\int_{\{\gamma_{ \epsilon}>0\}}k_{\epsilon}(x-y)\gamma_{\epsilon}(t,y)dy-\frac{a}{\epsilon^{2}} \gamma_{\epsilon}(t,x)\chi_{\{\gamma_{\epsilon}>0\}}\\ \qquad+\frac{b}{\epsilon^{2}}\int_{\{\gamma_{\epsilon}<-\ell_{0}\}}\!\!\!\!\! \eta_{\epsilon}(x-y)(\gamma_{\epsilon}(t,y)+\ell_{0})dy-\frac{b}{\epsilon^{2}} (\gamma_{\epsilon}(t,x)+\ell_{0})\chi_{\{\gamma_{\epsilon}<-\ell_{0}\}}&t>0, \ x\in\mathbb{R}^{n},\\ \gamma_{\epsilon}(0,x)=\gamma_{0}(x)&x\in\mathbb{R}^{n}.\end{cases} \tag{1.10}\]
**Theorem 1.3**.: _In the problem (1.10), assume that the conditions of Theorem 1.1 are valid. In addition, assume that the kernel functions satisfy Proposition 1.2(i) and_
\[\int_{\mathbb{R}^{n}}|x|^{3}k(x)dx<+\infty. \tag{1.11}\]
_Then for any given \(T>0\) and \(0<t<T\), \(\gamma_{\epsilon}(t,\cdot)\) converges to \(\gamma(t,\cdot)\) in \(L^{1}_{loc}(\mathbb{R}^{n})\) as \(\epsilon\to 0^{+}\), where \(\gamma\in L^{\infty}((0,T)\times\mathbb{R}^{n})\) is the generalized solution of_
\[\begin{cases}\Delta u\in\beta(u)_{t},\\ \beta(u)(0,x)=\gamma_{0}(x),\end{cases} \tag{1.12}\]
_where \(A=\frac{a}{2n}\int_{\mathbb{R}^{n}}|z|^{2}k(z)dz,\ B=\frac{b}{2n}\int_{\mathbb{R}^{n}}|z|^{2}\eta(z)dz\),_
\[u=\begin{cases}A\gamma&\text{for }\gamma>0\\ 0&\text{for }-\ell_{0}\leq\gamma\leq 0\\ B(\gamma+\ell_{0})&\text{for }\gamma<-\ell_{0}\end{cases}\]
_and \(\beta(u)\) is a multivalued mapping defined as follows_
\[\beta(u)=\begin{cases}\frac{1}{B}u-\ell_{0}&\text{for }u<0\\ [-\ell_{0},0]&\text{for }u=0\\ \frac{1}{A}u&\text{for }u>0.\end{cases}\]
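As a quick consistency check on the definitions of \(u\) and \(\beta\) (with illustrative constants \(A\), \(B\), \(\ell_{0}\) chosen by us), the following sketch verifies that \(\beta\) inverts the map \(\gamma\mapsto u\) outside the mushy region \(-\ell_{0}\leq\gamma\leq 0\), where \(u=0\) and \(\beta\) is multivalued.

```python
import numpy as np

# Consistency check of the maps gamma -> u and beta (illustrative constants):
# beta should invert u outside the mushy region -ell0 <= gamma <= 0.
A, B, ell0 = 2.0, 3.0, 1.0

def u_of_gamma(g):
    return np.where(g > 0, A * g, np.where(g < -ell0, B * (g + ell0), 0.0))

g = np.linspace(-2.5, 1.5, 9)
u = u_of_gamma(g)
# beta(u): u/B - ell0 for u < 0, the interval [-ell0, 0] for u = 0, u/A for u > 0.
beta = np.where(u > 0, u / A, np.where(u < 0, u / B - ell0, np.nan))
outside = (g > 0) | (g < -ell0)
print(np.allclose(beta[outside], g[outside]))  # True: beta recovers gamma
```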
Thanks to Proposition 1.2, one sees that in Theorem 1.3 only the third-moment condition (1.11) is an extra assumption in the study of the convergence relations. Obviously, kernel functions that are radially symmetric and compactly supported satisfy the extra condition (1.11).
Next, the convergence relations between local and nonlocal one-phase Stefan problems are verified under the optimal condition (1.9). Similar to (1.10), we rescale the problem (1.6) as follows
\[\begin{cases}\gamma_{\epsilon t}(t,x)=\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^ {n}}k_{\epsilon}(x-y)\gamma_{\epsilon}^{+}(t,y)dy-\frac{1}{\epsilon^{2}} \gamma_{\epsilon}^{+}(t,x)&t>0,\ x\in\mathbb{R}^{n},\\ \gamma_{\epsilon}(0,x)=\gamma_{0}(x)&x\in\mathbb{R}^{n},\end{cases} \tag{1.13}\]
where for simplicity, set \(d=1\) and denote
\[\gamma_{\epsilon}^{+}(t,x)=\gamma_{\epsilon}(t,x)\chi_{\{\gamma_{\epsilon}(t, x)>0\}}.\]
**Theorem 1.4**.: _In the problem (1.13), assume that the kernel function satisfies the assumption **(K)**, the condition (1.2) is valid and the initial data satisfies (1.7). Also, assume that the Fourier transform of \(k\) satisfies (1.9). Then for any given \(T>0\), \(\gamma_{\epsilon}^{+}\) converges to the solution \(\theta\) of the one-phase Stefan problem (1.5) in the following sense:_
\[\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,x)d\tau\to\int_{\min\{s(x),t\}}^{t} \theta(\tau,x)d\tau\ \ \text{a.e. in }(0,T)\times\mathbb{R}^{n},\]
_where we set \(d=A\) in the problem (1.5)._
The convergence relation between the local and nonlocal one-phase Stefan problems is also studied in [5], under the additional conditions that the kernel function is radially symmetric and compactly supported.
From now on, we mainly focus on the nonlocal one-phase Stefan problem and derive some interesting and fundamental properties related to _expansion, boundedness and continuity_ of free boundaries in the nonlocal one-phase Stefan problem (1.6). Due to the lack of regularity in the nonlocal Stefan problems, we will impose an extra condition that \(\gamma_{0}|_{\bar{\Omega}_{0}}\in C(\bar{\Omega}_{0})\) on the initial data \(\gamma_{0}\) when discussing the properties of free boundaries.
**Theorem 1.5**.: _In the problem (1.6), assume that the kernel function satisfies the assumption **(K)**, the condition (1.2) is valid, and the initial data \(\gamma_{0}\) satisfies (1.7) and the extra condition that \(\gamma_{0}|_{\bar{\Omega}_{0}}\in C(\bar{\Omega}_{0})\). We have the following statements._
1. Expansion: _there exists_ \(t_{0}>0\) _such that_ \(\Omega(t)=\Omega(0)\) _for_ \(0\leq t\leq t_{0}\) _and_ \(\Omega(t_{1})\subseteq\Omega(t_{2})\) _for_ \(0<t_{1}<t_{2}\)_._
2. Boundedness: _there exists_ \(R>0\)_, which depends on the initial data only, such that_ \(\Omega(t)\subseteq B_{R}(0)\) _for all_ \(t>0\)_._
Theorem 1.5(i) is also proved in [5], where the kernel function is assumed to be compactly supported and radially symmetric. For the nonlocal two-phase Stefan problem (1.1), due to the interaction between \(\Omega(t)\) and \(\Omega^{-}(t)\) denoted in (1.4), Theorem 1.5(i) might not hold. However, thanks to the comparison principle, Theorem 1.5(ii) remains true for both \(\Omega(t)\) and \(\Omega^{-}(t)\).
We further investigate the continuity of the free boundary in the nonlocal one-phase Stefan problem. For convenience, we prepare an extra assumption about the kernel function as follows
**(K1)**: \(k(x)\) is radially symmetric, decreasing in \(|x|\).
**Theorem 1.6**.: _Under the conditions of Theorem 1.5, if in addition \(\bar{\Omega}_{0}\) is convex and the assumption **(K1)** is valid, then \(\Omega(t)\) expands continuously._
In Theorem 1.6, extra conditions on the kernel function \(k(x)\) and the initial domain \(\Omega_{0}\) are needed to guarantee the continuous expansion of the free boundary \(\partial\Omega(t)\). A natural question is what happens without these extra conditions. Two examples are constructed to show that when either the extra condition on the kernel function or that on the initial domain \(\Omega_{0}\) in Theorem 1.6 is violated, the region \(\Omega(t)\) can nucleate at a distant location. This is the so-called _jumping phenomenon_. Since nonlocal dispersal describes movement between non-adjacent spatial locations, jumping phenomena are natural. They also reflect the essential differences between local and nonlocal dispersal operators.
We also point out that, if the initial data is allowed to be nonconstant outside \(\bar{\Omega}_{0}\), then, similarly to [5, Theorem 4.6], where the kernel function is assumed to be compactly supported and radially symmetric, jumping phenomena can occur for properly chosen initial data. Indeed, the conclusion remains valid as long as the kernel function satisfies the assumption **(K)**. We omit the proof since it is similar.
To conclude the introduction, the main features of our paper are summarized as follows.
* Formulation of a new nonlocal model for the two-phase Stefan problem, where the nonlocal version of the one-phase Stefan problem arises naturally as a special case.
* The optimal condition (1.9) for the pointwise convergence between the local and nonlocal one-phase Stefan problems in Theorem 1.4.
* An equivalent characterization, in Proposition 1.2, between condition (i) on the kernel function and condition (ii), i.e., (1.9), on its Fourier transform.
* For the local and global existence in Theorem 1.1, and the expansion and boundedness of free boundaries in Theorem 1.5, we only require the basic assumption **(K)** on the kernel functions.
* The sufficient conditions derived in Theorem 1.6 for the continuous expansion of the free boundary when the initial data outside the initial domain \(\Omega_{0}\) is assumed to be a negative constant. Counterexamples are constructed to demonstrate that jumping phenomena can happen when these sufficient conditions are violated.
This paper is organized as follows. Theorem 1.1 and some preliminary results for the problem (1.1) are established in Section 2. In Section 3, we focus on the convergence relations between local and nonlocal Stefan problems and present the proofs of Theorems 1.3 and 1.4. In Section 4, Theorems 1.5 and 1.6, concerning properties of the free boundary of the nonlocal Stefan problem, are verified. Moreover, we construct two examples where jumping phenomena happen when one of the additional assumptions in Theorem 1.6 is violated. Finally, the proof of Proposition 1.2 is included in the appendix.
## 2 Wellposedness and preliminaries
### Local and global existence
We first verify the local and global existence of solutions to the nonlocal version of the two-phase Stefan problem (1.1). The same arguments apply verbatim to the nonlocal version of the one-phase Stefan problem (1.6).
Proof of Theorem 1.1.: Denote \(M_{0}=\|\gamma_{0}\|_{L^{\infty}(\mathbb{R}^{n})}\), \(\mathbb{Y}=L^{\infty}(\mathbb{R}^{n})\), for \(s>0\),
\[\mathbb{X}_{s}=\left\{\phi\in C([0,s),\mathbb{Y})\,\big{|}\,\phi(0,\cdot)=\gamma _{0}(\cdot),\ \|\phi(t,\cdot)\|_{L^{\infty}(\mathbb{R}^{n})}\leq 2M_{0},\,t\in[0,s) \right\},\]
and
\[\|\phi\|_{C([0,s),\mathbb{Y})}=\sup_{0\leq t<s}\|\phi(t,\cdot)\|_{L^{\infty}( \mathbb{R}^{n})}.\]
For \(\phi\in\mathbb{X}_{s}\), \(0<t<s\), define
\[\mathcal{T}\phi =\gamma_{0}(x)+a\int_{0}^{t}\int_{\{\phi>0\}}k(x-y)\phi(\tau,y) dyd\tau-a\int_{0}^{t}\phi(\tau,x)\chi_{\{\phi>0\}}d\tau\] \[+b\int_{0}^{t}\int_{\{\phi<-\ell_{0}\}}\eta(x-y)(\phi(\tau,y)+ \ell_{0})dyd\tau-b\int_{0}^{t}(\phi(\tau,x)+\ell_{0})\chi_{\{\phi<-\ell_{0}\} }d\tau.\]
Then it is routine to show that \(\mathcal{T}\phi\in C([0,s),\mathbb{Y})\), \(\mathcal{T}\phi(0,\cdot)=\gamma_{0}(\cdot)\) and
\[\|\mathcal{T}\phi\|_{C([0,s),L^{\infty}(\mathbb{R}^{n}))}\leq M_{0}+2as\|\phi \|_{C([0,s),\mathbb{Y})}+2bs\|\phi\|_{C([0,s),\mathbb{Y})}\leq M_{0}+4s\left( a+b\right)M_{0}.\]
Moreover, for \(\phi_{1},\phi_{2}\in\mathbb{X}_{s}\),
\[\|\mathcal{T}\phi_{1}-\mathcal{T}\phi_{2}\|_{C([0,s),\mathbb{Y})}\leq 2as\| \phi_{1}-\phi_{2}\|_{C([0,s),\mathbb{Y})}+2bs\|\phi_{1}-\phi_{2}\|_{C([0,s), \mathbb{Y})}.\]
Thus it is obvious that there exists \(t_{0}>0\), which depends on \(a\), \(b\) and \(M_{0}\) only and is sufficiently small, such that for \(0<s\leq t_{0}\), \(\mathcal{T}\) maps \(\mathbb{X}_{s}\) into \(\mathbb{X}_{s}\) and \(\mathcal{T}\) is a contraction mapping in \(\mathbb{X}_{s}\). Hence by the contraction mapping theorem, for \(0<s\leq t_{0}\), there exists a unique \(\gamma\in\mathbb{X}_{s}\) satisfying
\[\gamma(t,x) =\gamma_{0}(x)+a\int_{0}^{t}\int_{\{\gamma>0\}}k(x-y)\gamma(\tau, y)dyd\tau-a\int_{0}^{t}\gamma(\tau,x)\chi_{\{\gamma>0\}}d\tau\] \[+b\int_{0}^{t}\int_{\{\gamma<-\ell_{0}\}}\eta(x-y)(\gamma(\tau,y) +\ell_{0})dyd\tau-b\int_{0}^{t}(\gamma(\tau,x)+\ell_{0})\chi_{\{\gamma<-\ell_ {0}\}}d\tau\]
for \(0<t<s\), \(x\in\mathbb{R}^{n}\). Thus, obviously \(\gamma\) is the unique solution to the problem (1.1).
Let \((0,T_{\max})\) denote the maximal time interval for which the solution \(\gamma(t,x)\) of the problem (1.1) exists. It remains to show \(T_{\max}=+\infty\). For this purpose, it suffices to show that \(\|\gamma(t,\cdot)\|_{L^{\infty}(\mathbb{R}^{n})}\) is bounded in \((0,T_{\max})\). To be more specific, we claim that \(\gamma\)_satisfies the estimate (1.8) in \((0,T_{\max})\)._
Fix any \(0<T<T_{\max}\). First, assume that the kernel functions \(k\) and \(\eta\) are compactly supported. Then since \(\bar{\Omega}_{0}\) is bounded, it is standard to show that \(\{\gamma(t,x)\geq 0\}\) and \(\{\gamma(t,x)\leq-\ell_{0}\}\) remain bounded for \(0<t<T\).
Notice that if \(|\{\gamma_{0}(x)>0\}|=0\), then by the equation satisfied by \(\gamma(t,x)\), one has
\[\gamma(t,x)\leq\operatorname*{ess}\sup_{\mathbb{R}^{n}}\gamma_{0},\ \ 0<t<T,\,x\in\mathbb{R}^{n}.\]
Now we consider the case that \(|\{\gamma_{0}(x)>0\}|>0\). Based on the problem (1.1), for any \(1<p<+\infty\), \(0<t<T\), one has
\[(\gamma^{+})^{p-1}\gamma_{t}(t,x)\leq(\gamma^{+})^{p-1}\left(a\int_{\{\gamma>0 \}}\!\!\!k(x-y)\gamma(t,y)dy-a\gamma(t,x)\chi_{\{\gamma>0\}}\right).\]
Then direct computation yields that for \(0<t<T\),
\[\frac{1}{p}\frac{d}{dt}\int_{\mathbb{R}^{n}}(\gamma^{+}(t,x))^{p}dx\leq a\int_{\mathbb{R}^{n}}(\gamma^{+}(t,x))^{p-1}\left(\int_{\mathbb{R}^{n}}k(x-y)\gamma^{+}(t,y)dy-\gamma^{+}(t,x)\right)dx\] \[\leq a\int_{\mathbb{R}^{n}}(\gamma^{+}(t,x))^{p-1}\left(\int_{\mathbb{R}^{n}}k(x-y)dy\right)^{\frac{p-1}{p}}\left(\int_{\mathbb{R}^{n}}k(x-y)(\gamma^{+}(t,y))^{p}dy\right)^{\frac{1}{p}}dx-a\|\gamma^{+}(t,\cdot)\|_{L^{p}(\mathbb{R}^{n})}^{p}\] \[\leq a\|\gamma^{+}(t,\cdot)\|_{L^{p}(\mathbb{R}^{n})}^{p-1}\left(\int_{\mathbb{R}^{n}}\int_{\mathbb{R}^{n}}k(x-y)(\gamma^{+}(t,y))^{p}dydx\right)^{\frac{1}{p}}-a\|\gamma^{+}(t,\cdot)\|_{L^{p}(\mathbb{R}^{n})}^{p}\leq 0,\]
where Hölder's inequality is applied twice and \(\int_{\mathbb{R}^{n}}k(x-y)dy=1\).
Hence for any \(1<p<+\infty\), \(0<t<T\),
\[\|\gamma^{+}(t,\cdot)\|_{L^{p}(\mathbb{R}^{n})}\leq\|\gamma^{+}(0,\cdot)\|_{L^ {p}(\mathbb{R}^{n})},\]
and it follows that
\[\gamma(t,x)\leq\mbox{ess}\sup_{\mathbb{R}^{n}}\gamma_{0},\ \ 0<t<T,\,x\in\mathbb{R}^{n}.\]
Similar arguments can be applied on \((\gamma(t,x)+\ell_{0})^{-}\) to derive that
\[\gamma(t,x)\geq\mbox{ess}\inf_{\mathbb{R}^{n}}\gamma_{0},\ \ 0<t<T,\,x\in \mathbb{R}^{n}.\]
The claim is proved for compactly supported kernel functions since \(T\in(0,T_{\max})\) is arbitrary.
Now consider the case that the kernel functions \(k\) and \(\eta\) are not compactly supported. Then there exists a sequence of kernels \(k_{j}\), \(\eta_{j}\), \(j\geq 1\), which are compactly supported, satisfy the assumption **(K)**, and
\[\lim_{j\to\infty}\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}=0,\ \lim_{j\to\infty}\|\eta_{j}-\eta\|_{L^{1}( \mathbb{R}^{n})}=0. \tag{2.1}\]
Let \(\gamma_{j}\) denote the solution to the problem (1.1) with \(k\) replaced by \(k_{j}\) and \(\eta\) replaced by \(\eta_{j}\). Set \(w_{j}=\gamma-\gamma_{j}\), \(j\geq 1\). Then \(w_{j}\) satisfies
\[\begin{cases}(w_{j})_{t}(t,x)=a\int_{\{\gamma>0\}}k(x-y)\gamma(t,y)dy-a\gamma(t,x)\chi_{\{\gamma>0\}}\\ \qquad-a\int_{\{\gamma_{j}>0\}}k_{j}(x-y)\gamma_{j}(t,y)dy+a\gamma_{j}(t,x)\chi_{\{\gamma_{j}>0\}}\\ \qquad+b\int_{\{\gamma<-\ell_{0}\}}\eta(x-y)(\gamma(t,y)+\ell_{0})dy-b(\gamma(t,x)+\ell_{0})\chi_{\{\gamma<-\ell_{0}\}}\\ \qquad-b\int_{\{\gamma_{j}<-\ell_{0}\}}\eta_{j}(x-y)(\gamma_{j}(t,y)+\ell_{0})dy+b(\gamma_{j}(t,x)+\ell_{0})\chi_{\{\gamma_{j}<-\ell_{0}\}}&0<t<T_{\max},\ x\in\mathbb{R}^{n},\\ w_{j}(0,x)=0&x\in\mathbb{R}^{n}.\end{cases}\]
Then for \(w_{j}>0\), direct computation yields that
\[(w_{j})_{t}(t,x) \leq a\int_{\{\gamma>0\}}k(x-y)\left(\gamma(t,y)-\gamma_{j}(t,y)\right) dy+a\int_{\{\gamma>0\}}k(x-y)\gamma_{j}(t,y)dy\] \[-a\int_{\{\gamma_{j}>0\}}\left(k_{j}(x-y)-k(x-y)\right)\gamma_{j}( t,y)dy-a\int_{\{\gamma_{j}>0\}}k(x-y)\gamma_{j}(t,y)dy\] \[+b\int_{\{\gamma<-\ell_{0}\}}\eta(x-y)(\gamma(t,y)+\ell_{0})dy-b \int_{\{\gamma_{j}<-\ell_{0}\}}\eta(x-y)(\gamma(t,y)+\ell_{0})dy\] \[+b\int_{\{\gamma_{j}<-\ell_{0}\}}\eta(x-y)(\gamma(t,y)-\gamma_{j} (t,y))dy\] \[+b\int_{\{\gamma_{j}<-\ell_{0}\}}(\eta(x-y)-\eta_{j}(x-y))(\gamma _{j}(t,y)+\ell_{0})dy\] \[\leq (a+b)\|w_{j}\|_{L^{\infty}(\mathbb{R}^{n})}+aM_{0}\|k_{j}-k\|_{L ^{1}(\mathbb{R}^{n})}+bM_{0}\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})},\]
where the last inequality follows from the fact that \(\gamma_{j}\) satisfies the estimate (1.8). Similarly for \(w_{j}<0\), we have
\[(-w_{j})_{t}(t,x)\leq(a+b)\|w_{j}\|_{L^{\infty}(\mathbb{R}^{n})}+aM_{0}\|k_{j} -k\|_{L^{1}(\mathbb{R}^{n})}+bM_{0}\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})}.\]
The above two inequalities indicate that for \(0<t<T_{\max}\),
\[|w_{j}(t,x)|=\lim_{\delta\to 0}\int_{0}^{t}\frac{\partial}{ \partial\tau}\left[w_{j}^{2}(\tau,x)+\delta^{2}\right]^{\frac{1}{2}}\,d\tau \tag{2.2}\] \[= \lim_{\delta\to 0}\int_{0}^{t}\frac{w_{j}(\tau,x)}{\left[w_{j}^{2}( \tau,x)+\delta^{2}\right]^{\frac{1}{2}}}\frac{\partial}{\partial\tau}w_{j}( \tau,x)d\tau\] \[\leq (a+b)\int_{0}^{t}\|w_{j}\|_{L^{\infty}(\mathbb{R}^{n})}(\tau)d \tau+aM_{0}\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}t+bM_{0}\|\eta_{j}-\eta\|_{L^{1} (\mathbb{R}^{n})}t.\]
Denote
\[h_{j}(t)=\int_{0}^{t}\|w_{j}\|_{L^{\infty}(\mathbb{R}^{n})}(\tau)d\tau,\]
then (2.2) implies that for \(0<t<T_{\max}\),
\[h_{j}^{\prime}(t)\leq(a+b)h_{j}(t)+aM_{0}\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}t+ bM_{0}\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})}t.\]
Since \(h_{j}(0)=0\), Gronwall's inequality yields that for \(0<t<T_{\max}\),
\[h_{j}(t)\leq\frac{M_{0}}{(a+b)^{2}}e^{(a+b)t}\left(a\|k_{j}-k\|_{L^{1}( \mathbb{R}^{n})}+b\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})}\right),\]
which, together with (2.2), yields that for \(0<t<T_{\max}\),
\[\|w_{j}\|_{L^{\infty}(\mathbb{R}^{n})}(t)\leq M_{0}\left(\frac{1}{a+b}e^{(a+b )t}+t\right)\left(a\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}+b\|\eta_{j}-\eta\|_{L^{ 1}(\mathbb{R}^{n})}\right). \tag{2.3}\]
This, together with (2.1) and the fact that \(\gamma_{j}\) satisfies the estimate (1.8) for all \(j\geq 1\), implies the desired claim for general kernel functions under the assumption **(K)**.
Finally, it is routine to verify that if \(\gamma_{0}|_{\bar{\Omega}_{0}}\in C(\bar{\Omega}_{0})\), then \(\gamma(t,\cdot)\) is continuous in \(\bar{\Omega}_{0}\) and in \(\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\) for any \(t>0\).
### Preliminaries
We first present the comparison principle for the nonlocal version of the two-phase Stefan problem (1.1) and omit the proof since it is standard. Similarly, the comparison principle is also valid for the nonlocal version of the one-phase Stefan problem (1.6).
**Proposition 2.1**.: _Assume that the conditions of Theorem 1.1 are valid. Also assume that \(\gamma_{0}^{*}\in L^{\infty}(\mathbb{R}^{n})\), \(\gamma_{0}^{*}(x)=-\alpha_{0}^{*}\) for \(x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\ \alpha_{0}^{*}\in(0,\ell_{0})\). Let \(\gamma^{*}\) denote the solution to the problem (1.1) with initial data \(\gamma_{0}^{*}\). If \(\gamma_{0}^{*}\geq\gamma_{0}\), then \(\gamma^{*}\geq\gamma\) for all \(t>0\)._
Moreover, we present a type of strong maximum principle for the nonlocal version of the one-phase Stefan problem (1.6).
**Proposition 2.2**.: _Under the conditions of Theorem 1.5, given \(s\geq 0\), we have \(\gamma(t,x)>0\) in \(\Omega(s)\) for \(t\geq s\)._
Proof.: First, we claim that _if \(x\in\{x\in\Omega(s)\,|\,\gamma(s,x)>0\}\), then \(\gamma(t,x)>0\) for all \(t>s\)_. Due to the continuity of the solution in \(t\), we only need to consider the case that \(s>0\). According to
\[\gamma_{t}(t,x)=d\int_{\{\gamma>0\}}k(x-y)\gamma(t,y)dy-d\gamma(t,x)\chi_{\{ \gamma>0\}}\geq-d\gamma(t,x)\chi_{\{\gamma>0\}},\]
the claim follows immediately.
Next we consider the initial domain \(\bar{\Omega}_{0}\). Set
\[\gamma_{0\delta}(x)=\begin{cases}\gamma_{0}(x)+\delta&x\in\bar{\Omega}_{0},\\ \gamma_{0}(x)&x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\end{cases}\]
where \(\delta>0\), and let \(\gamma_{\delta}\) denote the solution to the problem (1.6) with the initial data (1.7), where \(\gamma_{0}\) is replaced by \(\gamma_{0\delta}\). Thanks to the above claim, one sees that \(\gamma_{\delta}(t,x)>0\) for \(t>0\), \(x\in\bar{\Omega}_{0}\). By letting \(\delta\to 0^{+}\), it is routine to derive that \(\gamma(t,x)\geq 0\) for \(t>0\), \(x\in\bar{\Omega}_{0}\), i.e., \(\bar{\Omega}_{0}\subseteq\Omega(t)\) for \(t>0\).
Moreover, since \(\gamma_{0}|_{\bar{\Omega}_{0}}\geq 0,\ \gamma_{0}|_{\bar{\Omega}_{0}}\not \equiv 0\), the claim at the beginning indicates that \(\{x\in\bar{\Omega}_{0}\,|\,\gamma(t,x)>0\}\) is not empty for \(t>0\). Suppose that there exists \(t_{0}>0\) such that \(\gamma(t_{0},x)\) touches zero somewhere in \(\bar{\Omega}_{0}\). By choosing
\[x_{0}\in\partial\{x\in\bar{\Omega}_{0}\,|\,\gamma(t_{0},x)>0\}\bigcap\{x\in \bar{\Omega}_{0}\,|\,\gamma(t_{0},x)=0\},\]
we have
\[0\geq\gamma_{t}(t_{0},x_{0})=d\int_{\{\gamma(t_{0},y)>0\}}k(x_{0}-y)\gamma(t_ {0},y)dy>0,\]
where the strict inequality is due to the assumption **(K)** and the choice of \(x_{0}\). This is a contradiction and thus \(\gamma(t,x)>0\) for \(t>0\), \(x\in\bar{\Omega}_{0}\).
It remains to consider the set \(\{x\in\Omega(s)\setminus\bar{\Omega}_{0}\,|\,\gamma(s,x)=0\}\), when it is not empty. Fix \(x^{*}\in\{x\in\Omega(s)\setminus\bar{\Omega}_{0}\,|\,\gamma(s,x)=0\}\) and let \(s_{1}\) denote the moment when \(\gamma(t,x^{*})\) first touches zero. Obviously \(s_{1}\leq s\) and by the equation satisfied by \(\gamma\), we have
\[\ell_{0}=\int_{0}^{s_{1}}d\int_{\{\gamma(t,y)>0\}}k(x^{*}-y)\gamma(t,y)dydt.\]
Then obviously there exists \(t_{1}\in(0,s_{1})\) such that
\[\int_{\{\gamma(t_{1},y)>0\}}k(x^{*}-y)\gamma(t_{1},y)dy>0. \tag{2.4}\]
We claim that _for any \(t>t_{1}\), \(\int_{\{\gamma(t,y)>0\}}k(x^{*}-y)\gamma(t,y)dy>0\)_. Suppose that the claim is not true, i.e., there exists \(t_{2}>t_{1}\) such that
\[\int_{\{\gamma(t_{2},y)>0\}}k(x^{*}-y)\gamma(t_{2},y)dy=0.\]
This implies that \(\gamma(t_{2},y)\leq 0\) in the set \(\{y\in\mathbb{R}^{n}\,|\,k(x^{*}-y)>0\}\). Again thanks to the claim at the beginning, we have \(\gamma(t_{1},y)\leq 0\) in the set \(\{y\in\mathbb{R}^{n}\,|\,k(x^{*}-y)>0\}\), which contradicts (2.4). The claim is proved.
According to this claim and the choice of \(s\), \(x^{*}\), one sees that
\[\gamma_{t}(s,x^{*})=d\int_{\{\gamma(s,y)>0\}}k(x^{*}-y)\gamma(s,y)dy>0.\]
Hence for \(t>s\geq 0\), \(\gamma(t,x^{*})>0\).
Finally, some a priori estimates are established for the nonlocal version of the two-phase Stefan problem.
**Lemma 2.3**.: _Under the assumptions of Theorem 1.1, there exists a constant \(C_{1}>0\), which depends on the initial data only, such that for given \(1\leq p\leq\infty\), we have_
\[\|\gamma^{+}(t,\cdot)\|_{L^{p}(\mathbb{R}^{n})}\leq C_{1},\ \|(\gamma(t,\cdot)+ \ell_{0})^{-}\|_{L^{p}(\mathbb{R}^{n})}\leq C_{1},\ \ t>0.\]
Proof.: Notice that if \(\phi\in L^{1}(\mathbb{R}^{n})\bigcap L^{\infty}(\mathbb{R}^{n})\), then for any \(p>1\), \(\phi\in L^{p}(\mathbb{R}^{n})\) and
\[\|\phi\|_{L^{p}(\mathbb{R}^{n})}\leq\left(\|\phi\|_{L^{\infty}(\mathbb{R}^{n}) }^{p-1}\|\phi\|_{L^{1}(\mathbb{R}^{n})}\right)^{\frac{1}{p}}\leq\left(\|\phi\| _{L^{\infty}(\mathbb{R}^{n})}+1\right)\left(\|\phi\|_{L^{1}(\mathbb{R}^{n})}+1 \right).\]
Hence it suffices to verify the statements for \(p=1\) and \(p=\infty\).
Indeed, when \(p=\infty\), the conclusion is obvious due to Theorem 1.1, i.e.,
\[\|\gamma(t,\cdot)\|_{L^{\infty}(\mathbb{R}^{n})}\leq\|\gamma_{0}\|_{L^{\infty }(\mathbb{R}^{n})}. \tag{2.5}\]
In order to estimate \(\|\gamma^{+}(t,\cdot)\|_{L^{1}(\mathbb{R}^{n})}\), we first consider the case that both \(k\) and \(\eta\) are compactly supported. Let \(\hat{\gamma}(t,x)\) denote the solution to the problem (1.1) with the initial data replaced by
\[\hat{\gamma}(0,x)=\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega} _{0})},\ x\in\bar{\Omega}_{0},\ \ \hat{\gamma}(0,x)=-\alpha_{0},\ x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0}.\]
By Theorem 1.1 and Proposition 2.1, we have
\[\hat{\gamma}(t,x)\geq\gamma(t,x),\ -\alpha_{0}\leq\hat{\gamma}(t,x)\leq\| \gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})},\ \ t>0,\,x\in\mathbb{R}^{n}. \tag{2.6}\]
Since \(\bar{\Omega}_{0}\) is bounded and \(k\), \(\eta\) are compactly supported, for \(t>0\), it is routine to show that \(\{\hat{\gamma}(t,x)\geq 0\}\) remains bounded. Set
\[\Sigma^{+}(t)=\bigcup_{0<\tau<t}\{\hat{\gamma}(\tau,x)\geq 0\}.\]
Then by direct computation, for \(0<\tau<t\),
\[\int_{\Sigma^{+}(t)}\hat{\gamma}_{\tau}(\tau,x)dx\leq a\int_{\Sigma^{+}(t)} \int_{\mathbb{R}^{n}}k(x-y)\hat{\gamma}^{+}(\tau,y)dydx-a\int_{\Sigma^{+}(t)} \hat{\gamma}^{+}(\tau,x)dx\leq 0.\]
Thus
\[0\leq\int_{\Sigma^{+}(t)}\hat{\gamma}(t,x)dx\leq\int_{\Sigma^{+}(t)}\hat{ \gamma}(0,x)dx=-\alpha_{0}\ |\Sigma^{+}(t)\setminus\bar{\Omega}_{0}|+\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L ^{\infty}(\bar{\Omega}_{0})}\ |\bar{\Omega}_{0}|,\]
which implies that
\[|\{\hat{\gamma}(t,x)\geq 0\}|\leq|\Sigma^{+}(t)|\leq\left(1+\frac{\|\gamma_{0} |_{\bar{\Omega}_{0}}\|_{L^{\infty}(\Omega_{0})}}{\alpha_{0}}\right)|\bar{ \Omega}_{0}|.\]
Hence thanks to (2.6), for any given \(t>0\)
\[\|\gamma^{+}(t,\cdot)\|_{L^{1}(\mathbb{R}^{n})}\leq\|\gamma_{0}|_{\bar{\Omega }_{0}}\|_{L^{\infty}(\Omega_{0})}\left(1+\frac{\|\gamma_{0}|_{\bar{\Omega}_{0} }\|_{L^{\infty}(\Omega_{0})}}{\alpha_{0}}\right)|\bar{\Omega}_{0}|. \tag{2.7}\]
Now consider the case that the kernel functions \(k\) and \(\eta\) satisfy the assumption **(K)**, but are not compactly supported. Then there exists a sequence of kernel functions \(k_{j}\), \(\eta_{j}\), \(j\geq 1\), which are compactly supported, satisfy the assumption **(K)**, and
\[\lim_{j\to\infty}\|k_{j}-k\|_{L^{1}(\mathbb{R}^{n})}=0,\ \lim_{j\to\infty}\|\eta_{j}-\eta\|_{L^{1}(\mathbb{R}^{n})}=0.\]
Let \(\gamma_{j}\) denote the solution to the problem (1.1) with \(k\) replaced by \(k_{j}\) and \(\eta\) replaced by \(\eta_{j}\). Similar to the proof of Theorem 1.1, we have
\[\lim_{j\to\infty}\|\gamma_{j}^{+}-\gamma^{+}\|_{L^{\infty}(\mathbb{R}^{n})} \leq\lim_{j\to\infty}\|\gamma_{j}-\gamma\|_{L^{\infty}(\mathbb{R}^{n})}=0.\]
This, together with (2.7), implies that for any \(R>0\),
\[\int_{B_{R}(0)}\gamma^{+}(t,x)dx=\lim_{j\to\infty}\int_{B_{R}(0)}\gamma_{j}^{+ }(t,x)dx\leq\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})} \left(1+\frac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0}) }}{\alpha_{0}}\right)|\bar{\Omega}_{0}|.\]
Since \(R\) is arbitrary, for any given \(t>0\),
\[\|\gamma^{+}(t,\cdot)\|_{L^{1}(\mathbb{R}^{n})}\leq\|\gamma_{0}|_{\bar{\Omega }_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})}\left(1+\frac{\|\gamma_{0}|_{\bar{ \Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})}}{\alpha_{0}}\right)|\bar{\Omega }_{0}|.\]
Obviously, \(\|(\gamma(t,\cdot)+\ell_{0})^{-}\|_{L^{1}(\mathbb{R}^{n})}\) can be estimated in a similar way. The proof is complete.
**Lemma 2.4**.: _Under the assumptions of Theorem 1.1, we have_
\[\int_{\mathbb{R}^{n}}|\gamma(t,x+h)-\gamma(t,x)|\,dx\leq\int_{\mathbb{R}^{n}}|\gamma_{0}(x+h)-\gamma_{0}(x)|\,dx,\ \ t>0,\,h\in\mathbb{R}^{n}.\]
Proof.: First of all, fix \(x\), \(h\in\mathbb{R}^{n}\). For \(\delta\neq 0\), introduce \(\mu_{\delta}(X)=\left(X^{2}+\delta^{2}\right)^{\frac{1}{2}}.\)
According to the problem (1.1) satisfied by \(\gamma\), it is routine to verify that
\[\frac{\partial}{\partial t}\mu_{\delta}(\gamma(t,x+h)-\gamma(t,x))\] \[= \frac{\gamma(t,x+h)-\gamma(t,x)}{\left[\left(\gamma(t,x+h)-\gamma(t,x)\right)^{2}+\delta^{2}\right]^{\frac{1}{2}}}\left(\gamma(t,x+h)-\gamma(t,x)\right)_{t}\] \[\leq \frac{|\gamma(t,x+h)-\gamma(t,x)|}{\left[\left(\gamma(t,x+h)-\gamma(t,x)\right)^{2}+\delta^{2}\right]^{\frac{1}{2}}}\] \[\times\left(a\int_{\mathbb{R}^{n}}k(x-y)|\gamma^{+}(t,y+h)-\gamma^{+}(t,y)|dy-a|\gamma^{+}(t,x+h)-\gamma^{+}(t,x)|\right)\] \[+\frac{|\gamma(t,x+h)-\gamma(t,x)|}{\left[\left(\gamma(t,x+h)-\gamma(t,x)\right)^{2}+\delta^{2}\right]^{\frac{1}{2}}}\cdot b\int_{\mathbb{R}^{n}}\eta(x-y)|(\gamma+\ell_{0})^{-}(t,y+h)-(\gamma+\ell_{0})^{-}(t,y)|dy\] \[-\frac{|\gamma(t,x+h)-\gamma(t,x)|}{\left[\left(\gamma(t,x+h)-\gamma(t,x)\right)^{2}+\delta^{2}\right]^{\frac{1}{2}}}\cdot b|(\gamma+\ell_{0})^{-}(t,x+h)-(\gamma+\ell_{0})^{-}(t,x)|,\]
which yields that
\[|\gamma(t,x+h)-\gamma(t,x)|-|\gamma_{0}(x+h)-\gamma_{0}(x)|\] \[= \lim_{\delta\to 0}\left[\mu_{\delta}(\gamma(t,x+h)-\gamma(t,x))-\mu_{\delta}(\gamma_{0}(x+h)-\gamma_{0}(x))\right]\] \[= \lim_{\delta\to 0}\int_{0}^{t}\frac{\partial}{\partial\tau}\mu_{\delta}(\gamma(\tau,x+h)-\gamma(\tau,x))d\tau\] \[\leq a\int_{0}^{t}\left(\int_{\mathbb{R}^{n}}k(x-y)|\gamma^{+}(\tau,y+h)-\gamma^{+}(\tau,y)|dy-|\gamma^{+}(\tau,x+h)-\gamma^{+}(\tau,x)|\right)d\tau\] \[+b\int_{0}^{t}\int_{\mathbb{R}^{n}}\eta(x-y)|(\gamma+\ell_{0})^{-}(\tau,y+h)-(\gamma+\ell_{0})^{-}(\tau,y)|dyd\tau\] \[-b\int_{0}^{t}|(\gamma+\ell_{0})^{-}(\tau,x+h)-(\gamma+\ell_{0})^{-}(\tau,x)|d\tau.\]
Thus for any \(R>0\),
\[\int_{B_{R}(0)}|\gamma(t,x+h)-\gamma(t,x)|\,dx-\int_{B_{R}(0)}|\gamma_{0}(x+h)-\gamma_{0}(x)|\,dx\] \[\leq a\int_{0}^{t}\int_{B_{R}(0)}\left(\int_{\mathbb{R}^{n}}k(x-y)|\gamma^{+}(\tau,y+h)-\gamma^{+}(\tau,y)|dy-|\gamma^{+}(\tau,x+h)-\gamma^{+}(\tau,x)|\right)dxd\tau\] \[+b\int_{0}^{t}\int_{B_{R}(0)}\int_{\mathbb{R}^{n}}\eta(x-y)|(\gamma+\ell_{0})^{-}(\tau,y+h)-(\gamma+\ell_{0})^{-}(\tau,y)|dydxd\tau\] \[-b\int_{0}^{t}\int_{B_{R}(0)}|(\gamma+\ell_{0})^{-}(\tau,x+h)-(\gamma+\ell_{0})^{-}(\tau,x)|dxd\tau\] \[\leq a\int_{0}^{t}\left(\int_{\mathbb{R}^{n}}|\gamma^{+}(\tau,x+h)-\gamma^{+}(\tau,x)|dx-\int_{B_{R}(0)}|\gamma^{+}(\tau,x+h)-\gamma^{+}(\tau,x)|dx\right)d\tau\] \[+b\int_{0}^{t}\int_{\mathbb{R}^{n}}|(\gamma+\ell_{0})^{-}(\tau,x+h)-(\gamma+\ell_{0})^{-}(\tau,x)|dxd\tau\] \[-b\int_{0}^{t}\int_{B_{R}(0)}|(\gamma+\ell_{0})^{-}(\tau,x+h)-(\gamma+\ell_{0})^{-}(\tau,x)|dxd\tau.\]
Notice that \(\gamma_{0}(x+h)-\gamma_{0}(x)\) is compactly supported. Then thanks to Lemma 2.3, by letting \(R\to\infty\), we have for \(t>0\),
\[\int_{\mathbb{R}^{n}}|\gamma(t,x+h)-\gamma(t,x)|\,dx\leq\int_{\mathbb{R}^{n} }|\gamma_{0}(x+h)-\gamma_{0}(x)|\,dx.\]
The proof is complete.
**Remark 2.1**.: _The a priori estimates in Lemmas 2.3 and 2.4 are also valid for the nonlocal version of the one-phase Stefan problem, by the same arguments. In particular, these estimates play an important role in proving the convergence relations between local and nonlocal Stefan problems._
## 3 Convergence to the local Stefan problem
### Convergence to the two-phase Stefan problem
Theorem 1.3 concerns the convergence relation between the local and nonlocal two-phase Stefan problems, where, besides the basic assumption **(K)**, the kernel functions are required to satisfy the moment conditions of Proposition 1.2(i) and the third-moment condition (1.11).
Proof of Theorem 1.3.: Fix \(T>0\). For any test function \(\zeta\in C_{c}^{\infty}(\mathbb{R}^{n}\times[0,T))\), it is routine to show that
\[-\int_{0}^{T}\int_{\mathbb{R}^{n}}\gamma_{\epsilon}\zeta_{t}dxdt- \int_{\mathbb{R}^{n}}\gamma_{0}(x)\zeta(0,x)dx\] \[= a\int_{0}^{T}\int_{\mathbb{R}^{n}}\frac{1}{\epsilon^{2}}\left( \int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\zeta(t,y)dy-\zeta(t,x)\right)\gamma_{ \epsilon}^{+}(t,x)dxdt \tag{3.1}\] \[+b\int_{0}^{T}\int_{\mathbb{R}^{n}}\frac{1}{\epsilon^{2}}\left( \int_{\mathbb{R}^{n}}\eta_{\epsilon}(x-y)\zeta(t,y)dy-\zeta(t,x)\right)( \gamma_{\epsilon}(t,x)+\ell_{0})^{-}dxdt.\]
First, thanks to the conditions imposed on the kernel functions \(k\) and \(\eta\), and \(\zeta\in C_{c}^{\infty}(\mathbb{R}^{n}\times[0,T))\), we have
\[\lim_{\epsilon\to 0}\frac{1}{\epsilon^{2}}\left(\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\zeta(t,y)dy-\zeta(t,x)\right)=\lim_{\epsilon\to 0}\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k(z)\left(\zeta(t,x-\epsilon z)-\zeta(t,x)\right)dz=\frac{A}{a}\Delta\zeta(t,x) \tag{3.2}\]
and similarly
\[\lim_{\epsilon\to 0}\frac{1}{\epsilon^{2}}\left(\int_{\mathbb{R}^{n}}\eta_{\epsilon}(x-y)\zeta(t,y)dy-\zeta(t,x)\right)=\frac{B}{b}\Delta\zeta(t,x), \tag{3.3}\]
where both limits are uniform in \(t\) and \(x\), thanks to Proposition 1.2(i) and the third-moment condition (1.11).

Next, thanks to the uniform bound (1.8) and a diagonal argument, there exist a sequence \(\{\epsilon_{j}\}\) with \(\lim_{j\to\infty}\epsilon_{j}=0\) and \(\gamma(t,\cdot)\in L^{\infty}(\mathbb{R}^{n})\) such that for every \(t\in(0,T)\bigcap\mathbb{Q}\) and \(1<p<\infty\),
\[\lim_{j\to\infty}\gamma_{\epsilon_{j}}(t,\cdot)=\gamma(t,\cdot)\ \text{ weakly in }L^{p}_{loc}(\mathbb{R}^{n}). \tag{3.4}\]
Now, we claim that for any \(t\in(0,T)\bigcap\mathbb{Q}^{c}\),
\[\lim_{j\to\infty}\gamma_{\epsilon_{j}}(t,\cdot)=\gamma(t,\cdot)\ \text{ weakly in }L^{p}_{loc}(\mathbb{R}^{n}). \tag{3.5}\]
To prove this claim, fix \(t\in(0,T)\bigcap\mathbb{Q}^{c}\), \(1<p<\infty\) and a bounded set \(\Omega\) in \(\mathbb{R}^{n}\). Obviously, there exist a subsequence of the sequence \(\{\epsilon_{j}\}\), denoted by \(\{\epsilon_{j_{\ell}}\}\), and \(\gamma_{\Omega}(t,\cdot)\in L^{p}(\Omega)\), such that
\[\lim_{j_{\ell}\to\infty}\gamma_{\epsilon_{j_{\ell}}}(t,\cdot)=\gamma_{\Omega} (t,\cdot)\ \text{ weakly in }L^{p}(\Omega). \tag{3.6}\]
We emphasize that the subsequence \(\{\epsilon_{j_{\ell}}\}\) depends on \(t\in(0,T)\bigcap\mathbb{Q}^{c}\). Then fix \(\phi(x)\in C_{c}(\Omega)\),
\[\int_{\Omega}\left(\gamma_{\epsilon_{j}}(t,x)-\gamma_{\Omega}(t, x)\right)\phi(x)dx \tag{3.7}\] \[= \int_{\Omega}\left(\gamma_{\epsilon_{j}}(t,x)-\gamma_{\epsilon_{ j_{\ell}}}(t,x)\right)\phi(x)dx+\int_{\Omega}\left(\gamma_{\epsilon_{j_{\ell}}}(t,x)- \gamma_{\Omega}(t,x)\right)\phi(x)dx\] \[= \int_{\Omega}\left(\gamma_{\epsilon_{j}}(s,x)-\gamma_{\epsilon_{ j_{\ell}}}(s,x)\right)\phi(x)dx+\int_{\Omega}\left(\gamma_{\epsilon_{j_{\ell}}}(t,x)- \gamma_{\Omega}(t,x)\right)\phi(x)dx\] \[+\int_{\Omega}\left(\gamma_{\epsilon_{j}}(t,x)-\gamma_{\epsilon _{j}}(s,x)\right)\phi(x)dx-\int_{\Omega}\left(\gamma_{\epsilon_{j_{\ell}}}(t, x)-\gamma_{\epsilon_{j_{\ell}}}(s,x)\right)\phi(x)dx,\]
where \(s\in(0,T)\bigcap\mathbb{Q}\). Notice that based on the problem (1.10), one has
\[\int_{\mathbb{R}^{n}}\left(\gamma_{\epsilon}(t,x)-\gamma_{\epsilon} (s,x)\right)\phi(x)dx\] \[= a\int_{s}^{t}\int_{\mathbb{R}^{n}}\frac{1}{\epsilon^{2}}\left( \int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\phi(y)dy-\phi(x)\right)\gamma_{\epsilon }^{+}(\tau,x)dxd\tau\] \[+b\int_{s}^{t}\int_{\mathbb{R}^{n}}\frac{1}{\epsilon^{2}}\left( \int_{\mathbb{R}^{n}}\eta_{\epsilon}(x-y)\phi(y)dy-\phi(x)\right)(\gamma_{ \epsilon}(\tau,x)+\ell_{0})^{-}dxd\tau.\]
Thanks to (1.8) and (3.2), there exists a constant \(C\), which is independent of \(\epsilon>0\), such that
\[\Big{|}\int_{\mathbb{R}^{n}}\left(\gamma_{\epsilon}(t,x)-\gamma_{\epsilon}(s,x)\right)\phi(x)dx\Big{|}\leq C|t-s|.\]
Thanks to this estimate, we can choose \(s\in\mathbb{Q}\) close enough to \(t\) to control the last two terms in (3.7). Hence, together with (3.4) and (3.6), it is standard to show that for any \(\phi(x)\in C_{c}(\Omega)\),
\[\lim_{j\to\infty}\int_{\Omega}\left(\gamma_{\epsilon_{j}}(t,x)-\gamma_{\Omega }(t,x)\right)\phi(x)dx=0.\]
Thus
\[\lim_{j\to\infty}\gamma_{\epsilon_{j}}(t,\cdot)=\gamma_{\Omega}(t,\cdot)\ \mbox{ weakly in }L^{p}(\Omega).\]
Since \(1<p<\infty\) is arbitrary, thanks to Theorem 1.1, one sees that
\[\|\gamma_{\Omega}(t,\cdot)\|_{L^{\infty}(\Omega)}\leq\|\gamma_{0}\|_{L^{ \infty}(\mathbb{R}^{n})}.\]
Notice that \(\Omega\subseteq\mathbb{R}^{n}\) is any fixed bounded set, thus due to the uniqueness of weak convergence, we can define \(\gamma(t,\cdot)\in L^{\infty}(\mathbb{R}^{n})\) by setting
\[\gamma(t,x)=\gamma_{\Omega}(t,x)\ \mbox{ a.e. in }\Omega.\]
It follows that
\[\lim_{j\to\infty}\gamma_{\epsilon_{j}}(t,\cdot)=\gamma(t,\cdot)\ \mbox{ weakly in }L^{p}_{loc}(\mathbb{R}^{n}).\]
Since \(t\in(0,T)\bigcap\mathbb{Q}^{c}\) is arbitrary, the claim is proved.
Furthermore, we improve the weak convergence in (3.5) to the strong convergence in \(L^{1}_{loc}(\mathbb{R}^{n})\). For this purpose, fix \(t\in(0,T)\bigcap\mathbb{Q}^{c}\) and a bounded set \(\Omega\subseteq\mathbb{R}^{n}\). Recall that due to the Fréchet-Kolmogorov theorem, Lemmas 2.3 and 2.4, \(\{\gamma_{\epsilon_{j}}(t,\cdot)\}\) is precompact in \(L^{1}(\Omega)\). Thus thanks to (3.5) and the uniqueness of weak limits, it is routine to verify that
\[\lim_{j\to\infty}\gamma_{\epsilon_{j}}(t,\cdot)=\gamma(t,\cdot)\ \mbox{ in }L^{1}(\Omega),\]
i.e., for any \(t\in(0,T)\bigcap\mathbb{Q}^{c}\),
\[\lim_{j\to\infty}\gamma_{\epsilon_{j}}(t,\cdot)=\gamma(t,\cdot)\ \mbox{ in }L^{1}_{loc}(\mathbb{R}^{n}). \tag{3.8}\]
Therefore, by letting \(j\to\infty\), it follows from (3.1), (3.2), (3.3), (3.4) and (3.8) that
\[\int_{0}^{T}\int_{\mathbb{R}^{n}}\left(\gamma\zeta_{t}+\left(A\gamma^{+}+B( \gamma+\ell_{0})^{-}\right)\Delta\zeta\right)dxdt+\int_{\mathbb{R}^{n}}\gamma _{0}(x)\zeta(0,x)dx=0.\]
The uniqueness of the generalized solution to the problem (1.12) yields the desired conclusion.
### Convergence to the one-phase Stefan problem
This subsection is devoted to the proof of Theorem 1.4, where the convergence relations between local and nonlocal one-phase Stefan problems are verified under the optimal condition (1.9) imposed on the kernel function.
It is known that the classical one-phase problem (1.5) can be reduced to a parabolic variational inequality [11, Chapter 1.9]. To be more specific, define
\[v(t,x)=\begin{cases}\int_{0}^{t}\theta(\tau,x)d\tau&\text{if }x\in\bar{\Omega}_{0},\\ 0&\text{if }x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\,t\leq s(x),\\ \int_{s(x)}^{t}\theta(\tau,x)d\tau&\text{if }x\in\mathbb{R}^{n}\setminus\bar{ \Omega}_{0},\,t>s(x),\end{cases}\]
and then transform the problem (1.5) into a variational inequality for the function \(v(t,x)\) as follows
\[\begin{cases}v_{t}-A\Delta v\geq\bar{f}&\text{a.e. in }(0,T)\times\mathbb{R}^{n},\\ v\geq 0&\text{a.e. in }(0,T)\times\mathbb{R}^{n},\\ (v_{t}-A\Delta v-\bar{f})v=0&\text{a.e. in }(0,T)\times\mathbb{R}^{n},\end{cases} \tag{3.9}\]
where \(\bar{f}=\gamma_{0}\) defined in (1.7). It has been proved that there exists a unique solution of the problem (3.9), still denoted by \(v(t,x)\), and
\[D_{x}v,\,D_{x}^{2}v,\,D_{t}v\quad\text{belong to }L^{\infty}((0,T);L^{p}( \mathbb{R}^{n}))\ \text{ for }p<\infty.\]
See [11, Chapter 1.9] for details.
Borrowing this idea, define
\[v_{\epsilon}(t,x)=\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,x)d\tau. \tag{3.10}\]
Obviously, Theorem 1.4 is about the convergence relations between \(v_{\epsilon}\) and \(v\).
First we compute the equation satisfied by \(v_{\epsilon}\). For any \(x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\), let \(s_{\epsilon}(x)\) denote the time, if it exists, at which \(\gamma_{\epsilon}(t,x)\) first reaches zero. Thus
\[\ell_{0}=\frac{1}{\epsilon^{2}}\int_{0}^{s_{\epsilon}(x)}\int_{\mathbb{R}^{n} }k_{\epsilon}(x-y)\gamma_{\epsilon}^{+}(\tau,y)dyd\tau. \tag{3.11}\]
* if \(x\in\bar{\Omega}_{0},t>0\), then \[v_{\epsilon t}-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{ \epsilon}(x-y)v_{\epsilon}(t,y)dy+\frac{1}{\epsilon^{2}}v_{\epsilon}(t,x)\] \[= \gamma_{\epsilon}^{+}(t,x)-\frac{1}{\epsilon^{2}}\int_{\mathbb{R }^{n}}k_{\epsilon}(x-y)\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,y)d\tau dy+ \frac{1}{\epsilon^{2}}\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,x)d\tau\] \[= \int_{0}^{t}\gamma_{\epsilon t}^{+}(\tau,x)d\tau+\gamma_{ \epsilon}^{+}(0,x)-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x -y)\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,y)d\tau dy+\frac{1}{\epsilon^{2}} \int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,x)d\tau\] \[= \gamma_{0}(x);\]
* if \(x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},0<t\leq s_{\epsilon}(x)\), then \(v_{\epsilon}(t,x)=0\). Thus \[v_{\epsilon t}-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)v_{ \epsilon}(t,y)dy+\frac{1}{\epsilon^{2}}v_{\epsilon}(t,x)=-\frac{1}{\epsilon^{2 }}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)v_{\epsilon}(t,y)dy;\]
* if \(x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},t>s_{\epsilon}(x)\), then \[v_{\epsilon t}-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{ \epsilon}(x-y)v_{\epsilon}(t,y)dy+\frac{1}{\epsilon^{2}}v_{\epsilon}(t,x)\] \[= \gamma_{\epsilon}^{+}(t,x)-\frac{1}{\epsilon^{2}}\int_{\mathbb{R }^{n}}k_{\epsilon}(x-y)\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,y)d\tau dy+ \frac{1}{\epsilon^{2}}\int_{0}^{t}\gamma_{\epsilon}^{+}(\tau,x)d\tau\] \[= \int_{s_{\epsilon}(x)}^{t}\gamma_{\epsilon t}^{+}(\tau,x)d\tau- \frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\int_{s_{ \epsilon}(x)}^{t}\gamma_{\epsilon}^{+}(\tau,y)d\tau dy+\frac{1}{\epsilon^{2}} \int_{s_{\epsilon}(x)}^{t}\gamma_{\epsilon}^{+}(\tau,x)d\tau\] \[-\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\int _{0}^{s_{\epsilon}(x)}\gamma_{\epsilon}^{+}(\tau,y)d\tau dy\] \[= -\ell_{0},\] according to (3.11).
Hence one sees that \(v_{\epsilon}\) satisfies
\[\begin{cases}v_{\epsilon t}(t,x)=\frac{1}{\epsilon^{2}}\int_{\mathbb{R}^{n}}k _{\epsilon}(x-y)v_{\epsilon}(t,y)dy-\frac{1}{\epsilon^{2}}v_{\epsilon}(t,x)+f_ {\epsilon}(t,x)&t>0,\ x\in\mathbb{R}^{n},\\ v_{\epsilon}(0,x)=0&x\in\mathbb{R}^{n},\end{cases} \tag{3.12}\]
where
\[f_{\epsilon}(t,x)=\begin{cases}\gamma_{0}(x)&t>0,\ x\in\bar{\Omega}_{0},\\ -\int_{\mathbb{R}^{n}}\frac{1}{\epsilon^{2}}k_{\epsilon}(x-y)v_{ \epsilon}(t,y)dy&0<t\leq s_{\epsilon}(x),\ x\in\mathbb{R}^{n}\setminus\bar{ \Omega}_{0},\\ -\ell_{0}&t>s_{\epsilon}(x),\ x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0}. \end{cases}\]
Secondly, we prepare some useful estimates about \(f_{\epsilon}\).
**Lemma 3.1**.: _Assume that in the problem (1.13), the kernel function \(k\) satisfies the assumption **(K)**, the condition (1.2) is valid and the initial data satisfies (1.7). Then for given \(1\leq p\leq\infty\), \(f_{\epsilon}(t,x)\) is uniformly bounded in \(L^{p}(\mathbb{R}^{n})\) for any \(\epsilon>0\), \(t>0\)._
Proof.: Similar to the proof of Lemma 2.3, if \(\phi\in L^{1}(\mathbb{R}^{n})\bigcap L^{\infty}(\mathbb{R}^{n})\), then for any \(p>1\), \(\phi\in L^{p}(\mathbb{R}^{n})\) and
\[\|\phi\|_{L^{p}(\mathbb{R}^{n})}\leq\left(\|\phi\|_{L^{\infty}(\mathbb{R}^{n}) }^{p-1}\|\phi\|_{L^{1}(\mathbb{R}^{n})}\right)^{\frac{1}{p}}\leq\left(\|\phi\|_ {L^{\infty}(\mathbb{R}^{n})}+1\right)\left(\|\phi\|_{L^{1}(\mathbb{R}^{n})}+1 \right).\]
Hence it suffices to verify the conclusion for \(p=1\) and \(p=\infty\).
Since \(f_{\epsilon}(t,x)=\gamma_{0}\) for \(x\in\bar{\Omega}_{0}\) and \(t>0\), we only need to estimate \(f_{\epsilon}\) outside \(\bar{\Omega}_{0}\). It mainly relies on the following estimates:
\[-f_{\epsilon}(t,x)\in[0,\ell_{0}]\ \ \text{for}\ x\in\mathbb{R}^{n}\setminus\bar{ \Omega}_{0},\ \ \int_{\mathbb{R}^{n}\setminus\bar{\Omega}_{0}}-f_{\epsilon}(t,x)dx\leq\int_{ \Omega_{0}}\gamma_{0}dx. \tag{3.13}\]
Assume that (3.13) holds. It immediately yields that
\[\|f_{\epsilon}(t,\cdot)\|_{L^{\infty}(\mathbb{R}^{n})}\leq\max\,\left\{\|\gamma_{ 0}|_{\bar{\Omega}_{0}}\|_{L^{\infty}(\bar{\Omega}_{0})},\ell_{0}\right\},\ \ \|f_{\epsilon}(t,\cdot)\|_{L^{1}(\mathbb{R}^{n})}\leq 2 \int_{\Omega_{0}}\gamma_{0}dx.\]
The desired conclusion follows.
Now it remains to verify (3.13). In fact, due to (3.11), the first estimate in (3.13) is obvious. Intuitively, the second estimate in (3.13) indicates that \(\int_{\mathbb{R}^{n}\setminus\bar{\Omega}_{0}}-f_{\epsilon}(t,x)dx\) is at most the total energy absorbed outside \(\bar{\Omega}_{0}\) from time \(0\) to \(t\), which cannot exceed the total energy at the initial time, i.e., \(\int_{\Omega_{0}}\gamma_{0}dx\). To be more precise, by (1.13), one has for any large \(R>0\)
\[\int_{B_{R}(0)\setminus\bar{\Omega}_{0}}\left(\gamma_{\epsilon}( t,x)-\gamma_{\epsilon}(0,x)\right)dx \tag{3.14}\] \[= \int_{(B_{R}(0)\setminus\bar{\Omega}_{0})\bigcap\{s_{\epsilon}( x)<t\}}\left(\gamma_{\epsilon}(t,x)-\gamma_{\epsilon}(0,x)\right)dx+\int_{(B_{R}(0) \setminus\bar{\Omega}_{0})\bigcap\{s_{\epsilon}(x)\geq t\}}\left(\gamma_{ \epsilon}(t,x)-\gamma_{\epsilon}(0,x)\right)dx\] \[\geq \int_{(B_{R}(0)\setminus\bar{\Omega}_{0})\bigcap\{s_{\epsilon}( x)<t\}}\ell_{0}dx+\frac{1}{\epsilon^{2}}\int_{0}^{t}\int_{(B_{R}(0)\setminus\bar{ \Omega}_{0})\bigcap\{s_{\epsilon}(x)\geq t\}}\int_{\mathbb{R}^{n}}k_{\epsilon} (x-y)\gamma_{\epsilon}^{+}(\tau,y)dydxd\tau\] \[= \int_{B_{R}(0)\setminus\bar{\Omega}_{0}}-f_{\epsilon}(t,x)dx.\]
Moreover, it is easy to see that
\[\int_{B_{R}(0)}\gamma_{\epsilon t}(t,x)dx=\frac{1}{\epsilon^{2}} \int_{B_{R}(0)}\int_{\mathbb{R}^{n}}k_{\epsilon}(x-y)\gamma_{\epsilon}^{+}(t,y )dydx-\frac{1}{\epsilon^{2}}\int_{B_{R}(0)}\gamma_{\epsilon}^{+}(t,x)dx\] \[\leq \frac{1}{\epsilon^{2}}\int_{\Omega_{\epsilon}(t)}\gamma_{\epsilon }^{+}(t,x)dx-\frac{1}{\epsilon^{2}}\int_{B_{R}(0)}\gamma_{\epsilon}^{+}(t,x)dx,\]
where the validity of the above inequality is due to the property that \(\gamma_{\epsilon}^{+}(t,\cdot)\in L^{1}(\mathbb{R}^{n})\), proved in Lemma 2.3. This implies that
\[\int_{B_{R}(0)\setminus\bar{\Omega}_{0}}\left(\gamma_{\epsilon}( t,x)-\gamma_{\epsilon}(0,x)\right)dx\] \[\leq -\int_{\bar{\Omega}_{0}}\left(\gamma_{\epsilon}(t,x)-\gamma_{ \epsilon}(0,x)\right)dx+\frac{1}{\epsilon^{2}}\int_{0}^{t}\left(\int_{\Omega _{\epsilon}(\tau)}\gamma_{\epsilon}^{+}(\tau,x)dx-\int_{B_{R}(0)}\gamma_{ \epsilon}^{+}(\tau,x)dx\right)d\tau\] \[\leq \int_{\Omega_{0}}\gamma_{0}dx+\frac{1}{\epsilon^{2}}\int_{0}^{t} \left(\int_{\Omega_{\epsilon}(\tau)}\gamma_{\epsilon}^{+}(\tau,x)dx-\int_{B_{R }(0)}\gamma_{\epsilon}^{+}(\tau,x)dx\right)d\tau.\]
This, together with (3.14), yields that
\[\int_{B_{R}(0)\setminus\bar{\Omega}_{0}}-f_{\epsilon}(t,x)dx\leq\int_{\Omega _{0}}\gamma_{0}dx+\frac{1}{\epsilon^{2}}\int_{0}^{t}\left(\int_{\Omega_{ \epsilon}(\tau)}\gamma_{\epsilon}^{+}(\tau,x)dx-\int_{B_{R}(0)}\gamma_{ \epsilon}^{+}(\tau,x)dx\right)d\tau,\]
and thus the second estimate in (3.13) follows immediately by letting \(R\to\infty\).
Moreover, as mentioned in Remark 2.1, on the basis of Lemmas 2.3 and 2.4, we establish some convergence results about \(v_{\epsilon}\) defined in (3.10).
**Lemma 3.2**.: _Assume that in the problem (1.13), the kernel function \(k\) satisfies the assumption **(K)**, the initial data satisfies (1.7). Then for any fixed \(t>0\), there exist a sequence \(\{\epsilon_{\ell}\}\), which depends on \(t\) and satisfies \(\lim_{\ell\to\infty}\epsilon_{\ell}=0\), and \(\tilde{v}^{t}\in L^{1}(\mathbb{R}^{n})\) such that \(v_{\epsilon_{\ell}}(t,\cdot)\to\tilde{v}^{t}(\cdot)\) a.e. in \(\mathbb{R}^{n}\)._
Proof.: Thanks to Lemma 2.4,
\[\int_{\mathbb{R}^{n}}\left|v_{\epsilon}(t,x+h)-v_{\epsilon}(t,x) \right|dx=\int_{\mathbb{R}^{n}}\left|\int_{0}^{t}\left(\gamma_{\epsilon}^{+}( \tau,x+h)-\gamma_{\epsilon}^{+}(\tau,x)\right)d\tau\right|dx\] \[\leq \int_{0}^{t}\int_{\mathbb{R}^{n}}\left|\gamma_{\epsilon}(\tau,x+ h)-\gamma_{\epsilon}(\tau,x)\right|dxd\tau\] \[\leq \int_{0}^{t}\int_{\mathbb{R}^{n}}\left|\gamma_{0}(x+h)-\gamma_{0 }(x)\right|dxd\tau=t\int_{\mathbb{R}^{n}}\left|\gamma_{0}(x+h)-\gamma_{0}(x) \right|dx.\]
This, together with the Fréchet-Kolmogorov theorem and Lemma 2.3, indicates that for any fixed \(t>0\) and bounded set \(\Omega\subseteq\mathbb{R}^{n}\), \(\{v_{\epsilon}(t,\cdot)\,|\,0<\epsilon<1\}\) is precompact in \(L^{1}(\Omega)\). Then it is easy to show that there exist a sequence \(\{\epsilon_{\ell}\}\) with \(\lim_{\ell\to\infty}\epsilon_{\ell}=0\) and \(\tilde{v}^{t}\in L^{1}(\mathbb{R}^{n})\) such that \(v_{\epsilon_{\ell}}(t,\cdot)\to\tilde{v}^{t}(\cdot)\) in \(L^{1}_{loc}(\mathbb{R}^{n})\) and \(v_{\epsilon_{\ell}}(t,\cdot)\to\tilde{v}^{t}(\cdot)\) a.e. in \(\mathbb{R}^{n}\).
We emphasize that the additional condition (1.9) has not been used so far. After these preparations, we are ready to complete the proof of Theorem 1.4.
Proof of Theorem 1.4.: From now on, fix \(T>0\). Back to the problem (3.12) satisfied by \(v_{\epsilon}\), by the Fourier transform and the property \(\hat{k}_{\epsilon}(\xi)=\hat{k}(\epsilon\xi)\), we derive that
\[\hat{v}_{\epsilon}(t,\xi)=\int_{0}^{t}e^{\frac{1}{\epsilon^{2}}\left(\hat{k}( \epsilon\xi)-1\right)(t-\tau)}\hat{f}_{\epsilon}(\tau,\xi)d\tau. \tag{3.15}\]
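For a forcing that is independent of \(t\), formula (3.15) can be evaluated directly on a periodic grid with the FFT, since the symbol of the rescaled nonlocal operator is \(\frac{1}{\epsilon^{2}}(\hat{k}(\epsilon\xi)-1)\). The sketch below does this for an illustrative Gaussian kernel and forcing of our choosing; the periodic truncation is only an approximation of the whole-space problem.

```python
import numpy as np

# Evaluating Duhamel's formula (3.15) with the FFT on a periodic grid, for a
# forcing f independent of t: v_hat(T) = f_hat * (exp(lam*T) - 1)/lam, where
# lam(xi) = (k_hat(eps*xi) - 1)/eps^2 is the symbol of the nonlocal operator.
N, L, eps, T = 1024, 40.0, 0.5, 1.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

k_hat = np.exp(-(eps * xi)**2 / 2)        # Gaussian kernel: k_hat(eps * xi)
lam = (k_hat - 1.0) / eps**2              # symbol; lam <= 0 and lam(0) = 0
f = np.where(np.abs(x) < 1.0, 1.0, 0.0)   # illustrative time-independent forcing

ratio = np.full_like(lam, T)              # limit of (e^{lam*T} - 1)/lam at lam = 0
nz = np.abs(lam) > 1e-12
ratio[nz] = np.expm1(lam[nz] * T) / lam[nz]

v = np.real(np.fft.ifft(np.fft.fft(f) * ratio))
print("v(T, 0) =", v[np.argmin(np.abs(x))])
```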
Due to the Parseval formula,
\[\|f_{\epsilon}(t,\cdot)\|_{L^{2}(\mathbb{R}^{n})}=\|\hat{f}_{\epsilon}(t, \cdot)\|_{L^{2}(\mathbb{R}^{n})}.\]
Then thanks to Lemma 3.1, there exists a sequence \(\{\epsilon_{j}\}\) with \(\lim_{j\to\infty}\epsilon_{j}=0\) and \(f_{0},\,G_{0}\in L^{2}((0,T)\times\mathbb{R}^{n})\) such that
\[\lim_{j\to\infty}f_{\epsilon_{j}}=f_{0}\ \text{ weakly in }\ L^{2}((0,T)\times\mathbb{R}^{n}). \tag{3.16}\]
and
\[\lim_{j\to\infty}\hat{f}_{\epsilon_{j}}=G_{0}\ \text{ weakly in }\ L^{2}((0,T)\times\mathbb{R}^{n}). \tag{3.17}\]
Notice that for any test function \(\psi(t,\xi)\in C_{c}((0,T)\times\mathbb{R}^{n})\), on the one hand, due to (3.16),
\[\lim_{j\to\infty}\int_{0}^{T}\int_{\mathbb{R}^{n}}\hat{f}_{\epsilon_{j}}(t, \xi)\psi(t,\xi)d\xi dt\]
\[= \lim_{j\to\infty}\int_{0}^{T}\int_{\mathbb{R}^{n}}\left(\int_{ \mathbb{R}^{n}}e^{-ix\cdot\xi}f_{\epsilon_{j}}(t,x)dx\right)\psi(t,\xi)d\xi dt\] \[= \lim_{j\to\infty}\int_{0}^{T}\int_{\mathbb{R}^{n}}\left(\int_{ \mathbb{R}^{n}}e^{-ix\cdot\xi}\psi(t,\xi)d\xi\right)f_{\epsilon_{j}}(t,x)dxdt\] \[= \int_{0}^{T}\int_{\mathbb{R}^{n}}\left(\int_{\mathbb{R}^{n}}e^{-ix \cdot\xi}\psi(t,\xi)d\xi\right)f_{0}(t,x)dxdt=\int_{0}^{T}\int_{\mathbb{R}^{n} }\hat{f}_{0}(t,\xi)\psi(t,\xi)d\xi dt.\]
On the other hand, (3.17) yields that
\[\lim_{j\to\infty}\int_{0}^{T}\int_{\mathbb{R}^{n}}\hat{f}_{\epsilon_{j}}(t,\xi)\psi(t,\xi)d\xi dt=\int_{0}^{T}\int_{\mathbb{R}^{n}}G_{0}(t,\xi)\psi(t,\xi)d\xi dt.\]
Hence
\[G_{0}(t,\xi)=\hat{f}_{0}\ \ \text{a.e. in }(0,T)\times\mathbb{R}^{n},\]
i.e.
\[\lim_{j\to\infty}f_{\epsilon_{j}}=f_{0},\ \ \lim_{j\to\infty}\hat{f}_{\epsilon_{j}}= \hat{f}_{0}\ \ \text{weakly in }\ L^{2}((0,T)\times\mathbb{R}^{n}). \tag{3.18}\]
Introduce the following problem
\[\begin{cases}v_{t}=A\Delta v+f_{0}&0<t\leq T,\ x\in\mathbb{R}^{n},\\ v(0,x)=0&x\in\mathbb{R}^{n},\end{cases} \tag{3.19}\]
and let \(v_{*}\) denote its unique generalized solution in \(V_{2}^{1,1/2}(\mathbb{R}^{n}\times[0,T])\) [12, Chapter III.5]. By applying the Fourier transform to the problem (3.19), we derive that
\[\hat{v}_{*}(t,\xi)=\int_{0}^{t}e^{-A|\xi|^{2}(t-\tau)}\hat{f}_{0}(\tau,\xi)d\tau.\]
Fix \(t\in(0,T)\). For any given \(\phi(\xi)\in C_{c}^{\infty}(\mathbb{R}^{n})\),
\[\lim_{j\to\infty}\int_{\mathbb{R}^{n}}\hat{v}_{\epsilon_{j}}(t, \xi)\phi(\xi)d\xi=\lim_{j\to\infty}\int_{\mathbb{R}^{n}}\left(\int_{0}^{t}e^{ \frac{1}{\epsilon_{j}^{2}}\left(\hat{k}(\epsilon_{j}\xi)-1\right)(t-\tau)} \hat{f}_{\epsilon_{j}}(\tau,\xi)d\tau\right)\phi(\xi)d\xi\] \[= \lim_{j\to\infty}\int_{\mathbb{R}^{n}}\int_{0}^{t}\left(e^{\frac {1}{\epsilon_{j}^{2}}\left(\hat{k}(\epsilon_{j}\xi)-1\right)(t-\tau)}-e^{-A| \xi|^{2}(t-\tau)}\right)\hat{f}_{\epsilon_{j}}(\tau,\xi)\phi(\xi)d\tau d\xi\] \[+\lim_{j\to\infty}\int_{\mathbb{R}^{n}}\int_{0}^{t}e^{-A|\xi|^{2} (t-\tau)}\hat{f}_{\epsilon_{j}}(\tau,\xi)\phi(\xi)d\tau d\xi.\]
Since \(\|\hat{f}_{\epsilon_{j}}(\tau,\cdot)\|_{L^{\infty}(\mathbb{R}^{n})}\leq\|f_{ \epsilon_{j}}(\tau,\cdot)\|_{L^{1}(\mathbb{R}^{n})}\), due to Lemma 3.1, the assumption (1.9) and (3.18), we have
\[\lim_{j\to\infty}\int_{\mathbb{R}^{n}}\hat{v}_{\epsilon_{j}}(t,\xi)\phi(\xi)d \xi=\int_{\mathbb{R}^{n}}\int_{0}^{t}e^{-A|\xi|^{2}(t-\tau)}\hat{f}_{0}(\tau, \xi)\phi(\xi)d\tau d\xi=\int_{\mathbb{R}^{n}}\hat{v}_{*}(t,\xi)\phi(\xi)d\xi. \tag{3.20}\]
Moreover, thanks to Lemma 2.3, there exists a subsequence of \(\{\epsilon_{j}\}\), denoted by \(\{\epsilon_{j_{t}}\}\), and \(v_{0}^{t}\) in \(L^{2}(\mathbb{R}^{n})\), such that \(v_{\epsilon_{j_{t}}}(t,\cdot)\rightharpoonup v_{0}^{t}(\cdot)\) in \(L^{2}(\mathbb{R}^{n})\). Then for any given \(\phi(\xi)\in C_{c}^{\infty}(\mathbb{R}^{n})\),
\[\lim_{j_{t}\to\infty}\int_{\mathbb{R}^{n}}\hat{v}_{\epsilon_{j_{t}}}(t,\xi)\phi(\xi)d\xi\] \[= \lim_{j_{t}\to\infty}\int_{\mathbb{R}^{n}}\left(\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}v_{\epsilon_{j_{t}}}(t,x)dx\right)\phi(\xi)d\xi \tag{3.21}\] \[= \lim_{j_{t}\to\infty}\int_{\mathbb{R}^{n}}\left(\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}\phi(\xi)d\xi\right)v_{\epsilon_{j_{t}}}(t,x)dx=\int_{\mathbb{R}^{n}}\left(\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}\phi(\xi)d\xi\right)v_{0}^{t}(x)dx\] \[= \int_{\mathbb{R}^{n}}\left(\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}v_{0}^{t}(x)dx\right)\phi(\xi)d\xi.\]
Now (3.20) and (3.21) imply that
\[\hat{v}_{*}(t,\xi)=\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}v_{0}^{t}(x)dx\ \ \text{a.e. in }\mathbb{R}^{n}.\]
Thus \(v_{*}(t,x)=v_{0}^{t}(x)\) a.e. in \(\mathbb{R}^{n}\). Since \(v_{*}(t,x)\) is the unique solution to the problem (3.19), it follows immediately that for any \(0<t<T\), \(v_{\epsilon}(t,\cdot)\rightharpoonup v_{*}(t,\cdot)\) in \(L^{2}(\mathbb{R}^{n})\) as \(\epsilon\to 0\). This, together with Lemma 3.2, implies that
\[v_{\epsilon}(t,x)\to v_{*}(t,x)\ \ \text{a.e. in }(0,T)\times\mathbb{R}^{n}\ \ \text{as }\epsilon\to 0. \tag{3.22}\]
To complete the proof of Theorem 1.4, it remains to verify that \(v_{*}\) satisfies the parabolic variational inequality (3.9) as follows.
\[\begin{cases}v_{t}-A\Delta v\geq\bar{f}&\text{a.e. in }(0,T)\times\mathbb{R}^{n},\\ v\geq 0&\text{a.e. in }(0,T)\times\mathbb{R}^{n},\\ (v_{t}-A\Delta v-\bar{f})v=0&\text{a.e. in }(0,T)\times\mathbb{R}^{n},\end{cases}\]
where \(\bar{f}=\gamma_{0}\) for \(x\in\mathbb{R}^{n}\). Obviously \(v_{*}\geq 0\) satisfies the first two inequalities in (3.9), since \(v_{\epsilon}\) is always non-negative and \(f_{\epsilon}\geq\bar{f}\) for all \(t>0\) and \(x\in\mathbb{R}^{n}\). Moreover, thanks to Lemma 3.1, (3.16) and the uniqueness of weak limits, it is standard to show that \(f_{0}\in L^{p}(\mathbb{R}^{n}\times[0,T])\) for any \(p>1\). Then by parabolic regularity theory and the Sobolev embedding theorem, one obtains that \(v_{*}(t,\cdot)\) is continuous in \(\mathbb{R}^{n}\). Thus, the set \(\{v_{*}>0\}\) is open in \((0,T)\times\mathbb{R}^{n}\). Also notice that \(f_{\epsilon}=\bar{f}\) if \(v_{\epsilon}>0\). Hence thanks to (3.22), it is standard to verify that
\[f_{\epsilon}(t,x)\to\bar{f}(t,x)\ \ \text{a.e. in }\{v_{*}>0\}\ \ \text{as } \epsilon\to 0.\]
Thus due to (3.18), \(f_{0}=\bar{f}\) a.e. in \(\{v_{*}>0\}\), i.e., \(v_{*}\) satisfies the third equality in (3.9).
The proof of Theorem 1.4 is complete.
## 4 Fundamental properties of the nonlocal Stefan problem
In this section, we investigate the fundamental properties of the nonlocal version of one-phase Stefan problem (1.6)
\[\begin{cases}\gamma_{t}(t,x)=d\int_{\mathbb{R}^{n}}k(x-y)\gamma^{+}(t,y)dy-d \gamma^{+}(t,x)&t>0,\ x\in\mathbb{R}^{n},\\ \gamma(0,x)=\gamma_{0}&x\in\mathbb{R}^{n}.\end{cases}\]
### Expansion and boundedness
Theorem 1.5(i) is about the expansion of \(\Omega(t)\).
Proof of Theorem 1.5(i).: Fix \(x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\) and let \(s(x)\) denote the first moment at which \(\gamma(s(x),x)=0\), so that \(\gamma(t,x)<0\) for \(0<t<s(x)\). By (1.6), one has
\[\ell_{0}=d\int_{0}^{s(x)}\int_{\mathbb{R}^{n}}k(x-y)\gamma^{+}(\tau,y)dyd\tau=d \int_{0}^{s(x)}\int_{\Omega(\tau)}k(x-y)\gamma^{+}(\tau,y)dyd\tau. \tag{4.1}\]
Also thanks to Theorem 1.1, \(0\leq\gamma^{+}\leq\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}\). This yields that
\[\ell_{0}\leq d\int_{0}^{s(x)}\int_{\Omega(\tau)}k(x-y)\|\gamma_{0}|_{\bar{ \Omega}_{0}}\|_{C(\bar{\Omega}_{0})}dyd\tau\leq ds(x)\|\gamma_{0}|_{\bar{ \Omega}_{0}}\|_{C(\bar{\Omega}_{0})},\]
i.e. \(s(x)\geq\ell_{0}/\left(d\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0} )}\right)\). Hence by choosing \(t_{0}<\ell_{0}/\left(d\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0}) }\right)\), one has \(\Omega(t)=\Omega(0)\) for \(0\leq t\leq t_{0}\).
The rest follows directly from Proposition 2.2.
In the following, we prove Theorem 1.5(ii), which is about the uniform boundedness of \(\Omega(t)\).
Proof of Theorem 1.5(ii).: The proof is lengthy. To begin with, we introduce the first auxiliary \(1-\)dim problem
\[\begin{cases}\gamma_{t}(t,x_{1})=d\int_{\mathbb{R}}k_{1}(x_{1}-y_{1})\gamma^{ +}(t,y_{1})dy_{1}-d\gamma^{+}(t,x_{1})&t>0,\ x_{1}\in\mathbb{R},\\ \gamma(0,x_{1})=\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}&0 \leq x_{1}\leq M,\\ \gamma(0,x_{1})=-\ell_{0}&x_{1}<0\ \text{or}\ x_{1}>M,\end{cases} \tag{4.2}\]
where \(k_{1}(x_{1})=\int_{\mathbb{R}^{n-1}}k(x_{1},x^{\prime})dx^{\prime},\ x^{ \prime}=(x_{2},...,x_{n})\) and choose the constant \(M\) such that
\[\bar{\Omega}_{0}\subseteq\{x\in\mathbb{R}^{n}\ |\ 0<x_{1}<M,\ \text{where}\ x=(x_{1},...,x_{n})\}.\]
Such \(M\) exists since \(\bar{\Omega}_{0}\) is bounded. Let \(\gamma_{1}(t,x_{1})\) denote the solution to the problem (4.2). Notice that \(\gamma_{1}(t,x_{1})\) also satisfies the \(n-\)dim problem (1.6) with initial data
\[\gamma_{0}(x)=\begin{cases}\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_ {0})}&0\leq x_{1}\leq M,\\ -\ell_{0}&x_{1}<0\ \text{or}\ x_{1}>M,\ x=(x_{1},...,x_{n}).\end{cases}\]
Denote
\[\Sigma_{1}(t)=\{x_{1}\in\mathbb{R}\ |\ \gamma_{1}(t,x_{1})\geq 0\}\ \ \text{and}\ \ \Sigma_{1}^{\infty}=\bigcup_{t\geq 0}\ \Sigma_{1}(t).\]
By Proposition 2.1, \(\gamma_{1}(t,x_{1})\geq\gamma(t,x)\) in \(\mathbb{R}^{n}\), where \(\gamma\) denote the solution to the \(n-\)dim problem (1.6) with initial data (1.7) and \(x=(x_{1},...,x_{n})\).
_To prove Theorem 1.5(ii), it suffices to show that \(\Sigma_{1}^{\infty}\) is bounded, since the other \(n-1\) directions can be handled similarly and thus \(\Omega(t)\) will be constrained by a bounded cube._
We first show that \(|\Sigma_{1}^{\infty}|\) is bounded. Thanks to Lemma 2.3, \(\gamma_{1}^{+}(t,\cdot)\in L^{1}(\mathbb{R})\). By direct computation, for \(0<t<T\)
\[\int_{\Sigma_{1}(T)}\gamma_{1t}(t,x_{1})dx_{1}\] \[= d\int_{\Sigma_{1}(T)}\!\int_{\mathbb{R}}k_{1}(x_{1}\!-\!y_{1}) \gamma_{1}^{+}(t,y_{1})dy_{1}dx_{1}-d\int_{\Sigma_{1}(T)}\gamma_{1}^{+}(t,x_{1 })dx_{1}\] \[\leq d\int_{\mathbb{R}}\gamma_{1}^{+}(t,y_{1})dy_{1}-d\int_{\Sigma_{1} (T)}\gamma_{1}^{+}(t,x_{1})dx_{1}=0.\]
Thus
\[0\leq\int_{\Sigma_{1}(T)}\gamma_{1}(T,x_{1})dx_{1}\leq\int_{\Sigma_{1}(T)} \gamma_{1}(0,x_{1})dx_{1}=-\ell_{0}|\Sigma_{1}(T)\setminus[0,M]|+\|\gamma_{0 }|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}M,\]
which implies that
\[|\Sigma_{1}(T)|\leq\left(1+\frac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{ \Omega}_{0})}}{\ell_{0}}\right)M.\]
Since \(T\) is arbitrary, one has
\[|\Sigma_{1}^{\infty}|\leq\left(1+\frac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C( \bar{\Omega}_{0})}}{\ell_{0}}\right)M. \tag{4.3}\]
Next we will show that \(\gamma_{1}(t,x_{1})\) decays exponentially as \(t\) goes to infinity. For this purpose, we introduce the second auxiliary \(1-\)dim problem with periodic initial data
\[\begin{cases}\gamma_{t}=d\int_{\mathbb{R}}k_{1}(x_{1}-y_{1})\gamma^{+}(t,y_{1 })dy_{1}-d\gamma^{+}(t,x_{1})&t>0,\ x_{1}\in\mathbb{R},\\ \gamma(0,x_{1})=\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}& \kappa(M+L)\leq x_{1}\leq\kappa(M+L)+M,\\ \gamma(0,x_{1})=-\ell_{0}&\kappa(M+L)+M<x_{1}<(\kappa+1)(M+L),\end{cases}\]
where \(\kappa\in\mathbb{Z}\) and \(L>0\) is a constant to be determined later; let \(\tilde{\gamma}_{1}(t,x_{1})\) denote the solution. By Proposition 2.1,
\[\gamma_{1}(t,x_{1})\leq\tilde{\gamma}_{1}(t,x_{1})\ \ \text{for}\ t>0,\,x_{1}\in\mathbb{R}. \tag{4.4}\]
Obviously, \(\tilde{\gamma}_{1}(t,x_{1})\) is periodic in \(x_{1}\) with period \(M+L\). Thus this problem can be rewritten as follows
\[\begin{cases}\gamma_{t}=d{\int_{0}^{M+L}}k_{*}(x_{1}\!-\!y_{1}) \gamma^{+}(t,y_{1})dy_{1}\!-\!d\gamma^{+}(t,x_{1})&t>0,\ x_{1}\in(0,M+L),\\ \gamma(0,x_{1})=\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}&0 \leq x_{1}\leq M,\\ \gamma(0,x_{1})=-\ell_{0}<0&M<x_{1}<(M+L),\end{cases} \tag{4.5}\]
where
\[k_{*}(x_{1})=\sum_{\kappa\in\mathbb{Z}}k_{1}(x_{1}+\kappa(M+L))\ \ \mbox{and}\ \ \int_{0}^{M+L}k_{*}(x_{1})dx_{1}=1.\]
Denote
\[\tilde{\Sigma}_{1}(t)=\{x_{1}\in\mathbb{R}\ |\ \tilde{\gamma}_{1}(t,x_{1})\geq 0 \}\ \mbox{ and }\ \tilde{\Sigma}_{1}^{\infty}=\bigcup_{t\geq 0}\ \tilde{\Sigma}_{1}(t).\]
We claim that _if_ \(L>\dfrac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}}{\ell_{0}}M\)_, then_ \(|\,\tilde{\Sigma}_{1}^{\infty}\,\bigcap\,(0,M+L)\,|<M+L\)_._
In (4.5), fix \(T>0\), by direct computation, one has for \(0<t<T\),
\[\int_{\tilde{\Sigma}_{1}(T)\,\bigcap\,(0,M+L)}\tilde{\gamma}_{1t} (t,x_{1})dx_{1}\] \[= d\int_{\tilde{\Sigma}_{1}(T)\,\bigcap\,(0,M+L)}\!\int_{0}^{M+L} \!k_{*}(x_{1}\!-\!y_{1})\tilde{\gamma}_{1}^{+}(t,y_{1})dy_{1}dx_{1}-d\int_{ \tilde{\Sigma}_{1}(T)\,\bigcap\,(0,M+L)}\tilde{\gamma}_{1}^{+}(t,x_{1})dx_{1}\] \[\leq d\int_{0}^{M+L}\tilde{\gamma}_{1}^{+}(t,y_{1})dy_{1}-d\int_{ \tilde{\Sigma}_{1}(T)\,\bigcap\,(0,M+L)}\tilde{\gamma}_{1}^{+}(t,x_{1})dx_{1 }=0.\]
Thus
\[0 \leq \int_{\tilde{\Sigma}_{1}(T)\,\bigcap\,(0,M+L)}\tilde{\gamma}_{1}(T,x_{1})dx_{1}\] \[\leq \int_{\tilde{\Sigma}_{1}(T)\,\bigcap\,(0,M+L)}\tilde{\gamma}_{1}(0,x_{1})dx_{1}=-\ell_{0}|\tilde{\Sigma}_{1}(T)\bigcap\,(M,M+L)|+\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}M.\]
This implies that
\[|\tilde{\Sigma}_{1}(T)\bigcap\,(M,M+L)|\leq\dfrac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}}{\ell_{0}}M. \tag{4.6}\]
Since \(T\) is arbitrary, it is easy to see that \(|\,\tilde{\Sigma}_{1}^{\infty}\,\bigcap\,(0,M+L)\,|<M+L\) provided that
\[L>\dfrac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}}{\ell_{0}}M.\]
The claim is proved.
Thanks to the strong maximum principle established in Proposition 2.2, \([0,M]\subseteq\tilde{\Sigma}_{1}^{\infty}\) and \(\tilde{\Sigma}_{1}^{\infty}\) is open in \((M,M+L)\). Thus, fixing \(L>\dfrac{\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}}{\ell_{0}}M\), by the previous claim, one sees that there exists an open interval \((a,b)\subset(0,M+L)\) satisfying \((a,b)\bigcap\,\tilde{\Sigma}_{1}^{\infty}=\emptyset\). If necessary, we could choose \(b-a\) smaller such that \(k_{*}(b-a)>0\). Denote
\[\tilde{\Sigma}_{D}=(0,a)\bigcup\,(b,M+L)\subseteq\tilde{\Sigma}_{1}^{\infty}.\]
Under the condition that \(k_{*}(b-a)>0\), the proof of [13, Theorem 2.6 (i)] can be slightly modified to show that the eigenvalue problem
\[d\int_{\tilde{\Sigma}_{D}}k_{*}(x_{1}-y_{1})\phi(y_{1})dy_{1}-d\phi(x_{1})= \lambda\phi(x_{1})\quad\text{for }x_{1}\in\tilde{\Sigma}_{D}\]
admits a principal eigenvalue \(\lambda_{p}\) with the corresponding eigenfunction \(\phi_{p}\) satisfying \(\phi_{p}>0\) in \(\tilde{\Sigma}_{D}\) and then it is easy to see that \(\lambda_{p}<0\). Moreover, notice that \(v(t,x_{1})=\ell e^{\lambda_{p}t}\phi_{p}(x_{1})\), \(\ell>0\), satisfies the following problem
\[\begin{cases}v_{t}(t,x_{1})=d\int_{\tilde{\Sigma}_{D}}k_{*}(x_{1}-y_{1})v(t,y _{1})dy_{1}-dv(t,x_{1})&t>0,\ x_{1}\in\tilde{\Sigma}_{D},\\ v(0,x_{1})=\ell\phi_{p}(x_{1})&x_{1}\in\tilde{\Sigma}_{D}.\end{cases}\]
Choose \(\ell\) large enough such that \(v(0,x_{1})>\|\gamma_{0}|_{\bar{\Omega}_{0}}\|_{C(\bar{\Omega}_{0})}\) in \(\tilde{\Sigma}_{D}\). By the comparison principle, it follows that
\[\tilde{\gamma}_{1}(t,x_{1})\leq v(t,x_{1})=\ell e^{\lambda_{p}t}\phi_{p}(x_{1 })\quad\text{for }\ t>0,\ x_{1}\in\tilde{\Sigma}_{D}.\]
Therefore by (4.4), the choice of \(\tilde{\Sigma}_{D}\) and the fact that \(\tilde{\gamma}_{1}(t,x_{1})\) is periodic in \(x_{1}\) with period \(M+L\), we have
\[\gamma_{1}(t,x_{1})\leq\ell e^{\lambda_{p}t}\|\phi_{p}\|_{L^{\infty}(\tilde{ \Sigma}_{D})}\quad\text{for }\ t>0,\ x_{1}\in\mathbb{R}, \tag{4.7}\]
i.e., \(\gamma_{1}(t,x_{1})\) decays exponentially at infinity since \(\lambda_{p}<0\).
Now we are ready to complete the last piece of the proof of Theorem 1.5(ii). Suppose that \(\Sigma_{1}^{\infty}\) is unbounded, i.e., there exists a sequence \(\{x_{1i}\}_{i\geq 1}\subseteq\Sigma_{1}^{\infty}\) and \(\{s_{1i}\}_{i\geq 1}\) with \(|x_{1i}|\to\infty\) as \(i\to\infty\) such that
\[\ell_{0}=d\int_{0}^{s_{1i}}\int_{\mathbb{R}}k_{1}(x_{1i}-y_{1})\gamma_{1}^{+} (\tau,y_{1})dy_{1}d\tau,\]
where \(t=s_{1i}\) denote the moment when \(\gamma_{1}(s_{1i},x_{1i})=0\) while \(\gamma_{1}(t,x_{1i})<0\) for \(0<t<s_{1i}\).
To derive a contradiction, we need the following property
\[\lim_{|x_{1}|\to\infty}\int_{\Sigma_{1}^{\infty}}k_{1}(x_{1}-y_{1})dy_{1}=0, \tag{4.8}\]
which follows from the facts that \(k_{1}\in L^{1}(\mathbb{R})\) and \(|\Sigma_{1}^{\infty}|<+\infty\) due to (4.3).
Thanks to (4.7) and (4.8), it follows that
\[d\int_{0}^{\infty}\int_{\mathbb{R}}k_{1}(x_{1i}-y_{1})\gamma_{1} ^{+}(\tau,y_{1})dy_{1}d\tau\] \[\leq d\int_{0}^{\infty}\int_{\Sigma_{1}(\tau)}k_{1}(x_{1i}-y_{1})\ell e ^{\lambda_{p}\tau}\|\phi_{p}\|_{L^{\infty}(\tilde{\Sigma}_{D})}dy_{1}d\tau\] \[\leq \frac{d\ell}{-\lambda_{p}}\|\phi_{p}\|_{L^{\infty}(\tilde{\Sigma} _{D})}\int_{\Sigma_{1}^{\infty}}k_{1}(x_{1i}-y_{1})dy_{1}\to 0\quad\text{as }i\to\infty.\]
This contradicts the existence of \(s_{1i}\) when \(i\) is large enough. Therefore, \(\Sigma_{1}^{\infty}\) is bounded and the desired conclusion follows.
### Continuous expansion and jumping phenomena
We first verify Theorem 1.6, which is about the continuous expansion of \(\Omega(t)\) under the extra conditions that the initial domain \(\bar{\Omega}_{0}\) is convex and the kernel function \(k\) satisfies **(K1)**.
Proof of Theorem 1.6.: Suppose that \(\Omega(t)\) first jumps at time \(t=T\), i.e., \(\Omega(t)\) is connected for \(t<T\) while \(\Omega(T)\) is disconnected. Let \(\Omega_{1}(T)\) denote the connected domain which contains \(\Omega(t)\) for \(t<T\). Choose \(y_{T}\in\Omega(T)\setminus\Omega_{1}(T)\). Since \(\Omega(0)=\bar{\Omega}_{0}\) is convex, there exists a unique \(x_{T}\in\partial\Omega(0)\) such that
\[|x_{T}-y_{T}|=\mbox{dist}\{y_{T},\Omega(0)\}.\]
Moreover, there exists \(z_{T}\), which lies on the line segment \(\overline{x_{T}y_{T}}\) and satisfies \(z_{T}\not\in\Omega(T)\). Let \(\ell\) denote the line which passes through \((z_{T}+y_{T})/2\) and is perpendicular to the line segment \(\overline{x_{T}y_{T}}\). W.l.o.g., assume that \(\ell=\{x\in\mathbb{R}^{n}\ |\ x_{1}=0\}\), where \(x=(x_{1},x_{2},...,x_{n})\), and that \(x_{T1}<0\), where \(x_{T}=(x_{T1},x_{T2},...,x_{Tn})\). Since \(\Omega(0)\) is convex, obviously, \(\mbox{dist}\{\ell,\Omega(0)\}>0\).
For simplicity, denote
\[\mathbb{R}^{n}_{-}=\{x\in\mathbb{R}^{n}\ |\ x_{1}<0\},\ \mathbb{R}^{n}_{+}=\{x \in\mathbb{R}^{n}\ |\ x_{1}>0\},\ \tilde{x}=(-x_{1},x_{2},...,x_{n}),\]
and set
\[w(t,x)=\gamma(t,x)-\gamma(t,\tilde{x}),\ x\in\mathbb{R}^{n}_{-}.\]
Then \(y_{T}=\tilde{z}_{T}\) and
\[w(T,z_{T})=\gamma(T,z_{T})-\gamma(T,y_{T})<0. \tag{4.9}\]
Next it is standard to compute that for \(x\in\mathbb{R}^{n}_{-}\),
\[w_{t}(t,x)=\gamma_{t}(t,x)-\gamma_{t}(t,\tilde{x})\] \[= d\int_{\mathbb{R}^{n}}k(x-y)\gamma^{+}(t,y)dy-d\int_{\mathbb{R}^{n}}k(\tilde{x}-y)\gamma^{+}(t,y)dy-d\gamma^{+}(t,x)+d\gamma^{+}(t,\tilde{x})\] \[= d\int_{\mathbb{R}^{n}_{-}}k(x-y)\gamma^{+}(t,y)dy+d\int_{\mathbb{R}^{n}_{+}}k(x-y)\gamma^{+}(t,y)dy-d\int_{\mathbb{R}^{n}_{-}}k(\tilde{x}-y)\gamma^{+}(t,y)dy-d\int_{\mathbb{R}^{n}_{+}}k(\tilde{x}-y)\gamma^{+}(t,y)dy-c(t,x)w(t,x)\] \[= d\int_{\mathbb{R}^{n}_{-}}k(x-y)\gamma^{+}(t,y)dy+d\int_{\mathbb{R}^{n}_{-}}k(x-\tilde{y})\gamma^{+}(t,\tilde{y})dy-d\int_{\mathbb{R}^{n}_{-}}k(\tilde{x}-y)\gamma^{+}(t,y)dy-d\int_{\mathbb{R}^{n}_{-}}k(\tilde{x}-\tilde{y})\gamma^{+}(t,\tilde{y})dy-c(t,x)w(t,x)\] \[= \int_{\mathbb{R}^{n}_{-}}\left[k(x-y)-k(\tilde{x}-y)\right]c(t,y)w(t,y)dy-c(t,x)w(t,x),\]
where
\[c(t,x)=\frac{d\gamma^{+}(t,x)-d\gamma^{+}(t,\tilde{x})}{\gamma(t,x)-\gamma(t,\tilde{x})},\]
and \(k(x-y)-k(\tilde{x}-y)\geq 0\) for \(x,y\in\mathbb{R}^{n}_{-}\) since \(k(x)\) is decreasing in \(|x|\). Moreover, for \(x\in\ell\), \(w(t,x)=0\), and for \(x\in\mathbb{R}^{n}_{-}\),
\[w(0,x)=\gamma(0,x)-\gamma(0,\tilde{x})\geq 0,\]
since \(\Omega(0)\subseteq\mathbb{R}_{-}^{n}\). Thus by the comparison principle, one has \(w(t,x)\geq 0\) for \(t>0\), \(x\in\mathbb{R}_{-}^{n}\), which contradicts (4.9). The proof is complete.
Notice that in Theorem 1.6, extra conditions on kernel functions and initial domains are needed to guarantee the continuous expansion of \(\Omega(t)\). Now we construct two examples to show that when one of these two extra conditions in Theorem 1.6 is violated, _jumping phenomena_ could happen.
**Example 1**.: _This example is about the assumption **(K1)** on kernel functions._
For simplicity, we focus on the one-dimensional case and assume that the initial domain is an interval. According to Theorem 1.6, if the kernel function \(k(x)\) is decreasing in \(|x|\), then \(\Omega(t)\) expands continuously. On the contrary, in this example, we choose a kernel function that is not decreasing in \(|x|\), and the jumping phenomenon happens.
Define
\[k_{*}(x)=\begin{cases}\frac{1}{4\sigma}&1-\sigma\leq|x|\leq 1+\sigma,\\ 0&\text{otherwise},\end{cases}\]
where \(0<\sigma<\frac{1}{4}\) is small. Consider the problem
\[\begin{cases}\gamma_{t}(t,x)=\int_{\mathbb{R}}k_{j}(x-y)\gamma^{+}(t,y)dy- \gamma^{+}(t,x)&t>0,\ x\in\mathbb{R},\\ \gamma(0,x)=c_{0}&x\in\left(-\frac{1}{4},\frac{1}{4}\right),\\ \gamma(0,x)=-\ell_{0}&x\in\mathbb{R}\setminus\left(-\frac{1}{4},\frac{1}{4} \right),\end{cases} \tag{4.10}\]
where \(c_{0}\), \(\ell_{0}\) are positive constants, \(k_{j}\) satisfies the assumption **(K)** and
\[\lim_{j\to\infty}\|k_{j}-k_{*}\|_{L^{1}(\mathbb{R})}=0.\]
Let \(\gamma_{j}\) denote the solution to the problem (4.10). We claim that _if \(2\ell_{0}<c_{0}\), \(0<\sigma<\frac{1}{4}\), then the jumping phenomenon happens for (4.10) when \(j\) is sufficiently large._
To prove the claim, first consider the problem
\[\begin{cases}\gamma_{t}(t,x)=\int_{\mathbb{R}}k_{*}(x-y)\gamma^{+}(t,y)dy- \gamma^{+}(t,x)&t>0,\ x\in\mathbb{R},\\ \gamma(0,x)=c_{0}&x\in\left(-\frac{1}{4},\frac{1}{4}\right),\\ \gamma(0,x)=-\ell_{0}&x\in\mathbb{R}\setminus\left(-\frac{1}{4},\frac{1}{4} \right).\end{cases} \tag{4.11}\]
The existence and uniqueness of the solution, denoted by \(\gamma_{*}\), to this problem can be verified by similar arguments in the proof of Theorem 1.1. Moreover, similar to the proof of (2.3) in the proof of Theorem 1.1, one has
\[\lim_{j\to\infty}\|\gamma_{j}-\gamma_{*}\|_{L^{\infty}(\mathbb{R})}=0.\]
Hence it suffices to show that the jumping phenomenon happens in the limiting problem (4.11) if \(2\ell_{0}<c_{0}\), \(0<\sigma<\frac{1}{4}\).
Let \(t_{1}\) denote the moment when \(\gamma_{*}\) first touches zero somewhere in \(\mathbb{R}\setminus(-\frac{1}{4},\frac{1}{4})\). For \(x\in\left(-\frac{1}{4},\frac{1}{4}\right)\), \(0<t<t_{1}\), it is easy to see that \(\int_{\mathbb{R}}k_{*}(x-y)\gamma_{*}^{+}(t,y)dy=0\) due to the definition of \(k_{*}\) and the choice of \(\sigma\). Thus
\[\begin{cases}(\gamma_{*}^{+})_{t}(t,x)=-\gamma_{*}^{+}(t,x)&0<t<t_{1},\ x\in \left(-\frac{1}{4},\frac{1}{4}\right),\\ \gamma_{*}^{+}(0,x)=c_{0}&x\in\left(-\frac{1}{4},\frac{1}{4}\right),\end{cases}\]
and hence
\[\gamma_{*}^{+}(t,x)=c_{0}e^{-t}\ \ \text{for}\ 0<t<t_{1},\ x\in\left(-\frac{1}{4 },\frac{1}{4}\right).\]
Then for any \(x^{*}\in\{x\in\mathbb{R}\setminus\left(-\frac{1}{4},\frac{1}{4}\right)\ |\ \gamma_{*}(t_{1},x)=0\}\), we compute
\[\ell_{0}=\int_{0}^{t_{1}}\int_{-\frac{1}{4}}^{\frac{1}{4}}k_{*}(x^{*}-y)c_{0} e^{-t}dydt=c_{0}\left(1-e^{-t_{1}}\right)\int_{-\frac{1}{4}}^{\frac{1}{4}}k_{*}(x^ {*}-y)dy. \tag{4.12}\]
According to the definition of \(k_{*}\), it is routine to verify that \(\int_{-\frac{1}{4}}^{\frac{1}{4}}k_{*}(x^{*}-y)dy\leq\frac{1}{2}\) and
\[\int_{-\frac{1}{4}}^{\frac{1}{4}}k_{*}(x^{*}-y)dy=\frac{1}{2}\ \ \text{if and only if}\ \ x^{*}\in\left[-\frac{5}{4}+\sigma,-\frac{3}{4}-\sigma\right]\bigcup\left[ \frac{3}{4}+\sigma,\frac{5}{4}-\sigma\right].\]
Hence when \(2\ell_{0}<c_{0}\), one has
\[\left\{x\in\mathbb{R}\setminus\left(-\frac{1}{4},\frac{1}{4}\right)\ \Big{|}\ \gamma_{*}(t_{1},x)=0\right\}=\left[-\frac{5}{4}+\sigma,-\frac{3}{4}-\sigma\right]\bigcup\left[\frac{3}{4}+\sigma,\frac{5}{4}-\sigma\right],\]
where
\[t_{1}=-\ln\left(1-\frac{2\ell_{0}}{c_{0}}\right).\]
Therefore, the jumping phenomenon happens in the problem (4.11) provided that \(0<\sigma<\frac{1}{4}\) and \(2\ell_{0}<c_{0}\).
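The claim in **Example 1** can also be observed numerically. The following sketch (an illustrative discretization, not part of the proof) integrates the limiting problem (4.11) with the double-bump kernel \(k_{*}\) and reports the first time the solution touches zero outside \((-\frac{1}{4},\frac{1}{4})\), to be compared with \(t_{1}=-\ln(1-2\ell_{0}/c_{0})\); the grid and step sizes are assumptions.

```python
import numpy as np

# Illustrative simulation of the limiting problem (4.11) from Example 1.
sigma, c0, ell0 = 0.1, 1.0, 0.2                     # 2*ell0 < c0, 0 < sigma < 1/4
x = np.linspace(-3.0, 3.0, 1201)
h = x[1] - x[0]
k = np.where((np.abs(x) >= 1 - sigma) & (np.abs(x) <= 1 + sigma), 1 / (4 * sigma), 0.0)

gamma = np.where(np.abs(x) < 0.25, c0, -ell0)
dt, t, t_jump = 2e-3, 0.0, None
while t < 1.0 and t_jump is None:
    gp = np.maximum(gamma, 0.0)
    conv = h * np.convolve(k, gp, mode="same")      # (k_* * gamma^+)(x)
    gamma += dt * (conv - gp)
    t += dt
    if np.any(gamma[np.abs(x) > 0.3] >= 0):         # zero reached away from (-1/4, 1/4)
        t_jump = t
        bands = x[(gamma >= 0) & (np.abs(x) > 0.3)]
        print("new points with |x| in [%.2f, %.2f]" % (np.abs(bands).min(), np.abs(bands).max()))

print("observed jump time ~", t_jump)
print("predicted t1 =", -np.log(1 - 2 * ell0 / c0))  # ~0.51 for these values
```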
**Example 2**.: _This example is about the conditions on the shape of initial domains._
To emphasize the effect of initial domains, we still require that the kernel functions constructed in this example satisfy the conditions on kernel functions in Theorem 1.6. Then according to Theorem 1.6, if the initial domain is convex, \(\Omega(t)\) expands continuously. However, in the following constructed example, the initial domain is non-convex and the jumping phenomenon happens.
Define
\[\tilde{k}(x)=\begin{cases}2^{-n}\omega_{n}^{-1}&|x|\leq 2,\\ 0&\text{otherwise},\end{cases}\]
where \(\omega_{n}\) denotes the volume of a unit ball in \(\mathbb{R}^{n}\). Consider the problem
\[\begin{cases}\gamma_{t}(t,x)=\int_{\mathbb{R}^{n}}\tilde{k}_{j}(x-y)\gamma^{+}(t, y)dy-\gamma^{+}(t,x)&t>0,\ x\in\mathbb{R}^{n},\\ \gamma(0,x)=c_{0}&x\in\bar{\Omega}_{0},\\ \gamma(0,x)=-\ell_{0}&x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\end{cases} \tag{4.13}\]
where \(c_{0}\), \(\ell_{0}\) are positive constants, \(n\geq 2\), \(\bar{\Omega}_{0}=\bar{B}_{2}(0)\setminus B_{1}(0)\), the kernel function \(\tilde{k}_{j}\) satisfies the conditions for kernel functions in Theorem 1.6 and
\[\lim_{j\to\infty}\|\tilde{k}_{j}-\tilde{k}\|_{L^{1}(\mathbb{R}^{n})}=0.\]
The existence of such kernel functions is obvious. We claim that _if \(\left(1-2^{-n}\right)c_{0}>\ell_{0}\), then for \(j\) sufficiently large, the jumping phenomenon happens for (4.13)._
Similar to **Example 1**, to prove this claim, it suffices to show that the jumping phenomenon happens for the following model
\[\begin{cases}\gamma_{t}(t,x)=\int_{\mathbb{R}^{n}}\tilde{k}(x-y)\gamma^{+}(t, y)dy-\gamma^{+}(t,x)&t>0,\ x\in\mathbb{R}^{n},\\ \gamma(0,x)=c_{0}&x\in\bar{\Omega}_{0},\\ \gamma(0,x)=-\ell_{0}&x\in\mathbb{R}^{n}\setminus\bar{\Omega}_{0},\end{cases} \tag{4.14}\]
if \((1-2^{-n})\,c_{0}>\ell_{0}\).
Now let \(\tilde{\gamma}\) denote the solution to the problem (4.14) and let \(t_{2}\), if it exists, denote the moment when the solution \(\tilde{\gamma}\) first touches zero somewhere in \(\mathbb{R}^{n}\setminus\bar{\Omega}_{0}\). When \(0<t<t_{2}\), thanks to the definition of \(\tilde{k}\), it is easy to check that for \(x\neq 0\),
\[\int_{\mathbb{R}^{n}}\tilde{k}(x-y)\tilde{\gamma}^{+}(t,y)dy=\int_{\bar{B}_{2 }(0)\setminus B_{1}(0)}\tilde{k}(x-y)\tilde{\gamma}(t,y)dy<\int_{\bar{B}_{2}(0 )\setminus B_{1}(0)}\tilde{k}(-y)\tilde{\gamma}(t,y)dy.\]
This indicates that if \(t_{2}<+\infty\), then at \(t=t_{2}\), \(\tilde{\gamma}\) touches zero only at \(x=0\), i.e., the jumping phenomenon happens.
It remains to show the existence of \(t_{2}<+\infty\). Suppose that \(t_{2}=+\infty\). Based on the definition of \(\tilde{k}\) and the first equation in (4.14), it is easy to see that
\[\ell_{0}\geq\int_{0}^{+\infty}\int_{\bar{B}_{2}(0)\setminus B_{1}(0)}\tilde{k }(-y)\tilde{\gamma}^{+}(t,y)dydt>\int_{0}^{+\infty}\left(1-2^{-n}\right)c_{0}e ^{-t}dt=\left(1-2^{-n}\right)c_{0}>\ell_{0}.\]
This is impossible. The proof is complete.
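The jump in **Example 2** can likewise be observed numerically. Below is an illustrative two-dimensional sketch (with an assumed grid and time step, and the choice \(n=2\), so the claim requires \(\frac{3}{4}c_{0}>\ell_{0}\)) that integrates (4.14) via FFT-based convolution and reports where \(\tilde{\gamma}\) first touches zero outside the annulus; by the argument above this should occur at the origin, at a time around \(\ln 3\approx 1.1\) for the values used here (slightly earlier, since the annulus also gains mass).

```python
import numpy as np

# Illustrative 2-D simulation of (4.14): uniform kernel on the ball of radius 2,
# initial habitat the annulus 1 <= |x| <= 2, n = 2. Grid/step sizes are assumptions.
ngrid, Ldom = 201, 4.0
ax = np.linspace(-Ldom, Ldom, ngrid)
h = ax[1] - ax[0]
X, Y = np.meshgrid(ax, ax)
R = np.hypot(X, Y)

omega_n = np.pi                                     # volume of the unit ball in R^2
k = np.where(R <= 2.0, 2.0**(-2) / omega_n, 0.0)    # 2^{-n} * omega_n^{-1}
k_hat = np.fft.fft2(np.fft.ifftshift(k))            # kernel re-centered at index (0, 0)

c0, ell0 = 1.0, 0.5                                 # (1 - 2^{-2}) c0 = 0.75 > ell0
annulus = (R >= 1.0) & (R <= 2.0)
gamma = np.where(annulus, c0, -ell0)

dt = 1e-2
for step in range(500):
    gp = np.maximum(gamma, 0.0)
    conv = h**2 * np.real(np.fft.ifft2(np.fft.fft2(gp) * k_hat))  # circular convolution
    gamma += dt * (conv - gp)
    new = (gamma >= 0) & ~annulus
    if new.any():
        print("first touch at |x| =", R[new].min(), " t =", (step + 1) * dt)
        break
```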
## Appendix A Important equivalent characterization
In this appendix, we include the proof of Proposition 1.2.
Proof of Proposition 1.2.: Assume that (i) holds. For clarity, set \(w=(w_{1},...,w_{n})=\dfrac{\xi}{|\xi|}\). Then we compute as follows.
\[\dfrac{1-\hat{k}(\xi)}{|\xi|^{2}} = \dfrac{1}{|\xi|^{2}}\left(1-\int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}k( x)dx\right)\] \[= \dfrac{1}{|\xi|^{2}}\int_{\mathbb{R}^{n}}ix\cdot w\int_{0}^{|\xi| }e^{-(ix\cdot w)\eta}d\eta k(x)dx\] \[= \dfrac{1}{|\xi|^{2}}\int_{\mathbb{R}^{n}}ix\cdot w\int_{0}^{|\xi| }\left(e^{-(ix\cdot w)\eta}-1\right)d\eta k(x)dx\] \[= \dfrac{1}{|\xi|^{2}}\int_{\mathbb{R}^{n}}(x\cdot w)^{2}\int_{0}^ {|\xi|}\int_{0}^{\eta}e^{-(ix\cdot w)\tau}d\tau d\eta k(x)dx,\]
where the third equality is due to the first equality in (i). Notice that thanks to the assumptions in (i), we have
\[\int_{\mathbb{R}^{n}}(x\cdot w)^{2}k(x)dx=\dfrac{1}{n}\int_{\mathbb{R}^{n}}|x |^{2}k(x)dx.\]
Then it follows that
\[\dfrac{1-\hat{k}(\xi)}{|\xi|^{2}}-\dfrac{1}{2n}\int_{\mathbb{R}^{ n}}|x|^{2}k(x)dx\] \[= \dfrac{1}{|\xi|^{2}}\int_{\mathbb{R}^{n}}(x\cdot w)^{2}\int_{0}^ {|\xi|}\int_{0}^{\eta}e^{-(ix\cdot w)\tau}d\tau d\eta k(x)dx-\dfrac{1}{2}\int_{ \mathbb{R}^{n}}(x\cdot w)^{2}k(x)dx\] \[= \dfrac{1}{|\xi|^{2}}\int_{\mathbb{R}^{n}}(x\cdot w)^{2}\int_{0}^ {|\xi|}\int_{0}^{\eta}\left(e^{-(ix\cdot w)\tau}-1\right)d\tau d\eta k(x)dx.\]
Lebesgue dominated convergence theorem yields that
\[\lim_{\xi\to 0}\left[\dfrac{1-\hat{k}(\xi)}{|\xi|^{2}}-\dfrac{1}{2n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx\right]=0.\]
Thus (ii) is verified and \(\dfrac{1}{2n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx=A\).
Assume that (ii) holds. First choose \(\xi=(0,...,\xi_{j},...,0)\), \(1\leq j\leq n\), with \(\xi_{j}>0\), then
\[\dfrac{1-\hat{k}(\xi)}{|\xi|^{2}}=\dfrac{1}{|\xi|^{2}}\left(1- \int_{\mathbb{R}^{n}}e^{-ix\cdot\xi}k(x)dx\right)=\dfrac{1}{\xi_{j}^{2}}\int_ {\mathbb{R}^{n}}\left(1-e^{-ix_{j}\xi_{j}}\right)k(x)dx\] (A.1) \[= \dfrac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}\left(1-\cos(x_{j}\xi _{j})+i\sin(x_{j}\xi_{j})\right)k(x)dx.\]
For any \(R>0\), we have
\[\dfrac{|1-\hat{k}(\xi)|}{|\xi|^{2}}\geq\dfrac{1}{\xi_{j}^{2}}\int_{B_{R}(0)} \left(1-\cos(x_{j}\xi_{j})\right)k(x)dx,\]
which yields that
\[\lim_{\xi_{j}\to 0}\frac{|1-\hat{k}(\xi)|}{|\xi|^{2}}\geq\frac{1}{2}\int_{B_{R}(0 )}x_{j}^{2}k(x)dx.\]
Since \(R\) is arbitrary and \(1\leq j\leq n\), one sees that
\[\frac{1}{2n}\int_{\mathbb{R}^{n}}|x|^{2}k(x)dx\leq A<+\infty.\] (A.2)
This also indicates that
\[\int_{\mathbb{R}^{n}}|x|k(x)dx<+\infty.\] (A.3)
Next still choose \(\xi=(0,...,\xi_{j},...,0)\), \(1\leq j\leq n\), with \(\xi_{j}>0\). Notice that
\[\frac{1-\hat{k}(\xi)}{|\xi|}=\frac{1}{\xi_{j}}\int_{\mathbb{R}^{n}}\left(1-e^ {-ix_{j}\xi_{j}}\right)k(x)dx=\frac{1}{\xi_{j}}\int_{\mathbb{R}^{n}}ix_{j} \int_{0}^{\xi_{j}}e^{-ix_{j}\eta}d\eta k(x)dx,\]
where we used \(x\cdot\xi=x_{j}\xi_{j}\). Due to (A.3), Lebesgue dominated convergence theorem can be applied and one sees that
\[0=\lim_{\xi\to 0}\frac{1-\hat{k}(\xi)}{|\xi|}=\int_{\mathbb{R}^{n}}ix_{j}k(x)dx,\]
i.e.,
\[\int_{\mathbb{R}^{n}}x_{j}k(x)dx=0,\ 1\leq j\leq n.\] (A.4)
Now thanks to (A.4), we have
\[\frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}\sin(x_{j}\xi_{j})k(x)dx\] \[= \frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}x_{j}\int_{0}^{\xi_{j} }\cos(x_{j}\eta)d\eta k(x)dx=\frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}x_{j} \int_{0}^{\xi_{j}}\left(\cos(x_{j}\eta)-1\right)d\eta k(x)dx\] \[= \frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}-x_{j}^{2}\int_{0}^{ \xi_{j}}\int_{0}^{\eta}\sin(x_{j}\tau)d\tau d\eta k(x)dx.\]
Thus (A.2) and Lebesgue dominated convergence theorem imply that
\[\lim_{\xi_{j}\to 0}\frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}\sin(x_{j}\xi_{j})k(x)dx=0.\]
Now in (A.1), letting \(\xi_{j}\to 0\), again it follows from (A.2) and Lebesgue dominated convergence theorem that
\[A = \lim_{\xi_{j}\to 0}\frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}} \left(1-\cos(x_{j}\xi_{j})\right)k(x)dx=\lim_{\xi_{j}\to 0}\frac{1}{\xi_{j}^{2}} \int_{\mathbb{R}^{n}}x_{j}\int_{0}^{\xi_{j}}\sin(x_{j}\eta)d\eta k(x)dx\] \[= \lim_{\xi_{j}\to 0}\frac{1}{\xi_{j}^{2}}\int_{\mathbb{R}^{n}}x_{j}^{2 }\int_{0}^{\xi_{j}}\int_{0}^{\eta}\cos(x_{j}\tau)d\tau d\eta k(x)dx=\frac{1}{2 }\int_{\mathbb{R}^{n}}x_{j}^{2}k(x)dx.\]
Hence
\[\int_{\mathbb{R}^{n}}x_{j}^{2}k(x)dx=2A=\frac{1}{n}\int_{\mathbb{R}^{n}}|x|^{2}k( x)dx,\ \ 1\leq j\leq n.\] (A.5)
Finally, it remains to show that \(\int_{\mathbb{R}^{n}}x_{j}x_{h}k(x)dx=0\), \(1\leq j,h\leq n,j\neq h.\) Choose \(\xi=(0,...,\xi_{j},...,\xi_{h},...,0)\) with \(j<h\), \(\xi_{j}>0\) and \(\xi_{h}=\lambda\xi_{j}\). Then thanks to (A.4) and (A.5), it follows that
\[\frac{1-\hat{k}(\xi)}{|\xi|^{2}}=\frac{1}{|\xi|^{2}}\left(1-\int_ {\mathbb{R}^{n}}e^{-ix\cdot\xi}k(x)dx\right)=\frac{1}{\xi_{j}^{2}+\lambda^{2} \xi_{j}^{2}}\int_{\mathbb{R}^{n}}\left(1-e^{-i(x_{j}+\lambda x_{h})\xi_{j}} \right)k(x)dx\] \[= \frac{1}{\xi_{j}^{2}+\lambda^{2}\xi_{j}^{2}}\int_{\mathbb{R}^{n} }i\left(x_{j}+\lambda x_{h}\right)\int_{0}^{\xi_{j}}e^{-i(x_{j}+\lambda x_{h })\eta}d\eta k(x)dx\] \[= \frac{1}{\xi_{j}^{2}+\lambda^{2}\xi_{j}^{2}}\int_{\mathbb{R}^{n} }i\left(x_{j}+\lambda x_{h}\right)\int_{0}^{\xi_{j}}\left(e^{-i(x_{j}+\lambda x _{h})\eta}-1\right)d\eta k(x)dx\] \[= \frac{1}{\xi_{j}^{2}+\lambda^{2}\xi_{j}^{2}}\int_{\mathbb{R}^{n} }\left(x_{j}+\lambda x_{h}\right)^{2}\int_{0}^{\xi_{j}}\int_{0}^{\eta}e^{-i(x _{j}+\lambda x_{h})\tau}d\tau d\eta k(x)dx.\]
Letting \(\xi_{j}\to 0\), Lebesgue dominated convergence theorem and (A.5) imply that
\[A=\frac{1}{2(1+\lambda^{2})}\int_{\mathbb{R}^{n}}\left(x_{j}+\lambda x_{h} \right)^{2}k(x)dx=A+\frac{\lambda}{1+\lambda^{2}}\int_{\mathbb{R}^{n}}x_{j}x_ {h}k(x)dx.\]
This indicates that
\[\int_{\mathbb{R}^{n}}x_{j}x_{h}k(x)dx=0,\]
since \(\lambda\neq 0\) is an arbitrary constant. The proof is complete.
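As a concrete illustration of Proposition 1.2, the following numeric check (an illustrative sketch, not part of the proof) uses the standard Gaussian kernel on \(\mathbb{R}^{2}\), for which \(\hat{k}(\xi)=e^{-|\xi|^{2}/2}\) under the convention \(\hat{k}(\xi)=\int e^{-ix\cdot\xi}k(x)dx\); both sides of the equivalence should come out to \(A=1/2\).

```python
import numpy as np

# Check (ii): (1 - k_hat(xi)) / |xi|^2 -> A as xi -> 0, for the standard Gaussian in R^2.
n = 2
for r in [1.0, 0.1, 0.01]:
    xi2 = r**2
    print(r, (1.0 - np.exp(-xi2 / 2.0)) / xi2)      # -> 1/2

# Check (i): (1/(2n)) * int |x|^2 k(x) dx, estimated by Monte Carlo.
rng = np.random.default_rng(0)
samples = rng.standard_normal((200000, n))
print("(1/2n) E|x|^2 =", np.mean(np.sum(samples**2, axis=1)) / (2 * n))  # -> 1/2
```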
**Data Availability Statement**
The authors confirm that this manuscript has no associated data.
|
2309.14230 | Competitive Networked Bivirus SIS spread over Hypergraphs | The paper deals with the spread of two competing viruses over a network of
population nodes, accounting for pairwise interactions and higher-order
interactions (HOI) within and between the population nodes. We study the
competitive networked bivirus susceptible-infected-susceptible (SIS) model on a
hypergraph introduced in Cui et al. [1]. We show that the system has, in a
generic sense, a finite number of equilibria, and the Jacobian associated with
each equilibrium point is nonsingular; the key tool is the Parametric
Transversality Theorem of differential topology. Since the system is also
monotone, it turns out that the typical behavior of the system is convergence
to some equilibrium point. Thereafter, we exhibit a tri-stable domain with
three locally exponentially stable equilibria. For different parameter regimes,
we establish conditions for the existence of a coexistence equilibrium (both
viruses infect separate fractions of each population node). | Sebin Gracy, Brian D. O. Anderson, Mengbin Ye, Cesar A. Uribe | 2023-09-25T15:41:58Z | http://arxiv.org/abs/2309.14230v1 | # Competitive Networked Bivirus SIS spread over Hypergraphs
###### Abstract
The paper deals with the spread of two competing viruses over a network of population nodes, accounting for pairwise interactions and higher-order interactions (HOI) within and between the population nodes. We study the competitive networked bivirus susceptible-infected-susceptible (SIS) model on a hypergraph introduced in Cui et al. [1]. We show that the system has, in a generic sense, a finite number of equilibria, and the Jacobian associated with each equilibrium point is nonsingular; the key tool is the Parametric Transversality Theorem of differential topology. Since the system is also monotone, it turns out that the typical behavior of the system is convergence to some equilibrium point. Thereafter, we exhibit a tri-stable domain with three locally exponentially stable equilibria. For different parameter regimes, we establish conditions for the existence of a coexistence equilibrium (both viruses infect separate fractions of each population node).
## I Introduction
The study of virus spread has been an active area of research for over two centuries. In particular, diverse scientific communities, such as physics [2], mathematics [3], computer science [4], automatic control [5], etc., have significantly aided in furthering our understanding of the complex mechanisms behind the spread of a virus. Fundamental to this effort has been the development of compartmental models where each individual is healthy and susceptible (S), infected with a virus (I), or has recovered from a viral infection (R). Two compartmental models, susceptible-infected-recovered (SIR) and susceptible-infected-susceptible (SIS) have garnered significant attention in several scientific disciplines, particularly in mathematical epidemiology. In contrast to the SIR model, the SIS model allows for the possibility of reinfection and is the focus of the present paper. More specifically, we will deal with networked SIS models, with each node in the network being representative of a large population, and the interconnection among the nodes denotes the possible spreading pathways for the virus.
The existing literature on modeling virus spread typically relies on the assumption that there is just a single virus present. However, one often encounters scenarios where there are two viruses, say virus 1 and virus 2, circulating in a meta-population (i.e., a network of population nodes). In such a context, said viruses could be cooperative, i.e., infection with virus 1 (resp. virus 2) increases the likelihood of simultaneous infection with virus 2 (resp. virus 1); see [6] for more details. Another possibility is for the two viruses to compete; infection with virus 1 (resp. virus 2) precludes the possibility of simultaneous infection with virus 2 (resp. virus 1) - this is the focus of the present paper.
Networked competitive multi-virus SIS models have been analyzed in substantial depth in recent times; see [8, 9, 10, 11, 12, 13, 14, 15]. A major drawback of networked competitive bivirus SIS models studied in the aforementioned papers is that they account only for pairwise interactions between individuals. In reality, interactions in social groups often involve more than two individuals - it is not unusual that an individual can _simultaneously_ interact with more than one other individual. This motivates the need for higher-order networks such as hypergraphs 1, i.e., graphs where an edge can connect more than two nodes, which are quite effective in representing higher-order interactions (HOI) [17]. Inspired by the approach in [18], an SIS model on a hypergraph has been proposed and analyzed in [19]. However, the analytic results therein relied on certain restrictions on the network structure. Overcoming this drawback, a networked SIS model on a hypergraph has been devised and studied in considerable detail in [20]. However, the modeling frameworks in [18, 19, 20] are restrictive in the sense that none of these account for the possibility of more than one virus simultaneously circulating in a given population. Addressing this shortcoming, a competitive networked bivirus SIS model on a hypergraph has been developed and analyzed in [1]. The set of equilibria for the model in [1] can be broadly classified into three categories: the disease-free equilibrium (both viruses have been eradicated), the boundary equilibria (one virus is dead, and the other is alive); and coexistence equilibria (two viruses infect separate fractions of every population node in the network). Nevertheless, the results in [1] have the following limitations: a) some of the findings therein have yet to be rigorously established, and b) the analysis, while improving our understanding of the existence and stability of various equilibria, is not exhaustive. The present paper aims to address the aforementioned gaps. Our main contributions, therefore, are as follows:
Footnote 1: Simplicial networks (see [16] for more details) have also been used for studying HOI, see [17].
* We show that the networked bivirus SIS system with HOI has, in a generic sense, a finite number of equilibria. Furthermore, for each equilibrium, the associated Jacobian is a nonsingular matrix; see Theorem 1. In so
doing, since our proof of Theorem 1 does not, unlike the proof of [1, Theorem 5.5], require the HOI infection rates to be set to zero, we establish the correctness of the claim raised in [1, Theorem 5.5]. Building off of Theorem 1 and leveraging the fact that the system is monotone as identified in [1, Theorem 5.5], we prove that the typical behavior of the bivirus SIS system with HOI is convergence to an equilibrium point; see Theorem 2.
2. We identify a parameter regime that not only establishes the existence of three equilibria (a single-virus endemic equilibrium corresponding to virus 1 (resp. virus 2) and the DFE) but also guarantees that all of the said equilibria are locally exponentially stable at the same time; see Proposition 1.
3. We identify a parameter regime, different from the one covered by Proposition 1, for the existence of a coexistence equilibrium. We do so under different configurations of the boundary equilibria, viz. both being unstable and both being stable; see Proposition 3 and Theorem 3, respectively.
Additionally, for the parameter regime covered by Proposition 1, we establish existence of a coexistence equilibrium; see Proposition 4.
**Notation**: We denote the set of real numbers by \(\mathbb{R}\) and the set of nonnegative real numbers by \(\mathbb{R}_{+}\). For any positive integer \(n\), we use \([n]\) to denote the set \(\{1,2,...,n\}\). We use **0** and **1** to denote the vectors whose entries all equal \(0\) and \(1\), respectively, and use \(I\) to denote the identity matrix. For a vector \(x\), we denote the diagonal square matrix with \(x\) along the diagonal by \(\mathrm{diag}(x)\). For any two vectors \(a,b\in\mathbb{R}^{n}\) we write \(a\geq b\) if \(a_{i}\geq b_{i}\) for all \(i\in[n]\), \(a>b\) if \(a\geq b\) and \(a\neq b\), and \(a\gg b\) if \(a_{i}>b_{i}\) for all \(i\in[n]\). Likewise, for any two matrices \(A,B\in\mathbb{R}^{n\times m}\), we write \(A\geq B\) if \(A_{ij}\geq B_{ij}\) for all \(i\in[n]\), \(j\in[m]\), and \(A>B\) if \(A\geq B\) and \(A\neq B\). For a square matrix \(M\), we use \(\sigma(M)\) to denote the spectrum of \(M\), \(\rho(M)\) to denote the spectral radius of \(M\), and \(s(M)\) to denote the spectral abscissa of \(M\), i.e., \(s(M)=\max\{\mathrm{Re}(\lambda):\lambda\in\sigma(M)\}\).
A real square matrix \(A\) is called Metzler if all its off-diagonal entries are nonnegative. A matrix \(A\) is said to be an M-matrix if all of its off-diagonal entries are nonpositive, and there exists a constant \(c>0\) such that, for some nonnegative \(B\) and \(c\geq\rho(B)\), \(A=cI-B\). All eigenvalues of an M-matrix have nonnegative real parts. Furthermore, if an M-matrix has an eigenvalue at the origin, we say it is singular; if each eigenvalue has a strictly positive real part, then we say it is nonsingular. If \(A(=[a_{ij}]_{n\times n})\) is a nonnegative matrix, then \(\rho(A)\) decreases monotonically with a decrease in \(a_{ij}\) for any \(i,j\in[n]\). The matrix \(A\) is reducible if, and only if, there is a permutation matrix \(P\) such that \(P^{\top}AP\) is block upper triangular; otherwise, \(A\) is said to be irreducible. If a nonnegative \(A\) is irreducible, and \(Ax=y\) for \(x>\textbf{0}\), then \(y>\textbf{0}\), and \(y\) cannot have a zero in every position where \(x\) has a zero.
## II Problem Formulation
### _Model_
Consider a network of \(n\) nodes. A node represents a well-mixed2 population of individuals. We will assume that the size of the population is fixed. We suppose two viruses, say virus 1 and virus 2, are spreading over such a network. Throughout this paper, we will assume that the two aforementioned viruses are competing. Through pairwise or HOI as described in more detail below, an otherwise healthy individual in node \(i\) gets infected with virus 1 (resp. virus 2) due to contact with either other individuals in node \(i\) who are infected with virus 1 (resp. virus 2) and/or with other individuals in node \(j\) (where \(j\) is a neighbor of \(i\)) who are infected with virus 1 (resp. virus 2). When a single interaction is involved (i.e., between two individuals in node \(i\) or between an individual in node \(i\) and an individual in node \(j\)), we say that the infection is caused due to _pairwise interactions_. An individual in node \(i\) could also be infected with virus 1 (resp. virus 2) due to _simultaneous_ interactions with infected individuals in nodes \(j\) and \(\ell\), where either a) \(j=i\), and/or \(\ell=i\), or b) \(j,\ell\) are neighbors of \(i\). Such interactions are referred to as _higher-order interactions_ (HOI). The notion of competition implies that no individual can be simultaneously infected with virus 1 and virus 2.
Footnote 2: Well-mixed means that the probability of any two individuals in a node interacting with each other is the same.
Footnote 3: Indeed, it is far more natural to have possibly different infection rates for each node; it is standard in the literature on classic SIS bivirus networked systems [8, 9, 10, 11, 12, 13, 21]. As evident below, we do not impose constraints on the values of the nonnegative matrices capturing the interactions, and hence the analysis does not differ materially. We choose this particular notation to remain consistent with earlier literature on epidemic models with HOI [20].
We assume that the pairwise infection (resp. HOI) rate with respect to virus \(k\) is the same for all nodes, denoted by \(\beta_{1}^{k}\) (resp. \(\beta_{2}^{k}\)) for all \(i\in[n]\) and \(k\in[2]\)3. An individual infected with virus \(k\) recovers from said infection at a healing rate \(\delta_{i}^{k}\) and immediately becomes susceptible to infection by virus 1 or virus 2. All individuals within a node have the same healing rate with respect to virus \(k\); individuals in different nodes possibly have different healing rates. We say that node \(i\) is healthy if all individuals in node \(i\) are healthy; otherwise, we say it is infected. Within the same node, it is possible for a fraction of individuals infected with virus \(1\) and a different fraction infected with virus \(2\) to exist simultaneously.
As mentioned previously, diseases could spread due to pairwise interactions and HOI. In case of the former, if an individual in node \(j\) can infect an individual in node \(i\) with virus \(k\), then, with \(a_{ij}^{k}(\geq 0)\) denoting the strength of interactions between an individual in node \(j\) and an individual in node \(i\) with respect to spread of virus \(k\), we have that \(a_{ij}^{k}>0\); otherwise \(a_{ij}^{k}=0\). For the case of HOI, if an individual in node \(i\) gets infected with virus \(k\) due to simultaneous interactions with individuals in nodes \(j\) and \(\ell\), then, with \(b_{ij\ell}^{k}\) denoting the strength of interaction that nodes \(j\) and \(\ell\) together have on node \(i\) with respect to the spread of virus \(k\), we have that \(b_{ij\ell}^{k}>0\); else, \(b_{ij\ell}^{k}=0\). Let \(x_{i}^{k}(t)\) denote the fraction of individuals infected with virus \(k\) in
node \(i\) at time instant \(t\). The evolution of this fraction can, therefore, be represented by the following scalar differential equation [1, Section 5], where, for \(i=1,2,\ldots,n\), we have
\[\dot{x}_{i}^{1}= -\delta_{i}^{1}x_{i}^{1}+\beta_{1}^{1}(1-x_{i}^{1}-x_{i}^{2})\sum_{j=1}^{n}a_{ij}^{1}x_{j}^{1}+\beta_{2}^{1}(1-x_{i}^{1}-x_{i}^{2})\sum_{j,\ell=1}^{n}b_{ij\ell}^{1}x_{j}^{1}x_{\ell}^{1}\] \[\dot{x}_{i}^{2}= -\delta_{i}^{2}x_{i}^{2}+\beta_{1}^{2}(1-x_{i}^{1}-x_{i}^{2})\sum_{j=1}^{n}a_{ij}^{2}x_{j}^{2}+\beta_{2}^{2}(1-x_{i}^{1}-x_{i}^{2})\sum_{j,\ell=1}^{n}b_{ij\ell}^{2}x_{j}^{2}x_{\ell}^{2} \tag{1}\]
Define \(D^{1}=\operatorname{diag}(\delta_{i}^{1})\), where \(i\in[n]\), and define \(D^{2}\) analogously. Define \(X^{1}=\operatorname{diag}(x_{i}^{1})\), where \(i\in[n]\), and define \(X^{2}\) analogously. Let \(A^{1}=[a_{ij}^{1}]_{n\times n}\), and \(A^{2}=[a_{ij}^{2}]_{n\times n}\). Let \(B_{i}^{k}=[b_{ij\ell}^{k}]_{n\times n}\), for each \(i\in[n]\) and \(k\in[2]\). Let \(x^{k}=[x_{1}^{k}\qquad x_{2}^{k}\qquad\ldots\qquad x_{n}^{k}]^{\top}\) for \(k=1,2\).
Therefore, in vector form, equation (1) can be written as:
\[\dot{x}^{1}= -D^{1}x^{1}+\beta_{1}^{1}(I-X^{1}-X^{2})A^{1}x^{1}+\] \[\beta_{2}^{1}(I-X^{1}-X^{2})((x^{1})^{\top}B_{1}^{1}x^{1},(x^{1})^{\top}B_{2}^{1}x^{1},\ldots,(x^{1})^{\top}B_{n}^{1}x^{1})^{\top}\] \[\dot{x}^{2}= -D^{2}x^{2}+\beta_{1}^{2}(I-X^{1}-X^{2})A^{2}x^{2}+\] \[\beta_{2}^{2}(I-X^{1}-X^{2})((x^{2})^{\top}B_{1}^{2}x^{2},(x^{2})^{\top}B_{2}^{2}x^{2},\ldots,(x^{2})^{\top}B_{n}^{2}x^{2})^{\top} \tag{2}\]
Throughout this document, we will drop the superscript \(k\) while considering the single-virus case.
We note that system (2) is a special case of [1, system 5.5] in the following sense: System (2) only accounts for a) the case where, for \(k=1,2\), \(\beta_{1}^{k}\) and \(\beta_{2}^{k}\) are identical for every node \(i\), \(i=1,2,\ldots,n\), and b) the case where virus 1 (resp. virus 2) spreads only due to contact with the infected individuals. In contrast, the model in [1] (see [1, system 5.5]) allows for the possibility of \(\beta_{1}^{k}\) and \(\beta_{2}^{k}\) being not necessarily the same for every node. Furthermore, it also allows for the possibility of the viruses spreading through additional mediums such as a water distribution network, a public transit network, etc.
**Remark 1**: _Note that setting \(\beta_{2}^{k}=0\) for \(k=1,2\) results in system (2) coinciding with the classic networked bivirus SIS model studied in, among others, [8, 9, 10, 11, 12, 13]. Setting \(x^{1}(0)=\textbf{0}\) (resp. \(x^{2}(0)=\textbf{0}\)) results in system (2) coinciding with the model used for studying the spread of a single virus over hypergraphs in [20]._
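To make the model concrete, the following Python sketch integrates (2) by forward Euler on a small network. All numerical values here are illustrative assumptions (chosen merely so that the healing rates are positive and the \(A^{k}\) are irreducible, in line with the assumptions introduced below); the HOI terms \((x^{k})^{\top}B_{i}^{k}x^{k}\) are encoded via einsum.

```python
import numpy as np

# Illustrative forward-Euler simulation of the bivirus system (2) with HOI.
rng = np.random.default_rng(1)
n = 5
D1 = np.diag(rng.uniform(0.8, 1.2, n))              # D^1, D^2: positive diagonal
D2 = np.diag(rng.uniform(0.8, 1.2, n))
A1 = rng.uniform(0.1, 1.0, (n, n))                  # dense, hence irreducible
A2 = rng.uniform(0.1, 1.0, (n, n))
B1 = rng.uniform(0.0, 0.5, (n, n, n))               # B1[i] plays the role of B_i^1
B2 = rng.uniform(0.0, 0.5, (n, n, n))
b11, b21 = 0.4, 0.2                                 # beta_1^1, beta_2^1
b12, b22 = 0.4, 0.2                                 # beta_1^2, beta_2^2

def f(x1, x2):
    s = 1.0 - x1 - x2                               # entrywise susceptible fractions
    hoi1 = np.einsum('j,ijl,l->i', x1, B1, x1)      # (x^1)^T B_i^1 x^1, i = 1..n
    hoi2 = np.einsum('j,ijl,l->i', x2, B2, x2)
    dx1 = -D1 @ x1 + s * (b11 * (A1 @ x1) + b21 * hoi1)
    dx2 = -D2 @ x2 + s * (b12 * (A2 @ x2) + b22 * hoi2)
    return dx1, dx2

x1 = rng.uniform(0.0, 0.3, n)
x2 = rng.uniform(0.0, 0.3, n)
dt = 0.01
for _ in range(20000):
    dx1, dx2 = f(x1, x2)
    x1, x2 = x1 + dt * dx1, x2 + dt * dx2
print("x1(T) =", np.round(x1, 3))                   # typically converges to some
print("x2(T) =", np.round(x2, 3))                   # equilibrium (cf. Section III)
```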
The model in system (2) has three kinds of equilibria, viz. the healthy state or disease-free equilibrium (DFE), \((\textbf{0},\textbf{0})\); single-virus endemic equilibria, of the form \((\bar{x}^{1},\textbf{0})\) for virus 1 and \((\textbf{0},\bar{x}^{2})\) for virus 2, where \(\textbf{0}\ll\bar{x}^{k}\ll\textbf{1}\) for \(k=1,2\); and coexistence equilibria, \((\bar{x}^{1},\bar{x}^{2})\), where, as we will show in Lemma 1, \(\textbf{0}\ll\bar{x}^{1},\bar{x}^{2}\ll\textbf{1}\), and, furthermore, \(\bar{x}^{1}+\bar{x}^{2}\ll\textbf{1}\). It is unknown whether the single-virus endemic equilibria corresponding to virus \(k\) are unique, in contrast to the classic bivirus SIS network model without HOI.
The Jacobian of system (2) evaluated at an arbitrary point, \((x^{1},x^{2})\), in the state space is as given in (3).
\[J(x^{1},x^{2})=\begin{bmatrix}J_{11}&J_{12}\\ J_{21}&J_{22}\end{bmatrix}, \tag{3}\]
where
\[J_{11}= -D^{1}+\beta_{1}^{1}(I-X^{1}-X^{2})A^{1}-\operatorname{diag}(\beta_{1}^{1}A^{1}x^{1})+\beta_{2}^{1}(I-X^{1}-X^{2})O_{1}(x^{1})-\beta_{2}^{1}O_{2}(x^{1}) \tag{4}\] \[J_{12}= -\operatorname{diag}(\beta_{1}^{1}A^{1}x^{1})-\beta_{2}^{1}\operatorname{diag}((x^{1})^{\top}B_{i}^{1}x^{1})_{i=1,2,\ldots,n} \tag{5}\] \[J_{21}= -\operatorname{diag}(\beta_{1}^{2}A^{2}x^{2})-\beta_{2}^{2}\operatorname{diag}((x^{2})^{\top}B_{i}^{2}x^{2})_{i=1,2,\ldots,n} \tag{6}\] \[J_{22}= -D^{2}+\beta_{1}^{2}(I-X^{1}-X^{2})A^{2}-\operatorname{diag}(\beta_{1}^{2}A^{2}x^{2})+\beta_{2}^{2}(I-X^{1}-X^{2})O_{3}(x^{2})-\beta_{2}^{2}O_{4}(x^{2}) \tag{7}\]
The terms \(O_{1}(x^{1})\), \(O_{2}(x^{1})\), \(O_{3}(x^{2})\) and \(O_{4}(x^{2})\) are as given in (8), (9), (10) and (11), respectively.
\[O_{1}(x^{1})=\begin{bmatrix}(x^{1})^{\top}(B_{1}^{1}+(B_{1}^{1})^{\top})\\ \vdots\\ (x^{1})^{\top}(B_{n}^{1}+(B_{n}^{1})^{\top})\end{bmatrix} \tag{8}\] \[O_{2}(x^{1})=\operatorname{diag}((x^{1})^{\top}B_{i}^{1}x^{1})_{i=1,2,\ldots,n} \tag{9}\] \[O_{3}(x^{2})=\begin{bmatrix}(x^{2})^{\top}(B_{1}^{2}+(B_{1}^{2})^{\top})\\ \vdots\\ (x^{2})^{\top}(B_{n}^{2}+(B_{n}^{2})^{\top})\end{bmatrix} \tag{10}\] \[O_{4}(x^{2})=\operatorname{diag}((x^{2})^{\top}B_{i}^{2}x^{2})_{i=1,2,\ldots,n} \tag{11}\]
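As a sanity check on (3)-(11), the following sketch (reusing the illustrative parameters and the vector field \(f\) from the simulation sketch above) compares the closed-form Jacobian with a central finite-difference Jacobian at a random interior point; the two should agree to roughly \(10^{-8}\).

```python
import numpy as np

# Cross-check the closed-form Jacobian (3)-(11) against finite differences.
# Assumes n, D1, D2, A1, A2, B1, B2, b11, b21, b12, b22, rng, and f(x1, x2)
# from the earlier illustrative simulation sketch.
def jacobian_fd(x1, x2, eps=1e-6):
    z = np.concatenate([x1, x2])
    def F(z):
        d1, d2 = f(z[:n], z[n:])
        return np.concatenate([d1, d2])
    J = np.zeros((2 * n, 2 * n))
    for j in range(2 * n):
        e = np.zeros(2 * n); e[j] = eps
        J[:, j] = (F(z + e) - F(z - e)) / (2 * eps)
    return J

def jacobian_closed_form(x1, x2):
    s = 1.0 - x1 - x2
    O1 = np.einsum('j,ijl->il', x1, B1 + B1.transpose(0, 2, 1))  # rows (x^1)^T(B_i^1+(B_i^1)^T)
    O3 = np.einsum('j,ijl->il', x2, B2 + B2.transpose(0, 2, 1))
    q1 = b11 * (A1 @ x1) + b21 * np.einsum('j,ijl,l->i', x1, B1, x1)
    q2 = b12 * (A2 @ x2) + b22 * np.einsum('j,ijl,l->i', x2, B2, x2)
    J11 = -D1 + s[:, None] * (b11 * A1 + b21 * O1) - np.diag(q1)  # eq. (4)
    J12 = -np.diag(q1)                                            # eq. (5)
    J21 = -np.diag(q2)                                            # eq. (6)
    J22 = -D2 + s[:, None] * (b12 * A2 + b22 * O3) - np.diag(q2)  # eq. (7)
    return np.block([[J11, J12], [J21, J22]])

x1 = rng.uniform(0.0, 0.3, n); x2 = rng.uniform(0.0, 0.3, n)
print(np.max(np.abs(jacobian_fd(x1, x2) - jacobian_closed_form(x1, x2))))
```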
We will need the following assumptions to ensure the model is well-defined.
**Assumption 1**: _The matrix \(D^{k}\), for \(k=1,2\), is a positive diagonal matrix. The matrix \(A^{k}\), for \(k=1,2\), is nonnegative. The matrix \(B_{i}^{k}\) is nonnegative for all \(i\in[n]\) and \(k\in[2]\)._
**Assumption 2**: _The matrix \(A^{k}\), for \(k=1,2\), is irreducible. We define the set \(\mathcal{D}\) as follows:_
\[\mathcal{D}:=\{(x^{1},x^{2})\mid x^{k}\geq\textbf{0},k=1,2,\sum_{k=1}^{2}x^{k} \leq\textbf{1}\}. \tag{12}\]
It is known that the set \(\mathcal{D}\) is positively invariant, and that the DFE is always an equilibrium for system (2); see [1, Lemma 5.1]. The fact that \(\mathcal{D}\) is positively invariant guarantees that the state values \(x_{i}^{k},k\in[2],i\in[n]\), always stay in the \([0,1]\) interval. Since the states represent fractions of an infected population node, if the states were to take values outside the \([0,1]\) interval, then those would not correspond to physical reality.
### _Problem Statements_
With respect to system (2), we aim to answer the following questions in this paper conclusively:
1. What is the typical behavior the trajectories exhibit as time goes to infinity?
2. Can we identify a parameter regime such that multiple equilibria are simultaneously stable?
3. Can we identify sufficient conditions for the existence of a coexistence equilibrium? Furthermore, can we establish the stability properties of such an equilibrium based on knowledge of the stability properties of the boundary equilibria?
### _Preliminary Lemmas and analysis of healthy state_
In this subsection, we will establish certain preliminary results on the nature of equilibria of system (1), and recall some of the results on irreducible matrices - all of these will aid in the development of the main results of the paper.
**Lemma 1**: _Consider system (2) under Assumptions 1 and 2. If \(\bar{x}=(\bar{x}^{1},\bar{x}^{2})\in\mathcal{D}\) is an equilibrium of (2), then, for each \(k\in[2]\), either \(\bar{x}^{k}=\textbf{0}\), or \(\textbf{0}\ll\bar{x}^{k}\ll\textbf{1}\). Moreover, \(\sum_{k=1}^{2}\bar{x}^{k}\ll\textbf{1}\)._
The proof is inspired by [11, Lemma 3.1].
_Proof:_ It is clear that \((\textbf{0},\textbf{0})\) is an equilibrium of (2). Therefore, in the rest of the proof, we will show that any non-zero equilibrium \(\bar{x}=(\bar{x}^{1},\bar{x}^{2})\) of (2) must satisfy, for each \(k\in[2]\), \(\textbf{0}\ll\bar{x}^{k}\ll\textbf{1}\) and \(\sum_{k=1}^{2}\bar{x}^{k}\ll\textbf{1}\). We start off by showing
that \(\bar{x}^{1}+\bar{x}^{2}\ll\textbf{1}\). For any \(i\in[n]\), observe that the following is satisfied:
\[\dot{\bar{x}}_{i}^{1}+\dot{\bar{x}}_{i}^{2}= -\delta_{i}^{1}\bar{x}_{i}^{1}-\delta_{i}^{2}\bar{x}_{i}^{2}+\beta_{1}^{1}(1-\bar{x}_{i}^{1}-\bar{x}_{i}^{2})\sum_{j=1}^{n}a_{ij}^{1}\bar{x}_{j}^{1}\] \[+\beta_{2}^{1}(1-\bar{x}_{i}^{1}-\bar{x}_{i}^{2})\sum_{j,\ell=1}^{n}b_{ij\ell}^{1}\bar{x}_{j}^{1}\bar{x}_{\ell}^{1}+\beta_{1}^{2}(1-\bar{x}_{i}^{1}-\bar{x}_{i}^{2})\sum_{j=1}^{n}a_{ij}^{2}\bar{x}_{j}^{2}\] \[+\beta_{2}^{2}(1-\bar{x}_{i}^{1}-\bar{x}_{i}^{2})\sum_{j,\ell=1}^{n}b_{ij\ell}^{2}\bar{x}_{j}^{2}\bar{x}_{\ell}^{2} \tag{13}\]
Suppose that, for some \(i\in[n]\), \(\bar{x}_{i}^{1}+\bar{x}_{i}^{2}=1\). Therefore, since, by Assumption 1, \(\delta_{i}^{k}>0\) for \(k=1,2\), and since \((\bar{x}^{1},\bar{x}^{2})\in\mathcal{D}\), from (13), it is clear that \(\dot{\bar{x}}_{i}^{1}+\dot{\bar{x}}_{i}^{2}<0\). However, since by assumption \(\bar{x}=(\bar{x}^{1},\bar{x}^{2})\) is an equilibrium, it must be that \(\dot{\bar{x}}_{i}^{1}+\dot{\bar{x}}_{i}^{2}=0\), which is a contradiction. Therefore, for all \(i\in[n]\), \(\bar{x}_{i}^{1}+\bar{x}_{i}^{2}<1\), which implies that \(\sum_{k=1}^{2}\bar{x}^{k}\ll\textbf{1}\); thus guaranteeing that \(\bar{x}^{k}\ll\textbf{1}\) for \(k=1,2\).
We are left to show that \(\bar{x}^{k}\gg\textbf{0}\) for \(k=1,2\). To this end, suppose that \(\bar{x}^{1}>\textbf{0}\) is an equilibrium point for which there exists at least one (but possibly more) \(i\in[n]\) such that \(\bar{x}_{i}^{1}=0\). Note that the equilibrium version of the first line of equation (2) yields the following:
\[\dot{\bar{x}}^{1}= -D^{1}\bar{x}^{1}+\beta_{1}^{1}(I-\bar{X}^{1}-\bar{X}^{2})A^{1} \bar{x}^{1}+\] \[\beta_{2}^{1}(I-\bar{X}^{1}-\bar{X}^{2})((\bar{x}^{1})^{\top}B_{ 1}^{1}\bar{x}^{1},(\bar{x}^{1})^{\top}B_{2}^{1}\bar{x}^{1},\ldots,(\bar{x}^{1}) ^{\top}B_{n}^{1}\bar{x}^{1})^{\top} \tag{14}\]
By noting that \(\bar{x}^{1}\) is an equilibrium point, and by a suitable rearrangement of terms, we obtain:
\[\bar{x}^{1}= S\bar{x}^{1}, \tag{15}\]
where
\[S= (D^{1})^{-1}\beta_{1}^{1}(I-\bar{X}^{1}-\bar{X}^{2})A^{1}+(D^{1})^{-1}\beta_{2}^{1}(I-\bar{X}^{1}-\bar{X}^{2})\begin{bmatrix}(\bar{x}^{1})^{\top}B_{1}^{1}\\ \vdots\\ (\bar{x}^{1})^{\top}B_{n}^{1}\end{bmatrix}. \tag{16}\]
By Assumptions 1 and 2, it is clear that the matrix \(S\) is nonnegative and irreducible. Since, by assumption, \(\bar{x}^{1}>\textbf{0}\), from (15), coupled with the properties of irreducible nonnegative matrices recalled above, we have the following: i) \(S\bar{x}^{1}>\textbf{0}\), and ii) there is at least one \(i\in[n]\) such that \(\bar{x}_{i}^{1}=0\) but \((S\bar{x}^{1})_{i}>0\). Note that ii) contradicts (15). Therefore, if \(\bar{x}^{1}>\textbf{0}\) is an equilibrium point, then it must be that \(\bar{x}^{1}\gg\textbf{0}\). By an analogous argument, it can be shown that \(\bar{x}^{2}\gg\textbf{0}\), thus completing the proof. \(\Box\)
**Lemma 2**: _[_10_, Proposition 1]_ _Suppose that \(\Lambda\) is a negative diagonal matrix and \(N\) is an irreducible nonnegative matrix. Let \(M\) be the irreducible Metzler matrix \(M=\Lambda+N\). Then, \(s(M)<0\) if and only if \(\rho(-\Lambda^{-1}N)<1\), \(s(M)=0\) if and only if \(\rho(-\Lambda^{-1}N)=1\), and \(s(M)>0\) if and only if \(\rho(-\Lambda^{-1}N)>1\)._
**Lemma 3**: _[_22_, Proposition 2]_ _Let \(A\in\mathbb{R}^{n\times n}\) be Metzler. Then, \(A\) is Hurwitz if, and only if, there exists an \(x\in\mathbb{R}^{n}\) such that \(x\gg\textbf{0}\) and \(Ax\ll 0\)._
**Lemma 4**: _[_23_, Chapter 8.3]_ _[_24_, Theorem 2.7]_ _Suppose that \(N\) is an irreducible nonnegative matrix. Then,_
1. \(r=\rho(N)\) _is a simple eigenvalue of_ \(N\)_._
2. _There is an eigenvector_ \(\zeta\gg\textbf{0}\) _corresponding to the eigenvalue_ \(r\)_._
3. \(x>\textbf{0}\) _is an eigenvector only if_ \(Nx=rx\) _and_ \(x\gg\textbf{0}\)_._
4. _If_ \(A\) _is a nonnegative matrix such that_ \(A<N\)_, then_ \(\rho(A)<\rho(N)\)_._ \(\blacksquare\)__
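The equivalence in Lemma 2 is easy to check numerically; the following tiny sketch (with an assumed random example) verifies that the sign of \(s(M)\) matches the sign of \(\rho(-\Lambda^{-1}N)-1\).

```python
import numpy as np

# Numeric check of Lemma 2 on a random example.
rng = np.random.default_rng(2)
Lam = -np.diag(rng.uniform(1.0, 2.0, 4))            # negative diagonal
N = rng.uniform(0.1, 1.0, (4, 4))                   # strictly positive => irreducible
M = Lam + N                                         # irreducible Metzler
s_M = np.max(np.real(np.linalg.eigvals(M)))
rho = np.max(np.abs(np.linalg.eigvals(-np.linalg.inv(Lam) @ N)))
print(s_M, rho)                                     # s(M) < 0 iff rho < 1, etc.
```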
It can be seen that \((\textbf{0},\textbf{0})\) is an equilibrium of (2), and is referred to as the disease-free equilibrium (DFE). We recall a sufficient condition for local stability of the DFE.
**Lemma 5**: _[_1_, Theorem 5.2, statement 1]_ _Consider system (2) under Assumptions 1 and 2. If, for \(k=1,2\), \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})<1\), then the DFE is locally stable._
Note that the guarantees provided by Lemma 5 are only local. It turns out that the DFE, under appropriate conditions, is endowed with stronger stability guarantees. We define for \(k=1,2\) the following matrices:
\[R^{k}:=\begin{bmatrix}\mathbf{1}^{\top}B_{1}^{k}\\ \vdots\\ \mathbf{1}^{\top}B_{n}^{k}\end{bmatrix}.\]
With the matrices \(R^{k}\), \(k=1,2\), in hand, we can recall the following result.
**Lemma 6**: _[_1_, Theorem 5.2, statement 2]_ _Consider system (2) under Assumptions 1 and 2. If, for \(k=1,2\), \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k}+\beta_{2}^{k}(D^{k})^{-1}R^{k})<1\), then the DFE is globally exponentially stable._
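For a quick numeric use of Lemma 6, the sketch below builds an illustrative random instance and evaluates the spectral radius in the lemma, with \(R^{k}\) formed as displayed above (the matrix whose \(i\)-th row is \(\mathbf{1}^{\top}B_{i}^{k}\), which dominates the HOI term since \(x^{k}\leq\mathbf{1}\)); all numerical values are assumptions.

```python
import numpy as np

# Evaluate the GES condition of Lemma 6 for one virus on a random instance.
rng = np.random.default_rng(3)
n = 4
D = np.diag(rng.uniform(2.0, 3.0, n))               # healing rates D^k
A = rng.uniform(0.0, 0.3, (n, n))                   # pairwise matrix A^k
B = rng.uniform(0.0, 0.1, (n, n, n))                # B[i] plays the role of B_i^k
R = B.sum(axis=1)                                   # row i of R^k: 1^T B_i^k
beta1, beta2 = 0.5, 0.5
Dinv = np.linalg.inv(D)
rho = np.max(np.abs(np.linalg.eigvals(beta1 * Dinv @ A + beta2 * Dinv @ R)))
print("rho =", rho)                                 # rho < 1 => DFE globally exp. stable
```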
## III Monotone dynamical systems and competitive bivirus networked SIS models with HOI
Monotone dynamical systems (MDS) are a class of systems that has found resonance in mathematical epidemiology; one of the major reasons for this is the fact that MDS, assuming that they generically have a finite number of equilibria, converge to a (stable) equilibrium point for almost all initial conditions. Here, the term "almost all" means: for all but a set of initial conditions that has measure zero, while "generically" refers to all but a set of parameter values that has measure zero; an algebraic or semi-algebraic set defines this set of exceptional values. It is known that under Assumptions 1 and 2, system (2) is monotone; see [1, Theorem 5.5]. That is, suppose that \((x_{A}^{1}(0),x_{A}^{2}(0))\) and \((x_{B}^{1}(0),x_{B}^{2}(0))\) are two initial conditions in \(\text{int}(\mathcal{D})\) satisfying i) \(x_{A}^{1}(0)>x_{B}^{1}(0)\) and ii) \(x_{A}^{2}(0)<x_{B}^{2}(0)\). Since system (2) is monotone, it follows that, for all \(t\in\mathbb{R}_{\geq 0}\), i) \(x_{A}^{1}(t)>x_{B}^{1}(t)\), and ii) \(x_{A}^{2}(t)<x_{B}^{2}(t)\). However, since the
proof for finiteness of equilibria in [1, Theorem 5.5] is not complete, it leaves open the issue of generic convergence to an equilibrium point. To remedy this, we provide a different proof for generic finiteness of equilibria that does not rely on \(\beta_{2}^{k}=0\) for \(k=1,2\).
Given that nonlinear systems can have complex equilibrium patterns, including a continuum of equilibria for the classic bivirus network model, we establish that for generic parameter matrices, system (1) has a finite number of equilibria. We use arguments very much like those in [12]. Essentially because the healthy equilibrium and the single-virus boundary equilibria can be conveniently studied using single-virus techniques, it is easily established that there are no continua of equilibria confined to any boundary, i.e. any continuum of equilibria necessarily includes a continuum of coexistence equilibria. Therefore, we focus on showing that such equilibria cannot exist for generic parameter values. The tool is the Parametric Transversality Theorem, see [25, p. 145] and [26, p. 68]. The main result is as follows:
**Theorem 1**: _Consider the model of (2), under Assumptions 1 and 2. With any fixed matrices \(A^{k}\) and nonnegative \(B_{i}^{k}\), and the exclusion of a set of values for the entries of \(D^{1},D^{2}\) of measure zero, the number of coexistence equilibrium points is finite, and the associated vector field zero is nondegenerate, i.e. the associated Jacobian is nonsingular. Similarly, with any fixed \(D^{1},D^{2}\) and \(B_{i}^{k}\), and the exclusion of a set of values for the entries of \(A^{1},A^{2}\) of measure zero, the same properties of equilibrium points hold._
See Appendix. \(\Box\)
Theorem 1, coupled with the fact that system (2) is monotone, allows us to leverage Hirsch's generic convergence theorem [27] to draw conclusions on the limiting behavior of system (2) outside of the specific conditions identified in Lemma 5. We have the following result.
**Theorem 2**: _Consider system (2) under Assumptions 1 and 2. For all initial conditions \((x^{1}(0),x^{2}(0))\in\mathcal{D}\) except possibly for a set of measure zero, the system (2) will converge to an equilibrium. If the system does not converge to an equilibrium, it is on a nonattractive limit cycle._
In words, Theorem 2 establishes that the typical behavior of system (2) is convergence to _some_ equilibrium; this could be healthy, or (one of the possibly many) single-virus boundary equilibria, or a coexistence equilibrium. It further says that limit cycles, if any, are nonattractive. No more complicated behavior is allowed; chaos can be ruled out, see [28]. Thus, Theorem 2 answers question i) raised in Section II.
Theorem 2 strengthens the result in [11, Theorem 3.6] by extending the generic convergence behavior to bi-virus SIS models that also account for HOI. Furthermore, it establishes the correctness of a similar claim raised in [1, Theorem 5.5].
## IV Existence and local stability of boundary equilibria
In this section, we identify a parameter regime that permits three equilibria of the bivirus system (2) to be simultaneously locally exponentially stable. Subsequently, for a parameter regime different from the one mentioned above, we identify a condition for the existence and instability of a boundary equilibrium. Finally, when there is only one virus, we identify a condition for the existence and local exponential stability of an endemic equilibrium.
**Proposition 1**: _Consider system (2) under Assumptions 1 and 2, and \(B_{i}^{k}\geq 0\) for all \(i\in[n]\) and \(k\in[2]\). Define, for \(k=1,2\), \(\mathbf{1}_{B^{k}}\in\{0,1\}^{n}\) by \((\mathbf{1}_{B^{k}})_{i}=1\) if \(B_{i}^{k}\neq\mathbf{0}\); otherwise \((\mathbf{1}_{B^{k}})_{i}=0\). Suppose that the following conditions are fulfilled for \(k=1,2\):_
1. \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})<1\)_, and_
2. \(\min_{i\,\text{s.t.}\,B_{i}^{k}\neq\mathbf{0}}\left(\frac{\beta_{1}^{k}}{\delta_{i}^{k}}(A^{k}\mathbf{1}_{B^{k}})_{i}+\frac{\beta_{2}^{k}}{2\delta_{i}^{k}}\mathbf{1}_{B^{k}}^{\top}B_{i}^{k}\mathbf{1}_{B^{k}}\right)>2\)_._
_Then, the following statements are true:_
i) _The DFE is locally exponentially stable;_
ii) _there exist equilibria_ \(\bar{x}^{k}\gg\mathbf{0}\) _such that_ \(\bar{x}^{k}_{i}\geq\frac{1}{2}\) _for_ \(k=1,2\)_, for any_ \(i\) _such that_ \(B_{i}^{k}\neq\mathbf{0}\)_;_
iii) _any such equilibrium point_ \((\bar{x}^{1},\mathbf{0})\) _is locally exponentially stable; and_
iv) _any such equilibrium point_ \((\mathbf{0},\bar{x}^{2})\) _is locally exponentially stable._
The proof is inspired by [20, Theorem 5.1, statements iv) and v)].
_Proof of statement i):_ Note that the Jacobian evaluated at the DFE is as follows:
\[J(\mathbf{0},\mathbf{0})=\begin{bmatrix}-D^{1}+\beta_{1}^{1}A^{1}&\mathbf{0}\\ \mathbf{0}&-D^{2}+\beta_{1}^{2}A^{2}\end{bmatrix}.\]
By assumption, \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})<1\) for \(k=1,2\). Therefore, from Lemma 2, it must be that \(s(-D^{k}+\beta_{1}^{k}A^{k})<0\) for \(k=1,2\). Since \(J(\mathbf{0},\mathbf{0})\) is block diagonal, with \(-D^{1}+\beta_{1}^{1}A^{1}\) and \(-D^{2}+\beta_{1}^{2}A^{2}\) as the only blocks along the main diagonal, this implies that \(s(J(\mathbf{0},\mathbf{0}))<0\). Local exponential stability of the DFE, then, follows from [29, Theorem 4.15 and Corollary 4.3].
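To make the spectral condition concrete, the following Python sketch (illustrative only; the matrices are randomly generated rather than taken from the paper) numerically checks the Lemma 2 equivalence used above: for a positive diagonal \(D\) and a nonnegative irreducible \(A\), \(\rho(\beta D^{-1}A)<1\) holds exactly when \(s(-D+\beta A)<0\).

```python
# Numerical sanity check (not part of the proof) of the Lemma 2 equivalence:
# for positive diagonal D and nonnegative irreducible A,
# rho(beta * D^{-1} A) < 1  <=>  s(-D + beta * A) < 0.
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.random((n, n)) + 0.05      # nonnegative and (generically) irreducible
D = np.diag(rng.random(n) + 0.5)   # positive diagonal healing rates

for beta in (0.1, 0.5, 2.0):
    rho = max(abs(np.linalg.eigvals(beta * np.linalg.inv(D) @ A)))
    s = max(np.linalg.eigvals(-D + beta * A).real)  # spectral abscissa
    print(f"beta={beta}: rho={rho:.3f}, s={s:.3f}, "
          f"agreement={(rho < 1) == (s < 0)}")
```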
_Proof of statement ii):_ See [20, Theorem 5.1, statement iv)].
_Proof of statement iii):_ Consider the equilibrium point \((\bar{x}^{1},\mathbf{0})\), and observe that the Jacobian evaluated at this equilibrium is as follows:
\[J(\bar{x}^{1},\mathbf{0})=\begin{bmatrix}\bar{J}_{11}&\bar{J}_{12}\\ \mathbf{0}&\bar{J}_{22}\end{bmatrix}, \tag{17}\]
where
\[\bar{J}_{11} =-D^{1}+\beta_{1}^{1}(I-\bar{X}^{1})A^{1}-\mathrm{diag}(\beta_{1}^ {1}A^{1}\bar{x}^{1})+\] \[\beta_{2}^{1}(I-\bar{X}^{1})O_{1}(\bar{x}^{1})-\beta_{2}^{1}O_{2 }(\bar{x}^{1})\] \[\bar{J}_{12} =-\mathrm{diag}(\beta_{1}^{1}A^{1}\bar{x}^{1})-\beta_{2}^{1} \mathrm{diag}((\bar{x}^{1})^{\top}B_{i}^{1}\bar{x}^{1})_{i=1,\ldots,n}\] \[\bar{J}_{22} =-D^{2}+\beta_{1}^{2}(I-\bar{X}^{1})A^{2}.\]
The terms \(O_{1}(\bar{x}^{1})\) and \(O_{2}(\bar{x}^{1})\) are as defined in (8) and (9). We will separately establish that the diagonal blocks \(\bar{J}_{11}\) and \(\bar{J}_{22}\) are Hurwitz. Observe that
\[\bar{J}_{11}=-D^{1}+\beta_{1}^{1}(I-\bar{X}^{1})A^{1}-\mathrm{diag}(\beta_{1}^{1}A^{1}\bar{x}^{1})+\beta_{2}^{1}(I-\bar{X}^{1})O_{1}(\bar{x}^{1})-\beta_{2}^{1}\,\mathrm{diag}\big((\bar{x}^{1})^{\top}B_{i}^{1}\bar{x}^{1}\big)_{i=1,\ldots,n}.\]
Define summands
\[Q_{1}:=-D^{1}+\beta_{1}^{1}(I-\bar{X}^{1})A^{1}+\beta_{2}^{1}(I-\bar{X}^{1})\begin{bmatrix}(\bar{x}^{1})^{\top}B_{1}^{1}\\ \vdots\\ (\bar{x}^{1})^{\top}B_{n}^{1}\end{bmatrix},\;\text{and}\]
\[Q_{2}:=\beta_{2}^{1}(I-\bar{X}^{1})\begin{bmatrix}(\bar{x}^{1})^{\top}(B_{1}^{1})^{\top}\\ \vdots\\ (\bar{x}^{1})^{\top}(B_{n}^{1})^{\top}\end{bmatrix}-\mathrm{diag}(\beta_{1}^{1}A^{1}\bar{x}^{1})-\beta_{2}^{1}\,\mathrm{diag}\big((\bar{x}^{1})^{\top}B_{i}^{1}\bar{x}^{1}\big)_{i=1,\ldots,n}.\]
It is immediate that \(\bar{J}_{11}=Q_{1}+Q_{2}\), which implies that \(\bar{J}_{11}\bar{x}^{1}=Q_{1}\bar{x}^{1}+Q_{2}\bar{x}^{1}\). Since \(\bar{x}^{1}\) is a single-virus endemic equilibrium corresponding to virus 1, by taking recourse to the equilibrium version of the first line of equation (2), it is clear that \(Q_{1}\bar{x}^{1}=\mathbf{0}\). Hence, \(\bar{J}_{11}\bar{x}^{1}=Q_{2}\bar{x}^{1}\).
Note that
\[Q_{2}\bar{x}^{1} =\beta_{2}^{1}(I-\bar{X}^{1})\begin{bmatrix}(\bar{x}^{1})^{\top} (B_{1}^{1})^{\top}\bar{x}^{1}\\ \vdots\\ (\bar{x}^{1})^{\top}(B_{n}^{1})^{\top}\bar{x}^{1}\end{bmatrix}-\] \[\quad\quad\text{diag}(\beta_{1}^{1}A^{1}\bar{x}^{1})\bar{x}^{1}- \beta_{2}^{1}\begin{bmatrix}(\bar{x}^{1})^{\top}B_{1}^{1}\bar{x}^{1}\\ &\ddots\\ &&(\bar{x}^{1})^{\top}B_{n}^{1}\bar{x}^{1}\end{bmatrix}\bar{x}^{1}. \tag{18}\]
Denote by \((Q_{2}\bar{x}^{1})_{i}\) the \(i^{th}\) entry of the vector \(Q_{2}\bar{x}^{1}\). Therefore, in view of (18), we have the following:
\[(Q_{2}\bar{x}^{1})_{i}=-\beta_{1}^{1}\Big{(}\sum_{j=1}^{n}a_{ij}^{1}\bar{x}_{ j}^{1}\Big{)}\bar{x}_{i}^{1}\!+\!\beta_{2}^{1}(1\!-\!2\bar{x}_{i}^{1})((\bar{x} ^{1})^{\top}B_{i}^{1}\bar{x}^{1}) \tag{19}\]
We consider its sign in two cases. Suppose first that \(B_{i}^{1}=\mathbf{0}\). Then, in view of (19), since by Assumption 2 the matrix \(A^{1}\) is irreducible, \(\beta_{1}^{1}>0\), and from statement ii) we know that \(\bar{x}^{1}\gg\mathbf{0}\), it must be that \((Q_{2}\bar{x}^{1})_{i}<0\). Suppose instead that \(B_{i}^{1}\neq\mathbf{0}\). Since from statement ii) we know that \(\bar{x}_{i}^{1}\geq\frac{1}{2}\), it follows that \(1-2\bar{x}_{i}^{1}\leq 0\); combined with the strictly negative first term in (19), this again implies that \((Q_{2}\bar{x}^{1})_{i}<0\). The choice of index \(i\) was arbitrary, so \((Q_{2}\bar{x}^{1})_{i}<0\) for all \(i\in[n]\). Hence, since \(\bar{J}_{11}\bar{x}^{1}=Q_{2}\bar{x}^{1}\), it follows that \((\bar{J}_{11}\bar{x}^{1})_{i}<0\) for all \(i\in[n]\). Note that Assumptions 1 and 2 guarantee that the matrices \(Q_{1}\) and \(Q_{2}\) are irreducible Metzler matrices; hence, the matrix \(\bar{J}_{11}\) is an irreducible Metzler matrix. Therefore, from Lemma 3, it must be that \(\bar{J}_{11}\) is Hurwitz.
Turning our attention to the matrix \(\bar{J}_{22}\), consider the matrices \(\beta_{1}^{2}(D^{2})^{-1}A^{2}\) and \(\beta_{1}^{2}(D^{2})^{-1}(I-\bar{X}^{1})A^{2}\). From Assumption 1, it is clear that \(\beta_{1}^{2}(D^{2})^{-1}A^{2}\) is a nonnegative matrix. Since, from statement ii), \(\bar{x}^{1}\) satisfies \(\mathbf{0}\ll\bar{x}^{1}\ll\mathbf{1}\), it is also clear that \(\beta_{1}^{2}(D^{2})^{-1}(I-\bar{X}^{1})A^{2}\) is a nonnegative matrix. Furthermore, we also immediately obtain the following:
\[\beta_{1}^{2}(D^{2})^{-1}(I-\bar{X}^{1})A^{2}<\beta_{1}^{2}(D^{2})^{-1}A^{2}.\]
Therefore, since the spectral radius of a nonnegative matrix decreases monotonically with a decrease in any entry of said matrix, it follows that \(\rho(\beta_{1}^{2}(D^{2})^{-1}(I-\bar{X}^{1})A^{2})\leq\rho(\beta_{1}^{2}(D^{2 })^{-1}A^{2})\). By assumption, \(\rho(\beta_{1}^{2}(D^{2})^{-1}A^{2})<1\), which implies that \(\rho(\beta_{1}^{2}(D^{2})^{-1}(I-\bar{X}^{1})A^{2})<1\), and consequently, from Lemma 2, we have that \(s(-D^{2}+(I-\bar{X}^{1})A^{2})<0\). Therefore, since \(J(\bar{x}^{1},\mathbf{0})\) is block upper triangular, and since we have already established that \(\bar{J}_{11}\) is Hurwitz, it follows that \(s(J(\bar{x}^{1},\mathbf{0}))<0\). Local exponential stability of \((\bar{x}^{1},\mathbf{0})\), then, follows from [29, Theorem 4.15 and Corollary 4.3].
Proof of statement iv):.: The proof is analogous to that of statement iii).
Proposition 1 answers question ii) raised in Section II. Proposition 1 guarantees the existence and simultaneous local exponential stability of three equilibria, whereas [1, Theorem 5.3], assuming that an endemic equilibrium exists, guarantees its local stability. Furthermore, the possibility of the DFE being locally stable simultaneously is alluded to; see [1, Remark 10]. On the other hand, for a parameter regime different from the one covered in Proposition 1, assuming that the terms corresponding to HOI are sufficiently small, [1, Theorem 5.3] secures global stability of the endemic equilibrium. The following remarks are in order.
**Remark 2**: _Proposition 1 sheds light on an interesting phenomenon that bivirus spread over a hypergraph exhibits but bivirus spread over a conventional graph does not: the existence of a parameter regime in which three equilibria, namely the DFE and the two boundary equilibria, are simultaneously stable. This extends the single-virus case studied in [20], which permitted the simultaneous stability of the DFE and (there being only one in that case) an endemic equilibrium._
**Remark 3**: _It is known that, assuming \(\beta_{2}^{k}=0\) for \(k=1,2\), the condition \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})\leq 1\) guarantees that the DFE is the only equilibrium of system (2); see [21, Lemma 2]. However, as Proposition 1 shows, that is not necessarily true when considering bivirus SIS spread over hypergraphs._
Proposition 1 guarantees existence of boundary equilibria for the case when \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})<1\). It is natural to ask if one is assured of existence even if the spectral radii of relevant quantities are larger than one. The following proposition addresses this issue.
**Proposition 2**: _Consider system (2) under Assumptions 1 and 2. Suppose that, for all \(k\in[2]\), \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})>1\). Then system (2) has at least three equilibria, namely the DFE, a single virus endemic equilibrium corresponding to virus \(1\)\((\bar{x}^{1},\mathbf{0})\), and a single virus endemic equilibrium corresponding to virus \(2\)\((\mathbf{0},\bar{x}^{2})\). Furthermore, if \(s(-D^{i}+\beta_{1}^{i}(I-\bar{X}^{k})A^{i})>0\) for \(i,k\in[2]\) such that \(i\neq k\), then the equilibrium points \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\) are unstable._
Proof.: Observe that the DFE is always an equilibrium of system (2). Suppose that for some \(k\in[2]\), \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})>1\). Then from [20, Theorem 5.1, statement iii)] we know that there exists an endemic equilibrium, \(\bar{x}^{k}\), where \(\mathbf{0}\ll\bar{x}^{k}\ll\mathbf{1}\). Since, by assumption, \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})>1\) for all \(k\in[2]\), it is immediate that there exist equilibria, \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\), where \(\mathbf{0}\ll\bar{x}^{k}\ll\mathbf{1}\), for \(k=1,2\).
It can be verified that \(J(\bar{x}^{1},\mathbf{0})\) is block upper triangular, with the matrix \(-D^{2}+\beta_{1}^{2}(I-\bar{X}^{1})A^{2}\) being one of the blocks along the diagonal. By assumption, \(s(-D^{2}+\beta_{1}^{2}(I-\bar{X}^{1})A^{2})>0\), which implies that \(s(J(\bar{x}^{1},\mathbf{0}))>0\). Consequently, instability of \((\bar{x}^{1},\mathbf{0})\) follows from [29, Theorem 4.7, statement ii)]. The instability of \((\mathbf{0},\bar{x}^{2})\) can be shown analogously, thus completing the proof. \(\Box\)
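As a quick illustration, the instability test used above can be evaluated numerically once a candidate boundary equilibrium is available; all numbers below are hypothetical placeholders, not parameters from the paper.

```python
# Illustrative evaluation of the instability condition of Proposition 2:
# compute s(-D^2 + beta_1^2 (I - Xbar^1) A^2) at a hypothetical boundary
# equilibrium xbar1; a positive spectral abscissa certifies instability.
import numpy as np

rng = np.random.default_rng(2)
n = 4
A2 = rng.random((n, n)) + 0.05    # hypothetical nonnegative infection matrix
D2 = np.eye(n)                    # unit healing rates
beta12 = 3.0                      # hypothetical pairwise infection rate
xbar1 = rng.uniform(0.3, 0.7, n)  # hypothetical single-virus equilibrium

J22 = -D2 + beta12 * (np.eye(n) - np.diag(xbar1)) @ A2
print("s(J22) =", max(np.linalg.eigvals(J22).real))  # > 0 => (xbar1, 0) unstable
```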
**Remark 4**: _Proposition 2 (resp. Proposition 1) guarantees the existence (resp. existence and local exponential stability) of the equilibrium points, \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\). It turns out that it is possible to compute these points iteratively; see [20, Theorem 5.3]._
## V (Non)existence of Coexistence equilibria
This section identifies sufficient conditions for the existence (resp. nonexistence) of coexistence equilibria. Specifically, for investigating existence, we consider two parameter regimes, viz., for \(k=1,2\), i) \(s(-D^{k}+\beta_{1}^{k}A^{k})>0\), and ii) \(s(-D^{k}+\beta_{1}^{k}A^{k})<0\). Further, for parameter regime i), we consider two stability configurations of the boundary equilibria, viz. a) both being unstable and b) both being stable; for parameter regime ii), we consider the case where both boundary equilibria are stable.
**Proposition 3**: _Consider system (2) under Assumptions 1 and 2. Let \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\) denote a single-virus endemic equilibrium corresponding to virus 1 and virus 2, respectively. Suppose that the following conditions are satisfied:_
i) \(s(-D^{1}+\beta_{1}^{1}A^{1})>0\)_;_
ii) \(s(-D^{2}+\beta_{1}^{2}A^{2})>0\)_;_
iii) \(s(-D^{1}+\beta_{1}^{1}(I-\bar{X}^{2})A^{1})>0\)_; and_
iv) \(s(-D^{2}+\beta_{1}^{2}(I-\bar{X}^{1})A^{2})>0\)_._
_Then there exists at least one equilibrium of the form \((\hat{x}^{1},\hat{x}^{2})\) such that \(\mathbf{0}\ll\hat{x}^{1},\hat{x}^{2}\ll\mathbf{1}\) and \(\hat{x}^{1}+\hat{x}^{2}\ll\mathbf{1}\)._
Before proving the claim in Proposition 3, we need the following background material. In line with the terminology of [30], we classify each equilibrium point of system (2) as saturated or unsaturated. We say that an equilibrium is saturated (resp. strictly saturated) if the diagonal block corresponding to the zero entries of said equilibrium has no eigenvalue with positive real part, possibly with a single eigenvalue at the origin (resp. has every eigenvalue with strictly negative real part), and unsaturated otherwise [30]. A boundary equilibrium of (2) is saturated if and only if said boundary equilibrium is locally exponentially stable; this follows immediately by noting the structure of the Jacobian matrix evaluated at a boundary equilibrium, see (17). The definition also implies that every fixed point in the interior of \(\mathcal{D}\), irrespective of its stability properties, is saturated [30]; therefore, from Lemma 1, we have that every coexistence equilibrium of system (2) is saturated.
_Proof:_ Assumptions i) and ii) of Proposition 3 guarantee existence of boundary equilibria, \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\); see Proposition 2. Observe that [1, Lemma 5.1] guarantees that, for each \(k\in[2]\), \(x^{k}(0)\geq\mathbf{0}\) implies that \(x^{k}(t)\geq\mathbf{0}\) for all \(t\in\mathbb{R}_{\geq 0}\), and that the set \(\mathcal{D}\) (which is compact) is forward invariant. Therefore, from [30, Theorem 2], it follows that system (2) has at least one saturated fixed point. There are two cases to consider.
Case 1: Suppose the aforementioned saturated fixed point is in the interior of \(\mathcal{D}\). Any fixed point in the interior of \(\mathcal{D}\) is of the form \((\hat{x}^{1},\hat{x}^{2})\) with \(\mathbf{0}\ll\hat{x}^{1},\hat{x}^{2}\ll\mathbf{1}\), i.e., a coexistence equilibrium. From Lemma 1, it must further satisfy \(\hat{x}^{1}+\hat{x}^{2}\ll\mathbf{1}\).
Case 2: Suppose, to obtain a contradiction, that there are no fixed points in the interior of \(\mathcal{D}\). This implies that there must be a saturated fixed point on the boundary of \(\mathcal{D}\)[30]. Therefore, at least one of the single-virus boundary equilibria is saturated.
However, from Proposition 2, it is clear that assumptions iii) and iv) guarantee that the boundary equilibria are unstable; thus implying that they are unsaturated, and the contradiction is obtained. \(\Box\)
Proposition 3 is implied by [1, Theorem 5.4], which, assuming \(\beta_{2}^{k}=0\) for \(k=1,2\), is the same as [9, Theorem 5]. The proof technique in [1, Theorem 5.4] is quite involved, since it relies primarily on a fixed-point mapping, Perron-Frobenius theory, etc. Our proof is significantly shorter. Note that [30, Theorem 2] is a key ingredient of our proof strategy. In light of Theorem 1, one could perhaps leverage [30, Theorem 2] to obtain a lower bound on the number of coexistence equilibria for the stability configuration of boundary equilibria given in Proposition 3, as has been done for classic bivirus networked SIS models; see [12, Corollary 3.9, statement 2]. Subsequently, one could possibly exploit the properties of monotone dynamical systems (MDS) to conclude that there must exist a locally exponentially stable coexistence equilibrium.
Observe that in Proposition 3 the demonstration of the existence of a coexistence equilibrium point \((\hat{x}^{1},\hat{x}^{2})\) relies on the assumption that both boundary equilibria are unstable. We now present a different condition that guarantees the existence of a coexistence equilibrium point \((\hat{x}^{1},\hat{x}^{2})\) even when both boundary equilibria are stable.
**Theorem 3**: _Consider system (2) under Assumptions 1 and 2. Let \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\) denote a single-virus endemic equilibrium corresponding to virus 1 and virus 2, respectively. Suppose that the following conditions are satisfied:_
1. \(s(-D^{1}+\beta_{1}^{1}A^{1})>0\)_;_
2. \(s(-D^{2}+\beta_{1}^{2}A^{2})>0\)_._
_Suppose that both \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\) are locally exponentially stable. Then there exists at least one equilibrium of the form \((\hat{x}^{1},\hat{x}^{2})\), with \(\mathbf{0}\ll\hat{x}^{1},\hat{x}^{2}\ll\mathbf{1}\) and \(\hat{x}^{1}+\hat{x}^{2}\ll\mathbf{1}\), that is either neutrally stable or unstable.4_
Footnote 4: Assuming that equilibria of system (2) are hyperbolic, a stronger conclusion can be drawn: for generic parameter matrices, the coexistence equilibrium is unstable.
_Proof:_ By assumption, \(s(-D^{k}+\beta_{1}^{k}A^{k})>0\) for \(k=1,2\). Therefore, from Proposition 2, it follows that there exists a single-virus endemic equilibrium corresponding to virus 1, \(\bar{x}^{1}\gg\mathbf{0}\), and a single-virus endemic equilibrium corresponding to virus 2, \(\bar{x}^{2}\gg\mathbf{0}\). By assumption, both \((\bar{x}^{1},\mathbf{0})\) and \((\mathbf{0},\bar{x}^{2})\) are locally exponentially stable.
The condition \(s(-D^{1}+\beta_{1}^{1}A^{1})>0\) implies that the origin is unstable; this can be observed from the proof of statement i) in Proposition 1. We are left to show that the stable manifold
of the origin does not lie in the interior of \(\mathcal{D}\). We will rely on the proof technique of [12, Lemma 3.8]. It suffices to show that for the (linear) system
\[\begin{bmatrix}\dot{x}^{1}\\ \dot{x}^{2}\end{bmatrix}=\begin{bmatrix}-D^{1}+\beta_{1}^{1}A^{1}&\boldsymbol{0} \\ \boldsymbol{0}&-D^{2}+\beta_{1}^{2}A^{2}\end{bmatrix}\begin{bmatrix}x^{1}\\ x^{2}\end{bmatrix} \tag{20}\]
no trajectory starting in the interior of \(\mathcal{D}\) converges to the origin. First, consider \(x^{1}(t)\). Let \(w^{\top}\) be the left eigenvector associated with \(s(-D^{1}+\beta_{1}^{1}A^{1})\), normalized so that its entries sum to one. Define \(z:=w^{\top}x^{1}\), and observe that \(\dot{z}=w^{\top}\dot{x}^{1}\), which, from (20), implies that \(\dot{z}=s(-D^{1}+\beta_{1}^{1}A^{1})z\). Since, by assumption, \(s(-D^{1}+\beta_{1}^{1}A^{1})>0\), and since \(w\) is a positive vector, the projection \(z\) of any point of (20) in the interior of \(\mathcal{D}\) moves away from \(x^{1}=\boldsymbol{0}\). An analogous argument can be made for \(x^{2}(t)\), since, by assumption, \(s(-D^{2}+\beta_{1}^{2}A^{2})>0\). Therefore, the stable manifold of the origin does not lie in the interior of \(\mathcal{D}\). Consequently, since system (2) is monotone (see [1, Theorem 5.5]) and the two locally exponentially stable equilibrium points \((\bar{x}^{1},\boldsymbol{0})\) and \((\boldsymbol{0},\bar{x}^{2})\) are related by the monotone ordering of the system, it follows from [31, Proposition 2.9] that there exists an equilibrium point of the form \((\hat{x}^{1},\hat{x}^{2})\) such that \(\boldsymbol{0}\ll\hat{x}^{1},\hat{x}^{2}\ll\boldsymbol{1}\) and \(\hat{x}^{1}+\hat{x}^{2}\ll\boldsymbol{1}\). Furthermore, the point \((\hat{x}^{1},\hat{x}^{2})\) satisfies \(s(J(\hat{x}^{1},\hat{x}^{2}))\geq 0\), thus concluding the proof.
Proposition 3 and Theorem 3 partially answer question iii) raised in Section II-B. Observe that neither of these results covers the case where one boundary equilibrium is locally exponentially stable, and the other is unstable.
We next consider a different parameter regime, namely \(s(-D^{k}+\beta_{1}^{k}A^{k})<0\), and identify a sufficient condition for the existence of an unstable coexistence equilibrium. We have the following result.
**Proposition 4**: _Consider system (2) under Assumptions 1 and 2. Define, for \(k=1,2\), \(\boldsymbol{1}_{B^{k}}\in\{0,1\}^{n}\) by \((\boldsymbol{1}_{B^{k}})_{i}=1\) if \(B^{k}_{i}\neq\boldsymbol{0}\); otherwise \((\boldsymbol{1}_{B^{k}})_{i}=0\). Suppose that the following conditions are fulfilled for \(k=1,2\) :_
* \(\rho(\beta_{1}^{k}(D^{k})^{-1}A^{k})<1\)_, and_
* \(\min\limits_{i\,\text{s.t.}\,B^{k}_{i}\neq\boldsymbol{0}}\left(\frac{\beta_{1}^{k}}{\delta_{i}^{k}}(A^{k}\boldsymbol{1}_{B^{k}})_{i}+\frac{\beta_{2}^{k}}{2\delta_{i}^{k}}\boldsymbol{1}_{B^{k}}^{\top}B_{i}^{k}\boldsymbol{1}_{B^{k}}\right)>2\)_._
_Then there exists at least one equilibrium of the form \((\hat{x}^{1},\hat{x}^{2})\) such that \(\boldsymbol{0}\ll\hat{x}^{1},\hat{x}^{2}\ll\boldsymbol{1}\) and \(\hat{x}^{1}+\hat{x}^{2}\ll\boldsymbol{1}\) that is either neutrally stable or unstable._
_Proof:_ Suppose that the conditions in Proposition 4 are fulfilled. Therefore, it follows that there exist boundary equilibria \((\bar{x}^{1},\boldsymbol{0})\) and \((\boldsymbol{0},\bar{x}^{2})\), and that both are locally exponentially stable; see statements iii) and iv) in Proposition 1. Therefore, since we know that system (2) is monotone (see [1, Theorem 5.5]), from [31, Proposition 2.9] it follows that there exists (at least) one equilibrium point of the form \((\hat{x}^{1},\hat{x}^{2})\) such that \(\boldsymbol{0}\ll\hat{x}^{1},\hat{x}^{2}\ll\boldsymbol{1}\) and \(\hat{x}^{1}+\hat{x}^{2}\ll\boldsymbol{1}\). Furthermore, \(s(J(\hat{x}^{1},\hat{x}^{2}))\geq 0\), thus delivering the claim. \(\square\)
Note that Proposition 4 guarantees the existence of at least one coexistence equilibrium. Given that system (2) is monotone, and since, from Theorem 1, it is known that the Jacobian associated with each equilibrium point is nonsingular, the conditions in Proposition 4 in fact guarantee the existence of an odd number of coexistence equilibria, each of which must be unstable. The proof follows from a Brouwer degree argument; see [32]. In fact, for the special case where \(\beta_{2}^{k}=0\) for \(k=1,2\), for the same stability configuration as in Theorem 3 and Proposition 4, a lower bound on the number of coexistence equilibria has been recently provided; see [12, Corollary 3.9, statement 3].
## VI Numerical Examples
We present a series of simulations highlighting interesting phenomena that can emerge when HOIs are incorporated. We use the following bivirus system with HOIs. The network has \(n=5\) nodes, and we set \(D^{1}=D^{2}=I\). The pairwise interactions are captured by two-cycle graphs with self-loops, with infection matrices:
\[A^{1}=\begin{bmatrix}1&0&0&0&1\\ 1&1&0&0&0\\ 0&1&1&0&0\\ 0&0&1&1&0\\ 0&0&0&1&1\end{bmatrix},\qquad A^{2}=(A^{1})^{\top}. \tag{21}\]
The HOI are captured by the following set of hyperedges with unit weight:
\[\text{virus }1:(1,2,3),(2,3,1),(3,2,1),(1,4,5),(4,5,1),(5,4,1)\] \[\text{virus }2:(1,2,4),(2,4,1),(4,2,1),(1,3,5),(3,5,1),(5,3,1).\]
In other words, this corresponds to the following \(b^{k}_{ij\ell}\) entries being equal to \(1\), with all other entries of \(B^{k}_{i}\) equal to \(0\): \(b^{1}_{123}\), \(b^{1}_{231}\), \(b^{1}_{321}\), \(b^{1}_{145}\), \(b^{1}_{451}\), \(b^{1}_{541}\), and \(b^{2}_{124}\), \(b^{2}_{241}\), \(b^{2}_{421}\), \(b^{2}_{135}\), \(b^{2}_{351}\), \(b^{2}_{531}\). In our simulations, we randomly sample each \(x^{k}_{i}(0)\) from a uniform distribution on \((0,1)\), and then normalize the vectors \(x^{1}(0)\) and \(x^{2}(0)\) to ensure that \((x^{1}(0),x^{2}(0))\in\text{int}(\mathcal{D})\). The \(\beta_{i}^{k}\) are varied to yield different stability properties for the system in (2).
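For readers wishing to reproduce such trajectories, the sketch below sets up this example in Python. Since equation (2) is not restated here, the right-hand side is a plausible reconstruction of bivirus SIS dynamics with HOI, \(\dot{x}_{i}^{k}=-\delta_{i}^{k}x_{i}^{k}+(1-x_{i}^{1}-x_{i}^{2})\big(\beta_{1}^{k}\sum_{j}a_{ij}^{k}x_{j}^{k}+\beta_{2}^{k}\sum_{j,\ell}b_{ij\ell}^{k}x_{j}^{k}x_{\ell}^{k}\big)\), consistent with the Jacobian expressions of Section IV; treat it as an assumption rather than a verbatim restatement of (2).

```python
# Sketch of the Section VI setup, assuming the bivirus-SIS-with-HOI dynamics
# reconstructed in the text above (D^1 = D^2 = I, A^2 = (A^1)^T, unit-weight
# hyperedges); beta values follow Example 1.
import numpy as np
from scipy.integrate import solve_ivp

n = 5
A1 = np.array([[1, 0, 0, 0, 1],
               [1, 1, 0, 0, 0],
               [0, 1, 1, 0, 0],
               [0, 0, 1, 1, 0],
               [0, 0, 0, 1, 1]], dtype=float)
A2 = A1.T
B1, B2 = np.zeros((n, n, n)), np.zeros((n, n, n))
for i, j, l in [(1, 2, 3), (2, 3, 1), (3, 2, 1), (1, 4, 5), (4, 5, 1), (5, 4, 1)]:
    B1[i - 1, j - 1, l - 1] = 1.0
for i, j, l in [(1, 2, 4), (2, 4, 1), (4, 2, 1), (1, 3, 5), (3, 5, 1), (5, 3, 1)]:
    B2[i - 1, j - 1, l - 1] = 1.0

b11, b21, b12, b22 = 0.2, 0.2, 5.0, 5.0  # (beta_1^1, beta_2^1, beta_1^2, beta_2^2)

def rhs(_, y):
    x1, x2 = y[:n], y[n:]
    s = 1.0 - x1 - x2
    f1 = -x1 + s * (b11 * A1 @ x1 + b21 * np.einsum('ijl,j,l->i', B1, x1, x1))
    f2 = -x2 + s * (b12 * A2 @ x2 + b22 * np.einsum('ijl,j,l->i', B2, x2, x2))
    return np.concatenate([f1, f2])

rng = np.random.default_rng(1)
y0 = rng.uniform(0, 1, 2 * n)
y0 /= y0.sum()  # crude normalization so that (x^1(0), x^2(0)) lies in int(D)
sol = solve_ivp(rhs, (0, 200), y0, rtol=1e-8)
print(sol.y[:, -1].round(3))  # limiting (x^1, x^2)
```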
**Example 1**: _We set \(\beta_{1}^{1}=\beta_{2}^{1}=0.2\) and \(\beta_{1}^{2}=\beta_{2}^{2}=5\). This ensures the inequalities of both conditions of Proposition 1 are satisfied. As can be observed from Fig. 1(a), for initial conditions close to the DFE, the trajectories converge to the locally exponentially stable DFE, \((x^{1}=\boldsymbol{0},x^{2}=\boldsymbol{0})\). In Figs. 1(b) and 1(c), the initial conditions are further in the interior of \(\mathcal{D}\), and depending on the particular initial condition, we observe convergence to a boundary equilibrium where one of the two viruses is extinct, \((\bar{x}^{1},\boldsymbol{0})\) or \((\boldsymbol{0},\bar{x}^{2})\), for some positive \(\bar{x}^{1}>0.5\times\boldsymbol{1}\) and \(\bar{x}^{2}>0.5\times\boldsymbol{1}\). That is, both boundary equilibria are simultaneously locally exponentially stable. Interestingly, without HOIs, it is impossible for a bivirus system to have the DFE, \((\bar{x}^{1},\boldsymbol{0})\), and \((\boldsymbol{0},\bar{x}^{2})\) all locally exponentially stable [33, Section E]._
**Example 2**: _We set \(\beta_{1}^{1}=\beta_{2}^{1}=2\), \(\beta_{1}^{2}=3\), and \(\beta_{2}^{2}=2.4\). As illustrated in Figs. 2(a) and 2(b), there are two locally exponentially stable boundary equilibria, \((\bar{x}^{1},\boldsymbol{0})\) and \((\boldsymbol{0},\bar{x}^{2})\), and trajectories converge to either depending on the initial conditions. However, the DFE is unstable, and no trajectory in \(\mathcal{D}\) converges to it unless it starts at the DFE. This simulation highlights an interesting observation: for a standard bivirus system with no HOIs, examples of systems with two locally stable boundary equilibria had not been identified until recently and are not straightforward to construct [12, 34]._
We conclude by remarking that, in each of the simulations presented, the bivirus system exhibits dynamical phenomena in the presence of HOIs that are not observed in their absence. In other words, HOIs unlock new possibilities in the competition between two viruses spreading over networks and suggest that a significant amount of understanding remains to be unveiled.
## VII Conclusion
This paper analyzed a networked competitive bivirus SIS model that also accounts for the possibility of HOI among the nodes. By taking recourse to the Parametric Transversality Theorem of differential topology, we showed that the bivirus system with HOI has, for generic parameter values, a finite number of equilibria. Furthermore, the Jacobian matrices associated with each of the equilibria are nonsingular. This finding, coupled with the knowledge that the system is monotone, enabled us to establish that the typical behavior that our system exhibits is convergence to some equilibrium. Subsequently, we identified a parameter regime that ensures the existence of multiple boundary equilibria and simultaneous stability of the same along with that of the DFE. For the special case where only one virus is circulating in the metapopulation, we guarantee the existence and local stability of an endemic equilibrium; our result does not impose any restrictions on the model parameters besides those covered by Assumptions 1 and 2. Thereafter, for different parameter regimes, we identified conditions that guarantee the existence of a coexistence equilibrium.
|
2309.15648 | SANGEA: Scalable and Attributed Network Generation | The topic of synthetic graph generators (SGGs) has recently received much
attention due to the wave of the latest breakthroughs in generative modelling.
However, many state-of-the-art SGGs do not scale well with the graph size.
Indeed, in the generation process, all the possible edges for a fixed number of
nodes must often be considered, which scales in $\mathcal{O}(N^2)$, with $N$
being the number of nodes in the graph. For this reason, many state-of-the-art
SGGs are not applicable to large graphs. In this paper, we present SANGEA, a
sizeable synthetic graph generation framework which extends the applicability
of any SGG to large graphs. By first splitting the large graph into
communities, SANGEA trains one SGG per community, then links the community
graphs back together to create a synthetic large graph. Our experiments show
that the graphs generated by SANGEA have high similarity to the original graph,
in terms of both topology and node feature distribution. Additionally, these
generated graphs achieve high utility on downstream tasks such as link
prediction. Finally, we provide a privacy assessment of the generated graphs to
show that, even though they have excellent utility, they also achieve
reasonable privacy scores. | Valentin Lemaire, Youssef Achenchabe, Lucas Ody, Houssem Eddine Souid, Gianmarco Aversano, Nicolas Posocco, Sabri Skhiri | 2023-09-27T13:35:45Z | http://arxiv.org/abs/2309.15648v1 | # SANGEA: Scalable and Attributed Network Generation
###### Abstract
The topic of synthetic graph generators (SGGs) has recently received much attention due to the wave of the latest breakthroughs in generative modelling. However, many state-of-the-art SGGs do not scale well with the graph size. Indeed, in the generation process, all the possible edges for a fixed number of nodes must often be considered, which scales in \(\mathcal{O}(N^{2})\), with \(N\) being the number of nodes in the graph. For this reason, many state-of-the-art SGGs are not applicable to large graphs. In this paper, we present SANGEA, a sizeable synthetic graph generation framework which extends the applicability of any SGG to large graphs. By first splitting the large graph into communities, SANGEA trains one SGG per community, then links the community graphs back together to create a synthetic large graph. Our experiments show that the graphs generated by SANGEA have high similarity to the original graph, in terms of both topology and node feature distribution. Additionally, these generated graphs achieve high utility on downstream tasks such as link prediction. Finally, we provide a privacy assessment of the generated graphs to show that, even though they have excellent utility, they also achieve reasonable privacy scores.
## 1 Introduction
Generative models are trained on real data and used to generate synthetic samples to be shared for training models on downstream tasks.
Graphs are represented by their node feature matrix \(\mathbf{X}\in\mathbb{R}^{N\times D}\), and by their adjacency matrix \(\mathbf{A}\in\{0,1\}^{N\times N}\), which scales with \(\mathcal{O}(N^{2})\), with \(N\) being the number of nodes in the graph, and \(D\) being the number of node features. This quadratic complexity makes it very _challenging_ to deal with large graphs. The deep generative learning literature is rich in models that deal with synthetic graph generation (You et al., 2018; Liao et al., 2019; Liu et al., 2019; Goyal et al., 2020; Dai et al., 2020; Chen et al., 2022; Jo et al., 2022), but most state-of-the-art models still suffer from graphs' intrinsic scalability issues. Synthetic graph generators (SGGs) are generally classified in the literature into two main categories: one-shot and recurrent generators. The former usually requires storing a dense adjacency matrix in memory, which is only feasible for a few nodes. As for the latter, they take a long time to train because they recursively go through all the nodes in the graph during training and generation. Moreover, they are not node-invariant, so the ordering of the nodes matters considerably. In addition, since the topology creates dependencies between nodes within a graph, the data parallelisation within a graph is not trivial and often causes overhead. In summary, graph generation is challenging to scale.
One of the purposes of graph generation is to share data privately. However, the risks of re-identification still apply to synthetic datasets. Graphs are not immune to this phenomenon and have actually been shown to leak more private information than other data modalities due to the information they carry in their topology (Wu et al., 2021). For this reason, in the present work, we also provide a privacy assessment methodology by means of nearest neighbour distance ratio (NNDR) (Guzman et al., 2021) adapted to graphs.
Our goal in this paper is to generate, from a single large attributed graph, another large attributed graph that matches the statistical properties of the original one while being privacy-preserving. We present SANGEA (Scalable and Attributed Network GEnerAtion), a lightweight method to scale _any_ graph generative models to many nodes and edges under the assumption that the training graph presents a community structure.
The essence of our approach is dividing the input graph into densely connected communities that can be generated independently. Then, SANGEA learns to model inter-community interactions based on independent subgraphs. Since this divide-and-conquer strategy may not leverage joint distributions of the communities and the links between them, SANGEA iteratively improves the generated graph until it matches the original distribution.
SANGEA offers numerous advantages: i) it limits the original generation to different graphs with fewer nodes, allowing the use of any high-quality but potentially unscalable state-of-the-art generation method; ii) only one-shot generation models are used to predict links between communities and to perform the updates, making them fast to learn and fast at inference; iii) only node-invariant models are used, making the process more generalizable and less prone to overfitting, which is a challenge as we have only one training sample; iv) our refinement process conditions the updates on the synthetic graph in a similar manner to recurrent methods, thus removing the need to sample from a high-dimensional joint distribution like other one-shot generation methods do; v) Empirical results show that our proposed method achieves high privacy scores.
The contribution of this paper is threefold. Firstly, it proposes a novel approach to make _any_ state-of-the-art model scalable to large graphs that present a community struc
ture. Secondly, extensive experiments are presented on five models from the literature and compare our proposed approach against these models, to show that we match the quality of those other models while allowing us to perform generative model training and sampling for graphs up to 90,000 nodes and 450,000 edges. Thirdly, a privacy assessment has been performed for our generated data.
The rest of this paper is organized as follows. The next section presents essential works related to deep synthetic graph generators. Section 3 details our proposed model by explaining our training and generation procedures. Then, Section 4 presents the experimental setup and reports results with analyses. Section 5 concludes by highlighting the main findings of this research and by discussing directions for future works.
## 2 Related Works
Many approaches have been considered in synthetic graph generation. On one hand, traditional statistical methods, on the other hand, deep learning-based methods such as auto-encoders, diffusion models, auto-regressive methods, and many more were adapted from the tabular domain to the graph domain.
First, the Barabasi-Albert model (Albert and Barabasi, 2002) was proposed to capture the scale-free property observed in numerous real-world graphs. This property states that the degree distribution follows a power law. The Barabasi-Albert model has two parameters: the number of nodes and the number of edges to be added at each iteration. The graph is initialized with a fixed number of connected nodes. At each iteration, a new node is added and connected to the existing nodes, with probability proportional to their current degree. Then, (Chen et al., 2007) introduced a model to deal with the small-world property, namely, the combination of high network clustering and short characteristic path length. The model consists of a regular lattice, typically a one-dimensional lattice with almost periodic boundary conditions. In other words, each vertex is connected to a fixed number of its nearest vertices, and a small number of 'shortcut' bonds are added between randomly chosen vertices. BTER (Kolda et al., 2014) exploits the same concepts as the well-known Erdos-Renyi generation technique (Erdos, 1960) but in a two-level way, first modelling communities and then linking them together. Another statistical method is DANCer (Benyahia et al., 2016), which creates a complete graph using preferential attachment (Barabasi and Albert, 1999) and then performs micro (edge) and macro (community) updates so that the final graph matches the distribution of a reference. While these statistical techniques leverage important properties of large graphs, we believe they lack the expressiveness of deep models, and they do not generate node attributes.
On the other hand, deep learning models were proposed to learn graph generative models, the following paragraphs classify them in different families.
In the Auto-Encoder (AE) family, the first Graph Variational AE (GVAE) (Simonovsky and Komodakis, 2018) offered to generate a graph by sampling independent node representations from a known latent distribution and decoding it into a graph. Some other approaches built upon this model achieved better graph quality, for example by extending the loss with higher level constraints on the graph (Zahirnia et al., 2022). However, they all suffer from having to store a dense adjacency matrix, at least at generation time, which scales quadratically with the number of nodes, making them unscalable.
More recently, many works have been released on performing graph generation with diffusion methods: NVDiff (Chen et al., 2022), GDSS (Jo et al., 2022), EDP-GNN (Niu et al., 2020) and DiGress (Vignac et al., 2023). These models learn a reversible process from a graph representation to a known distribution, however, these methods too suffer from the need to store the dense adjacency matrix, both at train time and at generation time, making them unscalable.
There also exist SGGs based on reinforcement learning (Xu et al., 2020), adversarial networks (Cao and Kipf, 2022) or flow (Shah and Koltun, 2020). However, none of those works is currently considered state-of-the-art for large graph generation (Faez et al., 2021). In addition, their application domain is limited to molecular graph generation.
Another family of SGGs is auto-regressive (AR) models, such as GraphRNN (You et al., 2018). These embed each node in a recursive manner, and in doing so they update a state vector to condition the generation of a step. Some of those models, like GRAN (Liao et al., 2019), have been extended with attention layers for more expressiveness. These models are very efficient in modelling small graphs as they do not suffer from the independent generation (of nodes/edges) of one-shot generation methods. However, they often fail to represent high-level characteristics in the generated graphs as long-term dependencies are difficult to capture by recurrent models.
Some works enable recurrent models to accurately represent large graphs. GraphGen (Goyal et al., 2020) represents graphs by their minimum DFS codes1. This drastically reduces the size of the input space to the model. BiGG (Dai et al., 2020) is an auto-regressive model based on GraphRNN (You et al., 2018) that represents the recursive process by binary trees, which reduces the number of recursive steps. They also claim to scale with \(\mathcal{O}(\sqrt{M\text{log}N})\) memory-wise, \(M\) being the number of edges in the graph. However, neither GraphGen nor BiGG is able to generate node features in their original formulation2.
Footnote 1: A graph (and its isomorphisms) can be uniquely identified by its minimum DFS code, without the need for an arbitrary ordering of nodes or edges.
Footnote 2: GraphGen is able to generate node and edge labels but not feature vectors.
Few works in the literature have focused on random walks to learn generative models. Random walks have the advantage of invariance under node reordering. Additionally, they only include the nonzero entries of the adjacency matrix, thus efficiently exploiting the sparsity of real-world graphs. (Bojchevski et al., 2018) proposed NetGAN, which trains a generator of random walks and a discriminator that distinguishes synthetic from real random walks. After training, the generator is used to sample a large number of random walks, from which a count matrix over all edges is computed. A thresholding strategy is then used to binarize this matrix.
Finally, there are works that combine hierarchical graph structure and deep models to tackle the scaling issues of graphs while preserving good expressiveness. One such model applies this hierarchical idea with chemistry motifs (Jin et al., 2020) for molecule generation. Similarly, but not restricted to molecules, HiGen (Karami and Luo, 2023) proposes an AR-based method to exploit graphs' hierarchical structure. It generates a high-level graph of communities in the first stage; then it expands each node into a community and each edge into inter-community links with a recursive model, potentially multiple times if there are more than two levels. However, the expansion in the second stage is conditioned only on the representation of the previous level and not on what has already been expanded elsewhere in the graph, nor is the quality of the attribute generation demonstrated. Lastly, GELLCELL
(Hammarstedt, 2022) proposes a technique for generating each community with the CELL model (Rendsburg et al., 2020) and then connecting those communities by using a link prediction model based on XGBoost. Unfortunately, CELL (Rendsburg et al., 2020) is based on statistical measures and was not shown to match the state of the art, and the linking of the communities is agnostic of the context around the nodes.
Our work is the first to extend the BTER principle of two-step, top-down graph generation using deep networks, combining the efficiency of one-shot models (by means of model architecture choices) and the precision of conditional generation thanks to the refinement process. Our meta-algorithm is community-generator agnostic, unlike existing approaches in the literature. We show that the graphs generated using our method show statistical similarity in terms of topology and node features, while also leading to low privacy risks.
## 3 Our Model
This section presents our proposed model for large-scale synthetic graph generation. The essence of our method is described in Section 3.1, then more details about the training and the generation procedures are given respectively in Section 3.2 and Section 3.3.
### The SANGEA Algorithm
We propose a divide-and-conquer strategy for generating graphs. The main idea is to separate the graph into different communities of controllable size. Usually, a graph is more densely connected within these communities than outside of them. Each community is used to train one SGG model. The SGG models are trained independently. Once trained, they are used to generate a synthetic version of their respective community. Then, the synthetic communities are patched together using a link prediction model. Finally, we refine the synthetic graph's links until we are satisfied with the quality of the generated graph. The pseudo-codes of SANGEA's training and generation steps are reported in Algorithms 1 and 2, respectively. A detailed explanation of these algorithms follows in sections 3.2 and 3.3.
With this approach, we limit SGGs to graphs with fewer nodes, namely the communities. We then use link prediction models to link the generated communities, as these models are usually more lightweight at training and inference time than SGGs. Finally, in the refinement step, we use extra link prediction models (refiners) to refine the final synthetic graph's topology. The refiners are link prediction models that can be trained on \(k\)-hop neighbourhoods, rather than on a full graph, similarly to recursive models. Moreover, the SGGs are trained on communities. Thus, at no point does the full graph's _dense_ adjacency matrix need to be stored in memory when training SANGEA. Indeed, thanks to the community structure of the generation, we limit the memory cost of inference (generation) to the square of the size of the largest community rather than that of the full graph, as further explained in Section 3.4. During training, the _sparse_ representation of the graph is sufficient to perform all operations.
### Training Process
Algorithm 1 shows the entire training process of the SANGEA algorithm. This process is also depicted in Figure 1. The training is divided into 5 phases.
```
Notation:\(G[c_{i}\neq c_{j}]\): inter-community edges of \(G\). \(G[\mathbf{c}=k]\): node subgraph of \(G\), only keeping nodes whose community is \(k\). Input: A large graph \(G\) Output: A set of trained community generators, a base linker and a set of \(k\)-refiners
1\(\mathbf{c}\gets assign\_communities(G)\) Phase 1
2\(C\leftarrow\)unique(\(\mathbf{c}\))
3for\(k\in[1,\ldots,C]\)do
4\(g_{k}\gets G[\mathbf{c}=k]\)
5\(community\_generators[k]\gets train\_generator(g_{k})\) Phase 2
6 end for
7\(base\_linker\gets train\_autoencoder\left(\bigcup\limits_{1\leq k\leq C }g_{k}\,\ G[c_{i}\neq c_{j}]\right)\) Phase 3
8\(base\_refiner\gets train\_autoencoder\left(G,G\right)\) Phase 4
9for\(k\in[1,\ldots,C]\)do
10\(k\_refiners[k]\gets finetune\_autoencoder(base\_refiner,G,g_{k})\) Phase 5 (a-b)
11 end for
12\(k\_refiners[^{\text{inter}}]\gets finetune\_autoencoder(base\_refiner,G,G[c_{i} \neq c_{j}])\) Phase 5 (c)
13Comment: Last argument of \(train\_autoencoder\) and \(finetune\_autoencoder\) are the edges used as labels in the loss.
```
**Algorithm 1** SANGEA learning process
_i) Louvain Partitioning:_ line 1 in Algorithm 1 shows a call to \(assign\_communities\). This function will assign to each node a label, i.e. a community, as given by the Louvain method (Blondel et al., 2008). This greedy algorithm, designed for very large graphs, aims at optimizing modularity, which measures how densely connected the communities are within themselves and how sparse the links between different communities are.
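A minimal sketch of this phase, assuming a NetworkX graph as input (networkx >= 3.0 ships `louvain_communities`; the python-louvain package offers an equivalent API):

```python
# Phase 1 sketch: partition the graph into communities with the Louvain method
# and extract the per-community subgraphs g_k of Algorithm 1.
import networkx as nx

G = nx.karate_club_graph()  # placeholder for the large training graph
communities = nx.community.louvain_communities(G, resolution=1.0, seed=0)
c = {node: k for k, nodes in enumerate(communities) for node in nodes}
subgraphs = [G.subgraph(nodes).copy() for nodes in communities]  # g_k = G[c = k]
print(len(subgraphs), "communities; modularity =",
      round(nx.community.modularity(G, communities), 3))
```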
_ii) Community Generator Training:_ once the communities have been found, the original, large graph can be separated into independent, disconnected components that correspond to the graphs defined by the nodes of each community. These are graphs of smaller size than the original graph, thus any generation technique that only applies to small graphs can be trained on them. In this phase, we train one generator per community, with each generator being totally independent of the others. Note that the rest of the SANGEA algorithm is agnostic of what method is used to generate the communities.
_iii) Base Linker Training:_ the base linker is a graph autoencoder (GAE) model composed of a GNN encoder module and an MLP decoder module. This model is trained for the link prediction task. However, no message passing (Gilmer et al., 2017) over inter-community links is allowed at this stage; message passing is allowed only over the intra-community links, and the model is trained to predict the inter-community links only.

Figure 1: Full training procedure of the SANGEA generation method
_iv) Base Refiner Training:_ in Phase 4, we create a new GAE, possibly with different hyperparameters than the base linker, also trained for the link prediction task. However, the message passing now goes through the whole training graph and the edges used as training samples in the loss are also all the edges of the original training graph. This model is never used for prediction. However, it is used in Phase 5 of the training described below.
_v) k-Refiners Fine-Tuning:_ in this phase, the base refiner learned in Phase 4 is copied and then further trained using the entire original graph for message passing and the creation of embeddings. However, only a specific subset of links are used as samples in the loss. Specifically, we create \(C+1\) copies of the base refiner: one for the links within each community and one for the inter-community links. This fine-tune approach has two main goals: (i) It is a reasonable assumption that what is learned on the whole graph is transferable to specific parts of that graph, especially if the model is fine-tuned on that part of the graph; (ii) Some communities can be tiny, and training a model on a few samples without overfitting is a complicated task. Using this base refiner/fine-tuning approach, we still obtain good generalization results for those communities.
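A condensed sketch of Phases 3 to 5, assuming PyTorch Geometric; the helper name `train_autoencoder` mirrors Algorithm 1 but its signature is illustrative. The key point is that the encoder message-passes over one edge set while the reconstruction loss is computed on a possibly different set of target edges.

```python
# GAE training sketch: message passing over mp_edges, loss on target_edges.
import copy
import torch
from torch_geometric.nn import GCNConv, GAE

class Encoder(torch.nn.Module):
    def __init__(self, d_in, d_hid=32, d_out=16):
        super().__init__()
        self.conv1 = GCNConv(d_in, d_hid)
        self.conv2 = GCNConv(d_hid, d_out)

    def forward(self, x, edge_index):
        return self.conv2(self.conv1(x, edge_index).relu(), edge_index)

def train_autoencoder(x, mp_edges, target_edges, model=None, epochs=200):
    model = model or GAE(Encoder(x.size(-1)))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        z = model.encode(x, mp_edges)                 # message passing edges
        model.recon_loss(z, target_edges).backward()  # loss on target edges only
        opt.step()
    return model

# base_linker:  train_autoencoder(x, intra_edges, inter_edges)
# base_refiner: train_autoencoder(x, full_edges, full_edges)
# k-refiner k:  train_autoencoder(x, full_edges, community_k_edges,
#                                 model=copy.deepcopy(base_refiner), epochs=50)
```

Fine-tuning a deep copy of the base refiner per community, as in the last comment, is what lets very small communities be fitted without overfitting.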
### Generation Process
Once the training has been completed, it is time to generate a large graph using these trained models.
```
0:\(\hat{G}[c_{i}\neq c_{j}]\): inter-community edges of \(\hat{G}\). \(\hat{G}[(c_{i}=label)\land(c_{j}=label)]\): set of links of \(\hat{G}\) where both end nodes are within community \(label\). \(K_{label},K_{inter}\): set of all possible edges that match that label (i.e. all possible inter-community edges or all possible edges within a community). Input: The models trained at the training phase, \(C\) the number of communities, \(R\) the number of refinement steps. Output: Generated graph \(\hat{G}\).
1for\(k\in[1,\dots,C]\)do
2\(\hat{g}_{k}\gets community\_generators[k].generate()\)
3 end for
4\(\hat{G}\leftarrow\bigcup\limits_{1\leq k\leq C}\hat{g}_{k}\)
5\(\mathbf{s}\gets base\_linker.score\_links(\hat{G},K_{inter})\)
6\(\hat{G}\leftarrow\hat{G}\bigcup sample(\mathbf{s})\)
7for\(r\in[1,\dots,R]\)do
8for\(label\in[1,\dots,C,"inter"]\)do
9if\(label\) is "\(inter\)" then
10\(\hat{\mathbf{s}}\leftarrow\mathbf{1}-k\_refiners["inter"].score\_links(\hat{G}, \hat{G}[c_{i}\neq c_{j}])\)
11else
12\(\hat{\mathbf{s}}\leftarrow\mathbf{1}-k\_refiners[label].score\_links(\hat{G}, \hat{G}[(c_{i}=label)\land(c_{j}=label)])\)
13 end if
14\(\mathbf{s}\gets k\_refiners[label].score\_links(\hat{G},K_{label})\)
15\(\hat{G}\leftarrow\hat{G}\setminus sample(\hat{\mathbf{s}})\)
16\(\hat{G}\gets\hat{G}\cup sample(\mathbf{s})\)
17 end for
18
19 end for
```
**Algorithm 2** SANGEA generation process
_i) Base Generation:_ graph generation using SANGEA happens in two phases. In the first phase, for each community, a graph is generated using the corresponding community generator, resulting in a collection of synthetic graphs, one per community. This collection
of disconnected components is then used for the message-passing of the base linker and the inter-community edges are predicted in one shot. We generate as many edges as there were in the original graph.
_ii) Refinement:_ due to the independence of the community generators, and due to the base linker's lack of access to the whole graph (in terms of message passing), we designed a refinement phase where we iteratively update the graph by means of a new link predictor that, this time, has access to all links for the message passing. Therefore, at each refinement step, we will input the full graph to all the \(k\)-refiners, each updating a different part of the graph. Doing this, we condition the updates of the links on the current state of the graph, in a way that is analogous to recurrent models. However, this is all done using one-shot models. We can perform this phase \(R\) times for the desired amount of refinements steps. Each refinement step replaces edges that have low scores with ones that have high scores, with the objective of improving the final topology of the generated graph. The number of refinements controls the trade-off between privacy and generation quality, and would in fact depend on the actual downstream use of the generated data.
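A sketch of one refinement update for a single label is given below, assuming (hypothetically) that each \(k\)-refiner is a GAE exposing `encode`/`decode` as in PyTorch Geometric; the function name and sampling scheme are illustrative, not SANGEA's exact implementation.

```python
# One refinement step: condition on the full current graph, then swap
# low-scoring existing edges for high-scoring candidate edges.
import torch

def refine_once(refiner, x, edge_index, current_edges, candidate_edges, n_swap):
    z = refiner.encode(x, edge_index)                # full-graph conditioning
    drop_w = 1.0 - refiner.decode(z, current_edges)  # low score => drop first
    add_w = refiner.decode(z, candidate_edges)       # high score => add first
    drop = current_edges[:, torch.multinomial(drop_w, n_swap)]
    add = candidate_edges[:, torch.multinomial(add_w, n_swap)]
    return drop, add  # caller removes `drop` from and adds `add` to the graph
```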
### Memory Usage
_i) At Training Time:_ most one-shot generation techniques require the model to store a dense adjacency matrix in memory, which prevents them from scaling well. Here, we derive a theoretical upper bound on the memory usage of SANGEA's full training procedure. Suppose we have a large graph of \(N\) nodes and \(M\) edges. The Louvain method, which runs with \(\mathcal{O}(M)\) memory cost, yields communities for the large graph, the largest of which, denoted \(c^{*}\), has \(N_{c^{*}}\) nodes. Because we control the size of the communities, we can assume that \(N_{c^{*}}\ll N\) (Lambiotte et al., 2014). In the worst case, the method used as a community generator stores the whole dense adjacency matrix of its community, which implies a memory consumption in \(\mathcal{O}(N_{c^{*}}^{2})\). For the base linker, the full training graph goes through the GNN layers, which store one latent representation per node. This means that its memory impact is proportional to \(N\); then, for each edge used in the loss, the pair of corresponding node representations goes through an MLP, which only needs to store gradients per node, again with a memory cost proportional to \(N\). The memory cost can be further reduced. To compute a node embedding, one may store only the \(k\)-hop neighbourhood of that node. Because loss samples are edges, a backpropagation step only requires the \(k\)-hop neighbourhoods of the two end nodes of that edge. This bounds the memory impact by \(\mathcal{O}(N_{k})\), with \(N_{k}\) being the maximum number of nodes over all \(k\)-hop neighbourhoods. Assuming sparsity, \(N_{k}\) is often much lower than \(N\). This memory frugality comes at a computational expense. In practice, the value of \(k\) can be chosen to match the memory capacity of the device running the computation. With this method, the whole memory impact of the base linker at training time is in \(\mathcal{O}(N_{k})\). The same holds for all other GAEs of the training process. Hence, at training time, our models run with memory complexity bounded by \(\mathcal{O}(\max(N_{c^{*}}^{2},M))\), omitting \(N_{k}\) as it can be assumed to be smaller than \(M\).
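The \(k\)-hop trick described above can be sketched as follows, assuming PyTorch Geometric; the helper `edge_loss_on_subgraph` is illustrative.

```python
# To backpropagate through the score of one positive edge (u, v), only the
# union of the k-hop neighbourhoods of u and v needs to be materialized.
import torch
from torch_geometric.utils import k_hop_subgraph

def edge_loss_on_subgraph(model, x, edge_index, u, v, k=2):
    nodes, sub_edges, mapping, _ = k_hop_subgraph(
        torch.tensor([u, v]), k, edge_index, relabel_nodes=True)
    z = model.encode(x[nodes], sub_edges)  # embeddings from the small subgraph
    pair = mapping.view(2, 1)              # relabeled indices of u and v
    return -torch.log(model.decode(z, pair) + 1e-15)  # positive-edge BCE term
```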
_ii) At Inference time:_ in the worst case, each community generator requires to generate a dense adjacency matrix for each community. This means that the memory impact is bounded in \(\mathcal{O}(N_{c^{*}}^{2})\). Then comes the base linker. This model scores all possible edges not
within communities. The number of such edges is \(N_{inter}=N^{2}-\sum_{i=1}^{C}N_{i}^{2}\), which grows in \(\mathcal{O}(N^{2})\). However, rather than scoring all edges at once and then sampling, it is possible to score all edges between a pair of communities, sample amongst those, and then discard the memory used for those scores. This means that at a given time, we only store all the possible edges between a pair of communities, thus making the generation bounded in memory by \(\mathcal{O}(N_{c^{*}}^{2})\).
Applying the same reasoning to refinement, for each community-refiner, we will be bounded in memory by \(\mathcal{O}(N_{i}^{2})\), and for the inter-refiner, using the same trick as for the base linker, we are bounded in \(\mathcal{O}(N_{c^{*}}^{2})\). Combining inference memory complexities, we obtain a final memory upper-bound growing in \(\mathcal{O}(N_{c^{*}}^{2})\) for the whole process.
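The pairwise-community trick can be sketched as follows, assuming inner-product decoding over precomputed node embeddings `z` and a community label vector `c` (names are illustrative): only one \(N_{a}\times N_{b}\) block of scores is ever materialized at a time.

```python
# Sample inter-community edges one pair of communities at a time, discarding
# each dense score block right after sampling to bound peak memory.
import torch

def sample_inter_edges(z, c, per_pair):
    kept, labels = [], c.unique()
    for a in range(len(labels)):
        for b in range(a + 1, len(labels)):
            ia = (c == labels[a]).nonzero().view(-1)
            ib = (c == labels[b]).nonzero().view(-1)
            scores = torch.sigmoid(z[ia] @ z[ib].T)  # N_a x N_b block only
            flat = torch.multinomial(scores.flatten(), per_pair)
            kept.append(torch.stack([ia[flat // len(ib)], ib[flat % len(ib)]]))
            del scores  # free the dense block before the next pair
    return torch.cat(kept, dim=1)
```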
## 4 Experiments
In the present work, we present an approach to scale up any SGG to large graphs. In this Section, we present the experiments that we designed to validate that our method is indeed efficient and generates high-quality samples. This section aims at answering the following research questions (**RQs**):
1. Which models in the literature handle large graphs?
2. Can we make state-of-the-art models scalable to large graphs thanks to our approach?
3. Is one approach better than the other in terms of utility and privacy?
4. Does our approach bring performance gains compared to state-of-the-art approaches that deal with large graphs?
### Data Description
Table 1 lists the various datasets used for our empirical evaluation, together with statistics of their _maximum connected component_. We chose single-graph datasets from Fey and Lenssen (2019). Cora and CiteSeer are citation networks; IMDB's nodes represent movies, actors, or directors. Amazon3's nodes represent products, and edges represent co-purchase relations between products. The Flickr dataset is an ensemble of images represented as a graph where the nodes are the images, and edges are assigned based on shared properties (e.g., the same geographical area, the same gallery, or comments made by the same user).
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{3}{c}{_Max. conn. comp. properties_} \\ \cline{2-5} & Nodes & Edges & Features & Classes \\ \hline Cora & 2485 & 5069 & 1433 & 7 \\ CiteSeer & 2120 & 3679 & 3703 & 6 \\ IMDB & 10384 & 16097 & 3066 & 4 \\ Amazon Computers & 13381 & 245778 & 767 & 10 \\ Flickr & 89250 & 449878 & 500 & 7 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the datasets (_Maximum Connected Component_).
### Evaluation Metrics
Multiple aspects have been considered to compare the proposed approaches. First is the structural and attribute similarity between the generated and original graphs (Thompson et al., 2022). Second, the utility of the generated graphs for downstream tasks. Third, training, generation time, and memory consumption in order to assess the ability to handle large graphs. Finally, the privacy risk associated with the generated graph.
Multiple metrics to assess the structural similarity between the generated and original graphs have been considered in our experiments. Namely, degree histogram, degree centrality, closeness centrality, eigenvector centrality, and clustering coefficient. The graphs are represented as normalized versions of these metrics and then compared using the Wasserstein distance (Vallender, 1974). Compared to the widely used Maximum Mean Discrepancy (MMD), the Wasserstein distance is more reliable. Indeed, the MMD requires additional parameters to be chosen, and issues of sensitivity to these choices have been recently raised by O'Bray et al. (2021). Node attribute similarity is implicitly taken into account by a distance measure over node embedding distributions, specifically because node embeddings also depend on node features. Nevertheless, for this comparison, we opted for the MMD, as it does not require the complex tasks of binning and then computing optimal transport between histograms in a multidimensional space. Two versions of a Graph Convolutional Network (GCN), one untrained and one trained on the link prediction task, are used to embed the input graphs into an embedding space of size 16. Then the MMD is used to compute the distance between the embeddings of the generated graph and the ones associated to the original graph. We also assess graph utility by training a GNN model on the generated graphs, on the link prediction task, and testing on the original graph. In fact, we train a VGAE link predictor on the generated graph and measure AUROC on the original graph.
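As an illustration of the topology comparison, the sketch below computes the 1D Wasserstein distance between the degree-centrality distributions of two graphs; the statistic and the input graphs are placeholders for the full set of metrics listed above.

```python
# Compare one normalized topological statistic of two graphs with the
# Wasserstein distance, as done for each metric in our evaluation.
import networkx as nx
from scipy.stats import wasserstein_distance

def degree_w1(g_real, g_fake):
    d_real = list(nx.degree_centrality(g_real).values())
    d_fake = list(nx.degree_centrality(g_fake).values())
    return wasserstein_distance(d_real, d_fake)

print(degree_w1(nx.barabasi_albert_graph(500, 3, seed=0),
                nx.barabasi_albert_graph(500, 3, seed=1)))
```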
Since our use-case consists in training a SGG from a single training graph, we evaluate the privacy concerns that this may imply. We choose to evaluate privacy using the Nearest Neighbour Distance Ratio (NNDR) on node embeddings of the original and generated graphs, a metric that is popular in the privacy domain (Gussenbauer et al., 2021). The full methodology works as follows: we first train a GCN embedder (details in the supplementary material) on the original data on the node classification task. Then, for each node embedding in the generated set, we compute the Euclidean distance to all nodes in the original set. Thus, we have a distance vector \(\mathbf{d}^{i}\in\mathbb{R}^{N}\) for the \(i\)-th node in the generated graph, with \(N\) being the number of nodes of the original graph. Finally, if \(d^{i}_{1}\) and \(d^{i}_{2}\) are respectively the smallest and second smallest distances in \(\mathbf{d}^{i}\), the NNDR for node \(i\) is computed as \(NNDR_{i}=\frac{d^{i}_{1}}{d^{i}_{2}}\). For each generated node, NNDR thus measures the ratio between the distances to its two closest neighbours in the training set. It can be interpreted as _the higher the ratio, the harder it is to infer that a given target node was a member of the SGG's training set_. Since this metric depends on the chosen embedder, we proceeded as follows: we estimated the NNDR between the original graph and itself, and between the original graph and perturbed versions of itself with increasing perturbation strength; we then chose the embedder that shows (i) a low NNDR value between the original graph and itself, and (ii) an increasing NNDR value on increasingly perturbed versions of the original data.
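A minimal sketch of the NNDR computation defined above; `emb_gen` and `emb_orig` stand for the GCN node embeddings of the generated and original graphs (the embedder is trained as described in the text and is not shown):

```python
import numpy as np

def nndr(emb_gen: np.ndarray, emb_orig: np.ndarray) -> np.ndarray:
    """NNDR_i = d_1 / d_2 for each generated node embedding (rows of emb_gen)."""
    # Pairwise Euclidean distances, shape (M, N); assumes N >= 2.
    d = np.linalg.norm(emb_gen[:, None, :] - emb_orig[None, :, :], axis=-1)
    d.sort(axis=1)
    return d[:, 0] / d[:, 1]

# Reported as mean +/- standard deviation over generated nodes, as in Table 3.
```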
### Experimental Protocol
All experiments are performed on a machine running an Intel(R) Xeon(R) Gold 6134 CPU @ 3.20GHz with 32 physical cores, one Nvidia Tesla V100 card with 32GB of GPU memory, and 128GB of RAM, under the Ubuntu 18.04.6 operating system.
The first step in our experiments is to assess the capability of existing models to deal with large graphs. Five approaches (Dai et al., 2020; Jo et al., 2022; Chen et al., 2022; Goyal et al., 2020; Zahirnia et al., 2022) have been considered in our experiments.4 Based on the results of this step, competitors to our approach are identified according to their ability to handle big graphs, and these models are then used as community generators within our proposed approach. First, communities are identified using the Louvain algorithm (Blondel et al., 2008) and used as training examples for the community generators; each community generator is trained on one subgraph (i.e., one community), and this is done for every considered state-of-the-art approach. Once communities are generated, our proposed approach assembles the final version of the graph (more details in Section 3; see the sketch after the footnote below). The next step is to compare the different variants of our proposed approach along multiple dimensions: statistical properties, utility metrics, scalability, and privacy risk, together with a comparison to the selected state-of-the-art approaches. In our experiments we set the number of refinements \(R=30\); the model parameters for all experiments are reported in Table 13 in the supplementary material, and were chosen via hyper-parameter optimization with the Optuna framework on the downstream link-prediction task. For the MMD metric, we used a Gaussian RBF kernel with \(\sigma=0.5\). For the community partitioning, we used a resolution parameter of 1 for the Cora and CiteSeer datasets, to ensure sufficiently large communities, and of 1.5, 5.5 and 5.5 for the IMDB, Amazon and Flickr datasets respectively, to keep communities to around a thousand nodes at most.
Footnote 4: Our most direct competitors are HiGen (Karami and Luo, 2023) and GELLCELL (Hammarstedt, 2022) but neither of them has code publicly available nor do they report results on large, attributed real-world graphs. We, therefore, consider GraphGen and BiGG as our closest scalable competitors.
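The sketch below illustrates the overall pipeline under stated assumptions: the Louvain call is a real `networkx` API, while `train_generator` and `link_and_refine` are hypothetical stand-ins for the chosen community generator (e.g., GDSS) and for SANGEA's linking-and-refinement stage.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

def sangea_like(g: nx.Graph, train_generator, link_and_refine,
                resolution: float = 1.0, n_refinements: int = 30):
    # Step 1: partition the training graph into communities (Louvain).
    communities = louvain_communities(g, resolution=resolution)
    # Step 2: train one community generator per community and sample from it.
    generated = []
    for nodes in communities:
        model = train_generator(g.subgraph(nodes).copy())  # hypothetical callable
        generated.append(model.sample())
    # Step 3: link the generated communities and refine links for R rounds.
    return link_and_refine(generated, n_refinements=n_refinements)  # hypothetical
```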
### Results and Analysis
In this section, detailed answers to the questions raised in the introduction of Section 4 are given, supported by numerical results.
_Scalability:_ Table 2 shows that three out of five tested approaches fail to train the generative models on IMDB's maximum connected component; only GraphGen and BiGG are capable of dealing with this graph. It also shows that all implemented state-of-the-art approaches fail to train on Amazon Computers's graph, which has 15 times more edges than IMDB. We refer the reader to the supplementary material for a similar result on the Flickr dataset. We note that the Flickr dataset is very large and has been used only to provide scalability results; its communities were copied from the original graph and linked together using our proposed approach. We conclude from these results that we have only two competitors for medium-sized graphs, and that it is not possible to train current state-of-the-art approaches on big graphs, given our computational capacity and the limitations of existing models from the literature. For the running-time metrics of the state-of-the-art approaches and of our model, we refer the interested reader to the appendix.
SANGEA is able to generate graphs of the size of Amazon Computers, and even of Flickr, which has over 6 times as many nodes and nearly twice as many edges; all state-of-the-art approaches considered here fail to do so (running times available in the supplementary material). This is explained by the fact that, no matter the size of the graph, SANGEA scales with the size of the largest community, which we control, letting the model choose where to set the time-memory trade-off. While memory usage is greatly improved, the independent steps of SANGEA add a sizeable time overhead; this overhead can be greatly reduced with parallel training, which SANGEA's design naturally supports.
_Graph generation quality:_ Table 2 reports the structural and attribute similarity between the original training graph and the generated one on the IMDB and Amazon Computers datasets (Thompson et al., 2022). SANGEA allows for node feature generation via the community generators; this is the case for Sa(GDSS) and Sa(NVDiff), while the other scalable models do not generate node features. Our method is superior on the MMD-over-GCN-embedding metrics. On the IMDB dataset, Sa(GDSS) matches or outperforms GraphGen on most statistical topology metrics and outperforms BiGG on most of them. The increased performance on the downstream link-prediction task for the IMDB dataset may again be explained by the lack of node feature generation in the competitors' models. As for the Amazon Computers dataset, only our models could scale to that size. These results suggest that on medium to large graphs, SANGEA can match, and even surpass, other state-of-the-art methods.
_Privacy:_ Table 3 shows the NNDR values obtained on 4 datasets, using SANGEA and also two models from the state of the art, NVDiff and GDSS, for which results are reported on generated communities only (we were not able to generate the full graph with these models). We compare the generated graphs against three baselines: the original graph and two perturbed versions of it, one at 50% and one at 75%. A perturbation at \(p\%\) corresponds to the original data where \(p\%\) of the edges have been replaced by random ones and \(p\%\) of the node feature matrix has been changed. The table
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{4}{c}{IMDB Dataset} & \multicolumn{3}{c}{Amazon Computers Dataset} \\ \cline{2-5} \cline{6-8} & \multicolumn{2}{c}{Others} & \multicolumn{2}{c}{Ours} & Others & \multicolumn{2}{c}{Ours} \\ & GraphGen & BiGG & Sa(GDSS) & Sa(NVDiff) & All & Sa(GDSS) & Sa(NVDiff) \\ \hline MMD (tr.) & - & - & **0.305** & 0.379 & & 0.419 & **0.375** \\ MMD (untr.) & - & - & **0.00210** & 0.0713 & & 0.0426 & **0.0329** \\ WS spectral & **1.53e-3** & 9.88e-3 & 5.66e-3 & 3.50e-3 & & **1.68e-3** & 1.74e-3 \\ WS deg. hist. & 125e-4 & 29.7e-4 & **8.58e-4** & 18.2e-4 & & **8.56e-5** & 11.0e-5 \\ WS deg. cent. & 3.19e-5 & 1.11e-5 & **1.05e-5** & 5.40e-5 & & **2.04e-6** & 9.88e-6 \\ WS clos. cent. & 26.34e-5 & 8.87e-6 & **2.84e-6** & 10.0e-6 & & **2.76e-6** & 12.8e-6 \\ WS eig. cent. & **1.62e-5** & 5.07e-5 & 4.12e-5 & 3.59e-5 & & **1.78e-5** & 3.33e-5 \\ WS clust. coeff. & 29.7e-6 & **1.44e-6** & 15.1e-6 & 20.7e-6 & & **4.24e-5** & 5.78e-5 \\ AUROC (LP) & 0.74 & 0.73 & **0.76** & 0.74 & & 0.814 & **0.833** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Structural and attribute similarity results on IMDB and Amazon Computers datasets. OOM stands for Out Of Memory and TO stands for TimeOut. Any competitor among GraphGen, BiGG, NVDiff, GVAEmm or GDSS not shown in the table indicates an OOM or TO. Sa(GDSS) stands for SANGEA using GDSS as community generator.
shows that, on every dataset, both of our generated graphs at least match the 50% perturbation, often reaching the level of, or surpassing, the 75% perturbation. This shows that even though our method generates graphs of high utility and close statistical properties, it achieves, for the individual nodes of the training graph, the privacy level of at least a 50%-perturbed graph, which is higher than the privacy level (NNDR value) reached by using NVDiff or GDSS alone.
_Ablation study_: One of the main novelties of this work, and what differentiates it most from HiGen (Karami and Luo, 2023) and GELLCELL (Hammarstedt, 2022), is the refinement process, which conditions the predictions and updates of the links in the generated graph on previously generated links and nodes. Table 4 shows, for two different datasets and two different community generators, that the refinement process improves the utility of the final generated graphs, confirming that this process is a valuable feature of the method proposed in the present work. Results on more datasets, provided in the supplementary material, show similar trends.
## 5 Conclusion
We presented SANGEA, a novel, lightweight method to scale graph generative models to many nodes and edges. From a single large training graph, it generates another large graph that matches the statistical properties of the original one while achieving high privacy scores. Extensive experiments have been conducted to assess the effectiveness of our approach, benchmarking against five state-of-the-art approaches from the literature. We show in our experiments that SANGEA can work with graphs
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Cora, Sa(NVDiff)} & \multicolumn{2}{c}{IMDB, Sa(GDSS)} \\ \cline{2-5} & _w.o. ref._ & _w. ref._ & _w.o. ref._ & _w. ref._ \\ \hline MMD (tr.) & 13.7e-2 & **9.6e-2** & 0.438 & **0.305** \\ MMD (untr.) & 17.9e-1 & **4.9e-1** & 0.0631 & **0.00210** \\ WS spectral & 11.e-4 & **5.22e-4** & 43.2e-3 & **5.66e-3** \\ WS deg. hist. & 17.6e-3 & **1.46e-3** & 41.4e-4 & **8.58e-4** \\ WS deg. cent. & 9.25e-05 & **6.66e-5** & **1.05e-5** & **1.05e-5** \\ WS clos. cent. & 10.6e-5 & **9.10e-5** & 26.2e-6 & **2.84e-6** \\ WS eigenv. cent. & 13.1e-4 & **3.88e-4** & 31.2e-5 & **4.12e-5** \\ WS clust. coeff. & 35.8e-4 & **4.64e-4** & 45.6e-5 & **1.51e-5** \\ AUROC (LP) & 0.71 & **0.74** & 0.71 & **0.76** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation study of the refinement process
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & \multicolumn{4}{c}{Datasets} \\ \cline{2-5} & CiteSeer & Cora & IMDB & Amazon \\ \hline Orig. data & 0.05 \(\pm\) 0.21 & 0.05 \(\pm\) 0.20 & 0.32 \(\pm\) 0.43 & 0.50 \(\pm\) 0.43 \\ Pert. data (50\%) & 0.89 \(\pm\) 0.01 & 0.85 \(\pm\) 0.14 & 0.81 \(\pm\) 0.25 & 0.89 \(\pm\) 0.14 \\ Pert. data. (75\%) & 0.90 \(\pm\) 0.09 & 0.90 \(\pm\) 0.09 & 0.97 \(\pm\) 0.10 & 0.91 \(\pm\) 0.10 \\ \hline NVDiff & 0.66 \(\pm\) 0.39 & **0.98 \(\pm\) 0.03** & **0.99 \(\pm\) 0.06** & 0.88 \(\pm\) 0.25 \\ GDSS & **0.84 \(\pm\) 0.00** & 0.69 \(\pm\) 0.00 & 0.22 \(\pm\) 0.00 & **0.92 \(\pm\) 0.01** \\ \hline Sa(NVDiff) & 0.90 \(\pm\) 0.09 & 0.86 \(\pm\) 0.16 & 0.92 \(\pm\) 0.15 & 0.91 \(\pm\) 0.20 \\ Sa(GDSS) & **0.91 \(\pm\) 0.09** & **0.99 \(\pm\) 0.03** & **0.97 \(\pm\) 0.11** & **0.99 \(\pm\) 0.01** \\ \hline \hline \end{tabular}
\end{table}
Table 3: NNDR (mean \(\pm\) stand. dev.) of generated graphs. NVDiff and GDSS without SANGEA were evaluated on the largest community of each dataset, all others on the full dataset.
with up to 90,000 nodes and 450,000 edges, where the chosen literature approaches fail. Moreover, the quality of generation has been assessed using multiple graph quality metrics; numerical results show a high similarity between our generated graph and the original one compared to our direct competitors, and SANGEA also achieves a better utility score on the link-prediction task. In addition, because our setting trains from a single large graph, a privacy assessment methodology has been proposed and discussed. Our results show that the generated graphs naturally obtain high privacy scores and hence are low-risk.
Our proposed approach suffers from a set of limitations. First, feature generation is only available when the community generator offers this property, and it is limited to node features. Second, the input graphs are assumed to be static, whereas in many applications dynamically evolving graphs are of interest; for example, such data can exhibit user dynamics like join/leave events and evolving relationships. Addressing these limitations is an interesting direction for extending the capabilities of our approach in future work.
|
2301.00301 | Generalized PTR: User-Friendly Recipes for Data-Adaptive Algorithms with
Differential Privacy | The ''Propose-Test-Release'' (PTR) framework is a classic recipe for
designing differentially private (DP) algorithms that are data-adaptive, i.e.
those that add less noise when the input dataset is nice. We extend PTR to a
more general setting by privately testing data-dependent privacy losses rather
than local sensitivity, hence making it applicable beyond the standard
noise-adding mechanisms, e.g. to queries with unbounded or undefined
sensitivity. We demonstrate the versatility of generalized PTR using private
linear regression as a case study. Additionally, we apply our algorithm to
solve an open problem from ''Private Aggregation of Teacher Ensembles (PATE)''
-- privately releasing the entire model with a delicate data-dependent
analysis. | Rachel Redberg, Yuqing Zhu, Yu-Xiang Wang | 2022-12-31T22:22:53Z | http://arxiv.org/abs/2301.00301v1 | # Generalized PTR: User-Friendly Recipes for Data-Adaptive Algorithms with Differential Privacy
###### Abstract
The "Propose-Test-Release" (PTR) framework (Dwork and Lei, 2009) is a classic recipe for designing differentially private (DP) algorithms that are data-adaptive, i.e. those that add less noise when the input dataset is "nice". We extend PTR to a more general setting by privately testing _data-dependent privacy losses_ rather than _local sensitivity_, hence making it applicable beyond the standard noise-adding mechanisms, e.g. to queries with unbounded or undefined sensitivity. We demonstrate the versatility of generalized PTR using private linear regression as a case study. Additionally, we apply our algorithm to solve an open problem from "Private Aggregation of Teacher Ensembles (PATE)" (Papernot et al., 2017, 2018) -- privately releasing the entire model with a delicate data-dependent analysis.
## 1 Introduction
The guarantees of differential privacy (DP) (Dwork et al., 2006) are based on worst-case outcomes across all possible datasets. A common paradigm is therefore to add noise scaled by the _global sensitivity_ of a query \(f\), i.e. the maximum change in \(f\) between any pair of neighboring datasets.
A given dataset \(X\) might have a _local sensitivity_ that is much smaller than the global sensitivity, in which case we can hope to add a smaller amount of noise (calibrated to the local rather than the global sensitivity) while achieving the same privacy guarantee. However, this must not be undertaken naively - the local sensitivity is a dataset-dependent function and so calibrating noise to the local sensitivity could leak information about the dataset (Nissim et al., 2007).
The "Propose-Test-Release" (PTR) framework (Dwork and Lei, 2009) resolves this issue by introducing a test to privately check whether a proposed bound on the local sensitivity is valid. Only if the test "passes" is the output released with noise calibrated to the proposed bound on the local sensitivity.
PTR is a powerful and flexible tool for designing data-adaptive DP algorithms, but it has several limitations. First, it applies only to noise-adding mechanisms which calibrate noise according to the sensitivity of a query. Second, the test in "Propose-Test-Release" is computationally expensive for all but a few simple queries such as privately releasing the median or mode. Third, while some existing works (Decarolis et al., 2020; Kasiviswanathan et al., 2013; Liu et al., 2021) follow the PTR-like approach of testing "nice" properties of a dataset before exploiting these properties in a private release1,
there has not been a systematic recipe for _discovering_ which properties should be tested.
In this paper, we propose a generalization of PTR which addresses these limitations. The centerpiece of our framework is a differentially private test on the _data-dependent privacy loss_. This test does not directly consider the local sensitivity of a query and is therefore not limited to additive noise mechanisms. Moreover, in many cases, the test can be efficiently implemented by privately releasing a high-probability upper bound, thus avoiding the need to search an exponentially large space of datasets. Furthermore, the derivation of the test itself often spells out exactly what properties of the input dataset need to be checked, which streamlines the design of data-adaptive DP algorithms.
Our contributions are summarized as follows:
1. We propose a generalization of PTR which can handle algorithms beyond noise-adding mechanisms. Generalized PTR allows us to plug in _any_ data-dependent DP analysis to construct a high-probability DP test that adapts to favorable properties of the input dataset - without painstakingly designing each test from scratch.
2. We demonstrate that many existing examples of PTR and PTR-like algorithms can be unified under the generalized PTR framework, sometimes resulting in a tighter analysis (see an example of report-noisy-max in Sec A.1).
3. We show that one can publish a DP model through privately upper-bounding a one-dimensional statistic -- no matter how complex the output space of the mechanism is. We apply this result to solve an open problem from PATE (Papernot et al., 2017, 2018).
4. Our results broaden the applicability of private hyper-parameter tuning (Liu and Talwar, 2019; Papernot and Steinke, 2021) by enabling joint selection of DP-specific parameters (e.g., noise level) and native parameters of the algorithm (e.g., learning rate, regularization weight), which may jointly affect the data-dependent DP losses.
## 2 Related Work
**Data-dependent DP algorithms.** Privately calibrating noise to the local sensitivity is a well-studied problem. One approach is to add noise calibrated to the smooth sensitivity (Nissim et al., 2007), an upper bound on the local sensitivity which changes slowly between neighboring datasets. An alternative to this - and the focus of our work - is Propose-Test-Release (PTR) (Dwork and Lei, 2009), which works by calculating the distance \(\mathcal{D}_{\beta}(X)\) from \(X\) to the nearest dataset whose local sensitivity violates a proposed bound \(\beta\). The PTR algorithm then adds noise to \(\mathcal{D}_{\beta}(X)\) before testing whether this privately computed distance is sufficiently large.
PTR spin-offs abound. Notable examples include stability-based methods (Thakurta and Smith, 2013) (stable local sensitivity of 0 near the input data) and privately releasing upper bounds of the local sensitivity (Kasiviswanathan et al., 2013; Liu et al., 2021; Decarolis et al., 2020). We refer readers to Chapter 3 of Vadhan (2017) for a concise summary of these classical results. Recent work (Wang et al., 2022) has provided Renyi DP bounds for PTR and demonstrated its applications to robust DP-SGD. Our work (see Section 5.2) also considers applications of PTR in data-adaptive private deep learning: instead of testing the local sensitivity of each gradient step as in Wang et al. (2022), our PTR-based PATE algorithm tests the data-dependent privacy loss as a whole.
Liu et al. (2021) proposed a new variant called High-dimensional Propose-Test-Release (HPTR). HPTR provides a systematic way of solving DP statistical estimation problems by using the exponential
mechanism (EM) with carefully constructed scores based on certain one-dimensional robust statistics, which have stable local sensitivity bounds. HPTR focuses on designing data-adaptive DP mechanisms from scratch; our method, in contrast, converts existing randomized algorithms (including EM and even some that do not satisfy DP) into those with formal DP guarantees. Interestingly, our proposed method also depends on a one-dimensional statistic of direct interest: the data-dependent privacy loss.
**Data-dependent DP losses.** The flip side of data-dependent DP algorithms is the study of data-dependent DP losses (Papernot et al., 2018; Soria-Comas et al., 2017; Wang, 2017), which fix the randomized algorithm but parameterize the resulting privacy loss by the specific input dataset. For example: In the simple mechanism that adds Laplace noise with parameter \(b\), data-dependent DP losses are \(\epsilon(X)=\Delta_{\text{LS}}(X)/b\). The data-dependent DP losses are often much smaller than the DP loss, but they themselves depend on the data and thus may reveal sensitive information; algorithms satisfying a data-dependent privacy guarantee are not formally DP with guarantees any smaller than that of the worst-case. Existing work has considered privately publishing these data-dependent privacy losses (Papernot et al., 2018; Redberg and Wang, 2021), but notice that privately publishing these losses does not improve the DP parameter of the given algorithm. Part of our contribution is to resolve this conundrum by showing that a simple post-processing step of the privately released upper bound of \(\epsilon(\text{Data})\) gives a formal DP algorithm.
**Private hyper-parameter tuning.** Our work has a nice connection with private hyper-parameter tuning. Prior work (Liu and Talwar, 2019; Papernot and Steinke, 2021) requires each candidate configuration to be released with the same DP (or Renyi DP) parameter set. Another hidden assumption is that the parameters must not be privacy-correlated (i.e., the parameter choice does not change the privacy guarantee); otherwise we need to use the largest DP bound across all candidates. For example, Liu and Talwar (2019) show that if each mechanism (instantiated with one group of hyper-parameters) is \((\epsilon,0)\)-DP, then running a random number of mechanisms and reporting the best option satisfies \((3\epsilon,0)\)-DP. Our work directly generalizes the above results by (1) considering a wide range of hyper-parameters, either privacy-correlated or not; and (2) requiring only that individual candidates have a _testable_ data-dependent DP.
## 3 Preliminaries
Datasets \(X,X^{\prime}\in\mathcal{X}\) are neighbors if they differ by no more than one datapoint - i.e., \(X\simeq X^{\prime}\) if \(d(X,X^{\prime})\leq 1\). We will define \(d(\cdot)\) to be the number of coordinates that differ between two datasets of the same size \(n\): \(d(X,Y)=\#\{i\in[n]:X_{i}\neq Y_{i}\}\).
We use \(||\cdot||\) to denote the radius of the smallest Euclidean ball that contains the input set, e.g. \(||\mathcal{X}||=\sup_{x\in\mathcal{X}}||x||\).
The parameter \(\phi\) denotes the privacy parameters associated with a mechanism (e.g. noise level, regularization). \(\mathcal{M}_{\phi}\) is a mechanism parameterized by \(\phi\). For mechanisms with continuous output space, we will take \(\Pr[\mathcal{M}(X)=y]\) to be the probability density function of \(\mathcal{M}(X)\) at \(y\).
**Definition 3.1** (Differential privacy (Dwork et al., 2006)).: Fix \(\epsilon,\delta\geq 0\). A randomized algorithm \(\mathcal{M}:\mathcal{X}\to\mathcal{S}\) satisfies \((\epsilon,\delta)\)-DP if for all neighboring datasets \(X\simeq X^{\prime}\) and for all measurable sets \(S\subset\mathcal{S}\),
\[\Pr\bigl{[}\mathcal{M}(X)\in S\bigr{]}\leq e^{\epsilon}\Pr\bigl{[}\mathcal{M} (X^{\prime})\in S\bigr{]}+\delta.\]
Suppose we wish to privately release the output of a real-valued function \(f:\mathcal{X}\to\mathcal{R}\). We can do so
by calculating the _global sensitivity_\(\Delta_{GS}\), calibrating the noise scale to the global sensitivity and then adding sampled noise to the output.
**Definition 3.2** (Local / Global sensitivity).: The local \(\ell_{\star}\)-sensitivity of a function \(f\) is defined as \(\Delta_{LS}(X)=\max\limits_{X\simeq X^{\prime}}||f(X)-f(X^{\prime})||_{\ast}\) and the global sensitivity of \(f\) is \(\Delta_{GS}=\sup_{X}\Delta_{LS}(X)\).
### Propose-Test-Release
Calibrating the noise level to the local sensitivity \(\Delta_{LS}(X)\) of a function would allow us to add less noise and therefore achieve higher utility for releasing private queries. However, the local sensitivity is a data-dependent function and naively calibrating the noise level to \(\Delta_{LS}(X)\) will not satisfy DP.
PTR resolves this issue in a three-step procedure: **propose** a bound on the local sensitivity, privately **test** that the bound is valid (with high probability), and if so calibrate noise according to the bound and **release** the output.
PTR privately computes the distance \(\mathcal{D}_{\beta}(X)\) between the input dataset \(X\) and the nearest dataset \(X^{\prime\prime}\) whose local sensitivity exceeds the proposed bound \(\beta\):
\[\mathcal{D}_{\beta}(X)=\min\limits_{X^{\prime\prime}}\{d(X,X^{\prime\prime}): \Delta_{LS}(X^{\prime\prime})>\beta\}.\]
```
1: Input: Dataset \(X\); privacy parameters \(\epsilon,\delta\); proposed bound \(\beta\) on \(\Delta_{LS}(X)\); query function \(f:\mathcal{X}\to\mathbb{R}\).
2: if \(\mathcal{D}_{\beta}(X)+\operatorname{\mathrm{Lap}}\left(\frac{1}{\epsilon}\right)\leq\frac{\log(1/\delta)}{\epsilon}\) then output \(\bot\),
3: else release \(f(X)+\operatorname{\mathrm{Lap}}\left(\frac{\beta}{\epsilon}\right)\).
```
**Algorithm 1** Propose-Test-Release [Dwork and Lei, 2009]
**Theorem 3.3**.: _Algorithm 1 satisfies (\(2\epsilon,\delta\))-DP. [Dwork and Lei, 2009]_
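A minimal sketch of Algorithm 1 in Python, assuming the caller supplies a routine `dist_to_unsafe` that computes \(\mathcal{D}_{\beta}(X)\) (tractable only for simple queries such as the median):

```python
import numpy as np

def propose_test_release(X, f, beta, eps, delta, dist_to_unsafe):
    """Algorithm 1: release f(X) with Laplace(beta/eps) noise if the test passes."""
    d_hat = dist_to_unsafe(X, beta) + np.random.laplace(scale=1.0 / eps)
    if d_hat <= np.log(1.0 / delta) / eps:
        return None  # the bot output
    return f(X) + np.random.laplace(scale=beta / eps)
```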
Rather than proposing an arbitrary threshold \(\beta\), one can also privately release an upper bound of the local sensitivity and calibrate noise according to this upper bound. This was used for node DP in graph statistics [Kasiviswanathan et al., 2013], and for fitting topic models using spectral methods [Decarolis et al., 2020].
## 4 Generalized PTR
This section introduces the generalized PTR framework. We first formalize the notion of _data-dependent_ differential privacy that conditions on an input dataset \(X\).
**Definition 4.1** (Data-dependent privacy).: Suppose we have \(\delta>0\) and a function \(\epsilon:\mathcal{X}\to\mathbb{R}\). We say that mechanism \(\mathcal{M}\) satisfies \((\epsilon(X),\delta)\) data-dependent DP2 for dataset \(X\) if for all possible output sets \(S\) and neighboring datasets \(X^{\prime}\),
Footnote 2: We will sometimes write that \(\mathcal{M}(X)\) satisfies \(\epsilon(X)\) data-dependent DP with respect to \(\delta\).
\[\Pr\bigl{[}\mathcal{M}(X)\in S\bigr{]} \leq e^{\epsilon(X)}\Pr\bigl{[}\mathcal{M}(X^{\prime})\in S \bigr{]}+\delta,\] \[\Pr\bigl{[}\mathcal{M}(X^{\prime})\in S\bigr{]} \leq e^{\epsilon(X)}\Pr\bigl{[}\mathcal{M}(X)\in S\bigr{]}+\delta.\]
In generalized PTR, we propose a value \(\phi\) for the randomized algorithm \(\mathcal{M}\), which could be a noise scale or regularization parameter - or a set including both. For example, \(\phi=(\lambda,\gamma)\) in Example 4.4. We then say that \(\mathcal{M}_{\phi}\) is the mechanism \(\mathcal{M}\) parameterized by \(\phi\), and \(\epsilon_{\phi}(X)\) its data-dependent DP.
The following example illustrates how to derive the data-dependent DP for a familiar friend - the Laplace mechanism.
**Example 4.2**.: _(Data-dependent DP of Laplace Mechanism.) Given a function \(f:\mathcal{X}\to\mathbb{R}\), we will define_
\[\mathcal{M}_{\phi}(X)=f(X)+\text{Lap}\left(\phi\right).\]
_We then have_
\[\log\frac{\Pr[\mathcal{M}_{\phi}(X)=y]}{\Pr[\mathcal{M}_{\phi}(X^{ \prime})=y]}\leq\frac{|f(X)-f(X^{\prime})|}{\phi}.\]
_Maximizing the above calculation over all possible outputs \(y\) and using Definition 4.1,_
\[\epsilon_{\phi}(X)=\max_{X^{\prime}:X^{\prime}\simeq X}\frac{|f(X)-f(X^{\prime })|}{\phi}=\frac{\Delta_{LS}(X)}{\phi}.\]
The data-dependent DP \(\epsilon_{\phi}(X)\) is a function of both the dataset \(X\) and the parameter \(\phi\). Maximizing \(\epsilon_{\phi}(X)\) over \(X\) recovers the standard DP guarantee of running \(\mathcal{M}\) with parameter \(\phi\).
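As a concrete illustration of Example 4.2, the sketch below evaluates \(\epsilon_{\phi}(X)=\Delta_{LS}(X)/\phi\) for the median of a one-dimensional dataset, using the fact that replacing one entry can move the median at most to a neighboring order statistic:

```python
import numpy as np

def eps_phi_median(x: np.ndarray, phi: float) -> float:
    """eps_phi(X) = Delta_LS(X) / phi for the median; assumes odd n >= 3."""
    x = np.sort(x)
    m = len(x) // 2
    # Replacing one entry moves the median at most to a neighboring order statistic.
    local_sens = max(x[m] - x[m - 1], x[m + 1] - x[m])
    return local_sens / phi
```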
```
1: Input: Dataset \(X\); mechanism \(\mathcal{M}_{\phi}:\mathcal{X}\to\mathcal{R}\) and its privacy budget \(\epsilon,\delta\); (\(\hat{\epsilon},\hat{\delta}\))-DP test \(\mathcal{T}\); false positive rate \(\leq\delta^{\prime}\); data-dependent DP function \(\epsilon_{\phi}(\cdot)\) w.r.t. \(\delta\).
2: if not \(\mathcal{T}(X)\) then output \(\bot\),
3: else release \(\theta=\mathcal{M}_{\phi}(X)\).
```
**Algorithm 2** Generalized Propose-Test-Release
**Theorem 4.3** (Privacy guarantee of generalized PTR).: _Consider a proposal \(\phi\) and a data-dependent DP function \(\epsilon_{\phi}(X)\) w.r.t. \(\delta\). Suppose that we have an (\(\hat{\epsilon},\hat{\delta}\))-DP test \(\mathcal{T}:\mathcal{X}\to\{0,1\}\) such that when \(\epsilon_{\phi}(X)>\epsilon\),_
\[\mathcal{T}(X)=\begin{cases}0\ \ \text{with probability }1-\delta^{\prime},\\ 1\ \ \text{with probability }\delta^{\prime}.\end{cases}\]
_Then Algorithm 2 satisfies (\(\epsilon+\hat{\epsilon},\delta+\hat{\delta}+\delta^{\prime}\))-DP._
Proof sketch.: There are three main cases to consider:
1. We decide not to run \(\mathcal{M}_{\phi}\).
2. We decide to run \(\mathcal{M}_{\phi}\) and \(\epsilon_{\phi}(X)>\epsilon\);
3. We decide to run \(\mathcal{M}_{\phi}\) and \(\epsilon_{\phi}(X)\leq\epsilon\).
In the first case, the decision to output \(\bot\) is post-processing of an \((\hat{\epsilon},\hat{\delta})\)-DP mechanism and inherits its privacy guarantees. The second case occurs when the \((\hat{\epsilon},\hat{\delta})\)-DP test "fails" (produces a false positive) and occurs with probability at most \(\delta^{\prime}\). The third case is a composition of an \((\hat{\epsilon},\hat{\delta})\)-DP algorithm and an \((\epsilon,\delta)\)-DP algorithm.
Generalized PTR is a _strict_ generalization of Propose-Test-Release. For some function \(f\), define \(\mathcal{M}_{\phi}\) and \(\mathcal{T}\) as follows:
\[\mathcal{M}_{\phi}(X)=f(X)+\mathrm{Lap}(\phi);\] \[\mathcal{T}(X)=\begin{cases}0&\text{if}\ \ \mathcal{D}_{\beta}(X)+ \mathrm{Lap}\left(\frac{1}{\epsilon}\right)>\frac{\log(1/\delta)}{\epsilon}, \\ 1&\text{otherwise}.\end{cases}\]
Notice that our choice of parameterization is \(\phi=\frac{\beta}{\epsilon}\), where \(\phi\) is the scale of the Laplace noise. In other words, we know from Example 4.2 that \(\epsilon_{\phi}(X)>\epsilon\) exactly when \(\Delta_{LS}(X)>\beta\).
For noise-adding mechanisms such as the Laplace mechanism, the sensitivity is proportional to the privacy loss (in both the global and local sense, i.e. \(\Delta_{GS}\propto\epsilon\) and \(\Delta_{LS}\propto\epsilon(X)\)). Therefore for these mechanisms the only difference between privately testing the local sensitivity (Algorithm 1) and privately testing the data-dependent DP (Theorem 4.3) is a change of parameterization.
### Limitations of local sensitivity
Why do we want to generalize PTR beyond noise-adding mechanisms? Compared to classic PTR, the generalized PTR framework allows us to be more flexible in both the type of test conducted and also the type of mechanism whose output we wish to release. For many mechanisms, the local sensitivity either does not exist or is only defined for specific data-dependent quantities (e.g., the sensitivity of the score function in the exponential mechanism) rather than the mechanism's output.
The following example illustrates this issue.
**Example 4.4** (Private posterior sampling).: _Let \(\mathcal{M}:\mathcal{X}\times\mathcal{Y}\to\Theta\) be a private posterior sampling mechanism [20, 16, 22] for approximately minimizing \(F_{X}(\theta)\)._
\(\mathcal{M}\) _samples \(\theta\sim P(\theta)\propto e^{-\gamma(F_{X}(\theta)+0.5\lambda||\theta||^{2})}\) with parameters \(\gamma,\lambda\). Note that \(\gamma,\lambda\) cannot be appropriately chosen for this mechanism to satisfy DP without going through a sensitivity calculation of \(\arg\min F_{X}(\theta)\). In fact, the global and local sensitivity of the minimizer is unbounded even in linear regression problems, i.e., when \(F_{X}(\theta)=\frac{1}{2}||y-X\theta||^{2}\)._
Output perturbation algorithms do work for the above problem when we regularize, but they are known to be suboptimal in theory and in practice [11]. In Section 5.1 we demonstrate how to apply generalized PTR to achieve a data-adaptive posterior sampling mechanism.
Even in the cases of noise-adding mechanisms where PTR seems to be applicable, it does not lead to a tight privacy guarantee. Specifically, by an example of privacy amplification by post-processing (Example A.1 in the appendix), we demonstrate that the local sensitivity does not capture all sufficient statistics for data-dependent privacy analysis and thus is loose.
### Which \(\phi\) to propose
The main limitation of generalized PTR is that one needs to "propose" a good guess of parameter \(\phi\). Take the example of \(\phi\) being the noise level in a noise-adding mechanism. Choosing too small a \(\phi\) will result in a useless output \(\bot\), while choosing too large a \(\phi\) will add more noise than necessary. Finding this 'Goldilocks' \(\phi\) might require trying out many different possibilities - each of which will consume privacy budget.
This section introduces a method to jointly tune privacy parameters (e.g., noise scale) along with parameters related only to the utility of an algorithm (e.g., learning rate or batch size in stochastic gradient descent) - while avoiding the \(\bot\) output.
Algorithm 3 takes a list of parameters as input, runs generalized PTR with each of the parameters, and returns the output with the best utility. We show that the privacy guarantee with respect to \(\epsilon\) is independent of the number of \(\phi\) that we try.
Formally, let \(\phi_{1},...,\phi_{k}\) be a set of hyper-parameters and let \(\tilde{\theta}_{i}\in\{\bot,\text{Range}(\mathcal{M})\}\) denote the output of running generalized PTR on a private dataset \(X\) with \(\phi_{i}\). Let \(X_{val}\) be a public validation set and \(q(\tilde{\theta}_{i})\) be the score of evaluating \(\tilde{\theta}_{i}\) on \(X_{val}\) (e.g., validation accuracy). The goal is to select a pair (\(\tilde{\theta}_{i}\), \(\phi_{i}\)) such that the DP model \(\tilde{\theta}_{i}\) maximizes the validation score.
The generalized PTR framework with privacy calibration is described in Algorithm 3. The privacy guarantee of Algorithm 3 is an application of Liu and Talwar (2019).
```
1: Input: Privacy budget per PTR algorithm (\(\epsilon^{*},\delta^{*}\)), cut-off \(T\), parameters \(\phi_{1:k}\), flipping probability \(\tau\) and validation score function \(q(\cdot)\).
2: Initialize the set \(S=\varnothing\).
3: Draw \(G\) from a geometric distribution \(\mathcal{D}_{\tau}\) and let \(\hat{T}=\min(T,G)\).
4: for \(i=1,...,\hat{T}\) do
5:  pick a random \(\phi_{i}\) from \(\phi_{1:k}\).
6:  evaluate \(\phi_{i}\): \((\tilde{\theta}_{i},q(\tilde{\theta}_{i}))\leftarrow\) Algorithm 2(\(\phi_{i},(\epsilon^{*},\delta^{*})\)).
7:  \(S\gets S\cup\{\tilde{\theta}_{i},q(\tilde{\theta}_{i})\}\).
8: end for
9: Output the highest-scored candidate from \(S\).
```
**Algorithm 3** PTR with hyper-parameter selection
**Theorem 4.5** ( Theorem 3.4 Liu and Talwar (2019) ).: _Fix any \(\tau\in[0,1],\delta_{2}>0\) and let \(T=\frac{1}{\tau}\log\frac{1}{\delta_{2}}\). If each oracle access to Algorithm 2 is \((\epsilon^{*},\delta^{*})\)-DP, then Algorithm 3 is \((3\epsilon^{*}+3\sqrt{2\delta^{*}},\sqrt{2\delta^{*}}T+\delta_{2})\)-DP._
The theorem implies that one can try a random number of \(\phi\) while paying a constant \(\epsilon\). In practice, we can roughly set \(\tau=\frac{1}{10k}\) so that the algorithm is likely to test all \(k\) parameters. We emphasize that the privacy and utility guarantees (stated in the appendix) are not our contribution. But the idea of applying generalized PTR to enforce a uniform DP guarantee over all choices of parameters with a data-dependent analysis is new and, in our opinion, significantly broadens the applicability of the generic hyper-parameter tuning machinery from Liu and Talwar (2019).
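A minimal sketch of Algorithm 3, where each call `run_ptr(phi)` is assumed to be an \((\epsilon^{*},\delta^{*})\)-DP invocation of Algorithm 2 that returns either `None` (the \(\bot\) output) or a pair (model, validation score):

```python
import numpy as np

def ptr_hyperparam_select(phis, run_ptr, tau, T):
    """Algorithm 3: evaluate a random number of candidates, return the best-scored one."""
    t_hat = min(T, np.random.geometric(tau))  # hat{T} = min(T, G), G ~ Geom(tau)
    best = None
    for _ in range(t_hat):
        phi = phis[np.random.randint(len(phis))]   # pick a random candidate
        out = run_ptr(phi)                         # (theta, score) or None (bot)
        if out is not None and (best is None or out[1] > best[1]):
            best = out
    return best
```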
### Construction of the DP test
Classic PTR uses the Laplace mechanism to construct a differentially private upper bound of \(\mathcal{D}_{\beta}(X)\), the distance from input dataset \(X\) to the closest dataset whose local sensitivity exceeds the proposed
bound \(\beta\). The tail bound of the Laplace distribution then ensures that if \(\mathcal{D}_{\beta}(X)=0\) (i.e. if \(\Delta_{LS}(X)>\beta\)), then the output will be released with only a small probability \(\delta\).
The following theorem shows that we could instead use a differentially private upper bound of the data-dependent DP \(\epsilon_{\phi}(X)\) in order to test whether to run the mechanism \(\mathcal{M}_{\phi}\).
**Theorem 4.6** (Generalized PTR with private upper bound).: _Suppose we have a differentially private upper bound of \(\epsilon_{\phi}(X)\) w.r.t. \(\delta\) such that with probability at least \(1-\delta^{\prime}\), \(\epsilon_{\phi}^{P}(X)>\epsilon_{\phi}(X)\). Further suppose we have an \((\hat{\epsilon},\hat{\delta})\)-DP test \(\mathcal{T}\) such that_
\[T(X)=\begin{cases}1&\text{ if }\epsilon_{\phi}^{P}(X)<\epsilon,\\ 0&\text{ otherwise}.\end{cases}\]
_Then Algorithm 2 is \((\epsilon+\hat{\epsilon},\delta+\hat{\delta}+\delta^{\prime})\)-DP._
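A minimal sketch of this construction, where `eps_upper_private` is assumed to be an \((\hat{\epsilon},\hat{\delta})\)-DP routine whose output upper-bounds \(\epsilon_{\phi}(X)\) with probability at least \(1-\delta^{\prime}\):

```python
def test_from_upper_bound(X, eps_budget, eps_upper_private):
    """Pass iff the privately released upper bound of eps_phi(X) is within budget."""
    return eps_upper_private(X) < eps_budget

def generalized_ptr(X, mechanism, test):
    """Algorithm 2: run M_phi only if the DP test passes (None plays the role of bot)."""
    return mechanism(X) if test(X) else None
```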
In Section 5.2, we demonstrate that one can upper bound the data-dependent DP through a modification of the smooth sensitivity framework applied on \(\epsilon_{\phi}(X)\). Moreover, in Section 5.1 we provide a direct application of Theorem 4.6 with private linear regression by making use of the per-instance DP technique (Wang, 2017).
The applications in Section 5 are illustrative of two distinct approaches to constructing the DP test for generalized PTR:
1. Private sufficient statistics release (used in the private linear regression example of Section 5.1) specifies the data-dependent DP as a function of the dataset and privately releases each data-dependent component.
2. The second approach (used in the PATE example of Section 5.2) uses the smooth sensitivity framework to privately release the data-dependent DP as a whole, and then construct a high-confidence test using the Gaussian mechanism.
These two approaches cover most of the scenarios arising in data-adaptive analysis. For example, in the appendix we demonstrate the merits of generalized PTR in handling data-adaptive private generalized linear models (GLMs) using private sufficient statistics release. Moreover, sufficient statistics release together with our private hyper-parameter tuning (Algorithm 3) can be used to construct data-adaptive extensions of DP-PCA and Sparse-DP-ERM (see details in the future work section).
## 5 Applications
In this section, we put into action our approaches to construct the DP test and provide applications in private linear regression and PATE.
### Private Linear Regression
**Theorem 5.1** ((Wang, 2017)).: _For input data \(X\in\mathcal{X}\) and \(Y\in\mathcal{Y}\), define the following:_
* \(\lambda_{\min}(X)\) _denotes the smallest eigenvalue of_ \(X^{T}X\)_;_
* \(||\theta_{\lambda}^{*}||\) _is the magnitude of the solution_ \(\theta_{\lambda}^{*}=(X^{T}X+\lambda I)^{-1}X^{T}Y\)_;_
* _and_ \(L(X,\mathbf{y}):=||\mathcal{X}||(||\mathcal{X}||||\theta_{\lambda}^{*}||+|| \mathcal{Y}||)\) _is the local Lipschitz constant, denoted_ \(L\) _in brief._
_For brevity, denote \(\lambda^{*}=\lambda+\lambda_{\min}(X)\). The algorithm used in Example 4.4 with parameter \(\phi=(\lambda,\gamma)\) obeys \((\epsilon_{\phi}(Z),\delta)\) data-dependent DP for each dataset \(Z=(X,Y)\) with \(\epsilon_{\phi}(Z)\) equal to_
\[\sqrt{\frac{\gamma L^{2}\log(2/\delta)}{\lambda^{*}}}+\frac{\gamma L^{2}}{2( \lambda^{*}+||\mathcal{X}||^{2})}+\frac{1+\log(2/\delta)||\mathcal{X}||^{2}}{ 2(\lambda^{*})}.\]
Notice that the data-dependent DP is a function of \((\lambda_{\min},L,||\theta_{\lambda}^{*}||,\lambda,\gamma)\), where \((\lambda_{\min},L,||\theta_{\lambda}^{*}||)\) are data-dependent quantities. One can apply the generalized PTR framework as in the following example.
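For illustration, the sketch below evaluates the data-dependent DP of Theorem 5.1 from these quantities; `x_norm` stands for \(||\mathcal{X}||\), and the grouping of the last term follows the displayed formula:

```python
import numpy as np

def eps_phi_ops(lam_min, L, lam, gamma, x_norm, delta):
    """Data-dependent DP of OPS from Theorem 5.1; lam_star = lam + lam_min(X)."""
    lam_star = lam + lam_min
    term1 = np.sqrt(gamma * L**2 * np.log(2 / delta) / lam_star)
    term2 = gamma * L**2 / (2 * (lam_star + x_norm**2))
    term3 = (1 + np.log(2 / delta) * x_norm**2) / (2 * lam_star)
    return term1 + term2 + term3
```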
**Example 5.2** (OPS with Ptr).: _We demonstrate here how to apply generalized PTR to the one-posterior sample (OPS) algorithm, a differentially private mechanism which outputs one sample from the posterior distribution of a Bayesian model with bounded log-likelihood._
* _Propose_ \(\phi=(\lambda,\gamma)\)_._
* _Based on_ \((\lambda,\gamma)\)_, differentially privately release_ \(\lambda_{min},||\theta_{\lambda}^{*}||,L\) _with privacy budget_ \((\epsilon,\delta/2)\)_._
* _Condition on a high probability event (with probability at least_ \(1-\delta/2\)_) of_ \(\lambda_{min},||\theta_{\lambda}^{*}||,L\)_, test if_ \(\epsilon_{\phi}^{P}(X)\) _is smaller than the predefined privacy budget_ \((\hat{\epsilon},\hat{\delta})\)_, where_ \(\epsilon_{\phi}^{P}(X)\) _denotes the sanitized data-dependent DP._
* _Based on the outcome of the test, decide whether to release_ \(\theta\propto e^{-\frac{\gamma}{2}||Y-X\theta||^{2}+\lambda||\theta||^{2}}\)_._
**Theorem 5.3**.: _The algorithm outlined in Example 5.2 satisfies \((\epsilon+\hat{\epsilon},\delta+\hat{\delta})\)-DP._
The main idea of the above algorithm boils down to privately releasing all data-dependent quantities in data-dependent DP, constructing high-probability confidence intervals of these quantities, and then deciding whether to run the mechanism \(\mathcal{M}\) with the proposed parameters. We defer the details of the privacy calibration of data-dependent quantities to the appendix.
One may ask why we cannot directly tune the privacy parameters \((\lambda,\gamma)\) based on the sanitized data-dependent DP. This is because, in many scenarios, the data-dependent quantities depend on the choice of privacy parameters; e.g., \(||\theta_{\lambda}^{*}||\) is a complicated function of \(\lambda\). Thus, the optimization over \(\lambda\) becomes
Figure 1: Differentially private linear regression algorithms on UCI datasets. \(y\)-axis reports the MSE error with confidence intervals. \(\epsilon\) is evaluated with \(\delta=1e-6\).
a circular problem -- to solve for \(\lambda\), we need to sanitize \(||\theta_{\lambda}^{*}||\), which itself requires choosing a \(\lambda\) to begin with. Alternatively, generalized PTR provides a clear and flexible framework to test the validity of privacy parameters adapted to the dataset.
**Remark 5.4**.: The above "circular" issue is even more serious for generalized linear models (GLMs) beyond linear regression. The data-dependent DP there involves a local strong-convexity parameter, a complex function of the regularizer \(\lambda\) and we only have zeroth-order access to. In the appendix, we demonstrate how to apply generalized PTR to provide a generic solution to a family of private GLMs where the link function satisfies a self-concordance assumption.
We next apply Algorithm 3 to Example 5.2 on UCI regression datasets. Standard z-scoring is applied and each data point is normalized to a Euclidean norm of \(1\). We consider \((60\%,10\%,30\%)\) splits for training, validation and testing.
**Baselines**
* Output Perturbation (Outpert) (Chaudhuri et al., 2011): \(\theta=(X^{T}X+\lambda I)^{-1}X^{T}\mathbf{y}\). Release \(\hat{\theta}=\theta+\mathbf{b}\) with an appropriate \(\lambda\), where \(\mathbf{b}\) is a Gaussian random vector.
* Posterior sampling (OPS). Sample \(\hat{\theta}\sim P(\theta)\propto e^{-\gamma(F(\theta)+0.5\lambda||\theta||^{ 2})}\) with parameters \(\gamma,\lambda\).
* Adaptive posterior sampling (AdaOPS) (Wang, 2018). Run OPS with \((\lambda,\gamma)\) chosen adaptively according to the dataset.
Outpert and OPS serve as two non-adaptive baselines. In particular, we consider OPS-Balanced (Wang, 2018), which chooses \(\lambda\) to minimize a data-independent upper bound of empirical risk and dominates other OPS variants. AdaOPS is one state-of-the-art algorithm for adaptive private regression, which automatically chooses \(\lambda\) by minimizing an upper bound of the data-dependent empirical risk.
We implement OPS-PTR as follows: propose a list of \(\lambda\) through grid search (we choose \(k=30\), with \(\lambda\) ranging over \([2.5,2.5^{10}]\) on a logarithmic scale); instantiate Algorithm 3 with \(\tau=\frac{1}{10k}\), \(T=\frac{1}{\tau}\log(1/\delta_{2})\) and \(\delta_{2}=\delta/2\); calibrate \(\gamma\) to meet the privacy requirement for each \(\lambda\); sample \(\hat{\theta}\) using \((\lambda,\gamma)\) and return the sample with the best validation accuracy. Notice that we use a "no \(\bot\)" variant of Algorithm 2, as the calibration of \(\gamma\) is clear given a fixed \(\lambda\) and privacy budget (see more details in the appendix). One can propose various combinations of \((\lambda,\gamma)\) for more general applications.
Figure 1 demonstrates how the MSE error of the linear regression algorithms varies with the privacy budget \(\epsilon\). OutPert suffers from the large global sensitivity of output \(\theta\). OPS performs well but does not benefit from the data-dependent quantities. AdaOPS is able to adaptively choose \((\lambda,\gamma)\) based on the dataset, but suffers from the estimation error of the data-dependent empirical risk. On the other hand, OPS-PTR selects a \((\lambda,\gamma)\) pair that minimizes the empirical error on the validation set directly, and the privacy parameter \(\gamma\) adapts to the dataset thus achieving the best result.
### Pate
In this section, we apply the generalized PTR framework to solve an open problem from Private Aggregation of Teacher Ensembles (PATE) (Papernot et al., 2017, 2018) -- privately publishing the entire model by privately releasing the data-dependent DP losses. Our algorithm makes use of the smooth sensitivity framework (Nissim et al., 2007) and the Gaussian mechanism to construct a high-probability test of the data-dependent DP. The one-dimensional statistical nature of data-dependent DP enables efficient computation under the smooth sensitivity framework, so this approach is generally applicable to other private data-adaptive analyses beyond PATE.
PATE is a knowledge transfer framework for model-agnostic private learning. In this framework, an ensemble of teacher models is trained on the disjoint private data and uses the teachers' aggregated consensus answers to supervise the training of a "student" model agnostic to the underlying machine-learning algorithms. By publishing only the aggregated answers and by the careful analysis of the "consensus", PATE has become a practical technique in recent private model training.
The tight privacy guarantee of PATE heavily relies on a delicate data-dependent DP analysis, for which the authors of PATE use the smooth sensitivity framework to privately publish the data-dependent privacy cost. However, it remains an open problem to show that the released model is DP under data-dependent analysis. Our generalized PTR resolves this gap by carefully testing a private upper bound of the data-dependent privacy cost. Our algorithm is fully described in Algorithm 4, where the modification over the original PATE framework is highlighted in blue.
Algorithm 4 takes as input the privacy budget \((\epsilon^{\prime},\hat{\epsilon},\delta)\), unlabeled public data \(x_{1:T}\) and \(K\) teachers' predictions on these data. The parameter \(\hat{\epsilon}\) denotes the privacy cost of publishing the data-dependent DP, and \(\epsilon^{\prime}\) is the predefined privacy budget for testing. \(n_{j}(x_{i})\) denotes the number of teachers that agree on label \(j\) for \(x_{i}\), and \(C\) denotes the number of classes. The goal is to privately release a list of plurality outcomes -- \(\operatorname*{argmax}_{j\in[C]}n_{j}(x_{i})\) for \(i\in[T]\) -- and use these outcomes to supervise the training of a "student" model in the public domain. The parameter \(\sigma_{1}\) denotes the noise scale for the vote count.
In their privacy analysis, Papernot et al. (2018) compute the data-dependent \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\) of labeling the entire group of student queries. \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\) can be orders of magnitude smaller than its data-independent version if there is a strong agreement among teachers. Note that \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\) is a function of the RDP order \(\alpha\) and the dataset \(X\), analogous to our Definition 4.1 but subject to RDP (Mironov, 2017).
**Theorem 5.5** ((Papernot et al., 2018)).: _If the top three vote counts of \(x_{i}\) are \(n_{1}>n_{2}>n_{3}\) and \(n_{1}-n_{2},n_{2}-n_{3}\gg\sigma_{1}\), then the data-dependent RDP of releasing \(\operatorname*{argmax}_{j}\{n_{j}+\mathcal{N}(0,\sigma_{1}^{2})\}\) satisfies \((\alpha,\exp\{-2\alpha/\sigma_{1}^{2}\}/\alpha)\)-RDP and the data-independent RDP (using the Gaussian mechanism) satisfies \((\alpha,\frac{\alpha}{\sigma_{1}^{2}})\)-RDP._
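For concreteness, the vote-aggregation step whose data-dependent RDP is analyzed above is simply a noisy argmax; a minimal sketch:

```python
import numpy as np

def noisy_label(votes: np.ndarray, sigma1: float) -> int:
    """argmax_j { n_j(x_i) + N(0, sigma1^2) } for one query's vote histogram."""
    return int(np.argmax(votes + np.random.normal(0.0, sigma1, size=votes.shape)))
```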
```
1: Input: Unlabeled public data \(x_{1:T}\), aggregated teacher predictions \(n(\cdot)\), privacy parameters \(\hat{\epsilon},\epsilon^{\prime},\delta\), noise parameter \(\sigma_{1}\).
2: Set \(\alpha=\frac{2\log(2/\delta)}{\hat{\epsilon}}+1\), \(\sigma_{s}=\sigma_{2}=\sqrt{\frac{3\alpha+2}{\hat{\epsilon}}},\delta_{2}=\delta/2\), smoothness parameter \(\beta=\frac{0.2}{\alpha}\).
3: Compute noisy labels: \(y_{i}^{p}\leftarrow\operatorname*{argmax}_{j\in[C]}\{n_{j}(x_{i})+\mathcal{N}(0,\sigma_{1}^{2})\}\) for all \(i\in[1:T]\).
4: \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\leftarrow\) data-dependent RDP at the \(\alpha\)-th order.
5: \(SS_{\beta}(X)\leftarrow\) the smooth sensitivity of \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\).
6: Privately release \(\mu:=\log(SS_{\beta}(X))+\beta\cdot\mathcal{N}(0,\sigma_{2}^{2})+\sqrt{2\log(2/\delta_{2})}\cdot\sigma_{2}\cdot\beta\).
7: \(\operatorname*{RDP}_{\sigma_{1}}^{\operatorname*{upper}}(\alpha)\leftarrow\) an upper bound of the data-dependent RDP through Lemma 5.6.
8: \(\epsilon_{\sigma_{1}}\leftarrow\) DP guarantee converted from \(\operatorname*{RDP}_{\sigma_{1}}^{\operatorname*{upper}}(\alpha)\).
9: If \(\epsilon^{\prime}\geq\epsilon_{\sigma_{1}}\), return a student model trained using \((x_{1:T};y_{1:T}^{p})\).
10: Else return \(\bot\).
```
**Algorithm 4** PATE with generalized PTR
However, \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\) is data-dependent and thus cannot be revealed. The authors therefore privately publish the data-dependent RDP using the smooth sensitivity framework (Nissim et al., 2007). The smooth sensitivity calculates a smooth upper bound on the local sensitivity of \(\operatorname*{RDP}_{\sigma_{1}}(\alpha,X)\)
denoted as \(SS_{\beta}(X)\), such that \(SS_{\beta}(X)\leq e^{\beta}SS_{\beta}(X^{\prime})\) for any neighboring dataset \(X\) and \(X^{\prime}\). By adding Gaussian noise scaled by the smooth sensitivity (i.e., release \(\epsilon_{\sigma_{1}}(\alpha,X)+SS_{\beta}(X)\cdot\mathcal{N}(0,\sigma_{s}^{2})\)), the privacy cost is safely published.
Unlike most noise-adding mechanisms, the effective noise scale \(SS_{\beta}(X)\cdot\sigma_{s}\) cannot be published, since \(SS_{\beta}(X)\) is a data-dependent quantity. Moreover, this approach fails to provide a valid privacy guarantee for the noisy labels obtained through the PATE algorithm, as the published privacy cost could be smaller than the real privacy cost. Our solution in Algorithm 4 proceeds as follows:
* Privately release an upper bound of the smooth sensitivity \(SS_{\beta}(X)\) with \(e^{\mu}\).
* Conditioned on a high-probability event of \(e^{\mu}\), publish the data-dependent RDP with \(\text{RDP}^{\text{upper}}_{\sigma_{1}}(\alpha)\).
* Convert \(\text{RDP}^{\text{upper}}_{\sigma_{1}}(\alpha)\) back to the standard DP guarantee using RDP to DP conversion at \(\delta/2\).
* Test if the converted DP is above the predefined budget \(\epsilon^{\prime}\).
The following lemma states that \(\text{RDP}^{\text{upper}}_{\sigma_{1}}(\alpha)\) is a valid upper bound of the data-dependent RDP.
**Lemma 5.6** (Private upper bound of data-dependent RDP).: _We are given a RDP function \(\text{RDP}(\alpha,X)\) and a \(\beta\)-smooth sensitivity bound \(SS(\cdot)\) of \(\text{RDP}(\alpha,X)\). Let \(\mu\) (defined in Algorithm 4) denote the private release of \(\log(SS_{\beta}(X))\). Let the \((\beta,\sigma_{s},\sigma_{2})\)-GNSS mechanism be_
\[\text{RDP}^{\text{upper}}(\alpha):=\text{RDP}(\alpha,X)+SS_{\beta}(X)\cdot \mathcal{N}(0,\sigma_{s}^{2})+\sigma_{s}\sqrt{2\log(\tfrac{2}{\delta_{2}})}e^{\mu}\]
_Then the release of \(\text{RDP}^{\text{upper}}(\alpha)\) satisfies \((\alpha,\frac{3\alpha+2}{2\sigma_{s}^{2}})\)-RDP for all \(1<\alpha<\frac{1}{2\beta}\); w.p. at least \(1-\delta_{2}\), \(\text{RDP}^{\text{upper}}(\alpha)\) is an upper bound of \(\text{RDP}(\alpha,X)\)._
The proof (deferred to the appendix) makes use of the facts that: (1) the log of \(SS_{\beta}(X)\) has a bounded global sensitivity \(\beta\) through the definition of smooth sensitivity; (2) releasing \(\text{RDP}_{\sigma_{1}}(\alpha,X)+SS_{\beta}(X)\cdot\mathcal{N}(0,\sigma_{s}^ {2})\) is \((\alpha,\frac{\alpha+1}{\sigma_{s}^{2}})\)-RDP (Theorem 23 from Papernot et al. (2018)).
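A minimal sketch of the release in Lemma 5.6, assuming the data-dependent RDP value `rdp` and its \(\beta\)-smooth sensitivity `ss` have already been computed for the chosen \(\alpha\):

```python
import numpy as np

def rdp_upper(rdp, ss, beta, sigma_s, sigma_2, delta_2):
    """GNSS-style private upper bound of a data-dependent RDP value (Lemma 5.6)."""
    margin = np.sqrt(2 * np.log(2 / delta_2))
    # mu privately upper-bounds log(SS_beta(X)); log-SS has global sensitivity beta.
    mu = np.log(ss) + beta * np.random.normal(0.0, sigma_2) + margin * sigma_2 * beta
    # Noisy RDP plus a high-probability margin scaled by the released e^mu.
    return rdp + ss * np.random.normal(0.0, sigma_s) + sigma_s * margin * np.exp(mu)
```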
Now, we are ready to state the privacy guarantee of Algorithm 4.
Figure 2: Privacy and utility tradeoffs with PATE. When \(\sigma_{1}\) is aligned, the three algorithms provide the same utility. The \(y\)-axis plots the privacy cost of labeling \(T=200\) public data points with \(\delta=10^{-5}\). The left panel considers the high-consensus case, where the data-adaptive analysis is preferred.
**Theorem 5.7**.: _Algorithm 4 satisfies \((\epsilon^{\prime}+\hat{\epsilon},\delta)\)-DP._
In the proof, the choice of \(\alpha\) ensures that the cost of the \(\delta/2\) contribution (used in the RDP-to-DP conversion) is roughly \(\hat{\epsilon}/2\). Then the release of \(\mathrm{RDP}_{\sigma_{1}}^{\mathrm{upper}}(\alpha)\) with \(\sigma_{s}=\sqrt{\frac{2+3\alpha}{\hat{\epsilon}}}\) accounts for another cost of \((\epsilon/2,\delta/2)\)-DP.
**Empirical results.** We next empirically evaluate Algorithm 4 (PATE-PTR) on the MNIST dataset. Following the experimental setup from Papernot et al. (2018), we consider the training set to be the private domain and use the testing set as the public domain. We first partition the training set into 400 disjoint sets and train 400 teacher models, one on each set. Then we select \(T=200\) unlabeled data points from the public domain, with the goal of privately labeling them. To illustrate the behavior of the algorithms under various data distributions, we consider two settings of unlabeled data: high-consensus and low-consensus. In the low-consensus setting, we choose the \(T\) unlabeled data points such that there is no high agreement among teachers, so the advantage of data-adaptive analysis is diminished. We provide further details on the distributions of these two settings in the appendix.
**Baselines.** We consider the Gaussian mechanism as a data-independent baseline, where the privacy guarantee is valid but does not take advantage of the properties of the dataset. The data-dependent DP (Papernot et al., 2018) serves as a non-private baseline, which requires further sanitization. Note that these two baselines provide different privacy analyses of the same algorithm (see Theorem 5.5).
Figure 2 plots privacy-utility tradeoffs between the three approaches by varying the noise scale \(\sigma_{1}\). The purple region denotes a set of privacy budget choices (\(\hat{\epsilon}+\epsilon^{\prime}\) used in Algorithm 4) such that the utility of the three algorithms is aligned under the same \(\sigma_{1}\). In more detail, the purple region is lower-bounded by \(\hat{\epsilon}+\epsilon_{\sigma_{1}}\). We first fix \(\sigma_{s}=\sigma_{2}=15\) such that \(\hat{\epsilon}\) is fixed. Then we empirically calculate the average of \(\epsilon_{\sigma_{1}}\) (the private upper bound of the data-dependent DP) over 10 trials. Running Algorithm 4 with any choice of \(\hat{\epsilon}+\epsilon^{\prime}\) chosen from the purple region implies \(\epsilon^{\prime}>\epsilon_{\sigma_{1}}\). Therefore, PATE-PTR will output the same noisy labels (with high probability) as the two baselines.
**Observation.** As \(\sigma_{1}\) increases, the privacy loss of the Gaussian mechanism decreases, while the data-dependent DP curve does not change much. This is because the data-dependent DP of each query is a complex function of both the noise scale and the data, and it does not monotonically decrease when \(\sigma_{1}\) increases (see more details in the appendix). However, the data-dependent DP still dominates the Gaussian mechanism for a wide range of \(\sigma_{1}\). Moreover, PATE-PTR nicely interpolates between the data-independent DP guarantee and the non-private data-adaptive DP guarantee. In the low-consensus case, the gap between the data-dependent DP and the DP guarantee of the Gaussian mechanism unsurprisingly decreases. Meanwhile, PATE-PTR (the purple region) performs well when the noise scale is small but deteriorates when the data-independent approach proves more advantageous. This example demonstrates that using PTR as a post-processing step to convert the data-dependent DP to standard DP is effective when the data-adaptive approach dominates others.
## 6 Limitations and Future Work
One weakness of generalized PTR is that it requires a case-specific privacy analysis. Have we simply exchanged the problem of designing a data-adaptive DP algorithm with the problem of analyzing the data-dependent privacy loss? We argue that this limitation is inherited from classic PTR. In situations where classic PTR is not applicable, we've outlined several approaches to constructing the
DP test for our framework (see Sections 4.3 and 5.2).
Furthermore, the data-dependent privacy loss is often more straightforward to compute than the local sensitivity, and it often already appears in intermediate steps of classic DP analyses. Most DP analyses involve providing a high-probability tail bound of the privacy loss random variable. If we stop before taking the max over the input dataset, then we get a data-dependent DP loss right away (as in Example 4.2).
There are several exciting directions for applying generalized PTR to more problems. Sufficient statistics release and our private hyperparameter tuning (Algorithm 3) can be used to construct data-adaptive extensions of DP-PCA (Dwork et al., 2014) and Sparse-DP-ERM (Kifer et al., 2012). For DP-PCA we could use our Algorithm 3 to tune the variance of the noise added to the spectral gap; for Sparse-DP-ERM we would test the restricted strong convexity parameter (RSC), i.e. not adding additional regularization if the RSC is already large.
## 7 Conclusion
Generalized PTR extends the classic "Propose-Test-Release" framework to a more general setting by testing the data-dependent privacy loss of an input dataset, rather than its local sensitivity. In this paper we've provided several examples - private linear regression with hyperparameter selection and PATE - to illustrate how generalized PTR can enhance DP algorithm design via a data-adaptive approach.
### Acknowledgments
The work was partially supported by NSF Award # 2048091 and the Google Research Scholar Award. Yuqing was supported by the Google PhD Fellowship.
###### Contents
* 1 Introduction
* 2 Related Work
* 3 Preliminaries
* 3.1 Propose-Test-Release
* 4 Generalized PTR
* 4.1 Limitations of local sensitivity
* 4.2 Which \(\phi\) to propose
* 4.3 Construction of the DP test
* 5 Applications
* 5.1 Private Linear Regression
* 5.2 PATE
* 6 Limitations and Future Work
* 7 Conclusion
* A Omitted examples in the main body
* A.1 Limits of the classic PTR in private binary voting
* A.2 Self-concordant generalized linear model (GLM)
* A.3 Differentially privately release \(\lambda_{min}\left(\nabla^{2}F(\theta)\right)\)
* A.4 Other applications of generalized PTR
* B Omitted proofs in Section 4
* C Experimental details
* C.1 Experimental details in private linear regression
* C.2 Details of PATE case study
* D Omitted proofs in private GLM
* D.1 Per-instance DP of GLM
## Appendix A Omitted examples in the main body
In this appendix, we provide more examples to demonstrate the merits of generalized PTR. We focus on a simple example of a post-processed Laplace mechanism in Section A.1 and then an example on differentially private learning of generalized linear models in Section A.2. In both cases, we observe that generalized PTR provides data-adaptive algorithms with formal DP guarantees that are simple, effective and, to the best of our knowledge, not previously proposed in the literature.
### Limits of the classic PTR in private binary voting
The following example demonstrates that classic PTR does not capture sufficient data-dependent quantities even when the local sensitivity exists and can be efficiently tested.
**Example A.1**.: _Consider a binary class voting problem: \(n\) users vote for a binary class \(\{0,1\}\) and the goal is to output the class that is supported by the majority. Let \(n_{i}\) denote the number of people who vote for the class \(i\). We consider the report-noisy-max mechanism:_
\[\mathcal{M}(X):\text{argmax}_{i\in[0,1]}n_{i}(X)+\text{Lap}(b),\]
_where \(b=1/\epsilon\) denotes the scale of Laplace noise._
In the example, we will (1) demonstrate the merit of data-dependent DP; and (2) empirically compare classic PTR with generalized PTR.
We first explicitly state the data-dependent DP.
**Theorem A.2**.: _The data-dependent DP of the above example is_
\[\epsilon(X):=\max_{X^{\prime}}\{|\log\frac{p}{p^{\prime}}|,|\log\frac{1-p}{1- p^{\prime}}|\},\]
_where \(p:=\Pr[n_{0}(X)+\text{Lap}(1/\epsilon)>n_{1}(X)+\text{Lap}(1/\epsilon)]\) and \(p^{\prime}:=\Pr[n_{0}(X^{\prime})+\text{Lap}(1/\epsilon)>n_{1}(X^{\prime})+ \text{Lap}(1/\epsilon)]\). There are four possible neighboring datasets \(X^{\prime}:n_{0}(X^{\prime})=\max(n_{0}(X)\pm 1,0),n_{1}(X^{\prime})=n_{1}(X)\) or \(n_{0}(X^{\prime})=n_{0}(X),n_{1}(X^{\prime})=\max(n_{1}(X)\pm 1,0)\)._
In Figure 3(a), we empirically compare the above data-dependent DP with the Laplace mechanism by varying the gap between the two vote counts \(|n_{0}(X)-n_{1}(X)|\). The privacy parameter is fixed to \(\epsilon=10\). The data-dependent DP substantially improves over the standard DP if the gap is large. However, the data-dependent DP is a function of the dataset. We next demonstrate how to apply generalized PTR to exploit the data-dependent DP.
Notice that the probability of \(n_{0}(X)+\text{Lap}(1/\epsilon)>n_{1}(X)+\text{Lap}(1/\epsilon)\) is equal to the probability that the random variable \(Z:=U-V\) exceeds \(\epsilon(n_{1}(X)-n_{0}(X))\), where \(U,V\) are two independent \(\text{Lap}(1)\) random variables. We can compute the pdf of \(Z\) through the convolution of two Laplace distributions, which gives \(f_{Z}(z)=\frac{1+|z|}{4e^{|z|}}\). Let \(t\) denote the difference between \(n_{1}(X)\) and \(n_{0}(X)\), i.e., \(t=n_{1}(X)-n_{0}(X)\). Assuming \(t\geq 0\) (the case \(t<0\) follows by the symmetry of \(Z\)), we have
\[p=\Pr[Z>\epsilon\cdot t]=\frac{2+\epsilon\cdot t}{4\exp(\epsilon\cdot t)}\]
Similarly, \(p^{\prime}=\frac{2+\epsilon\cdot(t+\ell)}{4\exp(\epsilon\cdot(t+\ell))}\), where \(\ell\in[-1,1]\) denotes adding or removing one data point to construct the neighboring dataset \(X^{\prime}\). Therefore, we can upper bound \(\log(p/p^{\prime})\) by
\[\log\frac{p}{p^{\prime}} =\log\left(\frac{2+\epsilon\cdot t}{4\exp(\epsilon\cdot t)}\cdot\frac{4\exp(\epsilon(t+\ell))}{2+\epsilon\cdot(t+\ell)}\right)\] \[=\epsilon\ell+\log\left(\frac{2+\epsilon t}{2+\epsilon(t+\ell)}\right)\] \[\leq\epsilon+\log\left(1-\frac{\epsilon}{2+\epsilon(t+1)}\right),\]
where the last step uses that the expression is increasing in \(\ell\) and takes \(\ell=1\).
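These closed-form quantities can be evaluated numerically. The sketch below (names are ours) transcribes Theorem A.2 by enumerating the neighboring datasets; it ignores the truncation \(\max(\cdot,0)\) at zero counts for simplicity.

```python
import numpy as np

def surv(z):
    """P[Z > z] for Z = U - V with U, V i.i.d. Lap(1).
    Closed form for z >= 0; symmetry of Z around zero otherwise."""
    if z >= 0:
        return (2 + z) / (4 * np.exp(z))
    return 1 - (2 - z) / (4 * np.exp(-z))

def data_dependent_eps(t, eps):
    """Data-dependent DP of Theorem A.2, with t = n1(X) - n0(X)."""
    p = surv(eps * t)
    worst = 0.0
    for ell in (-1, 1):  # the four neighbors reduce to shifting t by +/- 1
        p2 = surv(eps * (t + ell))
        worst = max(worst,
                    abs(np.log(p / p2)),
                    abs(np.log((1 - p) / (1 - p2))))
    return worst
```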
Then we can apply generalized PTR by privately lower-bounding \(t\).
On the other hand, the local sensitivity \(\Delta_{LS}(X)\) of this noise-adding mechanism is \(0\) if \(t>1\). Specifically, if the gap is larger than one, adding or removing one user will not change the result. To apply classic PTR, we let \(\gamma(X)\) denote the distance to the nearest dataset \(X^{{}^{\prime\prime}}\) such that \(\Delta_{LS}>0\) and test if \(\gamma(X)+\text{Lap}(1/\epsilon)>\frac{\log(1/\delta)}{\epsilon}\). Notice in this example that \(\gamma(X)=\max(t-1,0)\) can be computed efficiently. We provide the detailed implementation of these approaches below; a code sketch of Gen-PTR follows the list.
1. Gen-PTR: lower bound \(t\) with \(t^{p}=t-\frac{\log(1/\delta)}{\tilde{\epsilon}}+\text{Lap}(1/\tilde{\epsilon})\). Calculate an upper bound of the data-dependent DP \(\epsilon^{p}\) using Theorem A.2 with \(t^{p}\). The algorithm then tests whether \(\epsilon^{p}\) is within a predefined privacy budget \(\epsilon^{\prime}\). If the test passes, the algorithm returns \(\text{argmax}_{i\in[0,1]}n_{i}(X)+\text{Lap}(1/\epsilon)\); the overall procedure satisfies \((\tilde{\epsilon}+\epsilon^{\prime},\delta)\)-DP.
2. Classic PTR: lower bound \(t\) with \(t^{p}=t-\frac{\log(1/\delta)}{\tilde{\epsilon}}+\text{Lap}(1/\tilde{\epsilon})\). If \(t^{p}>1\), classic PTR outputs the ground-truth result; otherwise it returns a random class. This algorithm satisfies \((\tilde{\epsilon},\delta)\)-DP.
3. Laplace mechanism: \(\mathcal{M}(X):\text{argmax}_{i\in[0,1]}n_{i}(X)+\text{Lap}(1/\epsilon)\). \(\mathcal{M}\) is \((\epsilon,\delta)\)-DP.
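The following is a minimal sketch of the Gen-PTR variant (item 1), reusing `data_dependent_eps` from the previous snippet; the random-generator plumbing is our own choice.

```python
import numpy as np

def gen_ptr_vote(n0, n1, eps, eps_tilde, eps_budget, delta, rng=None):
    rng = rng or np.random.default_rng()
    t = abs(n1 - n0)  # by class symmetry only the gap matters
    # Step 1: private lower bound of the gap t.
    t_p = t - np.log(1 / delta) / eps_tilde + rng.laplace(scale=1 / eps_tilde)
    # Plug the lower bound into the data-dependent DP formula of Theorem A.2.
    eps_p = data_dependent_eps(max(t_p, 0.0), eps)
    if eps_p > eps_budget:
        return None  # the test fails; output "bot"
    noisy = [n0 + rng.laplace(scale=1 / eps), n1 + rng.laplace(scale=1 / eps)]
    return int(np.argmax(noisy))  # overall (eps_tilde + eps_budget, delta)-DP
```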
We argue that although Gen-PTR and classic PTR both privately lower-bound the data-dependent quantity \(t\), the latter does not capture sufficient information for data-adaptive analysis. That is to say, only testing the local sensitivity restricts us from learning helpful information that could amplify the privacy guarantee if the test fails. In contrast, our generalized PTR, where privacy parameters and the local sensitivity parameterize the data-dependent DP, can handle those failure cases nicely.
To confirm this conjecture, Figure 3(b) plots a privacy-utility trade-off curve for these three approaches. We consider a voting example with \(n_{0}(X)=n_{1}(X)+100\), i.e., a gap of \(t=100\), chosen such that the data-adaptive analysis is favorable.
In Figure 3(b), we vary the noise scale \(b=1/\epsilon\) over \([0,0.5]\). For each choice of \(b\), we plot the privacy guarantee of the three algorithms when the error rate is aligned. For Gen-PTR, we set \(\tilde{\epsilon}=\frac{1}{2b}\) and empirically calculate \(\epsilon^{p}\) over \(100000\) trials.
Figure 3: In Figure 3(a), we compare the privacy guarantees by varying the gap. In Figure 3(b), we fix \(t=n_{0}(X)-n_{1}(X)=100\) and compare the privacy cost when the accuracy is aligned. Gen-PTR with any choice of privacy budget \((\tilde{\epsilon}+\epsilon^{\prime})\) chosen from the purple region would achieve the same utility as the Laplace mechanism but with a smaller privacy cost. The curve of Gen-PTR is always below that of the classic PTR, which implies that Gen-PTR can result in a tighter privacy analysis when the utility is aligned.
In the plot, when \(\epsilon\ll\frac{\log(1/\delta)}{t}\), the classic PTR is even worse than the Laplace mechanism. This is because the classic PTR is likely to return \(\bot\), while the Laplace mechanism returns \(\operatorname*{argmax}_{i\in[0,1]}n_{i}(X)+\operatorname*{Lap}(1/\epsilon)\), which contains more useful information. Compared to the Laplace mechanism, Gen-PTR requires an extra privacy allocation \(\tilde{\epsilon}\) to release the gap \(t\). However, it still achieves an overall smaller privacy cost when the error rate is \(\leq 10^{-5}\) (the purple region). Meanwhile, Gen-PTR dominates the classic PTR (i.e., the dashed black curve is always below the blue curve). Note that the classic PTR and Gen-PTR utilize the gap information differently: the classic PTR outputs \(\bot\) if the gap is not sufficiently large, while Gen-PTR encodes the gap into the data-dependent DP function and tests the data-dependent DP in the end. This empirical result suggests that testing the local sensitivity is loose compared to testing the data-dependent DP directly. Thus, Gen-PTR can provide a better privacy-utility trade-off.
### Self-concordant generalized linear model (GLM)
In this section, we demonstrate the effectiveness and flexibility of generalized PTR in handling a family of GLMs where the link function satisfies a self-concordance assumption. This section is organized as follows:
* Introduce a family of GLMs with the self-concordance property.
* Introduce a general output perturbation algorithm for private GLMs.
* Analyze the data-dependent DP of GLMs with the self-concordance property.
* Provide an example of applying our generalized PTR framework to logistic regression.
Consider the empirical risk minimization problem of the generalized linear model
\[\theta^{*}=\operatorname*{argmin}_{\theta}\sum_{i=1}^{n}l_{i}(\theta)+r(\theta),\]
where \(l:\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) belongs to a family of convex GLM losses: \(l_{i}(\theta)=l(y_{i},x_{i}^{T}\theta)\). Let \(r:\mathbb{R}^{d}\to\mathbb{R}\) be a regularization function.
We now define the self-concordance property.
**Definition A.3** (Generalized self-concordance [3]).: A convex and three-times differentiable function \(f:\Theta\to\mathbb{R}\) is \(R\)-generalized-self-concordant on an open nonempty convex set \(\Theta^{*}\subset\Theta\) with respect to norm \(\|\cdot\|\) if for all \(u\in\Theta^{*}\) and all \(v\in\mathbb{R}^{d}\),
\[\nabla^{3}f(u)[v,v,v]\leq 2R\|v\|(\nabla^{2}f(u)[v,v]).\]
The closer R is to 0, the "nicer" -- more self-concordant -- the function is. A consequence of (generalized) self-concordance is the spectral (multiplicative) stability of Hessian to small perturbations of parameters.
**Lemma A.4** (Stability of Hessian[23, Theorem 2.1.1], [3, Proposition 1]).: _Let \(H_{\theta}:=\nabla^{2}F_{s}(\theta)\). If \(F_{s}\) is \(R\)-self-concordant at \(\theta\), then for any \(v\) such that \(R\|v\|_{H_{\theta}}<1\), we have that_
\[(1-R\|v\|_{H_{\theta}})^{2}\nabla^{2}F_{s}(\theta) \prec\nabla^{2}F_{s}(\theta+v)\] \[\prec\frac{1}{(1-R\|v\|_{H_{\theta}})^{2}}\nabla^{2}F_{s}(\theta).\]
_If instead we assume \(F_{s}\) is \(R\)-generalized-self-concordant at \(\theta\) with respect to norm \(\|\cdot\|\), then_
\[e^{-R\|v\|}\nabla^{2}F_{s}(\theta)\prec\nabla^{2}F_{s}(\theta+v)\prec e^{R\|v\|} \nabla^{2}F_{s}(\theta)\]
The two bounds are almost identical when \(R\|v\|\) and \(R\|v\|_{\theta}\) are close to \(0\). In particular, for \(x\leq 1/2\), we have that \(e^{-2x}\leq 1-x\leq e^{-x}\).
In particular, the loss function of binary logistic regression is \(1\)-generalized self-concordant.
**Example A.5** (Binary logistic regression).: _Assume \(\|x\|_{2}\leq 1\) for all \(x\in\mathcal{X}\) and \(y\in\{-1,1\}\). Then binary logistic regression with datasets in \(\mathcal{X}\times\mathcal{Y}\) has a log-likelihood of \(F(\theta)=\sum_{i=1}^{n}\log(1+e^{-y_{i}x_{i}^{T}\theta})\). The univariate function \(l:=\log(1+\exp(\cdot))\) satisfies_
\[|l^{\prime\prime\prime}|=\left|\frac{\exp{(\cdot)}(1-\exp{(\cdot)})}{(1+\exp{ (\cdot)})^{3}}\right|\leq\frac{\exp{(\cdot)}}{(1+\exp{(\cdot)})^{2}}:=l^{ \prime\prime}.\]
We next apply the modified output perturbation algorithm to privately release \(\theta^{*}\). The algorithm is simply (a code sketch follows the two steps):
1. Solve \[\theta^{*}=\operatorname*{argmin}_{\theta}\sum_{i=1}^{n}l_{i}(\theta)+r( \theta).\]
2. Release \[\hat{\theta}=\theta^{*}+Z,\] where \(\gamma>0\) is a tuning parameter and \(Z\sim\mathcal{N}(0,\gamma^{-1}(\sum_{i=1}^{n}\nabla^{2}l_{i}(\theta^{*})+\nabla^{2}r(\theta^{*}))^{-1})\).
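A minimal sketch of these two steps for \(\ell_{2}\)-regularized binary logistic regression (the loss of Example A.5); the optimizer and all names are our own choices, and labels are assumed to lie in \(\{-1,+1\}\).

```python
import numpy as np
from scipy.optimize import minimize

def output_perturbation(X, y, lam, gamma, rng=None):
    rng = rng or np.random.default_rng()
    d = X.shape[1]

    def objective(theta):
        # Logistic loss sum_i log(1 + exp(-y_i x_i^T theta)) + (lam/2)||theta||^2
        return np.sum(np.logaddexp(0.0, -y * (X @ theta))) + 0.5 * lam * theta @ theta

    theta_star = minimize(objective, np.zeros(d)).x           # step 1
    s = 1.0 / (1.0 + np.exp(-y * (X @ theta_star)))           # sigmoid(y_i x_i^T theta*)
    H = (X * (s * (1 - s))[:, None]).T @ X + lam * np.eye(d)  # Hessian at theta*
    # Step 2: theta_hat ~ N(theta*, gamma^{-1} H^{-1})
    return theta_star + rng.multivariate_normal(np.zeros(d), np.linalg.inv(H) / gamma)
```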
The data-dependent DP of the above procedure is stated as follows.
**Theorem A.6** (Data-dependent DP of GLM).: _Denote the smooth part of the loss function \(F_{s}=\sum_{i=1}^{n}l(y_{i},<x_{i},\cdot>)+r_{s}(\cdot)\). Assume the following:_
1. _The GLM loss function_ \(l\) _is convex, three-times continuously differentiable and_ \(R\)_-generalized-self-concordant w.r.t._ \(\|\cdot\|_{2}\)_,_
2. \(F_{s}\) _is locally_ \(\alpha\)_-strongly convex w.r.t._ \(\|\cdot\|_{2}\)_,_
3. _and in addition, denote_ \(L:=\sup_{\theta\in[\theta^{*},\tilde{\theta}^{*}]}|l^{\prime}(y,x^{T}\theta)|\)_,_ \(\beta:=\sup_{\theta\in[\theta^{*},\tilde{\theta}^{*}]}|l^{\prime\prime}(y,x^{ T}\theta)|\)_. That is,_ \(\ell(\cdot)\) _is_ \(L\)_-Lipschitz and_ \(\beta\)_-smooth._
_We then have the data-dependent DP_
\[\epsilon(Z)\leq\frac{R(L+\beta)}{\alpha}(1+\log(2/\delta))+\frac{\gamma L^{2} }{\alpha}+\sqrt{\frac{\gamma L^{2}}{\alpha}\log(2/\delta)}.\]
The proof follows by taking an upper bound of the per-instance DP loss (Theorem D.1) \(\epsilon(Z,z)\) over \(z=(x,y)\in(\mathcal{X},\mathcal{Y})\).
Notice that the Hessians can be arbitrarily singular and \(\alpha\) could be \(0\), which leads to an infinite privacy loss without additional assumptions. Thus, we will impose an additional regularization of the form \(\frac{\lambda}{2}\|\theta\|^{2}\), which ensures that for any dataset \(F_{s}\) is \(\lambda\)-strongly convex.
This is not yet DP because it is still about a fixed dataset. We also need a pre-specified privacy budget \((\epsilon,\delta)\). We next demonstrate how to apply the generalized PTR to provide a general solution to the above GLM, using logistic regression as an example.
**Remark A.7** (Logistic regression).: For logistic regression, we know \(L\leq 1\), \(\beta\leq 1/4\) and if \(\|x\|_{2}\leq 1\), it is \(1\)-generalized self-concordant. For any dataset \(Z=(X,y)\), the data-dependent DP \(\epsilon(X)\) w.r.t. \(\delta\) can be simplified to:
\[\frac{1.25}{\alpha}(1+\log(2/\delta))+\frac{\gamma}{\alpha}+\sqrt{\frac{ \gamma}{\alpha}\log(2/\delta)}\]
Now, the data-dependent DP is a function of \(\alpha\) and \(\gamma\), where \(\alpha\) denotes the local strong convexity at \(\theta_{\lambda}^{*}\) and \(\gamma\) controls the noise scale. We next show how to select these two parameters adapted to the dataset.
**Example A.8**.: _We demonstrate here how we apply generalized PTR to output perturbation of the logistic regression problem._
1. _Take an exponential grid of parameters_ \(\{\lambda\}\) _and propose each_ \(\lambda\)_._
2. _Solve for_ \(\theta_{\lambda}^{*}=\text{argmin}_{\theta}F(\theta)+\lambda\|\theta\|^{2}/2\)__
3. _Calculate the smallest eigenvalue_ \(\lambda_{\text{min}}(\nabla^{2}F(\theta_{\lambda}^{*}))\) _(e.g., using power method)._
4. _Differentially privately release \(\lambda_{\text{min}}\) with \(\lambda_{\text{min}}^{p}:=\max\{\lambda_{\text{min}}+\frac{\sqrt{\log(4/\delta )}}{\epsilon/2}\cdot\Delta_{GS}\cdot Z-\frac{\sqrt{2\log(4/\delta)\cdot\log(1/ \delta)}\Delta_{GS}}{\epsilon/2},0\}\), where \(Z\sim\mathcal{N}(0,1)\) and \(\Delta_{GS}\) denotes the global sensitivity of \(\lambda_{\text{min}}\) from Theorem A.11._
5. _Let_ \(\epsilon^{p}(\cdot)\) _be instantiated with_ \(\epsilon(X)\) _w.r.t._ \(\delta\) _from Remark_ A.7_, where_ \(\alpha=\lambda_{\text{min}}^{p}+\lambda\)_. Then, conditioned on a high probability event,_ \(\epsilon^{p}(\cdot)\) _(a function of_ \(\gamma\)_) is a valid DP bound that holds for all datasets and all parameters_ \(\gamma\)_._
6. _Calculate the maximum \(\gamma\) such that \(\epsilon^{p}_{\delta/2}(\gamma)\leq\epsilon/2\) (a sketch of this computation follows the list)._
7. _Release_ \(\hat{\theta}\sim\mathcal{N}(\theta_{\lambda}^{*},\gamma^{-1}\nabla^{2}F_{s}( \theta_{\lambda}^{*})^{-1})\)_._
8. _Evaluate the utility on the validation set and return the_ \((\lambda,\gamma)\) _pair that leads to the highest utility._
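For Step 6 above, the bound of Remark A.7 is a quadratic in \(\sqrt{\gamma/\alpha}\), so the maximal \(\gamma\) has a closed form; a sketch (names are ours; call it with budget \(\epsilon/2\) and \(\delta/2\)):

```python
import numpy as np

def max_gamma(alpha, eps_target, delta):
    """Largest gamma with (1.25/alpha)(1 + log(2/delta)) + gamma/alpha
    + sqrt((gamma/alpha) log(2/delta)) <= eps_target (Remark A.7)."""
    c = np.log(2 / delta)
    remaining = eps_target - 1.25 / alpha * (1 + c)
    if remaining <= 0:
        return 0.0  # no gamma fits the budget; propose a larger lambda instead
    # Solve w**2 + sqrt(c) * w = remaining for w = sqrt(gamma / alpha).
    w = (-np.sqrt(c) + np.sqrt(c + 4 * remaining)) / 2
    return alpha * w ** 2
```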
**Theorem A.9**.: _For each proposed \(\lambda\), the algorithm that releases \(\hat{\theta}\sim\mathcal{N}(\theta_{\lambda}^{*},\gamma^{-1}\nabla^{2}F_{s}( \theta_{\lambda}^{*})^{-1})\) is \((\epsilon,2\delta)\)-DP._
Proof.: The proof follows the recipe of generalized PTR with private upper bound (Example 4.6). First, the release of \(\lambda_{\text{min}}(\nabla^{2}F(\theta_{\lambda}^{*}))\) is \((\epsilon/2,\delta/2)\)-DP. Then, with probability at least \(1-\delta\), \(\epsilon^{p}_{\delta}(\cdot)>\epsilon_{\delta}(X)\) holds for all \(X\) and \(\gamma\). Finally, \(\gamma\) is chosen such that the valid upper bound is \((\epsilon/2,\delta/2)\)-DP.
_For the hyper-parameter tuning on \(\lambda\) (Steps 1 and 8), we can use Algorithm 3 to evaluate each \(\lambda\)._
_Unlike Example 5.2, the \(\lambda_{\text{min}}(\nabla^{2}F(\theta_{\lambda}^{*}))\) is a complicated data-dependent function of \(\lambda\). Thus, we cannot privately release the data-dependent quantity \(\lambda_{\text{min}}(\nabla^{2}F(\theta_{\lambda}^{*}))\) without an input \(\lambda\). The PTR approach allows us to test a number of different \(\lambda\) and hence get a more favorable privacy-utility trade-off._
An interesting perspective of this algorithm for logistic regression is that increasing the regularization strength effectively increases the number of data points within the soft "margin" of separation, hence a larger contribution to the Hessian from the loss function.
**Remark A.10**.: The PTR solution for GLMs follows a similar recipe: propose a regularization strength \(\lambda\); construct a lower bound of the strong convexity \(\alpha\) at the optimal solution \(\theta_{\lambda}^{*}\); and test the validity of data-dependent DP using Theorem D.1.
Before moving on to other applications of generalized PTR, we will show how to differentially privately release \(\lambda_{min}\) according to the requirements of the logistic regression example.
### Differentially privately release \(\lambda_{min}\left(\nabla^{2}F(\theta)\right)\)
To privately release \(\lambda_{min}\nabla^{2}F(\theta)\), we first need to compute its global sensitivity. Once we have that then we can release it differentially privately using either the Laplace mechanism or the Gaussian mechanism.
**Theorem A.11** (Global sensitivity of the minimum eigenvalue at the optimal solution).: _Let \(F(\theta)=\sum_{i=1}^{n}f_{i}(\theta)+r(\theta)\) and \(\tilde{F}(\theta)=F(\theta)+f(\theta)\), where \(f_{1},...,f_{n}\) are the loss functions of the \(n\) data points and \(f\) is the loss function corresponding to an additional datapoint \(x\). Let \(\theta^{*}=\text{argmin}_{\theta}F(\theta)\) and \(\tilde{\theta}^{*}=\text{argmin}_{\theta}\tilde{F}(\theta)\). Assume \(f\) is \(L\)-Lipschitz and \(\beta\)-smooth, \(r(\theta)\) is \(\lambda\)-strongly convex, and \(F\) and \(\tilde{F}\) are \(R\)-self-concordant. If in addition \(\lambda\geq RL\), then we have_
\[\sup_{X,x}(\lambda_{min}(\nabla^{2}F(\theta_{\lambda}^{*}))-\lambda_{min}( \nabla^{2}\tilde{F}(\tilde{\theta_{\lambda}^{*}})))\leq 2RL+\beta.\]
Proof.: \[\lambda_{min} (\nabla^{2}F(\theta_{\lambda}^{*}))-\lambda_{min}(\nabla^{2} \tilde{F}(\tilde{\theta_{\lambda}^{*}}))\] (1) \[=(\lambda_{min}(\nabla^{2}F(\theta_{\lambda}^{*}))-\lambda_{min}( \nabla^{2}\tilde{F}(\theta_{\lambda}^{*})))\] \[+(\lambda_{min}(\nabla^{2}\tilde{F}(\theta_{\lambda}^{*}))- \lambda_{min}(\nabla^{2}\tilde{F}(\tilde{\theta_{\lambda}^{*}}))).\]
We first bound the part on the left. By applying Weyl's lemma \(\lambda(X+E)-\lambda(X)\leq||E||_{2}\), we have
\[\sup_{x}\|\nabla^{2}F(\theta_{\lambda}^{*})-\nabla^{2}\tilde{F}(\theta_{\lambda}^{*})\|_{2}=\sup_{x}\|\nabla^{2}f(\theta_{\lambda}^{*})\|_{2}\leq\beta \tag{2}\]
In order to bound the part on the right, we apply the semidefinite ordering using self-concordance, which gives
\[e^{-R\|\tilde{\theta}_{\lambda}^{*}-\theta_{\lambda}^{*}\|}\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*})\prec\nabla^{2}\tilde{F}(\theta_{\lambda}^{*})\prec e^{R\|\tilde{\theta}_{\lambda}^{*}-\theta_{\lambda}^{*}\|}\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*}).\]
By the Courant-Fischer Theorem and the monotonicity theorem, we also have that for the smallest eigenvalue
\[e^{-R||\tilde{\theta_{\lambda}^{*}}-\theta_{\lambda}^{*}||} \lambda_{\min}\left(\nabla^{2}\tilde{F}(\tilde{\theta_{\lambda}^{* }})\right)\leq\lambda_{\min}\left(\nabla^{2}\tilde{F}(\theta_{\lambda}^{*})\right) \tag{3}\] \[\leq e^{R||\tilde{\theta_{\lambda}^{*}}-\theta_{\lambda}^{*}||} \lambda_{\min}\left(\nabla^{2}\tilde{F}(\tilde{\theta_{\lambda}^{*}})\right).\]
Moreover by Proposition D.2, we have that
\[\|\tilde{\theta}_{\lambda}^{*}-\theta_{\lambda}^{*}\|_{2}\leq\frac{\|\nabla f(\tilde{\theta}_{\lambda}^{*})\|}{\lambda_{\min}\left(\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*})\right)}\leq\frac{L}{\lambda_{\min}\left(\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*})\right)}.\]
If \(\lambda_{\min}\left(\nabla^{2}\tilde{F}(\tilde{\theta}_{\lambda}^{*})\right)\geq RL\), then use that \(e^{x}-1\leq 2x\) for \(x\leq 1\). Substituting the above bound into (3) and then into (1), together with (2), we get a data-independent global sensitivity bound of
\[\lambda_{min}(\nabla^{2}F(\theta_{\lambda}^{*}))-\lambda_{min}(\nabla^{2} \tilde{F}(\tilde{\theta_{\lambda}^{*}}))\leq 2RL+\beta\]
as stated.
### Other applications of generalized PTR
Besides one-posterior sampling for GLMs, there are plenty of examples to which our generalized PTR could be applied, e.g., DP-PCA (Dwork et al., 2014) and Sparse-DP-ERM (Kifer et al., 2012) (when the design matrix is well-behaved).
Dwork et al. (2014) provide a PTR-style privacy-preserving principal component analysis (PCA). The key observation of Dwork et al. (2014) is that the local sensitivity is quite "small" if there is a large eigengap between the \(k\)-th and the \((k+1)\)-th eigenvalues. Therefore, their approach (Algorithm 2) chooses to privately release a lower bound of the \(k\)-th eigengap (\(k\) is fixed as an input) and uses that to construct a high-confidence upper bound of the local sensitivity.
For noise-adding mechanisms, the data-dependent privacy loss is proportional to the local sensitivity, so generalized PTR is applicable. We can formulate the data-dependent DP of DP-PCA as follows:
**Theorem A.13**.: _For a given matrix \(A\in\mathcal{R}^{m\times n}\), assume each row of \(A\) has \(\ell_{2}\) norm at most \(1\). Let \(V_{k}\) denote the top-\(k\) eigenvectors of \(A^{T}A\) and \(d_{k}\) denote the gap between the \(k\)-th and the \((k+1)\)-th eigenvalues. Then releasing \(V_{k}V_{k}^{T}+E\), where \(E\in\mathcal{R}^{n\times n}\) is a symmetric matrix whose upper triangle consists of i.i.d. samples from \(\mathcal{N}(0,\sigma^{2})\), satisfies \((\epsilon(A),\delta)\) data-dependent DP with \(\epsilon(A)=\frac{2\sqrt{\log(1.25/\delta)}}{\sigma(d_{k}-2)}\)._
The proof is based on the local sensitivity result from (Dwork et al., 2014) and the noise calibration of Gaussian mechanism.
We can combine Theorem A.13 with our Algorithm 3 to instantiate the generalized PTR framework. The improvement over Dwork et al. (2014) will be to allow joint tuning of the parameter \(k\) and the noise variance (added to the spectral gap \(d_{k}\)).
## Appendix B Omitted proofs in Section 4
The utility of Algorithm 3 depends on how many times Algorithm 2 is invoked. We next provide the utility guarantee of Algorithm 3, which follows from a simplification of the result in Section A.2 of Papernot and Steinke (2021).
**Theorem B.1**.: _Suppose applying Algorithm 2 with each \(\phi_{i}\) has an equal probability of achieving the highest validation score. Let \(\hat{T}\) denote the number of invocations of Algorithm 2, where \(\hat{T}\) follows a truncated geometric distribution. Then the expected quantile of the highest-score candidate is given by \(\mathbb{E}_{\hat{T}}\bigg{[}1-\frac{1}{\hat{T}+1}\bigg{]}\)._
In practice, we can roughly set \(\tau=\frac{1}{10k}\) so that the algorithm is likely to test all \(k\) parameters.
Proof.: Suppose each oracle access to \(Q(X)\) has a probability \(1/k\) of achieving the best validation accuracy. Let \(\beta\) denote the probability that \(\mathcal{A}\) (shorthand for Algorithm 3) outputs the best choice of \(\phi_{i}\).
\[\beta =1-\Pr[\mathcal{A}(X)\text{is not best}]\] \[=1-\mathbb{E}_{\hat{T}}\bigg{[}\Pr[Q(X)\text{is not best}]^{\hat {T}}\bigg{]}\] \[=1-\mathbb{E}_{\hat{T}}\bigg{[}(1-\frac{1}{k})^{\hat{T}}\bigg{]}.\]
Let \(f(x)=\mathbb{E}[x^{\hat{T}}]\). Applying a first-order approximation of \(f\) at \(x=1\), we have \(f(1-\frac{1}{k})\approx f(1)-f^{\prime}(1)\cdot\frac{1}{k}=1-\mathbb{E}[\hat{T}]/k\). Then, if \(k\) is large and we choose \(\tau=0.1/k\), \(\mathcal{A}\) roughly returns the best \(\phi_{i}\).
## Appendix C Experimental details
### Experimental details in private linear regression
We start with the privacy calibration of the OPS-PTR algorithm.
Algorithm 5 provides the detailed privacy calibration of the private linear regression problem.
**Theorem C.1**.: _Algorithm 5 is \((\epsilon,2\delta)\)-DP._
Proof.: There are three data-dependent quantities in Theorem 5.1: \(\lambda_{\min},||\theta_{\lambda}^{*}||\) and \(L\). First, notice that \(\lambda_{\min}\) has a global sensitivity of \(||\mathcal{X}||^{2}\) by Weyl's lemma. Under the assumption \(||\mathcal{X}||^{2}\leq 1\), we privately release \(\lambda_{\min}\) using \((\epsilon/4,\delta/3)\) in Step 3. Notice that with probability at least \(1-\delta/2\), \(\tilde{\lambda}_{\min}\) is a lower bound of \(\lambda_{\min}\).
Then, we apply Lemma C.2 from Wang (2018) to privately release \(\log(\|\mathcal{Y}\|+\|\mathcal{X}\|\|\hat{\theta}\|)\) using \((\epsilon/4,\delta/3)\). Note that both the local Lipschitz constant \(L\) and the norm \(\|\theta_{\lambda}^{\star}\|\) are functions of \(\log(\|\mathcal{Y}\|+\|\mathcal{X}\|\|\hat{\theta}\|)\). Thus, we can construct a private upper bound of these by post-processing of \(\Delta\).
Then, with probability at least \(1-\delta\) (by a union bound over \(\tilde{\lambda}_{\min}\) and \(\Delta\)), instantiating Theorem 5.1 with \(\tilde{\lambda}_{\min}\) and \(\tilde{L}\) provides a valid upper bound of the data-dependent DP. We then tune the parameter \(\gamma\) using the remaining privacy budget \((\epsilon/2,\delta/3)\).
**Lemma C.2** (Lemma 12 (Wang, 2018)).: _Let \(\theta_{\lambda}^{\star}\) be the ridge regression estimate with parameter \(\lambda\) and let the smallest eigenvalue of \(X^{T}X\) be \(\lambda_{\min}\); then the function \(\log(\|\mathcal{Y}\|+\|\mathcal{X}\|\|\theta_{\lambda}^{\star}\|)\) has a local sensitivity of \(\log(1+\frac{\|\mathcal{X}\|^{2}}{\lambda_{\min}+\lambda})\)._
### Details of PATE case study
**Definition C.3** (Renyi DP (Mironov, 2017)).: We say a randomized algorithm \(\mathcal{M}\) is \((\alpha,\epsilon_{\mathcal{M}}(\alpha))\)-RDP with order \(\alpha\geq 1\) if for neighboring datasets \(X,X^{\prime}\)
\[\mathbb{D}_{\alpha}(\mathcal{M}(X)||\mathcal{M}(X^{\prime})):=\] \[\frac{1}{\alpha-1}\log\mathbb{E}_{o\sim\mathcal{M}(X^{\prime})} \bigg{[}\bigg{(}\frac{\Pr[\mathcal{M}(X)=o]}{\Pr[\mathcal{M}(X^{\prime})=o]} \bigg{)}^{\alpha}\bigg{]}\leq\epsilon_{\mathcal{M}}(\alpha).\]
In the limit \(\alpha\to\infty\), RDP reduces to \((\epsilon,0)\)-DP. We now define the data-dependent Renyi DP, conditioned on an input dataset \(X\).
**Definition C.4** (Data-dependent Renyi DP (Papernot et al., 2018)).: We say a randomized algorithm \(\mathcal{M}\) is \((\alpha,\epsilon_{\mathcal{M}}(\alpha,X))\)-RDP with order \(\alpha\geq 1\) for dataset \(X\) if for neighboring datasets \(X^{\prime}\)
\[\mathbb{D}_{\alpha}(\mathcal{M}(X)||\mathcal{M}(X^{\prime})):=\] \[\frac{1}{\alpha-1}\log\mathbb{E}_{o\sim\mathcal{M}(X^{\prime})} \bigg{[}\bigg{(}\frac{\Pr[\mathcal{M}(X)=o]}{\Pr[\mathcal{M}(X^{\prime})=o]} \bigg{)}^{\alpha}\bigg{]}\leq\epsilon_{\mathcal{M}}(\alpha,X).\]
RDP features two useful properties.
**Lemma C.5** (Adaptive composition).: \(\epsilon_{(\mathcal{M}_{1},\mathcal{M}_{2})}=\epsilon_{\mathcal{M}_{1}}(\cdot)+ \epsilon_{\mathcal{M}_{2}}(\cdot)\)_._
**Lemma C.6** (From RDP to DP).: _If a randomized algorithm \(\mathcal{M}\) satisfies \((\alpha,\epsilon(\alpha))\)-RDP, then \(\mathcal{M}\) also satisfies \((\epsilon(\alpha)+\frac{\log(1/\delta)}{\alpha-1},\delta)\)-DP for any \(\delta\in(0,1)\)._
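In code, the conversion is a one-liner (a sketch; names are ours):

```python
import numpy as np

def rdp_to_dp(rdp_eps, alpha, delta):
    # Lemma C.6: (alpha, rdp_eps)-RDP implies
    # (rdp_eps + log(1/delta) / (alpha - 1), delta)-DP.
    return rdp_eps + np.log(1 / delta) / (alpha - 1)
```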
**Definition C.7** (Smooth Sensitivity).: Given the smoothness parameter \(\beta\), a \(\beta\)-smooth sensitivity of \(f(X)\) is defined as
\[SS_{\beta}(X):=\max_{d\geq 0}e^{-\beta d}\cdot\max_{\tilde{X}^{\prime}:dist(X, \tilde{X}^{\prime})\leq d}\Delta_{LS}(\tilde{X}^{\prime})\]
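Operationally, the definition is a maximization over distances \(d\); the sketch below (names are ours) truncates that maximization at a finite `max_dist` purely for illustration, whereas a valid smooth-sensitivity bound must control the full supremum over \(d\), which is typically available in closed form for the specific function at hand.

```python
import numpy as np

def smooth_sensitivity(local_sens_at_dist, beta, max_dist):
    """local_sens_at_dist(d): largest local sensitivity among datasets
    within distance d of X (the inner max in Definition C.7)."""
    return max(np.exp(-beta * d) * local_sens_at_dist(d)
               for d in range(max_dist + 1))
```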
**Lemma C.8** (Private upper bound of data-dependent RDP, restatement of Lemma 5.6).: _Given an RDP function \(\mathrm{RDP}(\alpha,X)\) and a \(\beta\)-smooth sensitivity bound \(SS(\cdot)\) of \(\mathrm{RDP}(\alpha,X)\), let \(\mu\) (defined in Algorithm 4) denote the private release of \(\log(SS_{\beta}(X))\), and let the \((\beta,\sigma_{s},\sigma_{2})\)-GNSS mechanism be_
\[\mathrm{RDP}^{upper}(\alpha):=\mathrm{RDP}(\alpha,X)+SS_{\beta}(X)\cdot \mathcal{N}(0,\sigma_{s}^{2})+\sigma_{s}\sqrt{2\log(\frac{2}{\delta_{2}})}e^{\mu}\]
_Then, the release of \(\mathrm{RDP}^{upper}(X)\) satisfies \((\alpha,\frac{3\alpha+2}{2\sigma_{s}^{2}})\)-RDP for all \(1<\alpha<\frac{1}{2\beta}\); w.p. at least \(1-\delta_{2}\), \(\mathrm{RDP}^{upper}(\alpha)\) is an upper bound of \(\mathrm{RDP}(\alpha,X)\)._
Proof sketch.: We first show that releasing the smooth sensitivity \(SS_{\beta}\) via \(e^{\mu}\) satisfies \((\alpha,\frac{\alpha}{2\sigma_{2}^{2}})\)-RDP. Notice that the log of \(SS_{\beta}(X)\) has a bounded global sensitivity \(\beta\) (Definition C.7 implies that \(|\log SS_{\beta}(X)-\log SS_{\beta}(X^{\prime})|\leq\beta\) for any neighboring datasets \(X,X^{\prime}\)). By the Gaussian mechanism, adding noise of scale \(\beta\sigma_{2}\) to \(\log SS_{\beta}(X)\) is therefore \((\alpha,\frac{\alpha}{2\sigma_{2}^{2}})\)-RDP. Moreover, the release of \(f(X)+SS_{\beta}(X)\cdot\mathcal{N}(0,\sigma_{s}^{2})\) is \((\alpha,\frac{\alpha+1}{\sigma_{s}^{2}})\)-RDP (Theorem 23 from Papernot et al. [2018]) for \(\alpha<\frac{1}{2\beta}\); denote this cost by \(\epsilon_{s}(\alpha)\). Composing the two releases and taking \(\sigma_{2}=\sigma_{s}\) (as in our experiments), we have \(\epsilon_{s}(\alpha)+\frac{\alpha}{2\sigma_{s}^{2}}=\frac{\alpha+1}{\sigma_{s}^{2}}+\frac{\alpha}{2\sigma_{s}^{2}}=\frac{3\alpha+2}{2\sigma_{s}^{2}}\).
We next prove the second statement. First, notice that with probability at least \(1-\delta_{2}/2\), \(e^{\mu}\geq SS_{\beta}(X)\) using the standard Gaussian tail bound. Let \(E\) denote the event that \(e^{\mu}\geq SS_{\beta}(X)\).
\[\Pr\biggl{[}\mathrm{RDP}^{\mathrm{upper}}(\alpha)\leq\mathrm{RDP }(\alpha,X)\biggr{]}\] \[=\Pr\biggl{[}\mathrm{RDP}^{\mathrm{upper}}(\alpha)\leq\mathrm{ RDP}(\alpha,X)|E\biggr{]}+\Pr\biggl{[}\mathrm{RDP}^{\mathrm{upper}}(\alpha)\leq \mathrm{RDP}(\alpha,X)|E^{c}\biggr{]}\] \[\leq\Pr\biggl{[}\mathrm{RDP}^{\mathrm{upper}}(\alpha)\leq \mathrm{RDP}(\alpha,X)|E\biggr{]}+\delta_{2}/2\] \[=\underbrace{\Pr\biggl{[}\mathcal{N}(0,\sigma_{s}^{2})\cdot SS_{ \beta(X)}\geq\sigma_{s}\cdot\sqrt{2\log(2/\delta_{2})}e^{\mu}|E\biggr{]}}_{ \text{denoted by}(*)}+\delta_{2}/2\]
Condition on the event \(E\), \(e^{\mu}\) is a valid upper bound of \(SS_{\beta}(X)\), which implies
\[(*)\leq\Pr[\mathcal{N}(0,\sigma_{s}^{2})\cdot SS_{\beta}(X)\geq\sigma_{s} \cdot\sqrt{2\log(2/\delta_{2})}SS_{\beta}(X)|E]\leq\delta_{2}/2\]
Therefore, with probability at least \(1-\delta_{2}\), \(\mathrm{RDP}^{\mathrm{upper}}(\alpha)\geq\mathrm{RDP}(\alpha,X)\).
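Putting the pieces together, a sketch of the GNSS release follows. Since Algorithm 4's exact definition of \(\mu\) is not restated in this appendix, we assume the construction implied by the proof above (Gaussian noise of scale \(\beta\sigma_{2}\) on \(\log SS_{\beta}(X)\), plus a one-sided margin so that \(e^{\mu}\geq SS_{\beta}(X)\) with probability at least \(1-\delta_{2}/2\)); treat this as an assumption, not the algorithm's verbatim definition.

```python
import numpy as np

def gnss_release(rdp_at_x, ss_beta, beta, sigma_s, sigma_2, delta_2, rng=None):
    rng = rng or np.random.default_rng()
    # Assumed form of mu (see the lead-in): noisy log smooth sensitivity,
    # shifted up so that e^mu upper-bounds SS_beta(X) w.p. >= 1 - delta_2/2.
    mu = (np.log(ss_beta) + beta * sigma_2 * rng.normal()
          + beta * sigma_2 * np.sqrt(2 * np.log(2 / delta_2)))
    return (rdp_at_x + ss_beta * rng.normal(0.0, sigma_s)
            + sigma_s * np.sqrt(2 * np.log(2 / delta_2)) * np.exp(mu))
```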
**Theorem C.9** (Restatement of Theorem 5.7).: _Algorithm 4 satisfies \((\epsilon^{\prime}+\hat{\epsilon},\delta)\)-DP._
Proof.: The privacy analysis consists of two components: the privacy cost of releasing an upper bound of the data-dependent RDP (\(\epsilon_{\text{upper}}(\alpha):=\epsilon_{s}(\alpha)+\frac{\alpha}{2\sigma_{2}^{2}}\)) and the valid upper bound \(\epsilon_{\sigma_{1}}^{p}(\alpha)\). First, setting \(\alpha=\frac{2\log(2/\delta)}{\hat{\epsilon}}+1\) and using the RDP-to-DP conversion at \(\delta/2\) ensures that the \(\delta/2\) contribution costs roughly \(\hat{\epsilon}/2\) (i.e., \(\frac{\log(2/\delta)}{\alpha-1}=\hat{\epsilon}/2\)). Second, choosing \(\sigma_{s}=\sqrt{\frac{2+3\alpha}{\hat{\epsilon}}}\) gives us another \(\hat{\epsilon}/2\).
**Experimental details.** \(K=400\) teacher models are trained individually on the disjoint subsets using the AlexNet architecture. We set \(\sigma_{2}=\sigma_{s}=15.0\). Our data-dependent RDP calculation and the smooth-sensitivity calculation follow Papernot et al. (2018). Specifically, we use the following theorem (Theorem 6 from Papernot et al. (2018)) to compute the data-dependent RDP of each unlabeled data point \(x\) from the public domain.
**Theorem C.10** (data-dependent RDP Papernot et al. (2018)).: _Let \(\tilde{q}\geq\Pr[\mathcal{M}(X)\neq Argmax_{j\in[C]}n_{j}(x)]\), i.e., an upper bound of the probability that the noisy label does not match the majority label. Assume \(\alpha\leq\mu_{1}\) and \(\tilde{q}\leq e^{(\mu_{2}-1)\epsilon_{2}}/\bigg{(}\frac{\mu_{1}}{\mu_{1}-1} \cdot\frac{\mu_{2}}{\mu_{2}-1}\bigg{)}^{\mu_{2}}\), then we have:_
\[\epsilon_{\mathcal{M}}(\alpha,X)\leq\frac{1}{\alpha-1}\log\bigg{(}(1-\tilde{q} )\cdot A(\tilde{q},\mu_{2},\epsilon_{2})^{\alpha-1}+\tilde{q}\cdot B(\tilde{q },\mu_{1},\epsilon_{1})^{\alpha-1}\bigg{)}\]
_where \(A(\tilde{q},\mu_{2},\epsilon_{2}):=(1-\tilde{q})/\bigg{(}1-(\tilde{q}e^{ \epsilon_{2}})^{\frac{\mu_{2}-1}{\mu_{2}}}\bigg{)}\), \(B(\tilde{q},\mu_{1},\epsilon_{1})=e^{\epsilon_{1}}/\tilde{q}^{\frac{1}{\mu_{1} -1}}\), \(\mu_{2}=\sigma_{1}\cdot\sqrt{\log(1/\tilde{q})}\), \(\mu_{1}=\mu_{2}+1\), \(\epsilon_{1}=\mu_{1}/\sigma_{1}^{2}\) and \(\epsilon_{2}=\mu_{2}/\sigma_{1}^{2}\)._
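The bound transcribes directly into code (a sketch; when the preconditions fail we fall back to the data-independent GNMax bound \(\alpha/\sigma_{1}^{2}\) from Papernot et al. (2018), and the closed form implicitly needs \(\tilde{q}\) small enough that \(\mu_{2}>1\)).

```python
import numpy as np

def pate_data_dependent_rdp(q, sigma1, alpha):
    """Data-dependent RDP of one GNMax query (Theorem C.10);
    q upper-bounds the probability that the noisy label is not the majority."""
    data_independent = alpha / sigma1 ** 2
    mu2 = sigma1 * np.sqrt(np.log(1 / q))
    mu1 = mu2 + 1
    eps1, eps2 = mu1 / sigma1 ** 2, mu2 / sigma1 ** 2
    if mu2 <= 1:
        return data_independent
    q_bound = np.exp((mu2 - 1) * eps2) / (
        (mu1 / (mu1 - 1)) * (mu2 / (mu2 - 1))) ** mu2
    if alpha > mu1 or q > q_bound:
        return data_independent
    A = (1 - q) / (1 - (q * np.exp(eps2)) ** ((mu2 - 1) / mu2))
    B = np.exp(eps1) / q ** (1 / (mu1 - 1))
    eps = np.log((1 - q) * A ** (alpha - 1) + q * B ** (alpha - 1)) / (alpha - 1)
    return min(eps, data_independent)  # both are valid upper bounds
```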
In the experiments, the non-private data-dependent DP baseline is also based on the above theorem. Notice that the data-dependent RDP of each query is a function of \(\tilde{q}\), where \(\tilde{q}\) denotes an upper bound of the probability that the noisy output does not match the plurality output. \(\tilde{q}\) is a complex function of both the noise scale and the data, and it is not monotonically decreasing when \(\sigma_{1}\) is increasing.
**Simulation of two distributions.** The motivation of the experimental design is to compare the three approaches under different data distributions. Notice that there are \(K=400\) teachers, which implies that the vote count for each class is bounded by \(400\). In the simulation of the high-consensus distribution, we choose \(T=200\) unlabeled public data points such that the majority vote count is larger than \(150\) (i.e., \(\max_{j\in[C]}n_{j}(x)>150\)). For the low-consensus distribution, we select \(T\) unlabeled data points such that the majority vote count is smaller than \(150\).
## Appendix D Omitted proofs in private GLM
### Per-instance DP of GLM
**Theorem D.1** (Per-instance differential privacy guarantee).: _Consider two adjacent data sets \(Z\) and \(Z^{\prime}=[Z,(x,y)]\), and denote the smooth part of the loss function \(F_{s}=\sum_{i=1}^{n}l(y_{i},\langle x_{i},\cdot\rangle)+r_{s}(\cdot)\) (thus \(\tilde{F}_{s}=F_{s}+l(y,\langle x,\cdot\rangle)\)). Let the local neighborhood be the line segment between \(\theta^{*}\) and \(\tilde{\theta}^{*}\). Assume_
1. _the GLM loss function_ \(l\) _is convex, three-times continuously differentiable and_ \(R\)_-generalized-self-concordant w.r.t._ \(\|\cdot\|_{2}\)_,_
2. \(F_{s}\) _is locally_ \(\alpha\)_-strongly convex w.r.t._ \(\|\cdot\|_{2}\)_,_
3. _and in addition, denote_ \(L:=\sup_{\theta\in[\theta^{*},\tilde{\theta}^{*}]}|l^{\prime}(y,x^{T}\theta)|\)_,_ \(\beta:=\sup_{\theta\in[\theta^{*},\tilde{\theta}^{*}]}|l^{\prime\prime}(y,x^{T} \theta)|\)_._
_Then the algorithm obeys \((\epsilon,\delta)\)-pDP for \(Z\) and \(z=(x,y)\) with any \(0<\delta<2/e\) and_
\[\epsilon\leq\epsilon_{0}(1+\log(2/\delta))+e^{\frac{RL\|x\|_{2}}{\alpha}}\left[ \frac{\gamma L^{2}\|x\|_{H^{-1}}^{2}}{2}+\sqrt{\gamma L^{2}\|x\|_{H^{-1}}^{2} \log(2/\delta)}\right]\]
_where \(\epsilon_{0}\leq e^{\frac{RL\|x\|_{2}}{\alpha}}-1+2\beta\|x\|_{H_{1}^{-1}}^{2} +2\beta\|x\|_{\tilde{H}_{2}^{-1}}^{2}.\) If we instead assume that \(l\) is \(R\)-self-concordant, then the same results hold, but with all \(e^{\frac{RL\|x\|_{2}}{\alpha}}\) replaced with \((1-RL\|x\|_{H^{-1}})^{2}\)._
Under the stronger assumption of three-times continuous differentiability, by the mean value theorem, there exists \(\xi\) on the line segment between \(\theta^{*}\) and \(\tilde{\theta}^{*}\) such that
\[H=\left[\int_{t=0}^{1}\nabla^{2}F_{s}((1-t)\theta^{*}+t\tilde{\theta}^{*})dt \right]=\nabla^{2}F_{s}(\xi).\]
The two distributions of interest are \(\mathcal{N}(\theta^{*},[\gamma\nabla^{2}F_{s}(\theta^{*})]^{-1})\) and \(\mathcal{N}(\tilde{\theta}^{*},[\gamma(\nabla^{2}F_{s}(\tilde{\theta}^{*})+\nabla^{2}l(y,x^{T}\tilde{\theta}^{*}))]^{-1})\). Denote \([\nabla^{2}F_{s}(\theta^{*})]^{-1}=:\Sigma\) and \([\nabla^{2}F_{s}(\tilde{\theta}^{*})+\nabla^{2}l(y,x^{T}\tilde{\theta}^{*})]^{-1}=:\tilde{\Sigma}\). Both the means and the covariance matrices are different, so we cannot use the multivariate Gaussian mechanism naively. Instead we will take the tail-bound interpretation of \((\epsilon,\delta)\)-DP and make use of the per-instance DP framework as internal steps of the proof.
First, we can write down the privacy loss random variable in analytic form
\[\log\frac{|\Sigma|^{-1/2}e^{-\frac{\gamma}{2}\|\theta-\theta^{*}\|_{\Sigma^{- 1}}^{2}}}{|\tilde{\Sigma}|^{-1/2}e^{-\frac{\gamma}{2}\|\theta-\tilde{\theta}^{ *}\|_{\tilde{\Sigma}^{-1}}^{2}}}=\underbrace{\frac{1}{2}\log\left(\frac{|\Sigma^{-1} |}{|\tilde{\Sigma}^{-1}|}\right)}_{(*)}+\underbrace{\frac{\gamma}{2}\left[\| \theta-\theta^{*}\|_{\Sigma^{-1}}^{2}-\|\theta-\tilde{\theta}^{*}\|_{\tilde{ \Sigma}^{-1}}^{2}\right]}_{(**)}\]
The general idea of the proof is to simplify the expression above, upper bound the two terms separately using self-concordance and the matrix inversion lemma, and ultimately show that the privacy loss random variable is dominated by another random variable having an appropriately scaled and shifted \(\chi\)-distribution, and therefore admits a Gaussian-like tail bound.
To ensure the presentation is readable, we define a few shorthands. We will use \(H\) and \(\tilde{H}\) to denote the Hessians of \(F_{s}\) and \(F_{s}+f\) respectively, and subscripts \(1\) and \(2\) indicate whether the Hessian is evaluated at \(\theta^{*}\) or \(\tilde{\theta}^{*}\). \(H\) without any subscript or superscript represents the Hessian of \(F_{s}\) evaluated at \(\xi\) as previously used.
\[(*)=\frac{1}{2}\log\frac{|H_{1}|}{|H|}\frac{|H|}{|H_{2}|}\frac{|H_{2}|}{|\tilde {H}_{2}|}\leq\frac{1}{2}\left[\log\frac{|H_{1}|}{|H|}+\log\frac{|H|}{|H_{2}|}+ \log\frac{|H_{2}|}{|\tilde{H}_{2}|}\right]\]
By the \(R\)-generalized self-concordance of \(F_{s}\), we can apply Lemma D.3,
\[-\|\theta^{*}-\xi\|_{2}R\leq\log\frac{|H_{1}|}{|H|}\leq R\|\theta^{*}-\xi\|_{ 2},\quad-R\|\xi-\tilde{\theta}^{*}\|_{2}\leq\log\frac{|H|}{|H_{2}|}\leq R\| \xi-\tilde{\theta}^{*}\|_{2}.\]
The generalized linear model ensures that the Hessian of \(f\) is rank-\(1\):
\[\nabla^{2}f(\tilde{\theta}^{*})=l^{\prime\prime}(y,x^{T}\tilde{\theta}^{*})xx ^{T}\]
and we can apply Lemma 3 in both ways (taking \(A=H_{2}\) and \(A=\tilde{H}_{2}\)) and obtain
\[\frac{|H_{2}|}{|\tilde{H}_{2}|}=\frac{1}{1+l^{\prime\prime}(y,x^{T}\tilde{ \theta}^{*})x^{T}H_{2}^{-1}x}=1-l^{\prime\prime}(y,x^{T}\tilde{\theta}^{*})x^{T }\tilde{H}_{2}^{-1}x\]
Note that \(l^{\prime\prime}(y,x^{T}\tilde{\theta}^{*})x^{T}\tilde{H}_{2}^{-1}x\) is the in-sample leverage-score and \(l^{\prime\prime}(y,x^{T}\tilde{\theta}^{*})x^{T}H_{2}^{-1}x\) is the out-of-sample leverage-score of the locally linearized problem at \(\tilde{\theta}^{*}\). We denote them by \(\mu_{2}\) and \(\mu_{2}^{\prime}\) respectively (similarly, for the consistency of notations, we denote the in-sample and out of sample leverage score at \(\theta^{*}\) by \(\mu_{1}\) and \(\mu_{1}^{\prime}\) ).
Combining the above arguments, we get
\[(*)\leq R\|\theta^{*}-\xi\|_{2}+R\|\xi-\tilde{\theta}^{*}\|_{2}+\log(1-\mu_{2 })\leq R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}+\log(1-\mu_{2}) \tag{6}\] \[(*)\geq -R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}-\log(1-\mu_{2}). \tag{7}\]
We now move on to deal with the second part, where we would like to express everything in terms of \(\|\theta-\theta^{*}\|_{H_{1}}\), which we know from the algorithm is \(\chi\)-distributed.
\[(**)=\frac{\gamma}{2}\left[\|\theta-\theta^{*}\|_{H_{1}}^{2}-\|\theta-\theta^ {*}\|_{H_{2}}^{2}+\|\theta-\theta^{*}\|_{H_{2}}^{2}-\|\theta-\tilde{\theta}^{ *}\|_{H_{2}}^{2}+\|\theta-\tilde{\theta}^{*}\|_{H_{2}}^{2}-\|\theta-\tilde{ \theta}^{*}\|_{\tilde{H}_{2}}^{2}\right]\]
By the generalized self-concordance at \(\theta^{*}\)
\[e^{-R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}}\|\cdot\|_{H_{1}}^{2}\leq\|\cdot\| _{H_{2}}^{2}\leq e^{R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}}\|\cdot\|_{H_{1}}^{2}\]
This allows us to convert from \(\|\cdot\|_{H_{2}}\) to \(\|\cdot\|_{H_{1}}\), and as a consequence:
\[\left|\|\theta-\theta^{*}\|_{H_{1}}^{2}-\|\theta-\theta^{*}\|_{H_{2}}^{2} \right|\leq[e^{R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}}-1]\|\theta-\theta^{*} \|_{H_{1}}^{2}.\]
Also,
\[\|\theta-\theta^{*}\|_{H_{2}}^{2}-\|\theta-\tilde{\theta}^{*}\|_{H_{2}}^{2}= \left\langle\tilde{\theta}^{*}-\theta^{*},2\theta-2\theta^{*}+\theta^{*}- \tilde{\theta}^{*}\right\rangle_{H_{2}}=2\langle\theta-\theta^{*},\tilde{ \theta}^{*}-\theta^{*}\rangle_{H_{2}}-\|\theta^{*}-\tilde{\theta}^{*}\|_{H_{2}} ^{2}\]
Therefore
\[\left|\|\theta-\theta^{*}\|_{H_{2}}^{2}-\|\theta-\tilde{\theta}^{ *}\|_{H_{2}}^{2}\right| \leq 2\|\theta-\theta^{*}\|_{H_{2}}\|\theta^{*}-\tilde{\theta}^{ *}\|_{H_{2}}+\|\theta^{*}-\tilde{\theta}^{*}\|_{H_{2}}^{2}\] \[\leq 2e^{R\|\tilde{\theta}^{*}-\theta^{*}\|_{2}}\|\theta-\theta^{ *}\|_{H_{1}}\|\theta^{*}-\tilde{\theta}^{*}\|_{H}+e^{R\|\tilde{\theta}^{*}- \theta^{*}\|_{2}}\|\theta^{*}-\tilde{\theta}^{*}\|_{H}^{2}.\]
Then lastly we have
\[0\geq\|\theta-\tilde{\theta}^{*}\|_{H_{2}}^{2}-\|\theta-\tilde{ \theta}^{*}\|_{\tilde{H}_{2}}^{2} =-l^{\prime\prime}(y,x^{T}\tilde{\theta}^{*})\left[\langle x, \theta-\theta^{*}\rangle+\langle x,\theta^{*}-\tilde{\theta}^{*}\rangle\right]^ {2}\] \[\geq-2\beta\|x\|_{H_{1}^{-1}}^{2}\|\theta-\theta^{*}\|_{H_{1}}^{2}-2 \beta\|x\|_{H^{-1}}^{2}\|\theta^{*}-\tilde{\theta}^{*}\|_{H}^{2}\] \[\left|\|\theta-\tilde{\theta}^{*}\|_{H_{2}}^{2}-\|\theta-\tilde{ \theta}^{*}\|_{\tilde{H}_{2}}^{2}\right| \leq 2\beta\|x\|_{H_{1}^{-1}}^{2}\|\theta-\theta^{*}\|_{H_{1}}^{2}+2 \beta\|x\|_{H^{-1}}^{2}\|\theta^{*}-\tilde{\theta}^{*}\|_{H}^{2}\]
Combine the above derivations, we get
\[|(**)|\leq\frac{\gamma}{2}\left[a\|\theta-\theta^{*}\|_{H_{1}}^{2}+b\|\theta- \theta^{*}\|_{H_{1}}+c\right] \tag{8}\]
where
\[a:= \left[e^{R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}}-1+2\beta\|x\|_{H_{ 1}^{-1}}^{2}\right]\] \[b:= 2e^{R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}}\|\theta^{*}-\tilde{ \theta}^{*}\|_{H}\] \[c:= (e^{R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}}+2\beta\|x\|_{H^{-1}} ^{2})\|\theta^{*}-\tilde{\theta}^{*}\|_{H}^{2}\]
Lastly, by (6) and (8),
\[\left|\log\frac{p(\theta|Z)}{p(\theta|Z^{\prime})}\right|\leq R\|\theta^{*}-\tilde {\theta}^{*}\|_{2}+\log(1-\mu_{2})+\frac{\gamma}{2}[aW^{2}+bW+c].\]
where according to the algorithm \(W:=\|\theta-\theta^{*}\|_{H_{1}}\) follows a half-normal distribution with \(\sigma=\gamma^{-1/2}\).
By the standard Gaussian tail bound, we have for all \(\delta<2/e\)
\[\mathbb{P}(|W|\geq\gamma^{-1/2}\sqrt{\log(2/\delta)})\leq\delta.\]
This gives a high-probability upper bound on the absolute value of the privacy loss random variable \(\log\frac{p(\theta|Z)}{p(\theta|Z^{\prime})}\) under \(p(\theta|Z)\). By the tail-bound-to-privacy conversion lemma (Lemma 17), we get that for any set \(S\subset\Theta\), \(\mathbb{P}(\theta\in S|Z)\leq e^{\epsilon}\mathbb{P}(\theta\in S|Z^{\prime})+\delta\) for any \(0<\delta<2/e\) and
\[\epsilon=R\|\theta^{*}-\tilde{\theta}^{*}\|_{2}+\log(1-\mu_{2})+\frac{\gamma c }{2}+\frac{a}{2}\log(2/\delta)+\frac{\gamma^{1/2}b}{2}\sqrt{\log(2/\delta)}.\]
Denote \(v:=\theta^{*}-\tilde{\theta}^{*}\), by strong convexity
\[\|v\|_{2}\leq\|\nabla l(y,x^{T}\theta)[\tilde{\theta}^{*}]\|_{2}/\alpha=|l^{ \prime}|\|x\|_{2}/\alpha\leq L\|x\|_{2}/\alpha\]
and
\[\|v\|_{H}\leq\|\nabla l(y,x^{T}\theta)[\tilde{\theta}^{*}]\|_{H^{-1}}=|l^{ \prime}|\|x\|_{H^{-1}}\leq L\|x\|_{H^{-1}}.\]
Also using the facts that \(|\log(1-\mu_{2})|\leq 2\mu_{2}\) for \(\mu_{2}<0.5\) and \(\mu_{2}\leq\beta\|x\|_{\tilde{H}_{2}^{-1}}^{2}\), we can combine similar terms to obtain a more compact representation.
\[\epsilon\leq\epsilon_{0}(1+\log(2/\delta))+e^{\frac{RL\|x\|_{2}}{\alpha}} \left[\frac{\gamma L^{2}\|x\|_{H^{-1}}^{2}}{2}+\sqrt{\gamma L^{2}\|x\|_{H^{-1 }}^{2}\log(2/\delta)}\right]\]
where
\[\epsilon_{0}\leq e^{\frac{RL\|x\|_{2}}{\alpha}}-1+2\beta\|x\|_{H_{1}^{-1}}^{2 }+2\beta\|x\|_{\tilde{H}_{2}^{-1}}^{2}\]
is the part of the privacy loss that does not get smaller as \(\gamma\) decreases.
**Proposition D.2**.: _Let \(\|\cdot\|\) be a norm and \(\|\cdot\|_{*}\) be its dual norm. Let \(F(\theta)\), \(f(\theta)\) and \(\tilde{F}(\theta)=F(\theta)+f(\theta)\) be proper convex functions and \(\theta^{*}\) and \(\tilde{\theta}^{*}\) be their minimizers, i.e., \(0\in\partial F(\theta^{*})\) and \(0\in\partial\tilde{F}(\tilde{\theta}^{*})\). If in addition \(F,\tilde{F}\) are \(\alpha,\tilde{\alpha}\)-strongly convex with respect to \(\|\cdot\|\) within the restricted domain \(\theta\in\{t\theta^{*}+(1-t)\tilde{\theta}^{*}\mid t\in[0,1]\}\), then there exist \(g\in\partial f(\theta^{*})\) and \(\tilde{g}\in\partial f(\tilde{\theta}^{*})\) such that_
\[\|\theta^{*}-\tilde{\theta}^{*}\|\leq\min\left\{\frac{1}{\alpha}\|\tilde{g}\| _{*},\frac{1}{\tilde{\alpha}}\|g\|_{*}\right\}.\]
Proof.: Applying the first-order condition to \(F\) restricted to the line segment between \(\tilde{\theta}^{*}\) and \(\theta^{*}\), we get
\[F(\tilde{\theta}^{*}) \geq F(\theta^{*})+\langle\partial F(\theta^{*}),\tilde{\theta}^{* }-\theta^{*}\rangle+\frac{\alpha}{2}\|\tilde{\theta}^{*}-\theta^{*}\|^{2} \tag{9}\] \[F(\theta^{*}) \geq F(\tilde{\theta}^{*})+\langle\partial F(\tilde{\theta}^{*}), \theta^{*}-\tilde{\theta}^{*}\rangle+\frac{\alpha}{2}\|\tilde{\theta}^{*}- \theta^{*}\|^{2} \tag{10}\]
Note by the convexity of \(F\) and \(f\), \(\partial\tilde{F}=\partial F+\partial f\), where \(+\) is the Minkowski Sum. Therefore, \(0\in\partial\tilde{F}(\tilde{\theta}^{*})\) implies that there exists \(\tilde{g}\) such that \(\tilde{g}\in\partial f(\tilde{\theta}^{*})\) and \(-\tilde{g}\in\partial F(\tilde{\theta}^{*})\). Take \(-\tilde{g}\in\partial F(\tilde{\theta}^{*})\) in Equation 10 and \(0\in\partial F(\theta^{*})\) in Equation 9 and add the two inequalities, we obtain
\[0\geq\langle-\tilde{g},\theta^{*}-\tilde{\theta}^{*}\rangle+\alpha\|\tilde{ \theta}^{*}-\theta^{*}\|^{2}\geq-\|\tilde{g}\|_{*}\|\theta^{*}-\tilde{\theta }^{*}\|+\alpha\|\tilde{\theta}^{*}-\theta^{*}\|^{2}.\]
For \(\|\tilde{\theta}^{*}-\theta^{*}\|=0\) the claim is trivially true; otherwise, we can divide both sides of the above inequality by \(\|\tilde{\theta}^{*}-\theta^{*}\|\) and get \(\|\theta^{*}-\tilde{\theta}^{*}\|\leq\frac{1}{\alpha}\|\tilde{g}\|_{*}\).
It remains to show that \(\|\theta^{*}-\tilde{\theta}^{*}\|\leq\frac{1}{\tilde{\alpha}}\|g\|_{*}\). This can be obtained by exactly the same arguments above but applying strong convexity to \(\tilde{F}\) instead. Note that we can actually get something slightly stronger than the statement because the inequality holds for all \(g\in\partial f(\theta^{*})\).
A consequence of (generalized) self-concordance is the spectral (_multiplicative_) stability of Hessian to small perturbations of parameters.
**Lemma D.3** (Stability of Hessian(Nesterov and Nemirovskii, 1994, Theorem 2.1.1), (Bach, 2010, Proposition 1)).: _Let \(H_{\theta}:=\nabla^{2}F_{s}(\theta)\). If \(F_{s}\) is \(R\)-self-concordant at \(\theta\). Then for any \(v\) such that \(R\|v\|_{H_{\theta}}<1\), we have that_
\[(1-R\|v\|_{H_{\theta}})^{2}\nabla^{2}F_{s}(\theta)\prec\nabla^{2}F_{s}(\theta+ v)\prec\frac{1}{(1-R\|v\|_{H_{\theta}})^{2}}\nabla^{2}F_{s}(\theta).\]
_If instead we assume \(F_{s}\) is \(R\)-generalized-self-concordant at \(\theta\) with respect to norm \(\|\cdot\|\), then_
\[e^{-R\|v\|}\nabla^{2}F_{s}(\theta)\prec\nabla^{2}F_{s}(\theta+v)\prec e^{R\|v \|}\nabla^{2}F_{s}(\theta)\]
The two bounds are almost identical when \(R\|v\|\) and \(R\|v\|_{\theta}\) are close to \(0\), in particular, for \(x\leq 1/2\), \(e^{-2x}\leq 1-x\leq e^{-x}\).
|
2309.15890 | An Introduction to Complex Networks in Climate Finance | In this perspective, we introduce recent research into the structure and
function of complex investor networks supporting sustainability efforts. Using
the case of solar, wind and hydro energy technologies, this perspective
explores the complexity in low-carbon finance markets, defined as markets that
direct capital flows towards low-carbon technologies, using network approaches
to study their structure and dynamics. Investors are modeled as nodes which
form a network or higher-order network connected by edges representing projects
in which joint funding or security-related insurance was provided or other
investment-related interaction occurred. We review the literature on investor
networks generally, particularly in the case of complex networks, and address
areas where these ideas were applied in this emerging field. The complex
investor dynamics which emerge from the extant funding scenarios are not well
understood. These dynamics have the potential to result in interesting
non-linear behaviour, growth, and decline, which can be studied, explained and
controlled using the tools of network science. | Alexander P. Kartun-Giles, Nadia Ameli | 2023-09-27T16:47:25Z | http://arxiv.org/abs/2309.15890v1 | # An Introduction to Complex Networks in Climate Finance
###### Abstract
In this perspective, we introduce recent research into the structure and function of complex investor networks supporting sustainability efforts. Using the case of solar, wind and hydro energy technologies, this perspective explores the complexity in low-carbon finance markets, defined as markets that direct capital flows towards low-carbon technologies, using network approaches to study their structure and dynamics. Investors are modeled as nodes which form a network or higher-order network connected by edges representing projects in which joint funding or security-related insurance was provided or other investment-related interaction occurred. We review the literature on investor networks generally, particularly in the case of complex networks, and address areas where these ideas were applied in this emerging field. The complex investor dynamics which emerge from the extant funding scenarios are not well understood. These dynamics have the potential to result in interesting non-linear behaviour, growth, and decline, which can be studied, explained and controlled using the tools of network science.
complex networks; climate change; economics and finance; statistical physics +
Footnote †: journal: _entropy_
Figure 1: Green debt issued in the Balkans to date. Nodes are banks, and links exist between banks when they insure a bond issuance together (i.e., investors provide financing for a loan to support a green energy project, and the loan is insured by the larger financial system by buying the debt and reselling to investors as a security). A multilayer network is formed, since banks work together on deals that are domiciled in a specific country. Whenever two banks work together to underwrite a loan for a project whose country of domicile is listed as Kosovo, they are connected with a blue link, with a red link when the country is Bosnia, and with a purple link when it is Montenegro. The node degree is reflected in its relative size. The Austrian financial service provider _Erste Group Bank_ has _activity_ 2, since it takes part in two layers, and _degree_ 11, since it has interacted with that many banks.
are drawn [14]. In recent years, however, we turn to the grandest scale, and look at how these ideas apply in real systems such as political groups, social networks, city formations, and beyond. It is the aspect of _randomness_ emerging from _deterministic_ laws in these systems that unites them under the theme of complexity, and it can be remarkable to see case studies of probabilistic analogies between systems usually studied in physics and these economic systems; such analogies suggest a vast amount of untapped potential in describing their behaviour.
In particular, this article focuses on financial flows channeled into sustainability. Actors usually form a bipartite graph of investors and projects, and the underlying dependency structure takes the form of influences between investors and, in general, the economic agents involved. The dynamics influenced by this structure constitute syndicated investment in renewable energy. How is the structure of the economic interactions related to the investment dynamics? Is the behaviour universal across different energy markets (such as wind, solar, or hydro)? Is there any observable connection between the complexity of these economic interactions and the rate of renewable energy investment at all? All these questions are important in climate finance, and therefore scholars turn to the theory of complex networks to help answer them.
This article is structured as follows. In Section 2, we discuss the relevant background to complex networks in climate finance, and investor networks more generally. In Section 3, we discuss empirical evidence in different climate finance scenarios and markets. Finally, in Section 4, we conclude with some take-away messages, and discuss the potential for future research development in this area.
## 2 Background
### Econophysics and Investor Networks
Econophysics is a "revolutionary reaction" to standard economic theory that threatens to enforce a paradigm shift in thinking [15]. Usually, complex networks appear in economics within this general area. An early and highly cited example is Mantegna's use of graph theory to study the influence between stock prices [2]. A weighted complete graph is obtained from the matrix of correlation coefficients between stocks of a portfolio by considering the synchronous time evolution of (the difference of the logarithm of) daily stock prices. For a review of milestones and challenges in econophysics, see [16].
Within this field, complex networks are commonplace. Remco van der Hofstad writes in his recent book on complex networks that
The advent of the computer age has incited an increasing interest in the fundamental properties of real networks. Due to the increased computational power, large data sets can now easily be stored and investigated, and this has had a profound impact in the empirical studies on large networks. A striking conclusion from this empirical work is that many real networks share fascinating features.
The two primary and first-studied examples of this are the scale-free degree distribution and the small-world property, known informally as _six degrees of separation_. This universal behaviour observed in real networks has led to the new subject of network science [17]. As a subdiscipline of theoretical physics, network science uses techniques and ideas from statistical physics such as random graphs, stochastic processes, combinatorics, and wider mathematical ideas involving probability (as distinct from statistics), analysis (i.e., calculus) and dynamics (dynamical processes, particularly on networks) to reveal the structure and function of complex systems [18; 19].
A growing trend in corporate finance is to apply centrality measures to investor networks derived from various datasets. In Bajo et al. [20], the value of a firm is shown to be strongly correlated with the degree centrality of its investors in the wider US investor network (as well as with other centrality measures, in an attempt to show the results are robust to a variety of measures). Investors are nodes, and links form between pairs of investors when they co-invest in an equity as listed in a public US equity holding database. They write:
In our sample, the information on the equity holdings by US institutional investors allows to construct a network of relations. Stemming from the simple observation that often institutional blockholders share co-ownership relationships with other institutional investors, we interpret the blockholder as actor and the co-ownership link as a tie.
The network is then the set of actors and their ties. Fracassi et al. (2018) also consider centrality, showing how managers are influenced by their social peers when making corporate policy decisions, while Crane et al. (2018) show how investors acting together in cliques can amplify their voice concerning how the company is run, strengthening governance through voice while weakening it via the threat of exit.
In Dordi et al. (2018), ten actors are identified that can accelerate the transition away from fossil fuels, using a centrality analysis of shareholder data from Bloomberg and the Carbon Underground 200 list of companies (200 companies that own 98% of global oil, gas and coal reserves). The study finds that the top ten owners of CU200 fossil fuel reserve holders are BlackRock, Vanguard, the Government of India, State Street, the Kingdom of Saudi Arabia, Dimensional Fund Advisors, Life Insurance Corporation, Norges Bank, Fidelity Investments and Capital Group. Similarly, Galaz et al. (2018) identify a limited set of financial actors mediating flows of capital that affect biomes of the earth.
In Dimson et al. (2018), the authors study coordinated engagements by a network of shareholders cooperating to influence firms on environmental and social issues. They write in the conclusion that
Our evidence indicates that, for maximum effect, coordinated engagements on (ESG) issues should preferably have a credible lead investor who is well suited geographically, linguistically, culturally and socially to influencing target companies.
Shareholder activist networks are studied by Yang et al. (2018). Pension funds, special interest groups and religious organizations interact in a network of networks to influence corporate behaviour through the joint control of shares for what they perceive to be societal benefit. They show a correlation between both eigenvector and degree centrality, and the "efficiency of results" obtained by the activists.
### Nonequilibrium Statistical Physics Meets Climate Finance
Network evolution--see, for example, Figure 2--concerns a topic within network science where growing network models, known as models of "nonequilibrium statistical physics", are used as null models of network growth. They attempt to explain, via simple combinatorial rules, the large-scale universal behaviour of real networks, including their degree distribution, clustering, homology, and anything else concerning their structure.
An early and fundamental observation in network science is the power-law degree distribution observed in many real networks (such as citation networks, social networks, the internet, world airline connections, etc.). How does this appear? Even more important is observing it in the first place, by comparing real networks with a null statistical model, which in the case of Barabasi and Albert is the Erdos-Renyi random graph. The degree distribution of the Erdos-Renyi graph has been known since the 1950s to follow a Poisson distribution (in the so-called thermodynamic limit, where the number of nodes tends to infinity while the expected degree is constrained to converge to a positive constant). The fact that random networks do not have power-law degrees (also called a _scale-free_ degree distribution) suggests there exist global organizing principles at play that "fatten the tail" or, more formally, _skew_ the degree distribution.
The question in finance which has only recently been explored is how this happens in financial systems such as green bond syndication networks, or investor networks as discussed above. A simple observation is that the degree distribution, see, e.g., Figure 3, which is the discrete probability mass function for the node degree (or, in layman's terms, the proportion of nodes with a certain degree, plotted against the degree), has an exponent
\(\gamma<2\). An example hypergraph evolution model where banking syndicates of more than two parties can form is shown in Figure 2.
Figure 3: Degree distribution for the green bond and loan network. The red squares represent the probability that a randomly selected bank in the international banking network supporting green loans and bonds is involved in \(k\) bond issuances. The non-linear model \(P(k)\propto k^{-1.77}\) is the black line. This suggests a highly right-skewed degree distribution with exponent 1.77, which occurs because the syndicates arrive in time faster than the banks, leaving a dense network where the ratio of deals to banks exceeds one and grows in time.
Figure 2: Hypergraph evolution. Nodes represent banks, and hyperedges (coloured edges) represent syndicates. New hyperedges attach to the current network nodes based on preferential attachment.
Chung et al. point out that preferential attachment models of network evolution cannot explain such a large skew [28]. Alternative suggested models involve the node duplication presented by Chung et al. in their work on biological networks. The Pitman-Yor process is also a candidate, which involves preferential attachment and can explain degree distributions with exponents even lower than unity [27; 28]. A major research question is to explain the skew we observe in the green bond syndication network of Section 3.4. This develops early research of Rickman et al. in [9] and Ameli et al. in [7], who also consider the effect of the fitness model of Bianconi-Barabasi [29] using the work of Pham et al. on attachment functions [30].
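Why plain preferential attachment falls short can be checked directly. The following minimal sketch (our own illustration, not code from [9] or [28]) grows a Barabasi-Albert network and estimates the tail exponent of its degree distribution with the discrete maximum-likelihood estimator of Clauset et al.; the result sits near \(\gamma=3\), far above the \(\gamma\approx 1.77\) of Figure 3:

```python
import numpy as np

rng = np.random.default_rng(1)

def ba_degrees(n_nodes=20000, m=2):
    # Barabasi-Albert growth: sampling a uniform entry of `targets`
    # (the multiset of all edge endpoints) samples nodes ~ degree.
    targets = [0, 1]
    degrees = np.zeros(n_nodes, dtype=int)
    degrees[0] = degrees[1] = 1
    for new in range(2, n_nodes):
        chosen = {targets[i] for i in rng.integers(len(targets), size=m)}
        for t in chosen:
            targets.extend((new, t))
            degrees[new] += 1
            degrees[t] += 1
    return degrees

deg = ba_degrees()
k_min = 5
tail = deg[deg >= k_min]
# Discrete maximum-likelihood tail estimator (Clauset et al., 2009).
gamma_hat = 1.0 + len(tail) / np.log(tail / (k_min - 0.5)).sum()
print(round(gamma_hat, 2))  # ~3: plain PA cannot reach the observed ~1.77
```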
### Investor Hubs Dominate the Market
The network analysis in [9] provides the first quantitative evidence of a right-skewed degree distribution. With _hubs_ defined in this context to be vertices of \(G\) with more pairwise connections than average by one standard deviation, it is observed that _"The domination of energy markets by a few organizations can be driven by large incumbents achieving cost reductions through, e.g., economies of scale, better access to finance, or vertical integration of services bringing in multiple revenue streams"_ ([9], Section 3.1). The authors also write that _"we observe a strong positive correlation between growth of wind markets and the level of debt hub activity"_ ([9], Section 3.2), and discuss this in depth.
### Fit Get Richer, and Rich Get Richer
In Ameli et al., the preferential attachment model is compared with the fitness model [7; 29]. Instead of the standard method of considering attachment of new nodes based on the existing degree, nodes may attach to the existing lenders or sponsors based on their intrinsic fitness, as proposed in the Bianconi-Barabasi model, introduced in 2001 to develop the theory of competition and multiscaling in evolving networks [29]. Ameli et al. address this in the context of climate finance networks of energy efficiency investors.
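For comparison, a minimal sketch of the fitness-modified kernel \(A_{i}\propto\eta_{i}k_{i}\) follows (a toy simulation under our own assumptions, not the authors' code); it shows the 'fit get richer' effect, in which a late-arriving, high-fitness node can overtake early incumbents:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness_growth(n_nodes=5000, m=2):
    # Bianconi-Barabasi growth: attachment probability ~ fitness * degree.
    eta = rng.uniform(size=n_nodes)     # intrinsic node fitness
    degrees = np.zeros(n_nodes)
    degrees[:2] = 1.0
    for new in range(2, n_nodes):
        p = eta[:new] * degrees[:new]
        p = p / p.sum()
        for t in rng.choice(new, size=m, replace=False, p=p):
            degrees[t] += 1
        degrees[new] += m
    return eta, degrees

eta, deg = fitness_growth()
# Fitness and final degree are strongly correlated: late, fit nodes can
# overtake early, unfit ones, which pure preferential attachment forbids.
print(round(float(np.corrcoef(eta, deg)[0, 1]), 2))
```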
### Community Detection
Community detection is one of the largest areas of complex networks [31]. The goal is to define _community_ in such a way that the groups of financial actors identified reveal a non-trivial structure of the larger system. Larosa et al. identify a significant home bias [32], writing _"The investor community analysis reveals geographical investment patterns. In far-east Asian countries (Korea and Vietnam) the interactions between domestic investors (i.e., community density) are more frequent compared to the rest of the world. In India, the financial landscape is dominated by domestic state-owned banks, while Japan has a strong presence over the continent through the investment made by its second biggest bank, namely Sumitomo Mitsui Banking Corporation and a private utility (Kansai Electric Power Co., Inc.)... Investors mainly cluster together at national and regional level confirming the existence of a "home bias" in investments"_ ([8], Section 3.1).
Home bias, in layman's terms, occurs when investors are more likely to invest in projects in their native country or region. For obvious reasons, knowledge of the local economy and the ability to predict the long-term prospects of a venture are major advantages. The community detection of Larosa et al. is the first quantitative evidence of this effect.
Larosa et al. detail their methodology in [8], Section 2, using the Jaccard coefficient [33]. The effect is, in fact, very important for the corresponding network science. Home bias leads to local clustering and the potential emergence of network geometry, as nearby links are favored over long-range links, on the whole. Longer-range links exist, but they connect large hubs in a way similar to the World Airlines Network [10]. As such, it is important to ask the following question: To what extent does home bias lead to the emergent network structure in climate finance networks of this type?
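As a toy illustration of the Jaccard ingredient (our own sketch; the bank labels and similarity threshold are made up, and this is a simplified variant of the link-community similarity used in [8; 33]):

```python
import networkx as nx
from itertools import combinations

# Toy co-investment network: two domestic clusters and one cross-border tie.
G = nx.Graph([("A", "B"), ("B", "C"), ("A", "C"),
              ("D", "E"), ("E", "F"), ("D", "F"),
              ("C", "D")])

def edge_jaccard(G, e1, e2):
    # Jaccard coefficient of the inclusive neighbourhoods of two edges.
    n1 = set(G[e1[0]]) | set(G[e1[1]]) | set(e1)
    n2 = set(G[e2[0]]) | set(G[e2[1]]) | set(e2)
    return len(n1 & n2) / len(n1 | n2)

for e1, e2 in combinations(G.edges, 2):
    s = edge_jaccard(G, e1, e2)
    if s >= 0.6:  # highly similar edges end up in the same link community
        print(e1, e2, round(s, 2))
```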
### Centrality Measures
Centrality is a measure of the importance of a node in a network. Important examples include betweenness centrality [34], where the extent to which nodes lie on multiple shortest paths in the network is considered important [35; 36; 37]. PageRank, used by Google, provides centrality scores to websites containing a searchable keyword, based on the extent to which random walks around the interconnected websites spend time on a particular webpage.
How do we measure centrality in climate finance networks? What makes a lender node \(i\in I\) or a project sponsor node \(s\in S\) important to the network? Larosa et al. introduce a new measure based on the number of communities an investor node takes part in. They introduce the community-based centrality score (CC), writing the following: _"The CC is strongly anchored to the link community structure. In fact, well-connected investors are not just the ones with many active co-investments, but rather those who operate in communities with high connecting power. Investors with high CC score will belong to communities capable of reaching distant groups of actors, hence spreading the available financial resources to different players. We express CC as the weighted sum of communities a node belongs to over the X communities weighted by the average similarity between pairs of communities"_. The authors discuss this in further detail in [8], Section 2. Further work identifying the centrality measures important in climate finance markets is of great interest.
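A small sketch contrasting standard centrality scores with a community-participation count follows (our own illustration on a stock test graph; Larosa et al.'s CC uses overlapping link communities weighted by inter-community similarity, which we crudely approximate with k-clique overlap):

```python
import networkx as nx
from networkx.algorithms.community import k_clique_communities

G = nx.les_miserables_graph()   # stand-in for a co-investment network

betweenness = nx.betweenness_centrality(G)
pagerank = nx.pagerank(G)

# Rough analogue of the community-based centrality (CC): count the
# overlapping communities a node participates in.
communities = list(k_clique_communities(G, 4))
cc = {n: sum(n in c for c in communities) for n in G}

for n in sorted(G, key=betweenness.get, reverse=True)[:3]:
    print(n, round(betweenness[n], 3), round(pagerank[n], 4), cc[n])
```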
## 3 Empirical Evidence
### Wind Markets
Wind makes up a significant part of renewable energy consumption around the world. For example, in the UK, wind makes up about a quarter of the energy contribution of the country [38], with 11 thousand wind turbines (14 GW onshore and 14 GW offshore) active by 2023. In a recent journal article, _The internal dynamics of fast-growing wind finance markets_[9], Rickman et al. investigate, inter alia, the claim that preferential attachment (PA) drives the evolution of the hypergraph \(G\) introduced in Section 2.2. PA was initially introduced in a paper by Barabasi and Albert in 1999 in order to explain the emergence of the scale-free property in complex networks [39]. The arrival of new lenders is a discrete process of unknown temporal distribution, but it is hypothesized in [9] that they form new links to existing equity investors with probability proportional to the attachment kernel
\[A_{l}(w_{l})=w_{l}^{\beta_{l}}, \tag{1}\]
and sponsors, on arrival, form new links to lenders with a probability proportional to the attachment kernel
\[A_{s}(w_{s})=w_{s}^{\beta_{s}}. \tag{2}\]
With multiple lenders involved in a single project, this constitutes a hyperedge of \(G\). When a project has received multiple funding sources in the form of equity or debt loans, this also constitutes a hyperedge.
These authors do not attempt to recreate the BNEF data for the wind finance market via a random model [11] involving preferential attachment; this aspect remains an open avenue of further research. Instead, assuming this hypothesis, the exponents \(\beta_{l}\) and \(\beta_{s}\) are estimated via likelihood-based statistical methods. The lender exponent \(\beta_{l}\) of Equation (1) and the sponsor exponent \(\beta_{s}\) of Equation (2) are obtained via partial maximum likelihood estimation [40]. The validity of the PA model is itself assessed via the likelihood ratio test of Clegg [41]; see [9], Section 2.5.
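A sketch of this estimation step follows (our own illustration in the spirit of the partial-likelihood approach of [40], not the authors' code); on synthetic choice data generated with a known exponent, the estimator recovers it:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_beta(events):
    """Partial-likelihood MLE for A(w) = w**beta, given a list of
    (weights_at_event_time, index_chosen) pairs."""
    def nll(beta):
        return -sum(np.log(w[i]**beta / np.power(w, beta).sum())
                    for w, i in events)
    return minimize_scalar(nll, bounds=(0.0, 3.0), method="bounded").x

# Synthetic sanity check: simulate choices with a known beta, then recover it.
rng = np.random.default_rng(3)
beta_true, events, w = 1.2, [], np.ones(50)
for _ in range(2000):
    p = w**beta_true / (w**beta_true).sum()
    i = int(rng.choice(len(w), p=p))
    events.append((w.copy(), i))
    w[i] += 1.0
print(round(estimate_beta(events), 2))  # close to 1.2
```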
The authors found that the preferential attachment theory described 11 out of the 16 countries analyzed in the study. They write that _debt investors (lenders) face competition for projects and past lending experience is a major determinant of who will be selected as a project partner_ ([9], Section 3.3). The authors claim that preferential attachment in this market is based on financial learning. Egli et al. write that _"On the level of the renewable energy finance industry, investors benefitted from growing renewable energy technology (RET) markets
and subsequent learning-by-doing (e.g., better risk assessment). Larger markets allowed banks to form in-house project finance teams specialized in RETs. The knowledge and data that these teams accumulated allowed for a more accurate technology assessment. Consequently, project risks declined. For example, as the market had accumulated experience on historical wind speeds, investors shifted from calculating project returns on wind resource estimations with 90% certainty" ([42], Drivers of Change).
### Hydro Markets
Hydroelectric power (hydropower) is the largest international contributor to renewable energy production, producing more than half of the total output. Hydropower is particularly popular in developing countries, and thus plays an important role in the UN sustainability goals [38]. Larosa et al. write that _"financing hydropower projects requires investors to pay large upfront capital and lock in their capital for decades (hydro projects can last for 100 years), while also bearing high investment risks"_. With this in mind, the hydroelectric project financing landscape is addressed by Larosa et al. in _"Finding the right partners? Examining inequalities in the global investment landscape of hydropower"_[8]. Given the unique aspect of intercontinental development at work, internationally diverse financial actors need to be assembled. The focus is therefore on centrality and community detection rather than network evolution; see Figure 3 in [8].
Financing hydropower projects necessitates substantial upfront capital investment, with funds tied up for extended periods, often spanning a century due to the long lifespan of such projects. Indeed, the construction of a large hydropower dam typically costs in excess of a billion dollars, demanding patient capital that endures the natural investment cycle. An intricate network of diverse investors and effective capital distribution thus become essential for hydropower assets.
### Energy Efficiency Markets
Energy efficiency technologies are interventions that reduce energy consumption, such as using light-emitting diodes (LEDs) in place of conventional filament bulbs. This technology has a major place in modern science due primarily to its efficiency: the 2014 Nobel Prize in Physics was awarded for the blue LED, as it enables white light sources and a more universal deployment of energy efficiency with the climate in mind [43]. Ameli et al. write that _"investments in energy efficiency (EE) are particularly crucial to reduce the energy demand for a growing world economy and are listed as core measures for sustainable recovery plans"_ [7].
As with the research in wind markets, Ameli et al. focus on the theme of preferential attachment in a bipartite graph of investors and energy efficiency projects. Applying ideas from Pham et al.'s recent work concerning the joint estimation of preferential attachment and node fitness in growing complex networks, the authors examine how influential a node's intrinsic fitness to acquire links is compared with its degree-based link acquisition (i.e., simple preferential attachment compared with a fitness-based network evolution model [29]).
They _"empirically estimate the preferential attachment (PA) function and node fitnesses from observed network data"_[7]. The authors suggest that there is a balance between preferential attachment and node fitness determining the evolution of their network, writing the following: _"Following Pham et al. approach, we measure the respective influences of the preferential attachment and the fitness models"_[7; 30]. The PAFit method is discussed by Pham: _"Our main contributions are twofold. The first contribution is a statistical method called PAFit to simultaneously estimate the PA and node fitness functions without imposing any assumptions on their functional forms. To the best of our knowledge, PAFit is the first ever method in the literature that can do so"_.
Given this approach to the theory of network evolution, the authors draw the conclusion that _"...this suggests that the 'rich get richer' mechanism becomes weaker when the 'fit get richer' effect is considered, showing that to some extent technology's ability to attract new
_investment is explained by its fitness"_. They also discuss the snapshot of the network over time (see Figure 4), observing the total number of investments that different types of investors (e.g., from the utilities sector) have made ([7]: evolution, dynamics and growth of the energy efficiency network).
### Green Bonds, Loans, and Networks of Underwriter Syndicates
Green bonds, loans and debt securities are designated to finance environmentally friendly projects. These may take the form of renewable energy infrastructure, such as a wind farm, or refurbishment of real estate to make it more sustainable. Whatever the project requiring funding, the project managers approach a bank and attempt to acquire investment by selling green bonds to investors [9]. These are underwritten, i.e., insured, by a banking syndicate that buys all the bonds and resells them to investors for profit, thereby taking on the risk in case the project collapses. This guarantees returns for the holders of the bonds (these may be private customers, such as pension funds, or individuals purchasing online using their own funds) [44].
One can build a complex network--see Figure 5--from transaction data in the following way. The modeling consists of
Figure 4: Aggregated network of financial actors involved in energy efficiency financing (2000–2017) in different sectors, taken from [7]. Nodes are investors, and edges are financial interactions between them, with the following key: pink for state-owned utilities, brown for investor-owned utilities, light blue for manufacturing and services, green for the governmental sector, dark purple for energy cooperatives, light green for research and the university sector, blue for institutional investors, orange for construction and real estate, turquoise for diversified, deep green for chemicals and steel, green-brown for food, bright red for retail, light purple for defence, and bright purple for the remaining uncategorised areas.
1. A hypergraph \(G(V,E)\), where \(V\) is the vertex set and \(E\) is the edge set, with \(|V|=n\) and \(|E|=m\), and each edge \(e\) is simply a subset of \(V\); see [45], Introduction.
2. The vertices, which represent banks.
3. The hyperedges (i.e., higher-order edges representing groups of investors and an investment rather than simply pairs of investors), which represent project financing by the corresponding banking syndicate. The amount of money invested is large enough in many cases to require large syndicates of banks to underwrite the risk.
See also Berge [46] and Beckenbach [47] for a discussion of bipartite hypergraphs. Note that there is also the potential to view this as a simplicial complex [48]. Higher-order network models of banking networks are an interesting avenue of further research in this area.
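A minimal sketch of this construction (hypothetical deal records; pandas and networkx assumed) is:

```python
import pandas as pd
import networkx as nx

# Hypothetical deal table: one row per (deal, bank) participation.
deals = pd.DataFrame({
    "deal":    ["d1", "d1", "d1", "d2", "d2", "d3", "d3"],
    "bank":    ["HSBC", "BNP Paribas", "Goldman Sachs",
                "HSBC", "BNP Paribas", "Goldman Sachs", "HSBC"],
    "country": ["UK", "UK", "UK", "FR", "FR", "UK", "UK"],
})

# Items (1) and (3) above: each deal's syndicate is a hyperedge, i.e. a
# subset of the vertex set of banks.
hyperedges = deals.groupby("deal")["bank"].apply(frozenset).to_dict()

# Pairwise projection used for degree distributions, keeping the deal's
# country of domicile as a layer label (the multilayer structure of Fig. 5).
G = nx.Graph()
for deal, grp in deals.groupby("deal"):
    banks, country = list(grp["bank"]), grp["country"].iloc[0]
    for i in range(len(banks)):
        for j in range(i + 1, len(banks)):
            G.add_edge(banks[i], banks[j], layer=country)

print(hyperedges)
print(list(G.edges(data=True)))
```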
## 4 Final Words and Open Avenues of Research
The area is still developing, but has immense potential. Understanding the aspects of complex networks with particular application to banking networks that fund climate initiatives allows policy makers to intervene in and influence climate funding in a way that is positive for society.
The ways in which different marketplaces, different sectors such as finance, technology or utilities, or different geographic regions lead to different network structures, or the ways in which the structure is universal, are still not well understood. Community detection needs further work, for example, by developing network embedding techniques that incorporate financial metrics (e.g., weighted edges representing money mobilized in a deal). As we discussed, higher-order network models of banking networks are an interesting area of further research.
Two recent articles have addressed preferential attachment as the main driver of network evolution. It remains an open question as to whether random models of climate finance hypergraphs which evolve based on "rich get richer" or "fit get richer" mechanisms are able to reproduce the BNEF data in a sophisticated way. This would present an important connection between statistical physics and climate finance, and allow further insights into how these networks evolve and develop. How to then encourage the transition to green energy based on this detailed understanding is a difficult and multi-disciplinary task, but one well founded on the excellent descriptive analysis that can be provided by these early works in complex networks. Further work on network evolution is critical to understand the mechanisms that generate the highly skewed degree distributions observed in banking syndicate networks. We look forward to a future review concerning research developing these ideas, and to the corresponding new insights into climate finance as we track the critically important goals of the Paris Agreement.
Figure 5: A sample of the multilayer network of banks (nodes) underwriting green bonds and loans in two countries, UK (blue links), and France (red links). Goldman Sachs, BNP Paribas, and HSBC connect the layers, serving as international actors which unite layers more often than local banks.
We hope that in the future, the link between policy and network structure can be addressed, as well as the ways in which this structure leads to better and more sustainable green growth. This is a major challenge which we hope the networks community can begin to address to provide a remarkable example of physics in society.
**Author Contributions:** Writing--original draft, A.P.K.-G. and N.A. All authors have read and agreed to the published version of the manuscript.
**Funding:** Both authors acknowledge support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 802891).
**Data Availability Statement**: Bloomberg data used for any banking syndicate information (not published elsewhere) is proprietary and shareable on request from the corresponding author.
**Acknowledgments:** We thank Denitsa Angelova, Ginestra Bianconi, Claudia Brown, Max Falkenberg, Michael Grubb, Ben Hinder, Francesca Larosa, Figo Lau, Sumit Kothari, and Jamie Rickman for many helpful discussions.
**Conflicts of Interest**: The authors declare no conflict of interest.
|
2302.14624 | The 2022 NIST Language Recognition Evaluation | In 2022, the U.S. National Institute of Standards and Technology (NIST)
conducted the latest Language Recognition Evaluation (LRE) in an ongoing series
administered by NIST since 1996 to foster research in language recognition and
to measure state-of-the-art technology. Similar to previous LREs, LRE22 focused
on conversational telephone speech (CTS) and broadcast narrowband speech (BNBS)
data. LRE22 also introduced new evaluation features, such as an emphasis on
African languages, including low resource languages, and a test set consisting
of segments containing between 3s and 35s of speech randomly sampled and
extracted from longer recordings. A total of 21 research organizations, forming
16 teams, participated in this 3-month long evaluation and made a total of 65
valid system submissions to be evaluated. This paper presents an overview of
LRE22 and an analysis of system performance over different evaluation
conditions. The evaluation results suggest that Oromo and Tigrinya are easier
to detect while Xhosa and Zulu are more challenging. A greater confusability is
seen for some language pairs. When speech duration increased, system
performance significantly increased up to a certain duration, and then a
diminishing return on system performance is observed afterward. | Yooyoung Lee, Craig Greenberg, Eliot Godard, Asad A. Butt, Elliot Singer, Trang Nguyen, Lisa Mason, Douglas Reynolds | 2023-02-28T15:05:33Z | http://arxiv.org/abs/2302.14624v1 | # The 2022 NIST Language Recognition Evaluation
###### Abstract
In 2022, the U.S. National Institute of Standards and Technology (NIST) conducted the latest Language Recognition Evaluation (LRE) in an ongoing series administered by NIST since 1996 to foster research in language recognition and to measure state-of-the-art technology. Similar to previous LREs, LRE22 focused on conversational telephone speech (CTS) and broadcast narrowband speech (BNBS) data. LRE22 also introduced new evaluation features, such as an emphasis on African languages, including low resource languages, and a test set consisting of segments containing between 3s and 35s of speech randomly sampled and extracted from longer recordings. A total of 21 research organizations, forming 16 teams, participated in this 3-month long evaluation and made a total of 65 valid system submissions to be evaluated. This paper presents an overview of LRE22 and an analysis of system performance over different evaluation conditions. The evaluation results suggest that Oromo and Tigrinya are easier to detect while Xhosa and Zulu are more challenging. A greater confusability is seen for some language pairs. When speech duration increased, system performance significantly increased up to a certain duration, and then a diminishing return on system performance is observed afterward.
Yooyoung Lee\({}^{1}\), Craig Greenberg\({}^{1}\), Eliot Godard\({}^{1,*}\), Asad A. Butt\({}^{1,*}\), Elliot Singer\({}^{2}\), Trang Nguyen\({}^{2}\), Lisa Mason\({}^{3}\), Douglas Reynolds\({}^{3}\)\({}^{1}\)NIST ITL/IAD/Multimodal Information Group, MD, USA
\({}^{2}\)MIT Lincoln Laboratory, Lexington, MA, USA
\({}^{3}\)U.S. Department of Defense, MD, USA [email protected]
**Index Terms**: human language technology, LRE, language recognition, language detection, speech technology performance evaluation
## 1 Introduction
The 2022 NIST Language Recognition Evaluation (LRE), held in fall of 2022, was the latest in an ongoing series of language recognition evaluations conducted by NIST since 1996 [1]. The primary objectives of the LRE series are to: 1) advance language recognition technologies with innovative ideas, 2) facilitate the development of language recognition technology by providing data and research direction, and 3) measure the performance of the current state-of-the-art technology. Figure 1 shows the number of target languages and participants (based on sites) for all NIST LREs.
LRE22 was conducted entirely online using a web-based platform like LRE15 [2] and LRE17 [3, 4]. The updated LRE22 web-platform1 supported a variety of evaluation activities, such as registration, data license submission, data distribution, system output submission and validation/scoring, and system description/presentation uploads. A total of 16 teams from 21 organizations in 13 different countries made submissions for LRE22. Figure 2 displays a world map with a heatmap representing the number of participating sites per country. Since two teams did not submit valid system descriptions, analysis considering only 14 teams is presented in this paper. It should be noted that all participant information, including country, was self-reported.
Footnote 1: [https://lre.nist.gov](https://lre.nist.gov)
## 2 Task
The general task in the NIST LREs is language detection, i.e., to automatically determine whether a particular target language was spoken in a given test segment of speech. Since LRE11 [5], the focus of the language detection task had turned to distinguishing between closely related, and sometimes mutually intelligible, languages. However, LRE22 introduced a new emphasis on distinguishing between African languages, including low resource languages. Table 1 shows the 14 target languages included in LRE22. Similar to LRE17, LRE22 participants were required to provide a 14-dimensional vector of log-likelihood scores corresponding to the languages in Table 1. Unlike LRE17, language clusters were not considered in this evaluation; a language cluster is a group of two or more closely related language varieties spoken within the same speech community [6].
Like LRE17, there were two training conditions in LRE22: _fixed_ and _open_. For the _fixed_ training condition, participants were restricted to use only a limited pre-specified set of data
Figure 1: Language and participant count for the NIST LREs
Figure 2: Heatmap of the world showing the number of LRE22 participating sites per country.
for system training and target model development. For the _open_ training condition, participants were allowed to utilize unlimited amounts of publicly available and/or proprietary data for their system training and target model development. To facilitate more meaningful cross-system comparisons, LRE22 participants were required to provide submissions to the _fixed_ condition while participation in the optional _open_ condition was strongly encouraged to understand the impacts that larger amounts of training and development data have on system performance. In order to encourage participation in the _open_ training condition, the deadline for this condition was made one week later than the required _fixed_ training condition submission deadline. A total of 65 valid submissions were received, 40 for the _fixed_ training condition and 25 for the _open_ condition. LRE participants were required to specify one submission as _primary_ for each training condition they took part in, while all other systems submitted were considered _alternate_.
## 3 Data
This section provides a brief description of data used in LRE22 for training, development (_dev_), and evaluation (_test_) sets, along with the associated metadata.
### Training set
As mentioned in Section 2, there were two training conditions in LRE22. The _fixed_ condition limited the system training and development data to the following specific data sets provided to participants by the Linguistic Data Consortium (LDC): 2017 NIST LRE _dev_ set and previous NIST LRE training data (LDC2022E16), 2017 NIST LRE _test_ set (LDC2022E17), 2022 NIST LRE _dev_ set (LDC2022E14). The VoxLingua107 data set [7] was also permitted for use in the _fixed_ condition. The _open_ training condition removed the limitations of the _fixed_ condition. In addition to the data listed in the _fixed_ condition, participants could use any additional data to train and develop their system, including proprietary data and data that are not publicly available. LDC also made selected data from the IARPA Babel Program [8] available to participants to be used in the _open_ training condition.
### Development and test sets
The development (_dev_) set is normally used to build/optimize a system model during the development process, while the evaluation (_test_) set is used to evaluate the performance of the system model. The speech segments in the LRE22 _dev_ and _test_ sets were selected from data sets collected by the Linguistic Data Consortium (LDC) to support LR technology evaluations; namely the Maghrebi Language Identification (MAGLIC), Speech Archive of South African Languages (SASAL), and Low Resource African Languages (LRAL) corpora. The MAGLIC corpus was a CTS-only collection based in Tunisia and includes four regional language varieties spoken in North Africa: Algerian Arabic, Libyan Arabic, Tunisian Arabic, and North African French. The SASAL corpus was a CTS and BNBS collection located in South Africa and contains several African language varieties, a subset of which were included in LRE22: Afrikaans, Ndebele, Tsonga, Venda, Xhosa, and Zulu, as well as South African English and Indian-accent South African English. The LRAL corpus was a BNBS collection based in Ethiopia, and, of the languages in LRAL, two were selected for inclusion in LRE22: Oromo and Tigrinya.
All audio data provided was sampled at 8 kHz, a-law encoded, and formatted as SPHERE [9] files. When the source audio recordings were higher bandwidth or encoded differently, they were downsampled and transcoded to 8-kHz a-law. Unlike in previous LREs, the amount of speech in the LRE22 segments was uniformly sampled between approximately 3 and 35 seconds, as determined by an automatic speech activity detector. Figure 3 shows a stacked histogram for the _dev_ and _test_ sets. The _dev_ set consisted of 300 segments per target language while the _test_ set contained a total of 26,473 segments ranging from 383 to 2,769 segments across the target languages.
### Metadata
The metadata collected by LDC can be categorized into audio- and audit-related metadata. The audio metadata indicates information related to the audio recording or segment, such as speech duration, data source type (i.e., either CTS or BNBS), and source file (i.e., the original recording from which the audio segment was extracted). The audit metadata reflects a human auditor's judgement of the speech, having listened to an audio recording, such as whether the recording contained a single speaker, if the person speaking was a native speaker, the
\begin{table}
\begin{tabular}{|l|l||l|l|} \hline
**Language** & **Code** & **Language** & **Code** \\ \hline Afrikaans & afr-afr & Ndebele & nbl-nbl \\ \hline Tunisian Arabic & ara-aeb & Oromo & orm-orm \\ \hline Algerian Arabic & ara-arq & Tigrinya & tir-tir \\ \hline Libyan Arabic & ara-ayl & Tsonga & tso-tso \\ \hline South African English & eng-ens & Venda & ven-ven \\ \hline Indian-accent South African English & eng-iaf & Xhosa & xho-xho \\ \hline North African French & fra-ntf & Zulu & zul-zul \\ \hline \end{tabular}
\end{table}
Table 1: LRE22 target languages
Figure 4: System performance (actual and minimum costs) on primary submissions under the fixed training condition
Figure 3: Distribution of speech segments per target language for both dev and test sets
speech clarity, the speaker sex, or if the recording took place in a noisy environment. In this paper, we limit our analyses to data source type and speech duration.
## 4 Performance Measure
As stated in the Section 2, LRE22 participants were required to provide a 14-dimensional vector of log-likelihood scores for the 14 target languages (see Table 1 for the LRE22 target languages). Unlike LRE17, language clusters were not considered in this evaluation. Pair-wise performance was computed for all target/non-target language pairs. A decision threshold derived from log-likelihood ratios was used to determine the number of missed detections and false alarms, computed separately for each target language. The missed detections (Misses) indicate the segments that are the target language, but are not predicted to be, while the false alarms (FAs) indicate the segments that are falsely identified as the target language. The probabilities of missed detections (\(P_{Miss}\)) and false alarms (\(P_{FA}\)) are then combined using a linear cost function [10]:
\[C(L_{T},L_{N})= C_{Miss}\times P_{Target}\times P_{Miss}(L_{T})+\] \[C_{FA}\times(1-P_{Target})\times P_{FA}(L_{T},L_{N}) \tag{1}\]
where \(L_{T}\) and \(L_{N}\) are target and non-target languages, respectively. Here, \(C_{Miss}\) (cost of a missed detection), \(C_{FA}\) (cost of a false alarm), and \(P_{Target}\) (the _a priori_ probability of the specified target language) are application-motivated cost model parameters. Two sets of cost-function parameters were used in LRE22: the first set of parameters provides equal weighting to the costs of errors (\(C_{Miss}=C_{FA}=1\)) and a target probability of 0.5, while the second set of parameters changed the target probability to 0.1. The final metric, \(C_{Primary}\), consisted of the mean value of the costs using the two different cost function parameters, normalized by dividing by the cost of a "no information" system. Costs using thresholds that minimize the Bayes risk, \(actC_{Primary}\), as well as using thresholds that minimize the empirical cost, \(minC_{Primary}\), were computed. We refer readers to the LRE22 evaluation plan [10] for details of the performance measures.
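The following sketch implements the metric as described above (our reading of the evaluation plan [10], not NIST's official scoring software):

```python
import numpy as np

def pair_cost(p_miss, p_fa, p_target, c_miss=1.0, c_fa=1.0):
    # Equation (1): cost for a single (target, non-target) language pair.
    return c_miss * p_target * p_miss + c_fa * (1.0 - p_target) * p_fa

def c_primary(p_miss, p_fa):
    """p_miss: per-language miss rates, shape (L,); p_fa: pairwise false
    alarm rates, shape (L, L). Averages pair costs for P_target in
    {0.5, 0.1}, each normalised by the best 'no information' system."""
    L = len(p_miss)
    costs = []
    for p_target in (0.5, 0.1):
        c = np.mean([pair_cost(p_miss[t], p_fa[t, n], p_target)
                     for t in range(L) for n in range(L) if n != t])
        c_default = min(pair_cost(1.0, 0.0, p_target),   # always reject
                        pair_cost(0.0, 1.0, p_target))   # always accept
        costs.append(c / c_default)
    return float(np.mean(costs))

rng = np.random.default_rng(0)
pm = rng.uniform(0.0, 0.3, size=14)          # toy per-language miss rates
pf = rng.uniform(0.0, 0.05, size=(14, 14))   # toy pairwise false alarms
print(round(c_primary(pm, pf), 3))
```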
## 5 Results and Analyses
A total of 14 teams from academic and industrial sectors successfully completed LRE22. For both the _fixed_ and _open_ training conditions, the teams were allowed to have one _primary_ submission and one or more _alternate_ submissions. In this section, we present a summary of results and key findings on the _primary_ submissions using the performance metrics defined in Section 4.
Figure 4 illustrates system performance for all the _primary_ submissions under the _fixed_ training condition. The x-axis shows anonymized team names and the y-axis shows \(C_{Primary}\) values for both the actual and minimum costs (N.B., a lower \(C_{Primary}\) value indicates better performance). The orange dashed line indicates the actual cost, \(actC_{Primary}\), and the blue one the minimum cost, \(minC_{Primary}\), for a reference system; we used an off-the-shelf algorithm as a reference to validate the LRE22 data construction and evaluation process. The reference system was trained and fine-tuned only on VoxLingua107 and the LRE22 development set. The shaded color on each team's bar indicates the difference between \(actC_{Primary}\) and \(minC_{Primary}\), which represents a calibration error. In Figure 4, we observe that, given the primary submissions under the _fixed_ condition, the \(C_{Primary}\) values range from 0.11 to 0.73 across all the teams. It is observed that the top-performing systems (e.g., T1-T4) have small calibration errors (i.e., the absolute difference between the actual and minimum costs is relatively small) while a few teams (e.g., T5, T7, T11 and T12) are less well-calibrated.
As described in Section 2, the _fixed_ training condition is required while the _open_ one is optional; 7 out of the 14 teams submitted their system outputs to the _open_ training condition. Figure 5 illustrates a performance comparison of the training conditions (_fixed_ vs _open_) for these seven teams only (ordered by _open_ system performance). The result shows that the _open_ condition submissions generally outperform the _fixed_ condition submissions across the teams (except T9), and a calibration error is observed for team T7 under the _open_ training condition.
To understand the variability of language-level system performance and language detection difficulty, Figure 6 illustrates a box plot of the primary submission performance under the _fixed_ training condition. The x-axis is a team name (ordered by median), the y-axis is the actual cost (\(actC_{Primary}\)), and each point represents a target language. The black line within a box is the median, the box edges represent the lower and upper quartiles, and the whiskers extending from the box indicate variability outside the upper and lower quartiles. We observe a high dispersion of language performance for a few teams such as T4, T5, and T9. Overall, the _Oromo (orm-orm)_ and _Tigrinya (tir-tir)_ points marked in blue are located in the bottom part of Figure 6 (easier to detect) while _Xhosa (xho-xho)_ and _Zulu (zul-zul)_ are in the top (harder to detect); a similar trend is observed across the teams.
To examine language-pair confusability, we conducted data analysis using heatmap confusion matrices as shown in Figure 7. The axes are language codes.
Figure 5: A performance comparison of the fixed and open training conditions
Figure 6: A language-level performance on primary submissions under the fixed training condition
The diagonal values from upper-left to bottom-right are \(P_{Miss}\) (false reject rates) and the off-diagonal values are \(P_{FA}\) (false alarm rates). A higher false alarm probability implies a potential confusability for that language pair. For simplicity, results for \(P_{Target}=0.5\) for the four leading systems are demonstrated using heatmap confusion matrices. Given the _test_ set and systems, a higher confusability is observed for three clusters of language pairs as follows: 1) among the Arabic varieties (ara-aeb, ara-arq, ara-ayl), 2) between South African English (eng-ens) and Indian-accent South African English (eng-iaf), and 3) among Ndebele (nbl-nbl), Tsonga (tso-tso), Venda (ven-ven), Xhosa (xho-xho) and Zulu (zul-zul).
To gain insight into how metadata variables (i.e., factors) affect system performance, we conducted experiments given the metadata listed in Section 3.3. For simplicity, the following analyses are demonstrated using _data source type_ and _speech duration_ only. The LRE22 data was collected in two primary genres, namely, conversational telephone speech (CTS) and broadcast narrowband speech (BNBS), which we call _data source type_. Figure 8 shows system performance (\(actC_{Primary}\)) partitioned by _data source type_ (CTS vs BNBS) for all the _primary_ submissions under the _fixed_ training condition. The top-left pie chart is a distribution of CTS and BNBS in the _test_ set, which is imbalanced. The bar plot shows a performance comparison between CTS (blue) and BNBS (orange) across all the teams. The results indicate that, given the imbalanced distribution, CTS is more challenging and that _data source type_ has a strong effect on system performance; a similar trend is observed across the systems.
Durations of _test_ set segments varied between 3s and 35s of speech that had been randomly sampled and extracted from longer recordings, as determined by an automatic Speech Activity Detector (SAD), which we call _SAD duration_. Figure 9(a) shows a distribution of _SAD duration_ for the _test_ set and Figure 9(b) shows the performance of a top-performing system by _SAD duration_. Given the _test_ set and systems, it is seen that when _SAD duration_ increases, \(actC_{Primary}\) significantly decreases up to a certain duration (between 15s and 20s). After that, a diminishing return on system performance improvement is observed across the systems.
## 6 Conclusions
We presented a summary of the 2022 NIST Language Recognition Evaluation with an emphasis on low resource languages and random duration of speech segments.
The results showed that almost no calibration error was observed for the top-performing systems for both the _fixed_ and _open_ training conditions. Overall, the submissions under the _open_ training condition had better performance compared to the _fixed_ condition submissions, with only one exception. Given the _test_ set and _primary_ systems under the _fixed_ training condition, we found that Oromo and Tigrinya were easier to detect while Xhosa and Zulu were harder to detect. A greater confusability was observed for the language pairs 1) among Zulu, Xhosa, Ndebele, Tsonga, and Venda, 2) between South African and Indian-accent South African English, and 3) among the Tunisian, Algerian, and Libyan Arabic languages. Some of the metadata, such as _data source type_ and _SAD duration_, had a significant effect on system performance for all systems. In terms of _SAD duration_, when speech duration increased, system performance significantly increased up to a certain duration, and then we observed a diminishing return on system performance afterward.
## 7 Disclaimer
The results presented in this paper are not to be construed or represented as endorsements of any participant's system, methods, or commercial product, or as official findings on the part of NIST or the U.S. Government.
The work of MIT Lincoln Laboratory (MITLL) is sponsored by the Department of Defense under Air Force Contract No. FA8702-15-D-0001. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the U.S. Air Force.
Figure 8: A data source type distribution and effect on systems
Figure 7: Language confusability of the leading systems
Figure 9: SAD duration effect on system performance (a) SAD duration distribution for the dev and test set (b) T1 system performance vs. SAD duration. |
2309.09261 | Leveraging Large Language Models for Sequential Recommendation | Sequential recommendation problems have received increasing attention in
research during the past few years, leading to the inception of a large variety
of algorithmic approaches. In this work, we explore how large language models
(LLMs), which are nowadays introducing disruptive effects in many AI-based
applications, can be used to build or improve sequential recommendation
approaches. Specifically, we devise and evaluate three approaches to leverage
the power of LLMs in different ways. Our results from experiments on two
datasets show that initializing the state-of-the-art sequential recommendation
model BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20%
compared to the vanilla BERT4Rec model. Furthermore, we find that a simple
approach that leverages LLM embeddings for producing recommendations, can
provide competitive performance by highlighting semantically related items. We
publicly share the code and data of our experiments to ensure reproducibility. | Jesse Harte, Wouter Zorgdrager, Panos Louridas, Asterios Katsifodimos, Dietmar Jannach, Marios Fragkoulis | 2023-09-17T12:53:53Z | http://arxiv.org/abs/2309.09261v1 | # Leveraging Large Language Models for Sequential Recommendation
###### Abstract.
Sequential recommendation problems have received increasing attention in research during the past few years, leading to the inception of a large variety of algorithmic approaches. In this work, we explore how large language models (LLMs), which are nowadays introducing disruptive effects in many AI-based applications, can be used to build or improve sequential recommendation approaches. Specifically, we devise and evaluate three approaches to leverage the power of LLMs in different ways. Our results from experiments on two datasets show that initializing the state-of-the-art sequential recommendation model BERT4Rec with embeddings obtained from an LLM improves NDCG by 15-20% compared to the vanilla BERT4Rec model. Furthermore, we find that a simple approach that leverages LLM embeddings for producing recommendations, can provide competitive performance by highlighting semantically related items. We publicly share the code and data of our experiments to ensure reproducibility.1
Footnote 1: [https://github.com/dh-r/LLM-Sequential-Recommendation](https://github.com/dh-r/LLM-Sequential-Recommendation)
**ACM Reference Format:**
Jesse Harte, Wouter Zorgdrager, Panos Louridas, Asterios Katsifodimos, Dietmar Jannach, and Marios Fragkoulis. 2023. Leveraging Large Language Models for Sequential Recommendation. In _Seventeenth ACM Conference on Recommender Systems (RecSys '23), September 18-22, 2023, Singapore, Singapore._ ACM, New York, NY, USA, 9 pages. [https://doi.org/10.1145/3604915.3610639](https://doi.org/10.1145/3604915.3610639)
## 1. Introduction
Sequential recommendation problems have received increased interest recently (Sutton et al., 2017; Wang et al., 2018). In contrast to the traditional, sequence-agnostic matrix-completion setup (Sutton et al., 2017), the problem in sequential recommendation is to predict the next user interest or action, given a sequence of past user interactions. Practical applications of sequential recommendation include next-purchase prediction, next-track music recommendation, or next Point-of-Interest suggestions for tourism. Due to their high practical relevance, a multitude of algorithmic approaches have been proposed in the past few years (Groff et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018), including approaches that utilize side information about the items, such as an item's category (Sutton et al., 2017; Wang et al., 2018).
From a technical perspective, the sequential recommendation problem shares similarities with the next word prediction problem (Groff et al., 2018; Wang et al., 2018). Under this light, we can observe a parallel between research in Natural Language Processing (NLP) and sequential recommendation, where novel recommendation models are inspired by NLP models (Chen et al., 2018). GRU4Rec (Groff et al., 2018) adopted the Gated Recurrent Unit (GRU) mechanism from (Chen et al., 2018), SASRec (Wang et al., 2018) used the transformer architecture from (Wang et al., 2018),
and BERT4Rec [35] adopted BERT [7]. The influence of NLP research on sequential recommendation models extends naturally to Large Language Models (LLMs). LLMs, in particular ones based on Generative Pretrained Transformers [32], are exhibiting disruptive effects in various AI-based applications with their semantically rich and meaningful responses.
However, limited research exists so far on leveraging the inherent semantic information of LLMs, which the abovementioned approaches lack, for sequential recommendation problems. A number of recent works in fact started to explore the potential of relying on LLMs for recommendation tasks; see [27; 40] for recent surveys. Here, we extend this line of research for sequential recommendation problems, providing the following contributions and insights.
* We devise three orthogonal methods of leveraging LLMs for sequential recommendation. In our first approach (LLMSeqSim), we retrieve a semantically-rich embedding from an existing LLM (from OpenAI) for each item in a session. We then compute an aggregate session embedding to recommend catalog products with a similar embedding. In the second approach (LLMSeqPrompt), we fine-tune an LLM with dataset-specific information in the form of prompt-completion pairs and ask the model to produce next item recommendations for test prompts. Finally, our third approach (LLM2BERT4Rec) consists of initializing existing sequential models with item embeddings obtained from an LLM.
* Experiments on two datasets, including a real-world dataset from Delivery Hero, reveal that initializing a sequential model with LLM embeddings is particularly effective: applying it to the state-of-the-art model BERT4Rec improves accuracy in terms of NDCG by 15-20%, making it the best-performing model in our experiments.
* Finally, we find that in certain applications simply using LLM embeddings to find suitable items for a given session (LLMSeqSim) can lead to state-of-the-art performance.
## 2. Background & Related Work
The recent developments in LLMs have taken the world by surprise. Models like OpenAI GPT [4], Google BERT [7], and Facebook LLaMA [36], which employ deep transformer architectures, demonstrate how innovations in NLP can reshape mainstream online activities, such as search, shopping, and customer care. Inevitably, research in recommender systems is significantly impacted by the developments in the area of LLMs as well. According to recent surveys [27; 40], LLMs are mainly utilized for recommendation problems in two ways: by providing embeddings that can be used to initialize existing recommendation models [29; 39; 43], and by producing recommendations leveraging their inherent knowledge encoding [2; 13; 22]. LLMs as recommendation models can provide recommendations given _a)_ only a task specification (zero-shot), _b)_ a few examples given inline to the prompt of a task (few-shot), or _c)_ after fine-tuning the model's weights for a task given a set of training examples [4]. This incremental training process deviates from typical recommendation models, which have to be trained from zero on domain data. In fact, LLMs show early indications of adaptability to different recommendation domains with modest fine-tuning [15; 16]. Finally, LLMs have been applied in various recommendation tasks, such as rating prediction [25], item generation [26], and reranking [17] across domains [29; 39].
In this work we explore the potential of using LLMs for sequential recommendation problems [20]. In short, in sequential recommendation problems, we consider as input a sequence of user interactions \(S^{u}=(S^{u}_{1},S^{u}_{2},...,S^{u}_{n})\), for user \(u\), where \(n\) is the length of the sequence and \(S^{u}_{i}\) are individual items. The aim is to predict the next interaction of the given sequence. Besides the recent sequential recommendation models mentioned in the introduction [14; 21; 35], in earlier works, the sequential recommendation problem has been modelled as a Markov Chain [9] or a Markov Decision Process [34]. Neighborhood-based approaches, such as SKNN [19], have also been proposed.
Early research work regarding LLMs for sequential recommendation problems has shown mixed results. The very recent VQ-Rec model employs a transformer architecture and applies a novel representation scheme to embeddings retrieved from BERT in order to adapt to new domains. VQ-Rec outperforms a number of sequential recommendation models across datasets of different domains, and it has been shown that SASRec with LLM embeddings is better than the original SASRec method for half of the datasets representing different domains. Finally, in an upcoming work, SASRec with LLM embeddings is shown to improve over SASRec. These recent approaches differ from our work in particular in terms of the goals they pursue: VQ-Rec targets cross-domain recommendations with a novel item representation scheme, while the upcoming work evaluates whether recommendation models leveraging different modalities perform better than existing recommendation models that rely on item identifiers.
The work presented in this paper complements these recent lines of research and proposes and evaluates three alternative ways of leveraging LLMs for sequential recommendation. Differently from earlier approaches, our work shows that initializing an existing sequential model with LLM-based embeddings is highly effective and helps to outperform existing state-of-the-art models. In addition, we find that retrieving relevant items solely based on LLM embedding similarity can lead to compelling recommendations depending on the dataset.
## 3. Three LLM-based approaches for sequential recommendations
In this section, we describe the three technical approaches sketched in Section 1.
### LLMSeqSim: Recommending Semantically Related Items via LLM Embeddings
With this first approach, our goal is to explore if recommendations can benefit from a holistic notion of similarity provided by LLMs. To achieve this, we leverage _LLM embeddings_ to produce recommendations in three steps. First, we query the text-embedding-ada-002\({}^{2}\) OpenAI embedding model with the names of the products in the item catalog and retrieve their embeddings. Second, we compute a session embedding for each session in our test set by combining the embeddings of the individual products in the session. Here, we try different combination strategies: _a)_ the average of the product embeddings, _b)_ a weighted average using linear and exponential decay functions depending on the position of the item in the session, and _c)_ only the embedding of the last product.\({}^{3}\) Third, we compare the session embedding to the embeddings of the items in the product catalog using cosine, Euclidean, and dot product similarity.\({}^{4}\) Finally, we recommend the top-\(k\) products from the catalog with the highest embedding similarity to the session embedding, as sketched below.
Footnote 2: https://platform.openai.com/docs/guides/embeddings/second-generation-models
Footnote 3: We also tried to create an aggregated session embedding by concatenating the plain product names and then querying the OpenAI embeddings API. This however led to worse results.
Footnote 4: The choice of the similarity measure did not significantly impact the results.
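For concreteness, a minimal sketch of this retrieval pipeline is given below; the array layout, the decay constant, and the cosine variant are illustrative choices rather than the exact experimental configuration.

```python
import numpy as np

def session_embedding(item_embs: np.ndarray, decay: float = 0.8) -> np.ndarray:
    """Combine item embeddings (n_items x dim, in session order) into one
    session embedding via an exponentially decayed average, so that
    later items in the session receive larger weights."""
    n = item_embs.shape[0]
    weights = decay ** np.arange(n - 1, -1, -1)   # last item gets weight 1
    return weights @ item_embs / weights.sum()

def recommend(session_emb: np.ndarray, catalog_embs: np.ndarray, k: int = 20):
    """Return the indices of the top-k catalog items by cosine similarity."""
    cat = catalog_embs / np.linalg.norm(catalog_embs, axis=1, keepdims=True)
    ses = session_emb / np.linalg.norm(session_emb)
    return np.argsort(-(cat @ ses))[:k]
```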
### LLMSeqPrompt: Prompt-based Recommendations by a Fine-Tuned LLM
In this approach, we inject domain knowledge into the collective information that a base LLM incorporates, with the goal of increasing the quality of the recommendations returned by an LLM that is given information about an ongoing session in the form of a prompt. To this end, we fine-tune an OpenAI ada model on training samples consisting of a prompt (the input) and a completion (the intended output). In our case, the prompt is a session, which contains a list of product names except for the last product, and the completion is the name of the last product in the same session; see Figure 1.
To optimize performance, we fine-tune the model until the validation loss converges. After training, we provide the prompts of the sessions in the test set to the fine-tuned model to obtain recommendations. We note that we make no strong assumption regarding the order of the returned recommendations. Therefore, we use the tendency of the model
to provide duplicate recommendations as a proxy of its confidence and rank the recommendations by frequency of appearance. Then, to create a full slate of unique recommendations, we retrieve the embedding of each duplicate product using the OpenAI embeddings API and take the catalog's product that is closest in terms of embedding similarity using the dot product measure. Finally, we note that the fine-tuned LLM, being a generative model, may also return hallucinated products, which we map to catalog products using the same method as for duplicate products.
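A simplified sketch of this post-processing step is shown below; `embed_fn` and the data layout are assumptions for illustration, and the handling of duplicates is condensed relative to the description above.

```python
import numpy as np
from collections import Counter

def build_slate(completions, catalog_names, catalog_embs, embed_fn, k=20):
    """Turn raw completions sampled from the fine-tuned model into a slate
    of unique catalog items. Duplicates serve as a confidence proxy, so
    names are ranked by frequency of appearance; hallucinated names are
    grounded to the closest catalog item by dot-product similarity."""
    slate = []
    for name, _ in Counter(completions).most_common():
        if name in catalog_names:
            idx = catalog_names.index(name)
        else:  # hallucinated product: map it to the nearest catalog item
            idx = int(np.argmax(catalog_embs @ embed_fn(name)))
        if idx not in slate:
            slate.append(idx)
        if len(slate) == k:
            break
    return slate
```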
### LLM2BERT4Rec: Recommending with an LLM-enhanced Sequential Model
In our third approach, our goal is to leverage the semantically-rich item representations provided by an LLM to enhance an existing sequential recommendation model. Specifically, in our work we focus on BERT4Rec[35], a state-of-the-art transformer-based model, which employs the transformer architecture [37] of BERT[7].
BERT's transformer architecture consists of an embedding layer, a stack of encoder layers, and a projection head. Furthermore, BERT features a masked language model training protocol, which involves masking items at random positions and letting the model predict their true identity. Initially, the embedding layer embeds an input sequence of (potentially masked) item IDs into a sequence of embeddings using both the item ID and the item position. Then the transformer encoder layers process the embedding sequence using a multi-head attention module and a feed-forward network shared across all positions. Finally, the projection head projects the embeddings at each masked position to a probability distribution in order to obtain the true identity of the masked item. The projection head reuses the item embeddings of the embedding layer to reduce the model's size and to avoid overfitting.
To allow BERT4Rec to leverage the rich information encoded in LLMs, we initialize BERT4Rec's item embeddings using the LLM embeddings described in Section 3.1. In order to align the embedding dimension of the LLM embeddings (1536) with the configured dimension of BERT4Rec's embedding layer (e.g., 64), we employ Principal Components Analysis (PCA) to get 64 principal components of the LLM embeddings, which we then use to initialize the item embeddings of BERT4Rec's embedding layer. Finally, we train the enhanced model the same way as our baseline BERT4Rec model.
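A minimal sketch of this initialization is shown below, assuming the LLM embedding matrix is row-aligned with the model's item IDs; the attribute name `item_embedding` is illustrative.

```python
import torch
from sklearn.decomposition import PCA

def llm_init_weights(llm_embs, model_dim=64):
    """Reduce the 1536-dimensional LLM item embeddings to the configured
    BERT4Rec embedding dimension via PCA and return an init tensor."""
    components = PCA(n_components=model_dim).fit_transform(llm_embs)
    return torch.tensor(components, dtype=torch.float32)

# Hypothetical usage: overwrite the randomly initialized item embeddings,
# then train exactly as for the baseline BERT4Rec model.
# model.item_embedding.weight.data.copy_(llm_init_weights(llm_embs))
```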
## 4. Experimental Evaluation
In this section, we describe our experimental setup (Section 4.1) and the results of our empirical evaluation (Section 4.2).
### Experimental setup
Datasets and Data SplittingWe use the public Amazon Beauty [12] dataset and a novel, real-world e-commerce dataset from Delivery Hero\({}^{5}\) for our experiments. The Beauty dataset contains product reviews and ratings from Amazon. In line with prior research [1], we pre-processed the dataset to include at least five interactions per user and item (p-core = 5). The Delivery Hero dataset contains anonymous QCommerce sessions for dark store and local shop orders. To better simulate a real-world setting, we did not preprocess this dataset, except that we removed sessions
Figure 1. Example prompt and completion for fine-tuning from the Beauty dataset
with only one interaction from the test set. QCommerce is a segment of e-Commerce focusing on fast delivery times on the last mile. Dataset statistics are given in Table 1. To create a train and test set in a sound way, we first split a dataset containing sessions temporally such that all test sessions succeed train sessions in time. Then, in the test set, we adopt the leave-one-out approach followed by [21; 35], where all but the last interaction of each session represent the prompt, while the last interaction serves as the ground truth.
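Expressed compactly, the split works as follows; `sessions` is assumed here to be a list of `(timestamp, [item ids])` pairs, which is an illustrative format rather than the exact pipeline.

```python
def temporal_split(sessions, test_frac=0.1):
    """Temporal train/test split with leave-one-out test targets:
    every test session starts after all train sessions in time."""
    sessions = sorted(sessions, key=lambda s: s[0])     # order by timestamp
    cut = int(len(sessions) * (1 - test_frac))
    train = [items for _, items in sessions[:cut]]
    test = [(items[:-1], items[-1])                     # (prompt, ground truth)
            for _, items in sessions[cut:]
            if len(items) > 1]                          # drop 1-item sessions
    return train, test
```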
MetricsWe use the standard ranking accuracy metrics NDCG, MRR, and HitRate at the usual cut-off lengths of 10 and 20. Furthermore, we consider the following _beyond-accuracy_ metrics to obtain a more comprehensive picture of the performance of the different algorithms: catalog coverage, serendipity, and novelty. _Catalog coverage_ represents the fraction of catalog items that appeared in at least one top-n recommendation list of the users in the test set [18]. _Serendipity_ measures the average number of correct recommendations for each user that are not recommended by a popularity baseline [10]. _Novelty_ computes the negative log of the relative item popularity, or self-information [45].
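The three beyond-accuracy metrics translate directly into code; the sketch below follows the definitions above, with illustrative data structures (lists of recommendation lists and a relative-popularity dictionary).

```python
import numpy as np

def catalog_coverage(rec_lists, n_items):
    """Fraction of catalog items that appear in at least one top-n list."""
    return len(set().union(*map(set, rec_lists))) / n_items

def serendipity(rec_lists, ground_truths, popular_items):
    """Average number of correct recommendations per user that a
    popularity baseline would not have recommended."""
    pop = set(popular_items)
    hits = [len((set(recs) - pop) & {gt})
            for recs, gt in zip(rec_lists, ground_truths)]
    return float(np.mean(hits))

def novelty(rec_lists, rel_popularity):
    """Mean self-information -log2 p(i) over all recommended items,
    where p(i) is the relative popularity of item i."""
    info = [-np.log2(rel_popularity[i]) for recs in rec_lists for i in recs]
    return float(np.mean(info))
```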
ModelsWe include session-based algorithms of different families, GRU4Rec[14] and SKNN[19], as well as two state-of-the-art sequential models, BERT4Rec[35] and SASRec[21]. We tested all variants of the SKNN nearest-neighbor method proposed in [30] and report the results in the online material. In addition, we include the three LLM-based approaches proposed in Section 3. Finally, we include a popularity-based baseline (MostPopular) in the experiments.
Hyperparameter TuningWe systematically tuned all models (except LLMSeqSim and LLMSeqPrompt) on three validation folds with the Tree Parzen Estimator (TPE) sampler [3], and used the average NDCG@20 across the folds as the optimization goal. For LLMSeqPrompt, we applied manual hyperparameter search. The examined hyperparameter ranges and optimal values for each dataset are reported in the online material.
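Such a search can be set up, for example, with Optuna's TPE sampler, roughly as below; `train_and_eval` and the search space shown are illustrative placeholders, and the actual ranges are given in the online material.

```python
import optuna

def objective(trial):
    params = {
        "hidden_dim": trial.suggest_categorical("hidden_dim", [32, 64, 128]),
        "lr": trial.suggest_float("lr", 1e-4, 1e-2, log=True),
        "dropout": trial.suggest_float("dropout", 0.0, 0.5),
    }
    # Optimization goal: average NDCG@20 across the three validation folds.
    return sum(train_and_eval(params, fold) for fold in range(3)) / 3

study = optuna.create_study(direction="maximize",
                            sampler=optuna.samplers.TPESampler(seed=42))
study.optimize(objective, n_trials=100)
```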
### Results and Discussion
Table 2 and Table 3 show the results obtained for the Amazon Beauty and the Delivery Hero dataset on the hidden test set, respectively. We report the best results of 5 runs. The tables are sorted according to NDCG@20.
| **Dataset** | **# sessions** | **# items** | **# interactions** | **Avg. length** | **Density** |
| --- | --- | --- | --- | --- | --- |
| Beauty 5-core | 22,363 | 12,101 | 198,502 | 8.9 | 0.073% |
| Delivery Hero | 258,710 | 38,246 | 1,474,658 | 5.7 | 0.015% |

Table 1. Dataset statistics
Figure 2. Distribution of items ranked by popularity (left) and histogram of session length (right) for the datasets
_Accuracy Results._ The highest values in terms of NDCG@20 are obtained by **LLM2BERT4Rec** for both datasets. In both cases, the gains obtained by using LLM-based item embeddings are substantial, demonstrating the benefits of relying on semantically-rich embeddings in this sequential model. The NDCG value increased by more than 20% for Beauty and over 15% on the Delivery Hero dataset.\({}^{6}\) To confirm that the semantics of the LLM embeddings is the driver of performance, we ran an experiment in which we permuted the item embeddings such that the embedding of each item is initialized to the principal components of the LLM embedding of another product from the catalog. The experiment maintains the statistical properties of the embeddings, but deprives the item embeddings of the semantics of the LLM embeddings. The resulting model exhibited worse performance than the baseline BERT4Rec model with randomly-initialized item embeddings, clearly showing that the performance improvement cannot be credited to the statistical properties of the embeddings.
Footnote 6: We also examined the value of LLM embeddings for the SASRec model, where we observed marked increases in the NDCG, but not to the extent that it outperformed LLM2BERT4Rec. We report these additional results in the online material.
The relative performance of **LLMSeqSim**, again considering NDCG values, varies across the two datasets. On the Beauty dataset, the model is highly competitive, with NDCG@20 values only being slightly lower than LLM2BERT4Rec. At shorter list lengths, i.e., at NDCG@10, the LLMSeqSim model even leads to the best performance for this dataset. Notably, the embedding combination strategy that led to the best results considered only the last item of the session (see Section 3.1). For the Delivery Hero dataset, in contrast, the picture is very different, and LLMSeqSim leads to quite poor performance, only outperforming the popularity-based baseline. We hypothesize that this phenomenon is a result of the quite different characteristics of the two datasets. For example, in Figure 2, we observe that many items in the real-world Delivery Hero dataset occur very infrequently. This may limit the capacity of LLMSeqSim to find similar items, given also the substantially broader item catalog in the Delivery Hero dataset. Furthermore, a manual inspection of a sample of test prompts, recommendations, and ground truths of the two datasets indicates that users in the Beauty dataset frequently rate items of a certain brand. Since brand names are part of the product names that are input to the LLM, recommending similar items may turn out to be particularly effective.
Looking at the other accuracy metrics (**Hit Rate** and **MRR**), we find that these are generally highly correlated with the NDCG results. A notable exception are the MRR values of the LLMSeqSim model and the V_SKNN approach on the Beauty dataset. While these two approaches lead to slightly inferior results at NDCG@20 and in particular also for HR@20, they are superior in terms of MRR. This means that these methods place the hidden target item higher up in the recommendation list in case the target item is included in the top 20. Similar observations regarding the good performance of some methods in terms of MRR on specific datasets were previously reported also in [30].
| **Model** | **nDCG@10** | **HR@10** | **MRR@10** | **CatCov@10** | **Seren@10** | **Novel@10** | **nDCG@20** | **HR@20** | **MRR@20** | **CatCov@20** | **Seren@20** | **Novel@20** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LLM2BERT4Rec | 0.041 | **0.076** | 0.030 | 0.180 | **0.072** | 11.688 | **0.051** | **0.118** | 0.033 | 0.260 | **0.110** | 11.888 |
| LLMSeqSim | **0.044** | 0.063 | **0.038** | **0.763** | 0.063 | **13.819** | 0.048 | 0.079 | **0.039** | **0.889** | 0.079 | **13.858** |
| V_SKNN | 0.041 | **0.071** | 0.033 | 0.673 | 0.069 | 12.241 | 0.047 | 0.095 | 0.034 | 0.889 | 0.091 | 12.492 |
| BERT4Rec | 0.034 | 0.067 | 0.024 | 0.231 | 0.064 | 12.293 | 0.043 | 0.103 | 0.027 | 0.312 | 0.098 | 12.423 |
| GRU4Rec | 0.027 | 0.051 | 0.020 | 0.145 | 0.047 | 11.409 | 0.035 | 0.082 | 0.022 | 0.214 | 0.074 | 11.597 |
| SASRec | 0.026 | 0.051 | 0.019 | 0.121 | 0.048 | 11.485 | 0.033 | 0.080 | 0.021 | 0.182 | 0.073 | 11.678 |
| LLMSeqPrompt | 0.025 | 0.045 | 0.019 | 0.500 | 0.044 | 13.001 | 0.030 | 0.064 | 0.020 | 0.688 | 0.063 | 13.361 |
| MostPopular | 0.005 | 0.010 | 0.003 | 0.001 | 0.001 | 9.187 | 0.006 | 0.018 | 0.003 | 0.002 | 0.001 | 9.408 |

Table 2. Evaluation results for the Amazon Beauty dataset
Interestingly, as also reported in earlier work [30], **nearest-neighbor** approaches can be quite competitive depending on the dataset. On Beauty, V_SKNN outperforms all of the more sophisticated neural models (BERT4Rec, GRU4Rec, SASRec) in all accuracy metrics except Hit Rate@20. On the Delivery Hero dataset, in contrast, the neural models perform better in all accuracy metrics except MRR and NDCG@10. Further inspection (see online material) showed that SKNN's performance drops as the length of sessions increases, while the performance of the other models remains stable.
The performance of the LLMSeqPrompt model again depends on the dataset. On the Beauty dataset, it leads to accuracy values that are often only slightly lower than SASRec, which is typically considered a strong state-of-the-art baseline. On the Delivery Hero dataset, in contrast, the drop in performance compared to the other models is substantial. Still, LLMSeqPrompt leads to accuracy values that are markedly higher than the popularity baseline. Given its versatility, ease of configuration and promising performance, LLMSeqPrompt merits further research.
Beyond-Accuracy ResultsWe make the following observations for **coverage**, **serendipity** and **novelty**. The LLMSeqSim model consistently leads to the best coverage and novelty. This is not too surprising, given the nature of the approach, which is solely based on embedding similarities. Unlike other methods that use collaborative signals, i.e., past user-item interactions, the general popularity of an item in terms of the amount of observed past interactions does not play a role in LLMSeqSim, neither directly nor implicitly. Thus, the model has no tendency to concentrate the recommendations on a certain subset of (popular) items. We recall that the used novelty measure is based on the popularity of the items in the recommendations. The serendipity results are largely aligned with the accuracy measures across the datasets. This generally confirms the value of personalizing the recommendations to individual user preferences, compared to recommending mostly popular items to everyone. We reiterate that our serendipity measure counts the fraction of correctly recommended items that would not be recommended by a popularity-based approach.
## 5. Conclusions
In this work, we devised and evaluated three approaches that leverage LLMs for sequential recommendation problems. A systematic empirical evaluation revealed that BERT4Rec initialized with LLM embeddings achieves the best performance for two datasets, and that the LLM-based initialization leads to a substantial improvement in accuracy. In our future work, we plan to investigate if our findings generalize to different domains, using alternative datasets with diverse characteristics. Furthermore, we will explore if using other LLMs, e.g., ones with different architectures and training corpora, will lead to similar performance gains, and we will study a hybrid of LLM2BERT4Rec with LLMSeqSim aimed at combining their accuracy and beyond-accuracy performance. Finally, it remains open whether passing other types of information besides product names, e.g., category information, to an LLM can help to further improve the performance of the models.
| **Model** | **nDCG@10** | **HR@10** | **MRR@10** | **CatCov@10** | **Seren@10** | **Novel@10** | **nDCG@20** | **HR@20** | **MRR@20** | **CatCov@20** | **Seren@20** | **Novel@20** |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LLM2BERT4Rec | **0.102** | **0.179** | **0.078** | 0.245 | **0.151** | 10.864 | **0.120** | **0.252** | **0.083** | 0.311 | **0.198** | 11.050 |
| BERT4Rec | 0.088 | 0.157 | 0.067 | 0.325 | 0.128 | 10.821 | 0.104 | 0.221 | 0.071 | 0.429 | 0.165 | 11.032 |
| GRU4Rec | 0.085 | 0.153 | 0.064 | 0.127 | 0.124 | 10.570 | 0.101 | 0.218 | 0.068 | 0.172 | 0.161 | 10.823 |
| SASRec | 0.084 | 0.149 | 0.065 | 0.170 | 0.120 | 10.674 | 0.100 | 0.212 | 0.069 | 0.229 | 0.156 | 10.913 |
| V_SKNN | 0.087 | 0.148 | 0.068 | 0.381 | 0.120 | 10.444 | 0.100 | 0.200 | 0.072 | 0.452 | 0.146 | 10.602 |
| LLMSeqPrompt | 0.063 | 0.116 | 0.047 | 0.400 | 0.107 | 12.048 | 0.070 | 0.144 | 0.049 | 0.611 | 0.123 | 13.788 |
| LLMSeqSim | 0.039 | 0.069 | 0.029 | **0.633** | 0.069 | **16.315** | 0.046 | 0.096 | 0.031 | **0.763** | **0.093** | **16.536** |
| MostPopular | 0.024 | 0.049 | 0.017 | 0.000 | 0.000 | 7.518 | 0.032 | 0.079 | 0.019 | 0.001 | 0.000 | 7.836 |

Table 3. Evaluation results for the Delivery Hero dataset |
2309.06152 | Distinguishing the importance of different charge trapping centers in
CaF2-based 2D material MOSFETs | Crystalline CaF2 is drawing huge attention due to its great potential of
being the gate dielectric of two-dimensional (2D) material MOSFETs. It is
deemed to be much superior to boron nitride and traditional SiO2 because of
its larger dielectric constant, wider band gap, and lower defect density.
Nevertheless, the CaF2-based MOSFETs fabricated in experiment still present
notable reliability issues, and the underlying reason remains unclear. Here we
studied the various intrinsic defects and adsorbates in CaF2/MoS2 and
CaF2/MoSi2N4 interface systems to reveal the most active charge trapping
centers in CaF2-based 2D material MOSFETs. An elaborate table comparing
the importance of different defects in both n-type and p-type devices is
provided. Most impressively, the oxygen molecules adsorbed at the interface or
surface, which are inevitable in experiments, are as active as the intrinsic
defects in channel materials, and they can even change the MoSi2N4 to p-type
spontaneously. These results mean that it is necessary to develop high-vacuum
packaging processes as well as to prepare high-quality 2D materials for better
device performance. | Zhe Zhao, Tao Xiong, Jian Gong, Yue-Yang Liu | 2023-09-12T11:52:04Z | http://arxiv.org/abs/2309.06152v1 | Distinguishing the importance of different charge trapping centers in CaF\({}_{2}\)-based 2D material MOSFETs
###### Abstract
Crystalline CaF\({}_{2}\) is drawing huge attention due to its great potential of being the gate dielectric of two-dimensional (2D) material MOSFETs. It is deemed to be much superior to boron nitride and traditional SiO\({}_{2}\) because of its larger dielectric constant, wider band gap, and lower defect density. Nevertheless, the CaF\({}_{2}\)-based MOSFETs fabricated in experiment still present notable reliability issues, and the underlying reason remains unclear. Here we studied the various intrinsic defects and adsorbates in CaF\({}_{2}\)/MoS\({}_{2}\) and CaF\({}_{2}\)/MoSi\({}_{2}\)N\({}_{4}\) interface systems to reveal the most active charge trapping centers in CaF\({}_{2}\)-based 2D material MOSFETs. An elaborate table comparing the importance of different defects in both n-type and p-type devices is provided. Most impressively, the oxygen molecules adsorbed at the interface or surface, which are inevitable in experiments, are as active as the intrinsic defects in channel materials, and they can even change the MoSi\({}_{2}\)N\({}_{4}\) to p-type spontaneously. These results mean that it is necessary to develop high-vacuum packaging processes as well as to prepare high-quality 2D materials for better device performance.
## 1 Introduction
Two-dimensional (2D) materials offer new possibilities for "More Moore" scaling due to their ultra-thin thickness and smooth surfaces with no dangling bonds [1, 2, 3]. With ultra-scaled channels, higher requirements are raised for the quality and reliability of gate dielectric materials.
To match silicon technologies, oxides (such as SiO\({}_{2}\)[4], HfO\({}_{2}\)[5] and Al\({}_{2}\)O\({}_{3}\)[6]) are usually used, but these materials are non-layered, which makes it difficult to form a good interface with the 2D channels. To deal with this problem, 2D dielectrics such as h-BN have been studied [7]. However, the band gap (\(\sim\)6 eV) and dielectric constant (5.06\(\varepsilon_{0}\)) of h-BN are not satisfying for dielectric materials [8]. Its band offset with 2D materials is not large enough, which will lead to many reliability problems [9].
Excitingly, the recent experimental preparation of crystalline CaF\({}_{2}\) provides strong support for the solution of this dilemma [10, 11]. By using molecular beam epitaxy (MBE), crystalline CaF\({}_{2}\) can be grown on a silicon or germanium substrate [12]. It has a larger band gap (12.1 eV) and a larger dielectric constant (8.43\(\varepsilon_{0}\)) than h-BN [13]. The grown CaF\({}_{2}\) is terminated by F atoms, which means that there is no dangling bond on its surface [14]. Another important point is that CaF\({}_{2}\) itself is stable in air and is not easily dissolved in water [15]. CaF\({}_{2}\) can form a good type-I band alignment with many 2D materials, which means that it will be very advantageous as a gate dielectric of semiconductor devices.
Nevertheless, notable device reliability issues were still observed in CaF\({}_{2}\)-based MOSFETs [13, 15, 16, 17], which contradicts the excellent electrical properties of CaF\({}_{2}\). For example, the \(I_{\rm D}\)-\(V_{\rm G}\) hysteresis is significant (although lower than that in MoS\({}_{2}\)/SiO\({}_{2}\) FETs), and it shows obvious variability when the same device is operated at different scanning times. On the other hand, if different devices are operated under the same \(V_{\rm G}\), the \(I_{\rm D}\)-\(V_{\rm G}\) characteristics such as the on/off current ratio and subthreshold swing (SS) (150-90 mV dec\({}^{-1}\)) differ greatly [13]. In addition, some devices with large negative threshold voltage (\(V_{\rm th}\)) are prone to fail due to the bias overload of the CaF\({}_{2}\) layer. The physical origin of hysteresis and threshold voltage shift is widely attributed to the charge trapping and de-trapping of microscopic defects [18, 19, 20, 21, 22, 23, 24], and the strength of the charge trapping effect is closely related to the type of defects [25, 26, 27, 28]. Therefore, it is very urgent to distinguish the activity of various defects in CaF\({}_{2}\)-based transistors so that corresponding strategies can be proposed to deal with them.
Figure 1: Atomic structure and type-I band alignment of the two kinds of interface models.
## 2 Method
Among the 2D materials, MoS\({}_{2}\) is one of the most widely used semiconductors. It has a direct band gap of 1.8 eV, and has been used to design high-performance electronic as well as optoelectronic devices [29]. On the other hand, there are also some new materials being synthesized, such as MoSi\({}_{2}\)N\({}_{4}\)[30]. MoSi\({}_{2}\)N\({}_{4}\) is very promising because of its excellent photocatalytic performance [31], mechanical strength [32], and electrical transport properties [33]. Therefore, we construct both MoS\({}_{2}\)/CaF\({}_{2}\) and MoSi\({}_{2}\)N\({}_{4}\)/CaF\({}_{2}\) interface models to make the simulation results representative. The lattice parameters of CaF\({}_{2}\), MoS\({}_{2}\) and MoSi\({}_{2}\)N\({}_{4}\) are 3.90 Å, 3.16 Å, and 2.91 Å, respectively. To obtain good lattice matching, the primitive cell of MoS\({}_{2}\) is repeated five times to contact the CaF\({}_{2}\) cell that is repeated four times; the resulting CaF\({}_{2}\) deformation is only 1.28%. Similarly, the primitive cell of MoSi\({}_{2}\)N\({}_{4}\) is repeated four times to contact the CaF\({}_{2}\) that is repeated three times, and the CaF\({}_{2}\) deformation is only 0.52%. All the first-principles calculations are performed with the software PWmat [34, 35]. The SG15 pseudopotentials [36] are adopted, and the plane wave cutoff energy is 50 Ry. The Heyd-Scuseria-Ernzerhof (HSE) functional [37] is used in the calculation of electronic structures to improve the accuracy of the results. The vdW interaction between the layers of the materials is also considered.
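As a quick sanity check, the quoted strain values follow from simple supercell arithmetic; the sketch below just reproduces the mismatch numbers from the lattice parameters given above.

```python
def mismatch(a_channel, n_channel, a_caf2, n_caf2):
    """Relative CaF2 deformation required to match n_channel channel
    cells onto n_caf2 CaF2 cells along one in-plane direction."""
    return abs(n_channel * a_channel - n_caf2 * a_caf2) / (n_caf2 * a_caf2)

print(f"MoS2 (5x) on CaF2 (4x):    {mismatch(3.16, 5, 3.90, 4):.2%}")  # ~1.28%
print(f"MoSi2N4 (4x) on CaF2 (3x): {mismatch(2.91, 4, 3.90, 3):.2%}")  # ~0.51%
```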
## 3 Results and Discussion
The interface models are shown in Fig. 1(a) and (c). A 5-layer CaF\({}_{2}\) is adopted because the experimentally MBE-grown CaF\({}_{2}\) is about 2 nm thick. The band alignments, as manifested by the projected density of states (PDOS), are shown in Fig. 1(b) and (d). It can be seen that the VBM (valence band maximum) and CBM (conduction band minimum) are provided by MoS\({}_{2}\) and MoSi\({}_{2}\)N\({}_{4}\), and the band offsets are greater than 2 eV, which makes charge tunneling difficult. This confirms that using CaF\({}_{2}\) as the gate dielectric of a 2D material MOSFET is likely to yield good device reliability [38]. Therefore, when considering practical applications, we believe that the reliability issues should stem from some intrinsic or external charge trapping centers.
Intuitively, we should first study the F vacancy defect in the CaF\({}_{2}\) layer. However, it has been demonstrated in experiment that defects are not easily generated in CaF\({}_{2}\)[13]. Besides, it has been proved by first-principles calculation that even when F vacancies (V\({}_{\rm F}\)) and Ca vacancies (V\({}_{\rm Ca}\)) exist, there is no defect state near the band edge of the channel material, owing to the large band offset between the two materials [39]. Consequently, we turn our attention to the trapping centers inside the channel material, at the semiconductor/dielectric interface, and at the dielectric surface. For MoS\({}_{2}\), we considered the S vacancy defect (V\({}_{\rm S}\)), the Mo vacancy defect (V\({}_{\rm Mo}\)), the MoS\({}_{3}\) vacancy defect (V\({}_{\rm MoS_{3}}\)) and the MoS\({}_{5}\) vacancy defect (V\({}_{\rm MoS_{5}}\)) at different spatial locations. On the other hand, considering that gas adsorption occurs very easily in the process of device manufacturing, we also studied water and oxygen molecules adsorbed at different positions. For a more intuitive display of the defects and adsorption, the related structural diagrams are shown in Fig. 2.
The energy level distributions of the different defects are shown in Fig. 3. First, there is an occupied defect state denoted by d1 for the V\({}_{\rm S}\) in MoS\({}_{2}\), whose energy is 0.38 eV below the VBM, and there are two empty defect states with similar energy denoted by d2, whose energy is 0.57 eV below the CBM. According to charge transfer theories, the charge trapping rate decreases exponentially with increasing energy barrier between the initial and final electronic states; thus we can consider that only the defect levels that lie less than 1 eV away from the MoS\({}_{2}\) band edges are active trapping centers. Therefore, it can be concluded that d1 is an important hole trapping state when a negative gate voltage is applied, and d2 is an important electron trapping state when a positive gate voltage is applied. Similarly, the Mo vacancy is active in trapping both holes and electrons, but it is not as active as the S vacancy in electron trapping, because the V\({}_{\rm Mo}\) defect levels are farther away from the CBM. In addition to the common V\({}_{\rm S}\) and V\({}_{\rm Mo}\),
Figure 3: The energy level distributions of the different defects: (a) S vacancy (V\({}_{\rm S}\)), (b) Mo vacancy (V\({}_{\rm Mo}\)), (c) MoS\({}_{3}\) vacancy (V\({}_{\rm MoS_{3}}\)), and (d) MoS\({}_{5}\) vacancy (V\({}_{\rm MoS_{5}}\)).
Figure 2: The different defects in MoS\({}_{2}\): from (a) to (d), the V\({}_{\rm S}\), V\({}_{\rm Mo}\), V\({}_{\rm MoS_{3}}\), and V\({}_{\rm MoS_{5}}\) defects. An oxygen molecule (e) and a water molecule (f) adsorbed at the CaF\({}_{2}\)-MoS\({}_{2}\) interface, respectively. Oxygen adsorbed in the interlayer of MoS\({}_{2}\) (g) and on the surface of CaF\({}_{2}\) (h), respectively. The atoms highlighted in red in the figure represent the defect and adsorption sites.
experiments have reported that complex vacancy defects (such as V\({}_{\rm MoS_{3}}\) and V\({}_{\rm MoS_{5}}\)) are found in MoS\({}_{2}\)[40]. These two complex vacancies contain many dangling bonds and can thus introduce a series of defect states (up to 13) that lie either close to the VBM or close to the CBM. Consequently, they will be very active charge trapping centers. Nevertheless, the formation energy of these complex defects is very high, which makes them low in density. More details of the defect levels are listed in Table 1.
It has been mentioned in previous reports that the hysteresis of CaF\({}_{2}\)-MoS\({}_{2}\) devices can be reduced after they are heated and dried [13]. This indicates that molecules had been adsorbed during device preparation, so the activity of these adsorbates needs to be discussed. Fig. 4(a) shows the adsorption of O\({}_{2}\) at the CaF\({}_{2}\)-MoS\({}_{2}\) interface, and three defect levels denoted by d1, d2 and d3 are observed. They are only 1 eV, 0.85 eV and 0.54 eV below the VBM, respectively. Therefore, they will be active hole traps in p-MOSFETs. In contrast, the adsorption of water molecules at the interface is much less important. It can be seen from Fig. 4(b) that there is no obvious defect state near the band edge of MoS\({}_{2}\). To further check the importance of oxygen, we studied oxygen adsorbed at other positions. Fig. 4(c) shows the situation where the oxygen molecule is adsorbed in the interlayer of MoS\({}_{2}\). It can be seen that the defect state is only 0.37 eV below the VBM, which will trap holes easily and thus affect the device performance. Fig. 4(d) shows the case where the oxygen is adsorbed on the surface of CaF\({}_{2}\). An occupied defect state close to the CBM rather than the VBM is seen. Considering that the negative gate voltage in a p-FET will drag the defect level down towards the VBM, the oxygen on the CaF\({}_{2}\) surface will be a very active hole trapping center under large gate voltage.
To exhibit the importance of different defects more clearly, Table 1 summarizes and compares the information of all defects. The defect levels that are more than 1 eV away from the MoS\({}_{2}\) band edge are regarded as electronically unimportant[41, 42, 43]. Moreover, the formation energy/adsorption energy is considered to provide an overall evaluation of their importance.
Now we study the MoSi\({}_{2}\)N\({}_{4}\)/CaF\({}_{2}\) system. MoSi\({}_{2}\)N\({}_{4}\) is a 2D material with 7 atomic layers: one Mo atomic layer lies in the middle, while two N-Si-N tri-layers lie on the top and bottom surfaces symmetrically. Vacancy defects caused by the shedding of N (Fig. 5a) and Si (Fig. 5b) atoms from the surface layers are the primary problems to be considered. At the same time, the influence of oxygen molecule (Fig. 5c) and water molecule (Fig. 5d) adsorption during device manufacturing is also considered here. The atoms highlighted in red in the figure represent the defect and adsorption sites.
For the N vacancy (V\({}_{\rm N}\)) (Fig. 6a), two defect levels are induced into the band gap, of which the half-occupied d1 state is 0.98 eV above the VBM and the empty d2 state is 0.45 eV below the CBM. Such small energy barriers make them very active hole/electron trapping centers. In contrast, the Si vacancy (V\({}_{\rm Si}\)) induces no defect levels close to the CBM, as shown in Fig. 6(b), but it induces many defect levels below the VBM. Specially, the electrons in the VBM have spontaneously transferred to the defect states, shifting the Fermi level below the VBM and making the CaF\({}_{2}\)-MoSi\({}_{2}\)N\({}_{4}\) heterostructure p-type as a whole. Interestingly, the adsorption of an oxygen molecule at the CaF\({}_{2}\)-MoSi\({}_{2}\)N\({}_{4}\) interface has a very similar effect: as shown in Fig. 6(c), the electrons in the VBM are spontaneously captured by the oxygen, and the MoSi\({}_{2}\)N\({}_{4}\) becomes a p-type material. If the oxygen density is high, this will greatly impair the performance and reliability of the device. In comparison, the adsorption of water
Table 1. The defects and adsorbates in the CaF\({}_{2}\)-MoS\({}_{2}\) system: the positions of their defect levels relative to the VBM and CBM of MoS\({}_{2}\), their importance for n-FET and p-FET operation, and their formation/adsorption energies.
molecules at the interface does not have such an effect, as shown in Fig. 6(d): the water-related defect energy levels are far away from the band edges of MoSi\({}_{2}\)N\({}_{4}\). This further confirms that water molecule adsorption is much less important than oxygen adsorption in impacting device performance and reliability.
To present the importance of different defects more intuitively, Table 2 summarizes and compares the information of all defects in the CaF\({}_{2}\)-MoSi\({}_{2}\)N\({}_{4}\) system.
## 4 Conclusion
In conclusion, we have investigated the various defects and adsorbates in CaF\({}_{2}\)-based 2D material MOSFET structures to distinguish their importance in degrading device performance and reliability. First, the intrinsic defects in the channel materials, including the V\({}_{\rm S}\) and V\({}_{\rm Mo}\) in MoS\({}_{2}\) and the V\({}_{\rm N}\) and V\({}_{\rm Si}\) in MoSi\({}_{2}\)N\({}_{4}\), are very active charge trapping centers. Second, the oxygen molecules adsorbed at the channel/CaF\({}_{2}\) interface or on the CaF\({}_{2}\) surface are very important trapping centers, and they can even spontaneously change the MoSi\({}_{2}\)N\({}_{4}\) to p-type. Third, the adsorbed water molecules are very inactive in capturing charges and thus are much less important in affecting device performance. An elaborate table comparing the detailed properties of the different defects is provided so that researchers in both experiment and theory can refer to it easily. These results mean that the exclusion of adsorbates in device fabrication is as important as growing high-quality channel materials for obtaining better device performance.
## Author Contributions
Zhe Zhao: Conceptualization, Methodology, Data collection, Writing - original draft. Tao Xiong: Writing - review & editing. Jian Gong and Yue-Yang Liu: Supervision, Writing - review & editing.
## Conflicts of interest
There are no conflicts to declare.
## Acknowledgements
This work was financially supported by the National Natural Science Foundation of China Grant No. 12004375, in part by the National Natural Science Foundation of China Grant No. 62174155, the National Natural Science Foundation of China Grant No. 62004193, the National Natural Science Foundation of China Grant No. 62125404, the Inner Mongolia Natural Science Foundation No. 2023ZD27, and the National Natural Science Foundation of China Grant No. 11964022.
|
2307.00110 | Whole-Body Human Ultrasound Tomography | We developed a system for whole-body human ultrasound tomography in
reflection and transmission modes. A custom 512-element ultrasound receiver
array and a rotating single-element ultrasound transmitter are used to
generate 2D isotropically resolved images across the entire human
cross-section. We demonstrate this technique in regions such as the abdomen and
legs in healthy volunteers. Compared to handheld-probe-based ultrasonography,
this approach provides a substantially larger field of view, depends less on
operator training, and obtains quantitative tissue parameter profiles in
addition to reflectivity images. Whole-body ultrasound tomography could be
valuable in applications such as organ disease screening, image-guided needle
biopsy, and treatment monitoring. | David C. Garrett, Jinhua Xu, Geng Ku, Lihong V. Wang | 2023-06-30T19:52:33Z | http://arxiv.org/abs/2307.00110v1 | # Whole-Body Human Ultrasound Tomography
###### Abstract
We developed a system for whole-body human ultrasound tomography in reflection and transmission modes. A custom 512-element ultrasound receiver array and a rotating single-element ultrasound transmitter are used to generate 2D isotropically resolved images across the entire human cross-section. We demonstrate this technique in regions such as the abdomen and legs in healthy volunteers. Compared to handheld-probe-based ultrasonography, this approach provides a substantially larger field of view, depends less on operator training, and obtains quantitative tissue parameter profiles in addition to reflectivity images. Whole-body ultrasound tomography could be valuable in applications such as organ disease screening, image-guided needle biopsy, and treatment monitoring.
## Introduction
Since its inception in the mid-20\({}^{\text{th}}\) century, ultrasound imaging has revolutionized healthcare by providing rapid and affordable insight into tissue structure and function. Early systems employed single transducers scanned linearly or circularly with subjects immersed in a water bath [1], [2], later followed by membrane approaches to image regions in the abdomen [3]. Initial results were promising for disease diagnosis [4], but bulky electronics and slow acquisition times necessitated mechanical scanning over several minutes. Later developments in transducers and electronics led to linear probes [5], where multiple channels could be used in parallel. The handheld probe remains the most used form of ultrasonography and has found many clinical applications. However, probes require trained operation [6], provide only reflection-mode images over a narrow field of view (FOV), and have limited ability to visualize features behind bone or air pockets.
More recently, alternate approaches using smaller immersion tanks with planar [7], linear [8], ring [9], or hemispherical [10] transducer arrays have been investigated for ultrasound tomography (UST) imaging of the breast [11] or limbs of the body. These systems record both reflected and transmitted signals, allowing for reflectivity, speed of sound, and attenuation profiles to be recovered. In extending to human-scale imaging, acoustically opaque regions like bone or air pockets have been conventionally viewed as insurmountable obstacles. A recent study achieved whole-body imaging in piglets despite the presence of bone and air [12]. Another recent system enables volumetric reflection-mode imaging of human extremities like the arm, visualizing vasculature and bones [13]. However, these system geometries and parameters (e.g., acoustic frequency, transmitter power, and detection sensitivity) are not yet suitable for whole-body human imaging.
In this work, we return to geometries like those used by the early ultrasonography practitioners but with the advantage of modern electronics and transducer technology. We employ a custom circular array with 512 receiver elements combined with a single-element transmitter which rotates around the subject. This configuration allows for whole-body UST imaging of humans immersed in water, resulting in 2D isotropically resolved images of reflectivity, speed of sound, and attenuation profiles. Using full 360\({}^{\circ}\) viewing angles, we overcome the limited acoustic penetration through tissues such as bone or air pockets. We demonstrate this technique by imaging regions in the abdomen and legs in healthy volunteers. Several organs and key features can be clearly observed in reflection-mode images, and we also demonstrate recovery of tissue speed of sound and attenuation.
## Results
We developed a custom 60 cm diameter 512-element acoustic receiver array with 1 MHz center frequency. A 1.5-inch diameter 2.25 MHz transducer (Olympus V395) with a custom diverging cylindrical polymethylpentene (TPX) lens is used as a transmitter. The transmitter is mounted on a plastic gear which rotates around the subject using a stepper motor. The array is mounted on two vertical linear motor stages to adjust its height in a water immersion tank. Water acts as acoustic coupling between tissue and the transducers. An arbitrary function generator (Siglent SDG2042X) connected to a 300-Watt RF power amplifier (ENI 350L) excites the transmitter using a 400 \(\upmu\)s chirp signal spanning 0.3-2.0 MHz. The system hardware is shown in Figure 1.
We demonstrate whole-body UST with a healthy female volunteer. The subject is seated in the water immersion tank with the head held against a cushion to reduce motion and with arms raised slightly to lift the ribs. Figure 2 shows an example reflection-mode image of the abdomen. The image is displayed in inverse grayscale (brighter regions are more anechoic) normalized to the peak pixel amplitude. Various structures are visualized, including the liver, stomach, spleen, abdominal aorta, and vertebral body. Note that despite the presence of bone and air pockets, our imaging geometry allows high fidelity imaging of regions deep in the body.
Using data collected during the same scan, we also obtain transmission-mode profiles of the speed of sound and attenuation coefficient, which are overlaid on the reflection-mode images in Figure 2. The transmission-mode image reconstruction uses the filtered back projection algorithm like that used in x-ray computed tomography, where the arrival time delay and the attenuation of the subject data with respect to the homogeneous (water-only) data are found for each transmitter-receiver ray. Slowness and attenuation coefficient maps are solved for by applying the inverse of a matrix corresponding to the crossing-ray length density to the derived arrival delay and attenuation vectors. Due to the large size of the data, it is not practical to store and operate on such a matrix directly. Therefore, the conjugate gradient descent algorithm is used to solve the matrix inversion. The speed of sound map can then be obtained by inverting the slowness map. We observe higher tissue speed of sound in the liver, which agrees with literature values of approximately 1560 m/s [14].
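The large-scale inversion step can be sketched with SciPy's conjugate-gradient solver applied to the normal equations; the ray-length matrix `L` below stands in for the crossing-ray length density matrix described above, and the function is a simplified illustration rather than the exact reconstruction code.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def invert_slowness(L, delays, n_pixels):
    """Recover a slowness map from per-ray arrival-time delays.

    L[k, j]   : length of ray k inside pixel j (scipy sparse matrix).
    delays[k] : arrival delay of ray k relative to the water-only scan.
    cg() requires a symmetric operator, so the normal equations
    L^T L s = L^T delays are solved without forming L^T L explicitly.
    """
    normal_op = LinearOperator((n_pixels, n_pixels),
                               matvec=lambda s: L.T @ (L @ s))
    s, info = cg(normal_op, L.T @ delays)
    return s   # slowness perturbation; the sound speed map follows from it
```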
Figure 1: a) System diagram. AWG: arbitrary waveform generator; P.A: power amplifier; MN: matching network; DAQ: data acquisition module. b) System photograph.
We further performed 15 scans at 1 cm vertical intervals from approximately the ribcage to the pelvis. Each scan was acquired over 10 seconds, and the subject was in the immersion tank for approximately 15 minutes. Examples of other 2D images are shown in Figure 3. Note that this volunteer previously had her left kidney removed, so only the right one is visualized.
Figure 3: Example of elevational scans of a female subject from approximately the ribcage to the pelvis. RK: right kidney (left kidney was removed).
Figure 2: Example UST images. a) Reflectivity image of human abdomen. IVC: inferior vena cava. AA: abdominal aorta. RL: right lobe of liver. LL: left lobe of liver. VB: vertebral body. SC: spinal cord. St: stomach. Sp: spleen. b) and c) show the speed of sound and attenuation profiles, respectively, overlaid on the reflectivity image.
With the subject standing in the immersion tank, we also imaged the legs as shown in Figure 4. In the upper legs, the femur, surrounding muscles, and adipose boundaries are clearly observed. The tibia and fibula are visualized in the lower legs as well as adipose boundaries.
## Discussion
We developed a system for whole-body ultrasound imaging. Compared with clinical handheld-probe-based ultrasonography, our approach images cross-sections of the whole human body and visualizes three contrasts: reflectivity, speed of sound, and attenuation. This may be of clinical use for screening organ size or structure as an early indicator of inflammation or disease [15]. The speed of sound and attenuation could also be used as diagnostic tools, for instance to assess changes due to non-alcoholic fatty liver disease. Whole-body UST could also be used in applications such as image-guided needle biopsy, where x-ray computed tomography is conventionally used. With our whole-body FOV, the biopsy needle could be localized with respect to tissues of interest without the use of ionizing radiation.
Furthermore, clinical ultrasonography typically requires trained operation for observing regions of interest. Our approach requires only that the patient remain still, after which the imaging process could be automated. This could be an appealing feature for regular screening approaches and would help reduce the cost compared to other modalities. However, our current implementation involving patient water immersion is likely unsuitable for imaging of diseased subjects. A similar imaging geometry could therefore be implemented using water bags like those used in shockwave lithotripsy.
In the future, we plan to enhance this system with additional photoacoustic and thermoacoustic contrast. Using the same acoustic receivers, these images could be immediately co-registered with our UST images to overlay optical and microwave absorption profiles. We also aim to improve our transmission-mode reconstruction quality using techniques such as full-wave inversion [16] to better localize variations in the speed of sound and attenuation coefficient. Additional acoustic elements could also reduce image acquisition time and provide 3D imaging capability.
Figure 4: Reflection-mode images of a) the upper leg; and b) the lower leg of a female subject.
## Materials and Methods
### System hardware
All 512 receiver array elements are 3 mm \(\times\) 10 mm polymer piezoelectric (PVDF-TrFE, PolyK Technologies LLC) capacitively coupled to polyimide electrodes which are directly connected to parallel preamplifiers. The preamplifiers are implemented on custom annular printed circuit boards and provide 15 dB voltage gain with 100 k\(\Omega\) input impedance. The elements and preamplifiers are housed in a stainless-steel shielded enclosure. Casting epoxy is used as a backing material for each element, and an angled back panel is used to reduce reverberation. All channels are low-pass filtered (\(f_{c}=2\) MHz) and digitized (Photosound Legion) in parallel at 5 MSPS. The preamplifiers are powered by rechargeable lithium polymer batteries. To account for geometrical error during manufacturing, the technique described in [17] is used to calibrate each element's position.
### Imaging parameters
To enhance the signal-to-noise ratio (SNR) while staying within the mechanical index limit, a linear chirp signal versus time (\(t\)) is used with a time-varying frequency \(f(t)=ct+f_{0}\), where \(c=(f_{1}-f_{0})/T\) is the linear chirp rate, \(f_{0}=0.3\) MHz is the lower frequency, \(f_{1}=2.0\) MHz is the upper frequency, and \(T=400\) \(\upmu\)s is the chirp duration. The transmitted frequencies are limited by the bandwidths of the transmitter and receivers. We used the maximal pulse duration given our maximal acquisition time of 800 \(\upmu\)s, allowing for recovery of the roundtrip reflected signals over the entire field of view (FOV). The resulting transmitted chirp signal is
\[x(t)=\sin\Big{[}2\pi(\frac{c}{2}t^{2}+f_{0}t)\Big{]}.\]
Compared to a pulse with similar peak pressure, this results in an expected SNR gain of \(\sim\sqrt{T\cdot B}\), where \(B=f_{1}-f_{0}\) is the acoustic bandwidth. In addition to the target, we also perform a scan with only water in the imaging domain, resulting in recorded signals \(x_{w,i}(t)\) for each receiver element \(i\). This provides the response of each transducer to the chirp which is then cross-correlated with the target's chirp response \(x_{c,i}(t)\). The pulse response for the target signals \(\chi_{s,i}(t)\) is then recovered for each element \(i\) as:
\[\chi_{s,i}(t)=\frac{x_{w,i}(t)\star x_{c,i}(t)}{\max\bigl{[}x_{w,i}(t)\star x_ {w,i}(t)\bigr{]}}\]
where \(\star\) denotes cross-correlation. We normalize by the maximum of the autocorrelation of \(x_{w,i}(t)\) to account for sensitivity variation in the receiver elements. The transmitter operates with a pulse repetition rate of 180 Hz. With the gear rotation time of 10 seconds, this results in 1800 transmitted pulses over a full circular scan around the target.
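For reference, the excitation and pulse-compression steps map directly onto a few lines of NumPy; the sampling settings below mirror the system values quoted above, and the single-channel function is a simplified sketch.

```python
import numpy as np
from scipy.signal import correlate

fs, T = 5e6, 400e-6            # 5 MSPS digitizer, 400 us chirp duration
f0, f1 = 0.3e6, 2.0e6          # chirp frequency band
t = np.arange(0, T, 1 / fs)
c = (f1 - f0) / T              # linear chirp rate
x = np.sin(2 * np.pi * (0.5 * c * t**2 + f0 * t))   # transmitted chirp

print(np.sqrt(T * (f1 - f0)))  # expected SNR gain over a pulse, ~26

def compress(x_target, x_water):
    """Matched-filter pulse compression for one receiver channel,
    normalized by the peak of the water-only autocorrelation."""
    return (correlate(x_water, x_target, mode="full")
            / np.max(correlate(x_water, x_water, mode="full")))
```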
### Human imaging protocol
A healthy female volunteer consented to being imaged in this system. This imaging procedure was approved by the Caltech Institutional Review Board (protocol IR21-1099). Prior to human
imaging, we used a calibrated hydrophone (Onda HGL-0085) positioned immediately in front of the transmitter to evaluate the mechanical index as less than 0.2, whereas the limit from the U.S. Food and Drug Administration is 1.9 [18].
## Acknowledgements
This work was supported in part by National Institutes of Health grants R35 CA220436 (Outstanding Investigator Award). L.W. has a financial interest in Microphotoacoustics, Inc., CalPACT, LLC, and Union Photoacoustic Technologies, Ltd., which, however, did not support this work.
|
2307.16772 | Weighted topological pressure revisited | Feng--Huang (2016) introduced weighted topological entropy and pressure for
factor maps between dynamical systems and established its variational
principle. Tsukamoto (2022) redefined those invariants quite differently for
the simplest case and showed via the variational principle that the two
definitions coincide. We generalize Tsukamoto's approach, redefine the weighted
topological entropy and pressure for higher dimensions, and prove the
variational principle. Our result allows for an elementary calculation of the
Hausdorff dimension of affine-invariant sets such as self-affine sponges and
certain sofic sets that reside in Euclidean space of arbitrary dimension. | Nima Alibabaei | 2023-07-31T15:38:39Z | http://arxiv.org/abs/2307.16772v1 | # Weighted topological pressure revisited
###### Abstract.
Feng-Huang (2016) introduced weighted topological entropy and pressure for factor maps between dynamical systems and established its variational principle. Tsukamoto (2022) redefined those invariants quite differently for the simplest case and showed via the variational principle that the two definitions coincide. We generalize Tsukamoto's approach, redefine the weighted topological entropy and pressure for higher dimensions, and prove the variational principle. Our result allows for an elementary calculation of the Hausdorff dimension of affine-invariant sets such as self-affine sponges and certain sofic sets that reside in Euclidean space of arbitrary dimension.
Key words and phrases: Dynamical systems, weighted topological entropy, weighted topological pressure, variational principle, affine-invariant sets, self-affine sponges, sofic sets, Hausdorff dimension
For a dynamical system \((X,T)\), denote its **topological entropy** by \(h_{\rm top}(T)\). Let \(P(f)\) be the **topological pressure** for a continuous function \(f:X\to\mathbb{R}\) (see section 2 for the definition of these quantities). Let \(\mathscr{M}^{T}(X)\) be the set of \(T\)-invariant probability measures on \(X\) and \(h_{\mu}(T)\) the **measure-theoretic entropy** for \(\mu\in\mathscr{M}^{T}(X)\) (see subsection 3.2). The variational principle then states that [10, 11, 12, 13]
\[P(f)=\sup_{\mu\in\mathscr{M}^{T}(X)}\left(h_{\mu}(T)+\int_{X}fd\mu\right).\]
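As a quick sanity check of this identity (our own toy computation, not from the paper): for the full shift on two symbols with a potential \(f\) depending only on the zeroth coordinate, \(P(f)=\log\sum_{i}e^{f(i)}\), and a numerical supremum over Bernoulli measures reproduces it.

```python
import numpy as np

phi = np.array([0.3, -1.2])                 # f(x) = phi[x_0], toy values
pressure = np.log(np.exp(phi).sum())        # P(f) for the full 2-shift

p = np.linspace(1e-9, 1 - 1e-9, 200001)     # Bernoulli(p, 1-p) measures
h = -(p * np.log(p) + (1 - p) * np.log(1 - p))   # h_mu for Bernoulli(p)
integral = p * phi[0] + (1 - p) * phi[1]         # int f dmu
print(pressure, (h + integral).max())       # the two values agree
```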
### Background
We first look at _self-affine sponges_ to understand the background of weighted topological entropy introduced by Feng-Huang. Let \(m_{1},m_{2},\ldots,m_{r}\) be natural numbers with \(m_{1}\leq m_{2}\leq\cdots\leq m_{r}\). Consider an endomorphism \(T\) on \(\mathbb{T}^{r}=\mathbb{R}^{r}/\mathbb{Z}^{r}\) represented by the diagonal matrix \(A={\rm diag}(m_{1},m_{2},\ldots,m_{r})\). For \(D\subset\prod_{i=1}^{r}\{0,1,\ldots,m_{i}-1\}\), define
\[K(T,D)=\left\{\sum_{n=0}^{\infty}A^{-n}e_{n}\in\mathbb{T}^{r}\Bigg{|}e_{n}\in D \right\}.\]
This set is compact and \(T\)-invariant, i.e., \(TK(T,D)=K(T,D)\).
These sets for \(r=2\) are known as _Bedford-McMullen carpets_ or _self-affine carpets_. The following figure exhibits a famous example, the case of \(D=\{(0,0),(1,1),(0,2)\}\subset\{0,1\}\times\{0,1,2\}\).
The analysis of these sets is complicated compared to "self-similar" sets. Bedford [1] and McMullen [12] independently studied these sets and showed that, in general, their Hausdorff dimension is strictly smaller than their Minkowski dimension (a.k.a. Box-counting dimension). The figure above has Hausdorff dimension \(\log_{2}{(1+2^{\log_{3}2})}=1.349\cdots\) and Minkowski dimension \(1+\log_{3}\frac{3}{2}=1.369\cdots\).
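These two values are easy to reproduce numerically; the following sketch (ours) evaluates McMullen's Hausdorff and Minkowski dimension formulas for this carpet.

```python
import numpy as np

D = {(0, 0), (1, 1), (0, 2)}          # selected cells; m1 = 2 cols, m2 = 3 rows
m1, m2 = 2, 3

col_counts = [sum(1 for (i, j) in D if i == c) for c in range(m1)]
K = sum(1 for n in col_counts if n > 0)            # non-empty columns

dim_H = np.log(sum(n ** (np.log(m1) / np.log(m2))  # McMullen's Hausdorff formula
                   for n in col_counts if n > 0)) / np.log(m1)
dim_M = np.log(K) / np.log(m1) + np.log(len(D) / K) / np.log(m2)

print(dim_H, dim_M)   # 1.3497..., 1.3690...
```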
The sets \(K(T,D)\) for \(r\geq 3\) are called _self-affine sponges_. Kenyon-Peres [11] calculated their Hausdorff dimension in the general case (see Theorem 1.5 in this section). In addition, they showed the following variational principle for the Hausdorff dimension of \(K(T,D)\):
\[\dim_{H}K(T,D)=\sup_{\mu\in\mathscr{M}^{T}(\mathbb{T}^{r})}\Bigg{\{}\frac{1}{ \log m_{r}}h_{\mu}(T)+\sum_{i=2}^{r}\left(\frac{1}{\log m_{r-i+1}}-\frac{1}{ \log m_{r-i+2}}\right)h_{\mu_{i}}(T_{i})\Bigg{\}}. \tag{1.1}\]
Figure 1. First four generations of Bedford-McMullen carpet
Here, the endomorphism \(T_{i}\) on \(\mathbb{T}^{r-i+1}\) is defined from \(A_{i}=\operatorname{diag}(m_{1},m_{2},\ldots,m_{r-i+1})\), and \(\mu_{i}\) is defined as the push-forward measure of \(\mu\) on \(\mathbb{T}^{r-i+1}\) by the projection onto the first \(r-i+1\) coordinates. In an appropriate setting, Feng-Huang's weighted topological entropy of \(K(T,D)\) recovers \(\dim_{H}K(T,D)\) (see Example 1.1 below).
### The original definition of the weighted topological pressure
Motivated by the geometry of self-affine sponges described in the previous subsection, Feng-Huang introduced a generalized notion of pressure. Consider dynamical systems \((X_{i},T_{i})\)\((i=1,\,2,\,\ldots,\,r)\) and factor maps \(\pi_{i}:X_{i}\to X_{i+1}\)\((i=1,\,2,\,\ldots,\,r-1)\):
\[(X_{1},T_{1})\xrightarrow{\pi_{1}}(X_{2},T_{2})\xrightarrow{\pi_{2}}\cdots \xrightarrow{\pi_{r-1}}(X_{r},T_{r})\.\]
We refer to this as a **sequence of dynamical systems**. Let \(\boldsymbol{w}=(w_{1},w_{2},\ldots,w_{r})\) be a weight with \(w_{1}>0\) and \(w_{i}\geq 0\) for \(i\geq 2\). Feng-Huang [10] ingeniously defined the \(\boldsymbol{w}\)-weighted topological pressure \(P^{\boldsymbol{w}}_{\text{FH}}(f)\) for a continuous function \(f:X_{1}\to\mathbb{R}\) and established the variational principle [10, Theorem 1.4]
\[P^{\boldsymbol{w}}_{\text{FH}}(f)=\sup_{\mu\in\mathscr{M}^{T_{1}}(X_{1})} \left(\sum_{i=1}^{r}w_{i}h_{\pi^{(i-1)}\ast\mu}(T_{i})+w_{1}\int_{X_{1}}fd\mu \right). \tag{1.2}\]
Here \(\pi^{(i)}\) is defined by
\[\pi^{(0)}=\operatorname{id}_{X_{1}}:X_{1}\to X_{1},\] \[\pi^{(i)}=\pi_{i}\circ\pi_{i-1}\circ\cdots\circ\pi_{1}:X_{1}\to X _{i+1},\]
and \(\pi^{(i-1)}{}_{\ast}\mu\) is the push-forward measure of \(\mu\) by \(\pi^{(i-1)}\) on \(X_{i}\). The \(\boldsymbol{w}\)-weighted topological entropy \(h^{\boldsymbol{w}}_{\text{top}}(T_{1})\) is the value of \(P^{\boldsymbol{w}}_{\text{FH}}(f)\) when \(f\equiv 0\). In this case, (1.2) becomes
\[h^{\boldsymbol{w}}_{\text{top}}(T_{1})=\sup_{\mu\in\mathscr{M}^{T_{1}}(X_{1}) }\left(\sum_{i=1}^{r}w_{i}h_{\pi^{(i-1)}\ast\mu}(T_{i})\right). \tag{1.3}\]
We will explain here Feng-Huang's method of defining \(h^{\boldsymbol{w}}_{\text{top}}(T_{1})\). For the definition of \(P^{\boldsymbol{w}}_{\text{FH}}(f)\), see their original paper [10].
Let \(n\) be a natural number and \(\varepsilon\) a positive number. Let \(d^{(i)}\) be a metric on \(X_{i}\). For \(x\in X_{1}\), define the \(\boldsymbol{n}\)**-th \(\boldsymbol{w}\)-weighted Bowen ball of radius \(\boldsymbol{\varepsilon}\) centered at \(\boldsymbol{x}\)** by
\[B^{\boldsymbol{w}}_{n}(x,\varepsilon)=\left\{y\in X_{1}\left|\begin{array}{l}d^{(i)}\big{(}T_{i}^{j}(\pi^{(i-1)}(x)),T_{i}^{j}(\pi^{(i-1)}(y))\big{)}<\varepsilon\text{ for every}\\ 0\leq j\leq\lceil(w_{1}+\cdots+w_{i})n\rceil\text{ and }1\leq i\leq r.\end{array}\right.\right\}.\]
Consider \(\Gamma=\{B^{\boldsymbol{w}}_{n_{j}}(x_{j},\varepsilon)\}_{j}\), an at-most countable cover of \(X_{1}\) by Bowen balls. Let \(n(\Gamma)=\min_{j}n_{j}\). For \(s\geq 0\) and \(N\in\mathbb{N}\), let
\[\Lambda^{\boldsymbol{w},s}_{N,\varepsilon}=\inf\left\{\sum_{j}e^{-sn_{j}} \Bigg{|}\ \Gamma=\{B^{\boldsymbol{w}}_{n_{j}}(x_{j},\varepsilon)\}_{j}\text{ covers }X_{1}\text{ and }n(\Gamma)\geq N\right\}.\]
This quantity is non-decreasing as \(N\to\infty\). The following limit hence exists:
\[\Lambda_{\varepsilon}^{\mathbf{w},s}=\lim_{N\to\infty}\Lambda_{N,\varepsilon}^{\mathbf{w},s}.\]
There is a value of \(s\) where \(\Lambda_{\varepsilon}^{\mathbf{w},s}\) jumps from \(\infty\) to \(0\), which we will denote by \(h_{\mathrm{top}}^{\mathbf{w}}(T_{1},\varepsilon)\):
\[\Lambda_{\varepsilon}^{\mathbf{w},s}=\left\{\begin{array}{ll}\infty&(s<h_{ \mathrm{top}}^{\mathbf{w}}(T_{1},\varepsilon))\\ 0&(s>h_{\mathrm{top}}^{\mathbf{w}}(T_{1},\varepsilon))\end{array}\right..\]
The value \(h_{\mathrm{top}}^{\mathbf{w}}(T_{1},\varepsilon)\) is non-decreasing as \(\varepsilon\to 0\). Therefore, we can define the \(\mathbf{w}\)-weighted topological entropy \(h_{\mathrm{top}}^{\mathbf{w}}(T_{1})\) by
\[h_{\mathrm{top}}^{\mathbf{w}}(T_{1})=\lim_{\varepsilon\to 0}h_{\mathrm{top}}^{\mathbf{w} }(T_{1},\varepsilon).\]
An important point about this definition is that in some dynamical systems, such as self-affine sponges, the quantity \(h_{\mathrm{top}}^{\mathbf{w}}(T_{1})\) is directly related to the Hausdorff dimension of \(X_{1}\).
**Example 1.1**.: Consider the self-affine sponges introduced in subsection 1.2. Define \(p_{i}:\mathbb{T}^{r-i+1}\to\mathbb{T}^{r-i}\) by
\[p_{i}(x_{1},x_{2},\ldots,x_{r-i},x_{r-i+1})=(x_{1},x_{2},\ldots,x_{r-i}).\]
Let \(X_{1}=K(T,D)\), \(X_{i}=p_{i-1}\circ p_{i-2}\circ\cdots\circ p_{1}(X_{1})\), and \(T_{i}:X_{i}\to X_{i}\) be the endomorphism defined from \(A_{i}=\mathrm{diag}(m_{1},m_{2},\ldots,m_{r-i+1})\). Define the factor maps \(\pi_{i}:X_{i}\to X_{i+1}\) as the restrictions of \(p_{i}\). Let
\[\mathbf{w}=\left(\frac{\log m_{1}}{\log m_{r}},\quad\frac{\log m_{1}}{\log m_{r-1 }}-\frac{\log m_{1}}{\log m_{r}},\ldots,\quad\frac{\log m_{1}}{\log m_{2}}- \frac{\log m_{1}}{\log m_{3}},\quad 1-\frac{\log m_{1}}{\log m_{2}}\right). \tag{1.4}\]
Then the \(n\)-th \(\boldsymbol{w}\)-weighted Bowen ball is approximately a cube with side length \(\varepsilon m_{1}^{-n}\). Therefore,
\[\dim_{H}K(T,D)=\frac{h_{\mathrm{top}}^{\mathbf{w}}(T_{1})}{\log m_{1}}. \tag{1.5}\]
### Tsukamoto's approach and its extension
Following the work of Feng-Huang [17] described in the previous subsection, Tsukamoto [14] published an intriguing approach to these invariants. There, he gave a new definition of the weighted topological pressure for two dynamical systems and a factor map:
\[\begin{CD}(X_{1},T_{1})@>{\pi}>{}>(X_{2},T_{2}).\end{CD}\]
He then proved the variational principle using his definition, showing the surprising coincidence of the two definitions. His expression of weighted topological entropy allowed for relatively easy calculations for sets like self-affine carpets.
We will extend Tsukamoto's idea, redefine the weighted topological pressure for an arbitrary length of a sequence of dynamical systems, and establish the variational principle. Here we will explain our definition in the case \(f\equiv 0\). See section 2 for the general setting.
We will not introduce Tsukamoto's definition since it is obtained by letting \(r=2\) in the following argument.
Let \(\boldsymbol{a}=(a_{1},\,a_{2},\,\cdots,a_{r-1})\) with \(0\leq a_{i}\leq 1\) for each \(i\). Let \(N\) be a natural number and \(\varepsilon\) a positive number. We define a new metric \(d_{N}^{(i)}\) on \(X_{i}\) by
\[d_{N}^{(i)}(x_{1},\,x_{2})=\max_{0\leq n<N}d^{(i)}(T_{i}^{\,n}x_{1},T_{i}^{\,n }x_{2}).\]
For \(\Omega\subset X_{1}\), we define
\[\#_{1}^{\boldsymbol{a}}(\Omega,N,\varepsilon)=\min\left\{n\in\mathbb{N} \left|\begin{array}{l}\mbox{There exists an open cover }\{U_{j}\}_{j=1}^{n}\mbox{ of }\Omega\\ \mbox{with diam}(U_{j},\,d_{N}^{(1)})<\varepsilon\mbox{ for all }1\leq j\,\leq n \end{array}\right.\right\}.\]
Let \(\Omega\subset X_{i+1}\). If \(\#_{i}^{\boldsymbol{a}}\) is already defined, let
\[\#_{i+1}^{\boldsymbol{a}}(\Omega,N,\varepsilon)=\min\left\{\sum_{j=1}^{n}\left(\#_{i}^{\boldsymbol{a}}(\pi_{i}^{-1}(U_{j}),N,\varepsilon)\right)^{a_{i}}\,\middle|\,\begin{array}{l}n\in\mathbb{N},\,\{U_{j}\}_{j=1}^{n}\text{ is an open cover of }\Omega\\ \text{with diam}(U_{j},\,d_{N}^{(i+1)})<\varepsilon\text{ for all }1\leq j\leq n\end{array}\right\}.\]
We define the **topological entropy of \(\boldsymbol{a}\)-exponent**\(h^{\boldsymbol{a}}(\boldsymbol{T})\), where \(\boldsymbol{T}=(T_{i})_{i}\), by
\[h^{\boldsymbol{a}}(\boldsymbol{T})=\lim_{\varepsilon\to 0}\left(\lim_{N\to\infty} \frac{\log\#_{r}^{\boldsymbol{a}}(X_{r},\,N,\varepsilon)}{N}\right).\]
This limit exists since \(\log\#_{r}^{\boldsymbol{a}}(X_{r},\,N,\,\varepsilon)\) is sub-additive in \(N\) and non-decreasing as \(\varepsilon\) tends to \(0\).
From \(\boldsymbol{a}\), define \(\boldsymbol{w}_{\boldsymbol{a}}=(w_{1},\,\cdots,\,w_{r})\) by
\[\left\{\begin{array}{l}w_{1}=a_{1}a_{2}a_{3}\cdots a_{r-1}\\ w_{2}=(1-a_{1})a_{2}a_{3}\cdots a_{r-1}\\ w_{3}=(1-a_{2})a_{3}\cdots a_{r-1}\\ \qquad\qquad\vdots\\ w_{r-1}=(1-a_{r-2})a_{r-1}\\ w_{r}=1-a_{r-1}\end{array}\right.\quad.\]
Then our main result Theorem 2.1 below yields
**Theorem 1.2**.: _For \(\boldsymbol{a}=(a_{1},\,a_{2},\,\cdots,a_{r-1})\) with \(0\leq a_{i}\leq 1\) for each \(i\),_
\[h^{\boldsymbol{a}}(\boldsymbol{T})=\sup_{\mu\in\mathscr{M}^{T_{1}}(X_{1})}\left(\sum_{i=1}^{r}w_{i}h_{\pi^{(i-1)}{}_{*}\mu}(T_{i})\right). \tag{1.6}\]
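The passage from \(\boldsymbol{a}\) to \(\boldsymbol{w_{a}}\) is mechanical; the helper below (our code, with arbitrary sample values) computes \(\boldsymbol{w_{a}}\) and checks that it is a probability vector.

```python
import numpy as np

def w_from_a(a):
    """(a_1, ..., a_{r-1}) in [0,1]^{r-1}  ->  w_a = (w_1, ..., w_r)."""
    a = np.asarray(a, dtype=float)
    r = len(a) + 1
    w = np.empty(r)
    w[0] = a.prod()                        # w_1 = a_1 a_2 ... a_{r-1}
    for i in range(1, r):                  # w_{i+1} = (1 - a_i) a_{i+1} ... a_{r-1}
        w[i] = (1.0 - a[i - 1]) * a[i:].prod()
    return w

w = w_from_a([0.5, 0.8, 0.9])
print(w, w.sum())   # entries are nonnegative and sum to 1
```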
The strategy of the proof is adopted from Tsukamoto's paper. However, there are some additional difficulties. Let \(h^{\boldsymbol{a}}_{\rm var}(\boldsymbol{T})\) be the right-hand side of (1.6). We use the "zero-dimensional trick" for proving \(h^{\boldsymbol{a}}(\boldsymbol{T})\leq h^{\boldsymbol{a}}_{\rm var}(\boldsymbol{T})\), meaning we reduce the proof to the case where all dynamical systems are zero-dimensional. Merely taking a zero-dimensional extension for each \(X_{i}\) does not work; instead, we realize the reduction by taking, step by step, an extension of the whole sequence of dynamical systems (see subsection 3.3). We then show \(h^{\boldsymbol{a}}(\boldsymbol{T})\leq h^{\boldsymbol{a}}_{\rm var}(\boldsymbol{T})\) by using an appropriate measure, the definition of which is quite sophisticated (see \(\sigma_{N}\) in the proof of Theorem 4.1). In proving \(h^{\boldsymbol{a}}(\boldsymbol{T})\geq h^{\boldsymbol{a}}_{\rm var}(\boldsymbol{T})\), the zero-dimensional trick cannot be utilized. The proof therefore requires a detailed estimation of these values for arbitrary covers, which is more complicated than the original argument in [13].
Theorem 1.2 and Feng-Huang's version of variational principle (1.3) yield
**Corollary 1.3**.: _For \(\mathbf{a}=(a_{1},a_{2},\cdots,a_{r-1})\) with \(0<a_{i}\leq 1\) for each \(i\),_
\[h^{\mathbf{a}}(\mathbf{T})=h^{\mathbf{w}_{\mathbf{a}}}_{\mathrm{top}}(T_{1}).\]
This corollary is rather profound, connecting the two seemingly different quantities. We can calculate the Hausdorff dimension of certain self-affine sets using this result, as seen in the following example and section 6.
**Example 1.4**.: Let us take another look at self-affine sponges. Kenyon-Peres [11, Theorem 1.2] calculated their Hausdorff dimension as follows (recall that \(m_{1}\leq m_{2}\leq\cdots\leq m_{r}\)).
**Theorem 1.5**.: _Define a sequence of real numbers \((Z_{j})_{j}\) as follows. Let \(Z_{r}\) be the indicator of \(D\), namely, \(Z_{r}(i_{1},\ldots,i_{r})=1\) if \((i_{1},\ldots,i_{r})\in D\) and \(0\) otherwise. Define \(Z_{r-1}\) by_
\[Z_{r-1}(i_{1},\ldots,i_{r-1})=\sum_{i_{r}=0}^{m_{r}-1}Z_{r}(i_{1},\ldots,i_{r- 1},i_{r}).\]
_More generally, if \(Z_{j+1}\) is already defined, let_
\[Z_{j}(i_{1},\ldots,i_{j})=\sum_{i_{j+1}=0}^{m_{j+1}-1}Z_{j+1}(i_{1},\ldots,i_{ j},i_{j+1})^{\log m_{j+1}/\log m_{j+2}}.\]
_Then_
\[\dim_{H}K(T,D)=\frac{\log Z_{0}}{\log m_{1}}.\]
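Theorem 1.5 mechanizes directly. The sketch below (ours; `sponge_dimension` is our hypothetical helper name) implements the \(Z_{j}\) recursion; the \(r=2\) call reproduces the value \(1.349\cdots\) for the carpet in Figure 1.

```python
import numpy as np

def sponge_dimension(m, D):
    """m = (m_1 <= ... <= m_r); D = set of digit tuples in prod {0,...,m_i - 1}."""
    r = len(m)
    Z = {d: 1.0 for d in D}                    # Z_r: indicator of D
    for j in range(r - 1, -1, -1):             # build Z_j from Z_{j+1}
        expo = 1.0 if j == r - 1 else np.log(m[j]) / np.log(m[j + 1])
        Z_next = {}
        for key, val in Z.items():
            Z_next[key[:-1]] = Z_next.get(key[:-1], 0.0) + val ** expo
        Z = Z_next                             # now keyed by (i_1, ..., i_j)
    return np.log(Z[()]) / np.log(m[0])        # log Z_0 / log m_1

print(sponge_dimension((2, 3), {(0, 0), (1, 1), (0, 2)}))              # 1.3497...
print(sponge_dimension((2, 2, 3), {(0, 0, 0), (1, 1, 1), (0, 1, 2)}))  # an r = 3 sponge
```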
We can prove this result in a fairly elementary way by Corollary 1.3, with no measure theory appearing on the surface. Set \(a_{i}=\log_{m_{r-i+1}}m_{r-i}\) for each \(i\); then \(\boldsymbol{w_{a}}\) equals \(\boldsymbol{w}\) in (1.4). Combining (1.5) and Corollary 1.3,
\[\dim_{H}K(T,D)=\frac{h^{\mathbf{w}_{\mathbf{a}}}_{\mathrm{top}}(T_{1})}{\log m_{1}}= \frac{h^{\mathbf{a}}(\mathbf{T})}{\log m_{1}}.\]
Hence, we need to show the following claim.
**Claim 1.6**.: _We have_
\[h^{\mathbf{a}}(\mathbf{T})=\log Z_{0}.\]
Proof.: Observe first that taking the infimum over closed covers instead of open ones in the definition of \(h^{\boldsymbol{a}}(\boldsymbol{T})\) does not change its value. Define a metric \(d^{(i)}\) on each \(X_{i}\) by
\[d^{(i)}(x,y)=\min_{n\in\mathbb{Z}^{r-i+1}}|x-y-n|.\]
Let
\[D_{j}=\{(e_{1},\ldots,e_{j})|\text{ there are }e_{j+1},\ldots,e_{r}\text{ with }(e_{1}, \ldots,e_{r})\in D\}.\]
Define \(p_{i}:D_{r-i+1}\to D_{r-i}\) by \(p_{i}(e_{1},\ldots,e_{r-i+1})=(e_{1},\ldots,e_{r-i})\). Fix \(0<\varepsilon<\frac{1}{m_{r}}\) and take a natural number \(n\) with \(m_{1}^{-n}<\varepsilon\). Fix a natural number \(N\) and let \(\psi_{i}:D_{r-i+1}^{N+n}\to D_{r-i}^{N+n}\) be the product map of \(p_{i}\), i.e., \(\psi_{i}(v_{1},\ldots,v_{N+n})=(p_{i}(v_{1}),\ldots,p_{i}(v_{N+n}))\).
For \(x\in D_{r-i+1}^{N+n}\), define (recall that \(A_{i}=\operatorname{diag}(m_{1},m_{2},\ldots,m_{r-i+1})\))
\[U_{x}^{(i)}=\left\{\sum_{k=0}^{\infty}A_{i}^{-k}e_{k}\in X_{i}\middle|e_{k} \in D_{r-i+1}\text{ for each }k\text{ and }(e_{1},\ldots,e_{N+n})=x\right\}.\]
Then \(\{U_{x}^{(i)}\}_{x\in D_{r-i+1}^{N+n}}\) is a closed cover of \(X_{i}\) with \(\operatorname{diam}(U_{x}^{(i)},d_{N}^{(i)})<\varepsilon\). For \(x,y\in D_{r-i+1}^{N+n}\), we write \(x\backsim y\) if and only if \(U_{x}^{(i)}\cap U_{y}^{(i)}\neq\varnothing\). We have for any \(i\) and \(x\in D_{r-i}^{N+n}\)
\[\pi_{i}^{-1}(U_{x}^{(i+1)})\subset\bigcup_{\begin{subarray}{c}x^{\prime}\in D _{r-i}^{N+n}\\ x^{\prime}\backsim x\end{subarray}}\bigcup_{y\in\psi_{i}^{-1}(x^{\prime})}U_{y }^{(i)}.\]
Notice that for each \(x\in D_{r-i}^{N+n}\), the number of \(x^{\prime}\in D_{r-i}^{N+n}\) with \(x^{\prime}\backsim x\) is not more than \(3^{r}\). Therefore, for every \(v=(v_{1}^{(1)},\ldots,v_{N+n}^{(1)})\in D_{r-1}^{N+n}\), there are \((v_{1}^{(k)},\ldots,v_{N+n}^{(k)})\in D_{r-1}^{N+n}\), \(k=2,3,\ldots,L\), and \(L\leq 3^{r}\), with
\[\#_{1}^{\boldsymbol{a}}(\pi_{1}^{-1}(U_{v}^{(2)}),\,N,\,\varepsilon)\leq\sum_ {k=1}^{L}Z_{r-1}(v_{1}^{(k)})\cdots Z_{r-1}(v_{N+n}^{(k)}).\]
We inductively continue while considering that the multiplicity is at most \(3^{r}\) and obtain
\[\#_{r}^{\boldsymbol{a}}(X_{r},\,N,\,\varepsilon)\leq 3^{r(r-1)}\sum_{x_{1}\in D_{1}^{N+n}}\Bigg{(}\sum_{x_{2}\in\psi_{2}^{-1}(x_{1})}\bigg{(}\cdots\Big{(}\sum_{x_{r-2}\in\psi_{r-2}^{-1}(x_{r-3})}\Big{(}\sum_{\begin{subarray}{c}(v_{1},\ldots,v_{N+n})\in\psi_{r-1}^{-1}(x_{r-2})\\ v_{j}\in D_{r-1}\text{ for each }j\end{subarray}}\big{(}Z_{r-1}(v_{1})\cdots Z_{r-1}(v_{N+n})\big{)}^{a_{1}}\Big{)}^{a_{2}}\Big{)}^{a_{3}}\cdots\bigg{)}^{a_{r-2}}\Bigg{)}^{a_{r-1}}\] \[=3^{r(r-1)}\left\{\sum_{x_{1}\in D_{1}}\left(\sum_{x_{2}\in p_{2}^{-1}(x_{1})}\left(\cdots\left(\sum_{x_{r-1}\in p_{r-1}^{-1}(x_{r-2})}Z_{r-1}(x_{1},\ldots,x_{r-1})^{a_{1}}\right)^{a_{2}}\cdots\right)^{a_{r-2}}\right)^{a_{r-1}}\right\}^{N+n}=3^{r(r-1)}Z_{0}^{\,N+n}.\]
Therefore,
\[h^{\boldsymbol{a}}(\boldsymbol{T})=\lim_{\varepsilon\to 0}\left(\lim_{N\to \infty}\frac{\log\#_{r}^{\boldsymbol{a}}(X_{r},\,N,\,\varepsilon)}{N}\right) \leq\log Z_{0}.\]
Next, we prove \(h^{\boldsymbol{a}}(\boldsymbol{T})\geq\log Z_{0}\). We fix \(0<\varepsilon<\frac{1}{m_{r}}\) and utilize \(\varepsilon\)-separated sets. Take and fix \(\boldsymbol{s}=(t_{1},\ldots,t_{r})\in D\), and set \(\boldsymbol{s}_{i}=(t_{1},\ldots,t_{r-i+1})\). Fix a natural number \(N\) and let \(\psi_{i}:D_{r-i+1}^{N}\to D_{r-i}^{N}\) be the product map of \(p_{i}\) as in the previous definition. Define
\[Q_{i}=\left\{\sum_{k=1}^{N}{A_{i}}^{-k}e_{k}+\sum_{k=N+1}^{\infty}{A_{i}}^{-k} \boldsymbol{s}_{i}\in X_{i}\right|e_{1},\ldots,e_{N}\in D_{r-i+1}\right\}.\]
Then \(Q_{i}\) is an \(\varepsilon\)-separated set with respect to the metric \(d_{N}^{(i)}\) on \(X_{i}\). Consider an arbitrary open cover \(\mathscr{F}^{(i)}\) of \(X_{i}\) for each \(i\) with the following properties (such a family \((\mathscr{F}^{(i)})_{i}\) is called **a chain of open (\(N\), \(\varepsilon\))-covers** of \((X_{i})_{i}\) in Definition 3.1).
1. For every \(i\) and \(V\in\mathscr{F}^{(i)}\), we have \(\operatorname{diam}(V,d_{N}^{(i)})<\varepsilon\).
2. For each \(1\leq i\leq r-1\) and \(U\in\mathscr{F}^{(i+1)}\), there is \(\mathscr{F}^{(i)}(U)\subset\mathscr{F}^{(i)}\) such that \[\pi_{i}^{-1}(U)\subset\bigcup\mathscr{F}^{(i)}(U)\] and \[\mathscr{F}^{(i)}=\bigcup_{U\in\mathscr{F}^{(i+1)}}\mathscr{F}^{(i)}(U).\]
We have \(\#(V\cap Q_{i})\leq 1\) for each \(V\in\mathscr{F}^{(i)}\) by (1). Let \((e_{1}^{(2)},e_{2}^{(2)},\cdots,e_{N}^{(2)})\in D_{r-1}^{N}\) and suppose \(U\in\mathscr{F}^{(2)}\) satisfies
\[\sum_{k=1}^{N}{A_{2}}^{-k}e_{k}^{(2)}+\sum_{k=N+1}^{\infty}{A_{2}}^{-k} \boldsymbol{s}_{2}\in U\cap Q_{2}.\]
Then \(\pi_{1}^{-1}(U)\) contains at least \(Z_{r-1}(e_{1}^{(2)})\cdots Z_{r-1}(e_{N}^{(2)})\) points of \(Q_{1}\). Hence,
\[\#_{1}^{\boldsymbol{a}}(\pi_{1}^{-1}(U),\,N,\,\varepsilon)\geq Z_{r-1}(e_{1}^ {(2)})\cdots Z_{r-1}(e_{N}^{(2)}).\]
We continue this reasoning inductively and get
\(\#_{r}^{\boldsymbol{a}}(X_{r},\,N,\varepsilon)\)
\[\geq\sum_{e^{(1)}\in D_{1}^{N}}\Bigg{(}\sum_{e^{(2)}\in\psi_{2}^{-1}(e^{(1)})}\bigg{(}\cdots\Big{(}\sum_{e^{(r-2)}\in\psi_{r-2}^{-1}(e^{(r-3)})}\Big{(}\sum_{\begin{subarray}{c}(e_{1}^{(2)},\ldots,e_{N}^{(2)})\in\psi_{r-1}^{-1}(e^{(r-2)})\\ e_{j}^{(2)}\in D_{r-1}\text{ for each }j\end{subarray}}\big{(}Z_{r-1}(e_{1}^{(2)})\cdots Z_{r-1}(e_{N}^{(2)})\big{)}^{a_{1}}\Big{)}^{a_{2}}\Big{)}^{a_{3}}\cdots\bigg{)}^{a_{r-2}}\Bigg{)}^{a_{r-1}}\] \[=\left\{\sum_{x_{1}\in D_{1}}\left(\sum_{x_{2}\in p_{2}^{-1}(x_{1})}\left(\cdots\left(\sum_{x_{r-1}\in p_{r-1}^{-1}(x_{r-2})}Z_{r-1}(x_{1},\ldots,x_{r-1})^{a_{1}}\right)^{a_{2}}\cdots\right)^{a_{r-2}}\right)^{a_{r-1}}\right\}^{N}\] \[=Z_{0}^{\,N}.\]
This implies
\[h^{\boldsymbol{a}}(\boldsymbol{T})\geq\log Z_{0}.\]
We conclude that
\[h^{\boldsymbol{a}}(\boldsymbol{T})=\log Z_{0}.\]
We would like to mention the work of Barral and Feng [1, 2], and of Yayama [13]. These papers independently studied the related invariants when \((X,T)\) and \((Y,S)\) are subshifts over finite alphabets.
## 2. Weighted topological pressure
Here, we introduce the generalized, new definition of weighted topological pressure. Let \((X_{i},T_{i})\)\((i=1,\,2,\,\ldots,\,r)\) be dynamical systems and \(\pi_{i}:X_{i}\to X_{i+1}\)\((i=1,\,2,\,\ldots,\,r-1)\) factor maps. For a continuous function \(f:X_{1}\to\mathbb{R}\) and a natural number \(N\), set
\[S_{N}f(x)=f(x)+f(T_{1}x)+f(T_{1}^{2}x)+\cdots+f(T_{1}^{N-1}x).\]
Let \(d^{(i)}\) be a metric on \(X_{i}\). Recall that we defined a new metric \(d^{(i)}_{N}\) on \(X_{i}\) by
\[d^{(i)}_{N}(x_{1},\,x_{2})=\max_{0\leq n<N}d^{(i)}(T_{i}^{\,n}x_{1},T_{i}^{\, n}x_{2}).\]
We may write these as \(S_{N}^{T_{1}}f\) or \(d^{T_{i}}_{N}\) to clarify the maps \(T_{1}\) and \(T_{i}\) in the definitions above.
Let \(\boldsymbol{a}=(a_{1},\,a_{2},\cdots,a_{r-1})\) with \(0\leq a_{i}\leq 1\) for each \(i\) and \(\varepsilon\) a positive number. For \(\Omega\subset X_{1}\), we define
\(P_{1}^{\mathbf{a}}(\Omega,\,f,\,N,\,\varepsilon)\)
\[=\inf\left\{\sum_{j=1}^{n}\exp\left(\sup_{U_{j}}S_{N}f\right)\Bigg{|}\begin{array} []{l}n\in\mathbb{N},\,\{U_{j}\}_{j=1}^{n}\text{ is an open cover of }\Omega\\ \text{with diam}(U_{j},d_{N}^{T_{1}})<\varepsilon\text{ for all }1\leq j\leq n \end{array}\right\}.\]
(Letting \(\Omega=X_{1}\) and taking the limits in \(N\) and \(\varepsilon\) below, this recovers the standard topological pressure \(P(f)\) on \((X_{1},T_{1})\). The topological entropy \(h_{\text{top}}(T_{1})\) is the value of \(P(f)\) when \(f\equiv 0\).) Let \(\Omega\subset X_{i+1}\). If \(P_{i}^{\mathbf{a}}\) is already defined, let
\[P_{i+1}^{\mathbf{a}}(\Omega,\,f,\,N,\,\varepsilon)\] \[=\inf\left\{\sum_{j=1}^{n}\left(P_{i}^{\mathbf{a}}(\pi_{i}^{-1}(U_{j} ),\,f,\,N,\,\varepsilon)\right)^{a_{i}}\Bigg{|}\begin{array}{l}n\in\mathbb{ N},\,\{U_{j}\}_{j=1}^{n}\text{ is an open cover of }\Omega\\ \text{with diam}(U_{j},d_{N}^{T_{i+1}})<\varepsilon\text{ for all }1\leq j \leq n\end{array}\right\}.\]
We define the **topological pressure of \(\mathbf{a}\)-exponent \(P^{\mathbf{a}}(f)\)** by
\[P^{\mathbf{a}}(f)=\lim_{\varepsilon\to 0}\left(\lim_{N\to\infty}\frac{\log P_{r}^{ \mathbf{a}}(X_{r},\,f,\,N,\,\varepsilon)}{N}\right).\]
This limit exists since \(\log P_{r}^{\mathbf{a}}(X_{r},\,f,\,N,\,\varepsilon)\) is sub-additive in \(N\) and non-decreasing as \(\varepsilon\) tends to \(0\). When we want to clarify the maps \(T_{i}\) and \(\pi_{i}\) used in the definition of \(P^{\mathbf{a}}(f)\), we will denote it by \(P^{\mathbf{a}}(f,\,\mathbf{T})\) or \(P^{\mathbf{a}}(f,\,\mathbf{T},\,\mathbf{\pi})\) with \(\mathbf{T}=(T_{i})_{i=1}^{r}\) and \(\mathbf{\pi}=(\pi_{i})_{i=1}^{r-1}\).
From \(\mathbf{a}=(a_{1},\,a_{2},\,\cdots,a_{r-1})\), we define a probability vector (i.e., all entries are non-negative, and their sum is \(1\)) \(\mathbf{w_{a}}=(w_{1},\,\cdots,\,w_{r})\) by
\[\left\{\begin{array}{l}w_{1}=a_{1}a_{2}a_{3}\cdots a_{r-1}\\ w_{2}=(1-a_{1})a_{2}a_{3}\cdots a_{r-1}\\ w_{3}=(1-a_{2})a_{3}\cdots a_{r-1}\\ \vdots\\ w_{r-1}=(1-a_{r-2})a_{r-1}\\ w_{r}=1-a_{r-1}\end{array}\right.\,. \tag{2.1}\]
Let
\[\pi^{(0)}=\text{id}_{X_{1}}:X_{1}\to X_{1},\] \[\pi^{(i)}=\pi_{i}\circ\pi_{i-1}\circ\cdots\circ\pi_{1}:X_{1}\to X _{i+1}.\]
We can now state the main result of this paper.
**Theorem 2.1**.: _Let \((X_{i},T_{i})\) (\(i=1,2,\,\ldots,r\)) be dynamical systems and \(\pi_{i}:X_{i}\to X_{i+1}\)\((i=1,2,\,...,\,r-1)\) factor maps. For any continuous function \(f:X_{1}\to\mathbb{R}\),_
\[P^{\mathbf{a}}(f)=\sup_{\mu\in\mathscr{M}^{T_{1}}(X_{1})}\left(\sum_{i=1}^{r}w_{i}h_{\pi^{(i-1)}{}_{*}\mu}(T_{i})+w_{1}\int_{X_{1}}fd\mu\right). \tag{2.2}\]
We define \(P^{\boldsymbol{a}}_{\text{var}}(f)\) to be the right-hand side of this equation. Then we need to prove
\[P^{\boldsymbol{a}}(f)=P^{\boldsymbol{a}}_{\text{var}}(f).\]
## 3. Preparation
### Basic properties and tools
Let \((X_{i},T_{i})\)\((i=1,2,\ldots,r)\) be dynamical systems, \(\pi_{i}:X_{i}\to X_{i+1}\)\((i=1,2,\ldots,r-1)\) factor maps, \(\boldsymbol{a}=(a_{1},\cdots,a_{r-1})\in[0,1]^{r-1}\), and \(f:X_{1}\to\mathbb{R}\) a continuous function.
We will use the following notions in sections 3.3 and 5.
**Definition 3.1**.: Consider a cover \(\mathscr{F}^{(i)}\) of \(X_{i}\) for each \(i\). For a natural number \(N\) and a positive number \(\varepsilon\), the family \((\mathscr{F}^{(i)})_{i}\) is said to be **a chain of (\(\boldsymbol{N}\), \(\boldsymbol{\varepsilon}\))-covers** of \((X_{i})_{i}\) if the following conditions are true:
1. For every \(i\) and \(V\in\mathscr{F}^{(i)}\), we have \(\text{diam}(V,d^{(i)}_{N})<\varepsilon\).
2. For each \(1\leq i\leq r-1\) and \(U\in\mathscr{F}^{(i+1)}\), there is \(\mathscr{F}^{(i)}(U)\subset\mathscr{F}^{(i)}\) such that \[\pi_{i}^{-1}(U)\subset\bigcup\mathscr{F}^{(i)}(U)\] and \[\mathscr{F}^{(i)}=\bigcup_{U\in\mathscr{F}^{(i+1)}}\mathscr{F}^{(i)}(U).\]
Moreover, if all the elements of each \(\mathscr{F}^{(i)}\) are open/closed/compact, we call \((\mathscr{F}^{(i)})_{i}\)**a chain of open/closed/compact (\(\boldsymbol{N}\), \(\boldsymbol{\varepsilon}\))-covers** of \((X_{i})_{i}\).
**Remark 3.2**.: Note that we can rewrite \(P^{\boldsymbol{a}}_{r}(X_{r},\,f,\,N,\,\varepsilon)\) using chains of open covers as follows. For a chain of \((N,\,\varepsilon)\)-covers \((\mathscr{F}^{(i)})_{i}\) of \((X_{i})_{i}\), let
\[\mathscr{P}^{\boldsymbol{a}}\left(f,\,N,\,\varepsilon,\,(\mathscr{F}^{(i)})_ {i}\right)\]
\[=\sum_{U^{(r)}\in\mathscr{F}^{(r)}}\left(\sum_{U^{(r-1)}\in\mathscr{F}^{(r-1 )}(U^{(r)})}\left(\cdots\left(\sum_{U^{(1)}\in\mathscr{F}^{(1)}(U^{(2)})}e^{ \sup_{U^{(1)}}S_{N}f}\right)^{a_{1}}\cdots\right)^{a_{r-2}}\right)^{a_{r-1}}.\]
Then
\[P^{\boldsymbol{a}}_{r}(X_{r},\,f,\,N,\,\varepsilon)\]
\[=\inf\left\{\mathscr{P}^{\boldsymbol{a}}\left(f,\,N,\,\varepsilon,\,(\mathscr{ F}^{(i)})_{i}\right)\right|(\mathscr{F}^{(i)})_{i}\text{ is a chain of open $(N,\,\varepsilon)$-covers of $(X_{i})_{i}$ }\right\}.\]
Just like the classic notion of pressure, we have the following property.
**Lemma 3.3**.: _For any natural number \(m\),_
\[P^{\boldsymbol{a}}(S_{m}^{T_{1}}f,\boldsymbol{T}^{m})=mP^{\boldsymbol{a}}(f, \boldsymbol{T}),\]
_where \(\boldsymbol{T}^{m}=(T_{i}^{\,m})_{i=1}^{r}\)._
Proof.: Fix \(\varepsilon>0\). It is obvious from the definition of \(P_{1}^{\boldsymbol{a}}\) that for any \(\Omega_{1}\subset X_{1}\) and any natural number \(N\),
\[P_{1}^{\boldsymbol{a}}(\Omega_{1},\,S_{m}^{T_{1}}f,\,\boldsymbol{T}^{m},\,N, \,\varepsilon)\leq P_{1}^{\boldsymbol{a}}(\Omega_{1},\,f,\boldsymbol{T},\, mN,\,\varepsilon).\]
Let \(\Omega_{i+1}\subset X_{i+1}\). By induction on \(i\), we have
\[P_{i+1}^{\boldsymbol{a}}(\Omega_{i+1},\,S_{m}^{T_{1}}f,\boldsymbol{T}^{m},N,\,\varepsilon)\leq P_{i+1}^{\boldsymbol{a}}(\Omega_{i+1},\,f,\boldsymbol{T},\,mN,\,\varepsilon).\]
Thus,
\[P_{r}^{\boldsymbol{a}}(S_{m}^{T_{1}}f,\boldsymbol{T}^{m},N,\, \varepsilon)\leq P_{r}^{\boldsymbol{a}}(f,\boldsymbol{T},\,mN,\,\varepsilon). \tag{3.1}\]
There exists \(0<\delta<\varepsilon\) such that for any \(1\leq i\leq r\),
\[d^{(i)}(x,y)<\delta\implies d_{m}^{T_{i}}(x,y)<\varepsilon\qquad(\text{for }x,y\in X_{i}).\]
Then
\[d_{N}^{T_{i}^{m}}(x,y)<\delta\implies d_{mN}^{T_{i}}(x,y)<\varepsilon\quad( \text{for }x,\,y\in X_{i}\text{ and }\,1\leq i\leq r). \tag{3.2}\]
Let \(i=1\) in (3.2), then we have for any \(\Omega_{1}\subset X_{1}\),
\[P_{1}^{\boldsymbol{a}}(\Omega_{1},\,f,\boldsymbol{T},\,mN,\, \varepsilon)\leq P_{1}^{\boldsymbol{a}}(\Omega_{1},\,S_{m}^{T_{1}}f, \boldsymbol{T}^{m},N,\,\delta).\]
Take \(\Omega_{i+1}\subset X_{i+1}\). Again by induction on \(i\) and by (3.2), we have
\[P_{i+1}^{\boldsymbol{a}}(\Omega_{i+1},\,f,\boldsymbol{T},\,mN,\,\varepsilon)\leq P_{i+1}^{\boldsymbol{a}}(\Omega_{i+1},\,S_{m}^{T_{1}}f,\boldsymbol{T}^{m},N,\,\delta).\]
Hence,
\[P_{r}^{\boldsymbol{a}}(f,\boldsymbol{T},\,mN,\,\varepsilon)\leq P_{r}^{ \boldsymbol{a}}(S_{m}^{T_{1}}f,\boldsymbol{T}^{m},N,\,\delta).\]
Combining with (3.1) we have
\[P_{r}^{\boldsymbol{a}}(S_{m}^{T_{1}}f,\boldsymbol{T}^{m},N,\, \varepsilon)\leq P_{r}^{\boldsymbol{a}}(f,\boldsymbol{T},\,mN,\,\varepsilon) \leq P_{r}^{\boldsymbol{a}}(S_{m}^{T_{1}}f,\boldsymbol{T}^{m},N,\,\delta).\]
Therefore,
\[P^{\boldsymbol{a}}(S_{m}^{T_{1}}f,\boldsymbol{T}^{m})=mP^{\boldsymbol{a}}(f, \boldsymbol{T}).\]
We will later use the following standard lemma of calculus.
**Lemma 3.4**.:
1. _For_ \(0\leq a\leq 1\) _and non-negative numbers_ \(x,y\)_,_ \[(x+y)^{a}\leq x^{a}+y^{a}.\] 2. _Suppose that non-negative real numbers_ \(p_{1},p_{2},\ldots,p_{n}\) _satisfy_ \(\sum_{i=1}^{n}p_{i}=1\)_. Then for any real numbers_ \(x_{1},x_{2},\ldots,x_{n}\) _we have_ \[\sum_{i=1}^{n}\left(-p_{i}\log p_{i}+x_{i}p_{i}\right)\leq\log\sum_{i=1}^{n}e^{ x_{i}}.\] _In particular, letting_ \(x_{1}=x_{2}=\cdots=x_{n}=0\) _gives_ \[\sum_{i=1}^{n}(-p_{i}\log p_{i})\leq\log n.\] _Here,_ \(0\cdot\log 0\) _is defined as_ \(0\)_._
The proof for (1) is elementary. See [22, §9.3, Lemma 9.9] for (2).
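Part (2) is the Gibbs/variational inequality for \(\log\sum_{i}e^{x_{i}}\); here is a quick numerical check (ours, with random data), including the equality case \(p_{i}=e^{x_{i}}/\sum_{j}e^{x_{j}}\).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5)
p = rng.random(5); p /= p.sum()                  # a probability vector

lhs = np.sum(-p * np.log(p) + x * p)
rhs = np.log(np.exp(x).sum())
gibbs = np.exp(x) / np.exp(x).sum()              # maximizing weights
lhs_max = np.sum(-gibbs * np.log(gibbs) + x * gibbs)

print(lhs <= rhs, np.isclose(lhs_max, rhs))      # True True
```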
### Measure theoretic entropy
In this subsection, we will introduce the classical measure-theoretic entropy (a.k.a. Kolmogorov-Sinai entropy) and state some of the basic lemmas we need to prove Theorem 2.1. The main reference is the book of Walters [22].
Let \((X,T)\) be a dynamical system and \(\mu\in\mathscr{M}^{T}(X)\). A set \(\mathscr{A}=\{A_{1},\ldots,A_{n}\}\) is called a finite partition of X with measurable elements if \(X=A_{1}\cup\cdots\cup A_{n}\), each \(A_{i}\) is a measurable set, and \(A_{i}\cap A_{j}=\varnothing\) for \(i\neq j\). In this paper, a partition is always finite and consists of measurable elements.
Let \(\mathscr{A}\) and \(\mathscr{A}^{\prime}\) be partitions of \(X\). We define a new partition \(\mathscr{A}\vee\mathscr{A}^{\prime}\) by
\[\mathscr{A}\vee\mathscr{A}^{\prime}=\left\{A\cap A^{\prime}\,|\,A\in\mathscr{ A}\text{ and }A^{\prime}\in\mathscr{A}^{\prime}\right\}.\]
For a natural number \(N\), we define a refined partition \(\mathscr{A}_{N}\) of \(\mathscr{A}\) by
\[\mathscr{A}_{N}=\mathscr{A}\lor T^{-1}\mathscr{A}\lor T^{-2}\mathscr{A}\vee \cdots\lor T^{-(N-1)}\mathscr{A},\]
where \(T^{-i}\mathscr{A}=\{T^{-i}(A)\,|\,A\in\mathscr{A}\}\) is a partition for \(i\in\mathbb{N}\).
For a partition \(\mathscr{A}\) of \(X\), let
\[H_{\mu}(\mathscr{A})=-\sum_{A\in\mathscr{A}}\mu(A)\log\left(\mu(A)\right).\]
We set
\[h_{\mu}(T,\mathscr{A})=\lim_{N\to\infty}\frac{H_{\mu}(\mathscr{A}_{N})}{N}.\]
This limit exists since \(H_{\mu}(\mathscr{A}_{N})\) is sub-additive in \(N\). The **measure-theoretic entropy** \(h_{\mu}(T)\) is defined by
\[h_{\mu}(T)=\sup\left\{h_{\mu}(T,\mathscr{A})\,|\,\mathscr{A}\text{ is a partition of }X\right\}.\]
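As a concrete instance (ours): for a Bernoulli shift with the time-zero partition \(\mathscr{A}\), the refinement \(\mathscr{A}_{N}\) consists of cylinders of length \(N\), and \(H_{\mu}(\mathscr{A}_{N})=N\,H_{\mu}(\mathscr{A})\), so \(H_{\mu}(\mathscr{A}_{N})/N\) is constant in \(N\).

```python
import numpy as np

p = np.array([0.2, 0.3, 0.5])            # Bernoulli weights on 3 symbols
H1 = -(p * np.log(p)).sum()              # H_mu(A) for the time-zero partition

N = 4
probs = p.copy()                         # cylinder measures of length-N words
for _ in range(N - 1):
    probs = np.outer(probs, p).ravel()
H_N = -(probs * np.log(probs)).sum()     # H_mu(A_N)

print(H_N / N, H1)                       # equal: h_mu(T, A) = -sum p_i log p_i
```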
Let \(\mathscr{A}\) and \(\mathscr{A}^{\prime}\) be partitions. Their **conditional entropy** is defined by
\[H_{\mu}(\mathscr{A}|\mathscr{A}^{\prime})=-\sum_{\begin{subarray}{c}A^{\prime} \in\mathscr{A}^{\prime}\\ \mu(A^{\prime})\neq 0\end{subarray}}\mu(A^{\prime})\sum_{A\in\mathscr{A}} \frac{\mu(A\cap A^{\prime})}{\mu(A^{\prime})}\log\bigg{(}\frac{\mu(A\cap A^{ \prime})}{\mu(A^{\prime})}\bigg{)}.\]
**Lemma 3.5**.:
1. \(H_{\mu}(\mathscr{A})\) _is sub-additive in_ \(\mathscr{A}\)_: i.e., for partitions_ \(\mathscr{A}\) _and_ \(\mathscr{A}^{\prime}\)_,_ \[H_{\mu}(\mathscr{A}\vee\mathscr{A}^{\prime})\leq H_{\mu}(\mathscr{A})+H_{\mu}(\mathscr{A}^{\prime}).\] 2. \(H_{\mu}(\mathscr{A})\) _is concave in_ \(\mu\)_: i.e., for_ \(\mu,\nu\in\mathscr{M}^{T}(X)\) _and_ \(0\leq t\leq 1\)_,_ \[H_{(1-t)\mu+t\nu}(\mathscr{A})\geq(1-t)H_{\mu}(\mathscr{A})+tH_{\nu}(\mathscr{A}).\] 3. _For partitions_ \(\mathscr{A}\) _and_ \(\mathscr{A}^{\prime}\)_,_ \[h_{\mu}(T,\mathscr{A})\leq h_{\mu}(T,\mathscr{A}^{\prime})+H_{\mu}(\mathscr{A}|\mathscr{A}^{\prime}).\]
For the proofs, see [22, Theorem 4.3 (viii), §4.5] for (1), [22, Remark in §8.1] for (2), and [22, Theorem 4.12, §4.5] for (3).
### Zero-dimensional principal extension
Here we will see how we can reduce the proof of \(P^{\boldsymbol{a}}(f)\leq P^{\boldsymbol{a}}_{\mathrm{var}}(f)\) to the case where all dynamical systems are zero-dimensional.
First, we review the definitions and properties of (zero-dimensional) principal extension. The introduction here closely follows Tsukamoto's paper [21] and references the book of Downarowicz [15]. Suppose \(\pi:(Y,S)\to(X,\,T)\) is a factor map between dynamical systems. Let \(d\) be a metric on \(Y\). We define the **conditional topological entropy** of \(\pi\) by
\[h_{\mathrm{top}}(Y,S\,|\,X,T)=\lim_{\varepsilon\to 0}\left(\lim_{N\to \infty}\frac{\sup_{x\in X}\log\#(\pi^{-1}(x),N,\varepsilon)}{N}\right).\]
Here,
\[\#(\pi^{-1}(x),\,N,\varepsilon)=\min\left\{n\in\mathbb{N}\middle|\begin{array} []{l}\text{There exists an open cover $\{U_{j}\}_{j=1}^{n}$ of $\pi^{-1}(x)$}\\ \text{with $\mathrm{diam}(U_{j},\,d_{N})<\varepsilon$ for all $1\leq j \leq n$}\end{array}\right\}.\]
A factor map \(\pi:(Y,S)\to(X,\,T)\) between dynamical systems is said to be a **principal factor map** if
\[h_{\mathrm{top}}(Y,S\,|\,X,T)=0.\]
Also, \((Y,S)\) is called a **principal extension** of \((X,T)\).
The following theorem is from [15, Corollary 6.8.9].
**Theorem 3.6**.: _Suppose \(\pi:(Y,S)\to(X,\,T)\) is a principal factor map. Then \(\pi\) preserves measure-theoretic entropy, namely,_
\[h_{\mu}(S)=h_{\pi_{*}\mu}(T)\]
_for any \(S\)-invariant probability measure \(\mu\) on Y._
More precisely, it is proved in [15, Corollary 6.8.9] that \(\pi\) is a principal factor map if and only if it preserves measure-theoretic entropy.
Suppose \(\pi:(X_{1},T_{1})\to(X_{2},T_{2})\) and \(\phi:(Y,S)\to(X_{2},T_{2})\) are factor maps between dynamical systems. We define a fiber product \((X_{1}\times_{X_{2}}Y,T_{1}\times S)\) of \((X_{1},T_{1})\) and \((Y,S)\) over \((X_{2},T_{2})\) by
\[X_{1}\times_{X_{2}}Y=\left\{(x,y)\in X_{1}\times Y|\,\pi(x)=\phi(y)\right\},\]
\[T_{1}\times S:X_{1}\times_{X_{2}}Y\ni(x,y)\longmapsto(T_{1}(x),S(y))\in X_{1} \times_{X_{2}}Y.\]
We have the following commutative diagram:
(3.3)
Here \(\pi^{\prime}\) and \(\psi\) are restrictions of the projections onto \(Y\) and \(X_{1}\), respectively:
\[\pi^{\prime}:X_{1}\times_{X_{2}}Y\ni(x,y)\longmapsto y\in Y,\]
\[\psi:X_{1}\times_{X_{2}}Y\ni(x,y)\longmapsto x\in X_{1}.\]
Since \(\pi\) and \(\phi\) are surjective, both \(\pi^{\prime}\) and \(\psi\) are factor maps.
**Lemma 3.7**.: _If \(\phi\) is a principal extension in the diagram (3.3), then \(\psi\) is also a principal extension._
Proof.: Let \(d^{1}\) and \(d^{Y}\) be metrics on \(X_{1}\) and \(Y\), respectively. Define a metric \(\widetilde{d}\) on \(X_{1}\times_{X_{2}}Y\) by
\[\widetilde{d}\big{(}(x,y),(x^{\prime},y^{\prime})\big{)}=\max\,\{d^{1}(x,x^{ \prime}),d^{Y}(y,y^{\prime})\}.\]
Let \(x\in X_{1}\). We have
\[\psi^{-1}(x)=\{x\}\times\{y\in Y|\,\pi(x)=\phi(y)\}=\{x\}\times\phi^{-1}(\pi( x)),\]
which in turn implies \(\widetilde{d}|_{\psi^{-1}(x)}=d^{Y}|_{\phi^{-1}(\pi(x))}\). Then the metric space \((\psi^{-1}(x),\widetilde{d}_{N})\) is isometric to \((\phi^{-1}(\pi(x)),d^{Y}_{N})\) for any natural number \(N\). Therefore for any \(\varepsilon>0\),
\[\#(\psi^{-1}(x),N,\varepsilon)=\#(\phi^{-1}(\pi(x)),N,\varepsilon).\]
Since \(\pi\) is surjective,
\[\sup_{x\in X_{1}}\#(\psi^{-1}(x),N,\varepsilon)=\sup_{x\in X_{1}}\#(\phi^{-1}(\pi(x)),N,\varepsilon)=\sup_{y\in X_{2}}\#(\phi^{-1}(y),N,\varepsilon).\]
Hence,
\[h_{\text{top}}(X_{1}\times_{X_{2}}Y,T_{1}\times S|\,X_{1},T_{1})=h_{\text{top} }(Y,S|\,X_{2},T_{2})=0.\]
A dynamical system \((Y,S)\) is said to be **zero-dimensional** if there is a clopen basis of the topology of \(Y\), where clopen means any element in the basis is both closed and open. A basic example of a zero-dimensional dynamical system is the Cantor set \(\{0,1\}^{\mathbb{N}}\) with the shift map.
A principal extension \((Y,S)\) of \((X,T)\) is called a **zero-dimensional principal extension** if \((Y,S)\) is zero-dimensional. The following important theorem can be found in [15, Theorem 7.6.1].
**Theorem 3.8**.: _For any dynamical system, there is a zero-dimensional principal extension._
Let \((Y_{i},R_{i})\) (\(i=1,\,2,\,\dots,\,m\)) be dynamical systems, \(\pi_{i}:Y_{i}\to Y_{i+1}\) (\(i=1,\,2,\,...,\,m-1\)) factor maps, and \(\boldsymbol{a}=(a_{1},\cdots,a_{m-1})\in[0,1]^{m-1}\). Fix \(2\leq k\leq m-1\) and take a zero-dimensional principal extension \(\phi_{k}:(Z_{k},S_{k})\to(Y_{k},R_{k})\). For each \(1\leq i\leq k-1\), let \((Y_{i}\times_{Y_{k}}Z_{k},R_{i}\times S_{k})\) be the fiber product and \(\phi_{i}:Y_{i}\times_{Y_{k}}Z_{k}\to Y_{i}\) be the restriction of the projection as in the earlier definition.
By Lemma 3.7, \(\phi_{i}\) is a principal factor map. We define \(\Pi_{i}:Y_{i}\times_{Y_{k}}Z_{k}\to Y_{i+1}\times_{Y_{k}}Z_{k}\) by \(\Pi_{i}(x,y)=(\pi_{i}(x),y)\) for each \(i\). Then we have the following commutative diagram:
(3.4)
Let
\[(Z_{i},S_{i})=(Y_{i}\times_{Y_{k}}Z_{k},R_{i}\times S_{k})\text{ for }1\leq i\leq k -1,\ \ (Z_{i},S_{i})=(Y_{i},R_{i})\text{ for }k+1\leq i\leq m,\]
\[\Pi_{k}=\pi_{k}\circ\phi_{k}:Z_{k}\to Y_{k+1},\ \ \Pi_{i}=\pi_{i}:Z_{i}\to Z_{i+1}\text{ for }k+1\leq i\leq m-1,\]
\[\phi_{i}=\operatorname{id}_{Z_{i}}:Z_{i}\to Z_{i}\text{ for }k+1\leq i\leq m.\]
**Lemma 3.9**.: _In the settings above,_
\[P^{\boldsymbol{a}}_{\operatorname{var}}(f,\boldsymbol{R},\boldsymbol{\pi}) \geq P^{\boldsymbol{a}}_{\operatorname{var}}(f\circ\phi_{1},\boldsymbol{S}, \boldsymbol{\Pi})\]
_and_
\[P^{\boldsymbol{a}}(f,\boldsymbol{R},\boldsymbol{\pi})\leq P^{\boldsymbol{a}} (f\circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi}).\]
_Here, \(\boldsymbol{R}=(R_{i})_{i}\), \(\boldsymbol{\pi}=(\pi_{i})_{i}\), \(\boldsymbol{S}=(S_{i})_{i}\) and \(\boldsymbol{\Pi}=(\Pi_{i})_{i}\)._
Proof.: We remark that the following proof does not require \(Z_{k}\) to be zero-dimensional. Let
\[\pi^{(0)}=\operatorname{id}_{Y_{1}}:Y_{1}\to Y_{1},\]
\[\pi^{(i)}=\pi_{i}\circ\pi_{i-1}\circ\cdots\circ\pi_{1}:Y_{1}\to Y_{i+1}\]
and
\[\Pi^{(0)}=\operatorname{id}_{Z_{1}}:Z_{1}\to Z_{1},\]
\[\Pi^{(i)}=\Pi_{i}\circ\Pi_{i-1}\circ\cdots\circ\Pi_{1}:Z_{1}\to Z_{i+1}.\]
Let \(\nu\in\mathscr{M}^{S_{1}}(Z_{1})\) and \(1\leq i\leq m\). Since all the horizontal maps in (3.4) are principal factor maps, we have
\[h_{\Pi^{(i-1)}{}_{*}\nu}(S_{i})=h_{(\phi_{i})_{*}\Pi^{(i-1)}{}_{*}\nu}(R_{i})= h_{\pi^{(i-1)}{}_{*}(\phi_{1})_{*}\nu}(R_{i}).\]
It follows that
\[P^{\boldsymbol{a}}_{\operatorname{var}}(f\circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi})=\sup_{\nu\in\mathscr{M}^{S_{1}}(Z_{1})}\left(\sum_{i=1}^{m}w_{i}h_{\Pi^{(i-1)}{}_{*}\nu}(S_{i})+w_{1}\int_{Z_{1}}f\circ\phi_{1}\,d\nu\right)\] \[=\sup_{\nu\in\mathscr{M}^{S_{1}}(Z_{1})}\left(\sum_{i=1}^{m}w_{i}h_{\pi^{(i-1)}{}_{*}(\phi_{1})_{*}\nu}(R_{i})+w_{1}\int_{Y_{1}}f\,d\big{(}(\phi_{1})_{*}\nu\big{)}\right)\] \[\leq\sup_{\mu\in\mathscr{M}^{R_{1}}(Y_{1})}\left(\sum_{i=1}^{m}w_{i}h_{\pi^{(i-1)}{}_{*}\mu}(R_{i})+w_{1}\int_{Y_{1}}f\,d\mu\right)\] \[=P^{\boldsymbol{a}}_{\operatorname{var}}(f,\boldsymbol{R},\boldsymbol{\pi}).\]
(The reversed inequality is generally true by the surjectivity of factor maps, yielding equality. However, we do not use this fact.)
Let \(d^{i}\) be a metric on \(Y_{i}\) for each \(i\) and \(\widetilde{d^{k}}\) a metric on \(Z_{k}\). We define a metric \(\widetilde{d^{i}}\) on \((Z_{i},S_{i})\) for \(1\leq i\leq k-1\) by
\[\widetilde{d^{i}}\big{(}(x_{1},y_{1}),(x_{2},y_{2})\big{)}=\max\left\{d^{i}(x_ {1},x_{2}),\widetilde{d^{k}}(y_{1},y_{2})\right\}\quad\big{(}(x_{1},y_{1}),(x_ {2},y_{2})\in Z_{i}=Y_{i}\times_{Y_{k}}Z_{k}\big{)}\.\]
Set \(\widetilde{d}^{i}=d^{i}\) for \(k+1\leq i\leq m\). Take an arbitrary positive number \(\varepsilon\). There exists \(0<\delta<\varepsilon\) such that for every \(1\leq i\leq m\),
\[\widetilde{d}^{i}(x,y)<\delta\implies d^{i}(\phi_{i}(x),\phi_{i}(y))<\varepsilon \quad(x,y\in Z_{i}). \tag{3.5}\]
Let \(N\) be a natural number. We claim that
\[P^{\boldsymbol{a}}_{m}(f,\boldsymbol{R},\boldsymbol{\pi},N,\varepsilon)\leq P^{\boldsymbol{a}}_{m}(f\circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi},N,\delta).\]
Take \(M>0\) with
\[P^{\boldsymbol{a}}_{m}(f\circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi},N,\delta)<M.\]
Then there exists a chain of open \((N,\,\delta)\)-covers \((\mathscr{F}^{(i)})_{i}\) of \((Z_{i})_{i}\) (see Definition 3.1 and Remark 3.2) with
\[\mathscr{P}^{\boldsymbol{a}}\left(f\circ\phi_{1},\,\boldsymbol{S},\, \boldsymbol{\Pi},N,\delta,\,(\mathscr{F}^{(i)})_{i}\right)<M.\]
We can find a compact set \(C_{U}\subset U\) for each \(U\in\mathscr{F}^{(m)}\) such that \(\bigcup_{U\in\mathscr{F}^{(m)}}C_{U}=Z_{m}\). Let \(\mathscr{K}^{(m)}:=\{C_{U}|U\in\mathscr{F}^{(m)}\}\). Since \(\Pi_{m-1}^{-1}(C_{U})\subset\Pi_{m-1}^{-1}(U)\) is compact for each \(U\in\mathscr{F}^{(m)}\), we can find a compact set \(E_{V}\subset V\) for each \(V\in\mathscr{F}^{(m-1)}(U)\) such that \(\Pi_{m-1}^{-1}(C_{U})\subset\bigcup_{V\in\mathscr{F}^{(m-1)}(U)}E_{V}\). Let \(\mathscr{K}^{(m-1)}(C_{U}):=\{E_{V}|V\in\mathscr{F}^{(m-1)}(U)\}\) and \(\mathscr{K}^{(m-1)}:=\bigcup_{C\in\mathscr{K}^{(m)}}\mathscr{K}^{(m-1)}(C)\). We continue likewise and obtain a chain of compact \((N,\,\delta)\)-covers \((\mathscr{K}^{(i)})_{i}\) of \((Z_{i})_{i}\) with
\[\mathscr{P}^{\boldsymbol{a}}\left(f\circ\phi_{1},\,\boldsymbol{S},\, \boldsymbol{\Pi},\,N,\,\delta,\,(\mathscr{K}^{(i)})_{i}\right)\leq\mathscr{P} ^{\boldsymbol{a}}\left(f\circ\phi_{1},\,\boldsymbol{S},\,\boldsymbol{\Pi},\,N,\,\delta,\,(\mathscr{F}^{(i)})_{i}\right)<M.\]
Let \(\phi_{i}(\mathscr{K}^{(i)})=\left\{\phi_{i}(C)\,\big{|}\,C\in\mathscr{K}^{(i) }\right\}\) for each \(i\). Note that for any \(\Omega\subset Z_{i}\),
\[\pi_{i-1}^{-1}(\phi_{i}(\Omega))=\phi_{i-1}(\Pi_{i-1}^{-1}(\Omega)).\]
This and (3.5) assure that \((\phi_{i}(\mathscr{K}^{(i)}))_{i}\) is a chain of compact \((N,\,\varepsilon)\)-covers of \((Y_{i})_{i}\). We have
\[\mathscr{P}^{\boldsymbol{a}}\left(f,\,\boldsymbol{R},\,\boldsymbol{\pi},\,N, \,\varepsilon,\,(\phi_{i}(\mathscr{K}^{(i)}))_{i}\right)=\mathscr{P}^{ \boldsymbol{a}}\left(f\circ\phi_{1},\,\boldsymbol{S},\,\boldsymbol{\Pi},\,N, \,\delta,\,(\mathscr{K}^{(i)})_{i}\right)<M.\]
Since \(f\) is continuous and each \(\phi_{i}(\mathscr{K}^{(i)})\) is a closed cover, we can slightly enlarge each set in \(\phi_{i}(\mathscr{K}^{(i)})\) and create a chain of open \((N,\,\varepsilon)\)-covers \((\mathscr{O}^{(i)})_{i}\) of \((Y_{i})_{i}\) satisfying
\[\mathscr{P}^{\boldsymbol{a}}\left(f,\,\boldsymbol{R},\,\boldsymbol{\pi},\,N,\varepsilon,\,(\mathscr{O}^{(i)})_{i}\right)<M.\]
Therefore,
\[P^{\boldsymbol{a}}_{m}(f,\boldsymbol{R},\boldsymbol{\pi},N,\varepsilon)\leq\mathscr{P}^{\boldsymbol{a}}\left(f,\,\boldsymbol{R},\,\boldsymbol{\pi},\,N,\,\varepsilon,\,(\mathscr{O}^{(i)})_{i}\right)<M.\]
Since \(M>P^{\boldsymbol{a}}_{m}(f\circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi},N,\delta)\) was chosen arbitrarily, we have
\[P^{\boldsymbol{a}}_{m}(f,\boldsymbol{R},\boldsymbol{\pi},N,\varepsilon)\leq P^{\boldsymbol{a}}_{m}(f\circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi},N,\delta).\]
This implies
\[P^{\boldsymbol{a}}(f,\boldsymbol{R},\boldsymbol{\pi})\leq P^{\boldsymbol{a}}(f \circ\phi_{1},\boldsymbol{S},\boldsymbol{\Pi}).\]
The following proposition reduces the proof of \(P^{\boldsymbol{a}}(f)\leq P^{\boldsymbol{a}}_{\rm var}(f)\) in the next section to the case where all dynamical systems are zero-dimensional.
**Proposition 3.10**.: _For all dynamical systems \((X_{i},\,T_{i})\) (\(i=1,\,2,\,\ldots,\,r\)) and factor maps \(\pi_{i}:X_{i}\to X_{i+1}\)\((i=1,\,2,\,...,\,r-1)\), there are zero-dimensional dynamical systems \((Z_{i},\,S_{i})\) (\(i=1,\,2,\,\ldots,\,r\)) and factor maps \(\Pi_{i}:Z_{i}\to Z_{i+1}\)\((i=1,\,2,\,...,\,r-1)\) with the following property; for every continuous function \(f:X_{1}\to\mathbb{R}\) there exists a continuous function \(g:Z_{1}\to\mathbb{R}\) with_
\[P^{\boldsymbol{a}}_{\rm var}(f,\boldsymbol{T},\boldsymbol{\pi})\geq P^{ \boldsymbol{a}}_{\rm var}(g,\boldsymbol{S},\boldsymbol{\Pi})\]
_and_
\[P^{\boldsymbol{a}}(f,\boldsymbol{T},\boldsymbol{\pi})\leq P^{\boldsymbol{a}}(g,\boldsymbol{S},\boldsymbol{\Pi}).\]
Proof.: We will first construct zero-dimensional dynamical systems \((Z_{i},\,S_{i})\)\((i=1,\,2,\,\ldots,\,r)\) and factor maps \(\Pi_{i}:Z_{i}\to Z_{i+1}\)\((i=1,\,2,\,...,\,r-1)\) alongside the following commutative diagram of dynamical systems and factor maps:
(3.6)
where all the horizontal maps are principal factor maps.
By Theorem 3.8, there is a zero-dimensional principal extension \(\psi_{r}:(Z_{r},S_{r})\to(X_{r},T_{r})\). The set \(\{*\}\) is the trivial dynamical system, and the maps \(X_{r}\to\{*\}\) and \(Z_{r}\to\{*\}\) send every element to \(*\). For each \(1\leq i\leq r-1\), the map \(X_{i}\times_{X_{r}}Z_{r}\to X_{i}\) in
the following diagram is a principal factor map by Lemma 3.7.
For \(1\leq i\leq r-2\), define \(\pi_{i}^{(2)}:X_{i}\times_{X_{r}}Z_{r}\to X_{i+1}\times_{X_{r}}Z_{r}\) by
\[\pi_{i}^{(2)}(x,z)=(\pi_{i}(x),z).\]
Then every horizontal map in the right two rows of (3.6) is a principal factor map. Next, take a zero-dimensional principal extension \(\psi_{r-1}:(Z_{r-1},S_{r-1})\to(X_{r-1}\times_{X_{r}}Z_{r},T_{r-1}\times S_{r})\) and let \(\Pi_{r-1}=\pi_{r-1}^{(2)}\circ\psi_{r-1}\). The rest of (3.6) is constructed similarly, and by Lemma 3.7, each horizontal map is a principal factor map.
Let \(f:X_{1}\to\mathbb{R}\) be a continuous map. Applying Lemma 3.9 to the right two rows of (3.6), we get
\[P^{\boldsymbol{a}}_{\mathrm{var}}(f,\boldsymbol{T},\boldsymbol{\pi})\geq P^{ \boldsymbol{a}}_{\mathrm{var}}(f\circ\phi_{1},\boldsymbol{S^{(2)}},\boldsymbol {\Pi^{(2)}})\]
and
\[P^{\boldsymbol{a}}(f,\boldsymbol{T},\boldsymbol{\pi})\leq P^{\boldsymbol{a}} (f\circ\phi_{1},\boldsymbol{S^{(2)}},\boldsymbol{\Pi^{(2)}})\]
for \(\boldsymbol{\Pi^{(2)}}=(\pi_{i}^{(2)})_{i}\) and \(\boldsymbol{S^{(2)}}=(T_{i}\times S_{r})_{i}\). Again by Lemma 3.9,
\[P^{\boldsymbol{a}}_{\mathrm{var}}(f\circ\phi_{1},\boldsymbol{S^{(2)}}, \boldsymbol{\Pi^{(2)}})\geq P^{\boldsymbol{a}}_{\mathrm{var}}(f\circ\phi_{1} \circ\phi_{2},\boldsymbol{S^{(3)}},\boldsymbol{\Pi^{(3)}})\]
and
\[P^{\boldsymbol{a}}(f\circ\phi_{1},\boldsymbol{S^{(2)}},\boldsymbol{\Pi^{(2)}} )\leq P^{\boldsymbol{a}}(f\circ\phi_{1}\circ\phi_{2},\boldsymbol{S^{(3)}}, \boldsymbol{\Pi^{(3)}})\]
where \(\boldsymbol{\Pi^{(3)}}=\big{(}(\pi_{i}^{(3)})_{i=1}^{r-2},\Pi_{r-1}\big{)}\), and \(\boldsymbol{S^{(3)}}\) is the collection of maps associated with \(Z_{r}\) and the third row from the right of (3.6). We continue inductively and obtain the desired inequalities, where \(g\) is taken as \(f\circ\phi_{1}\circ\phi_{2}\circ\cdots\circ\phi_{r}\).
## 4. Proof of \(P^{\boldsymbol{a}}(f)\leq P^{\boldsymbol{a}}_{\mathrm{var}}(f)\).
Let \(\boldsymbol{a}=(a_{1},\cdots,a_{r-1})\in[0,1]^{r-1}\). Recall that we defined \((w_{1},\ldots,w_{r})\) by
\[\left\{\begin{array}{l}w_{1}=a_{1}a_{2}a_{3}\cdots a_{r-1}\\ w_{2}=(1-a_{1})a_{2}a_{3}\cdots a_{r-1}\\ w_{3}=(1-a_{2})a_{3}\cdots a_{r-1}\\ \qquad\qquad\vdots\\ w_{r-1}=(1-a_{r-2})a_{r-1}\\ w_{r}=1-a_{r-1}\end{array}\right.\]
and \(P^{\mathbf{a}}_{\rm var}(f)\) by
\[P^{\mathbf{a}}_{\rm var}(f)=\sup_{\mu\in\mathscr{M}^{T_{1}}(X_{1})}\left(\sum_{i=1}^{r}w_{i}h_{\pi^{(i-1)}{}_{*}\mu}(T_{i})+w_{1}\int_{X_{1}}fd\mu\right)\]
where
\[\pi^{(0)}={\rm id}_{X_{1}}:X_{1}\to X_{1},\]
\[\pi^{(i)}=\pi_{i}\circ\pi_{i-1}\circ\cdots\circ\pi_{1}:X_{1}\to X_{i+1}.\]
By Proposition 3.10, the following theorem suffices to prove \(P^{\mathbf{a}}(f)\leq P^{\mathbf{a}}_{\rm var}(f)\) for arbitrary dynamical systems.
**Theorem 4.1**.: _Suppose \((X_{i},\,T_{i})\) (\(i=1,\,2,\,\ldots,\,r\)) are zero-dimensional dynamical systems and \(\pi_{i}:X_{i}\to X_{i+1}\,\,(i=1,\,2,\,...,\,r-1)\) are factor maps. Then we have_
\[P^{\mathbf{a}}(f)\leq P^{\mathbf{a}}_{\rm var}(f)\]
_for any continuous function \(f:X_{1}\to\mathbb{R}\)._
Proof.: Let \(d^{(i)}\) be a metric on \(X_{i}\) for each \(i=1,2,\ldots,r\). Take a positive number \(\varepsilon\) and a natural number \(N\). First, we will backward inductively define a finite clopen partition \(\mathscr{A}^{(i)}\) of \(X_{i}\) for each \(i\). Since \(X_{r}\) is zero-dimensional, we can take a sufficiently fine finite clopen partition \(\mathscr{A}^{(r)}\) of \(X_{r}\). That is, each \(A\in\mathscr{A}^{(r)}\) is both open and closed, and \({\rm diam}(A,d^{(r)}_{N})<\varepsilon\). Suppose \(\mathscr{A}^{(i+1)}\) is defined. For each \(A\in\mathscr{A}^{(i+1)}\), take a clopen partition \(\mathscr{B}(A)\) of \(\pi_{i}^{-1}(A)\subset X_{i}\) such that any \(B\in\mathscr{B}(A)\) satisfies \({\rm diam}(B,d^{(i)}_{N})<\varepsilon\). We let \(\mathscr{A}^{(i)}=\bigcup_{A\in\mathscr{A}^{(i+1)}}\mathscr{B}(A)\). Then \(\mathscr{A}^{(i)}\) is a finite clopen partition of \(X_{i}\). We define
\[\mathscr{A}^{(i)}_{N}=\mathscr{A}^{(i)}\lor T_{i}^{-1}\mathscr{A}^{(i)}\lor T _{i}^{-2}\mathscr{A}^{(i)}\vee\cdots\lor T_{i}^{-(N-1)}\mathscr{A}^{(i)}.\]
We employ the following notations. For \(i<j\) and \(A\in\mathscr{A}^{(j)}_{N}\), let \(\mathscr{A}^{(i)}_{N}(A)\) be the set of "children" of A;
\[\mathscr{A}^{(i)}_{N}(A)=\left\{B\in\mathscr{A}^{(i)}_{N}\Big{|}\,\pi_{j-1} \circ\pi_{j-2}\circ\cdots\circ\pi_{i}(B)\subset A\right\}.\]
Also, for \(B\in\mathscr{A}^{(i)}_{N}\) and \(i<j\), we denote by \(\widetilde{\pi}_{j}B\) the unique "parent" of \(B\) in \(\mathscr{A}^{(j)}_{N}\);
\[\widetilde{\pi}_{j}B=A\in\mathscr{A}^{(j)}_{N}\text{ such that }\pi_{j-1}\circ\pi_{j-2}\circ\cdots\circ\pi_{i}(B) \subset A.\]
We will evaluate \(P_{r}^{\boldsymbol{a}}(X_{r},f,N,\varepsilon)\) from above using \(\{\mathscr{A}^{(i)}\}\). Let \(A\in\mathscr{A}^{(2)}_{N}\), and start by setting
\[Z^{(1)}_{N}(A)=\sum_{B\in\mathscr{A}^{(1)}_{N}(A)}e^{\sup_{B}S_{N}f}.\]
Let \(A\in\mathscr{A}^{(i+1)}_{N}\). If \(Z^{(i-1)}_{N}\) is already defined, set
\[Z^{(i)}_{N}(A)=\sum_{B\in\mathscr{A}^{(i)}_{N}(A)}\left(Z^{(i-1)}_{N}(B)\right) ^{a_{i-1}}.\]
We then define \(Z_{N}\) by
\[Z_{N}=\sum_{A\in\mathscr{A}_{N}^{(r)}}\left(Z_{N}^{(r-1)}(A)\right)^{a_{r-1}}.\]
It is straightforward from the construction that
\[P_{r}^{\boldsymbol{a}}(X_{r},f,N,\varepsilon)\leq Z_{N}.\]
Therefore, we only need to prove that there is a \(T_{1}\)-invariant probability measure \(\mu\) on \(X_{1}\) such that
\[\sum_{i=1}^{r}w_{i}h_{\pi^{(i-1)}*\mu}(T_{i},\mathscr{A}^{(i)})+w_{1}\int_{X_{ 1}}fd\mu\geq\lim_{N\to\infty}\frac{\log Z_{N}}{N}.\]
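The quantity \(Z_{N}\) is a nested partition function over the tree formed by the partitions \(\mathscr{A}_{N}^{(i)}\); the toy sketch below (ours, with made-up leaf values standing in for \(e^{\sup_{B}S_{N}f}\)) runs the recursion for \(r=3\).

```python
def Z_level(node, a, i):
    """Z_N^{(i)}(node) for node in A_N^{(i+1)}; children sit one level down."""
    if i == 1:
        return sum(node)   # node lists e^{sup_B S_N f} over its children B in A_N^{(1)}
    return sum(Z_level(child, a, i - 1) ** a[i - 2] for child in node)

# r = 3: two cells in A_N^{(3)}, each splitting into cells of A_N^{(2)} and leaves.
tree = [[[1.0, 2.0], [0.5]], [[3.0]]]
a = (0.5, 0.8)                                   # (a_1, a_2)
r = 3
Z_N = sum(Z_level(A, a, r - 1) ** a[r - 2] for A in tree)
print(Z_N)    # (3**0.5 + 0.5**0.5)**0.8 + (3**0.5)**0.8
```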
Since each \(A\in\mathscr{A}_{N}^{(1)}\) is closed, we can choose a point \(x_{A}\in A\) so that
\[S_{N}f(x_{A})=\sup_{A}S_{N}f.\]
We define a probability measure \(\sigma_{N}\) on \(X_{1}\) by
\[\sigma_{N}=\frac{1}{Z_{N}}\sum_{A\in\mathscr{A}_{N}^{(1)}}Z_{N}^{ (r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}Z_{N}^{(r-2)}(\widetilde{\pi}_{r-1}A) ^{a_{r-2}-1}\] \[\times\cdots\times Z_{N}^{(2)}(\widetilde{\pi}_{3}A)^{a_{2}-1}Z_ {N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}\delta_{x_{A}}\]
where \(\delta_{x_{A}}\) is the Dirac measure at \(x_{A}\). This is indeed a probability measure on \(X_{1}\) since
\[\sigma_{N}(X_{1})=\frac{1}{Z_{N}}\sum_{A\in\mathscr{A}_{N}^{(1)}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}Z_{N}^{(r-2)}(\widetilde{\pi}_{r-1}A)^{a_{r-2}-1}\times\cdots\times Z_{N}^{(2)}(\widetilde{\pi}_{3}A)^{a_{2}-1}Z_{N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}\] \[=\frac{1}{Z_{N}}\sum_{A_{r}\in\mathscr{A}_{N}^{(r)}}Z_{N}^{(r-1)}(A_{r})^{a_{r-1}-1}\sum_{A_{r-1}\in\mathscr{A}_{N}^{(r-1)}(A_{r})}Z_{N}^{(r-2)}(A_{r-1})^{a_{r-2}-1}\cdots\sum_{A_{3}\in\mathscr{A}_{N}^{(3)}(A_{4})}Z_{N}^{(2)}(A_{3})^{a_{2}-1}\sum_{A_{2}\in\mathscr{A}_{N}^{(2)}(A_{3})}Z_{N}^{(1)}(A_{2})^{a_{1}-1}\underbrace{\sum_{A_{1}\in\mathscr{A}_{N}^{(1)}(A_{2})}e^{S_{N}f(x_{A_{1}})}}_{=Z_{N}^{(1)}(A_{2})}\] \[=\frac{1}{Z_{N}}\sum_{A_{r}\in\mathscr{A}_{N}^{(r)}}Z_{N}^{(r-1)}(A_{r})^{a_{r-1}-1}\sum_{A_{r-1}\in\mathscr{A}_{N}^{(r-1)}(A_{r})}Z_{N}^{(r-2)}(A_{r-1})^{a_{r-2}-1}\cdots\sum_{A_{3}\in\mathscr{A}_{N}^{(3)}(A_{4})}Z_{N}^{(2)}(A_{3})^{a_{2}-1}\underbrace{\sum_{A_{2}\in\mathscr{A}_{N}^{(2)}(A_{3})}Z_{N}^{(1)}(A_{2})^{a_{1}}}_{=Z_{N}^{(2)}(A_{3})}\] \[=\cdots=\frac{1}{Z_{N}}\sum_{A_{r}\in\mathscr{A}_{N}^{(r)}}Z_{N}^{(r-1)}(A_{r})^{a_{r-1}}=1.\]
Although \(\sigma_{N}\) is not generally \(T_{1}\)-invariant, the following well-known trick allows us to create a \(T_{1}\)-invariant measure \(\mu\). We begin by setting
\[\mu_{N}=\frac{1}{N}\sum_{k=0}^{N-1}{T_{1}}^{k}{}_{*}\sigma_{N}.\]
Since \(X_{1}\) is compact, we can take a sub-sequence of \((\mu_{N})_{N}\) so that it weakly converges to a probability measure \(\mu\) on \(X_{1}\). Then \(\mu\) is \(T_{1}\)-invariant by the definition of \(\mu_{N}\). We will show that this \(\mu\) satisfies
\[\sum_{i=1}^{r}w_{i}h_{\pi^{(i-1)}{}_{*}\mu}(T_{i},\mathscr{A}^{(i)})+w_{1}\int _{X_{1}}fd\mu\geq\lim_{N\to\infty}\frac{\log Z_{N}}{N}.\]
We first prove
\[\sum_{i=1}^{r}w_{i}H_{\pi^{(i-1)}{}_{*}\sigma_{N}}(\mathscr{A}^{(i)}_{N})+w_{1}\int_{X_{1}}S_{N}f\,d\sigma_{N}=\log Z_{N}.\]
To simplify the notations, let
\[\sigma^{(i)}_{N}=\pi^{(i-1)}{}_{*}\sigma_{N}=\frac{1}{Z_{N}}\sum_{B\in\mathscr{A}^{(1)}_{N}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}B)^{a_{r-1}-1}\cdots Z_{N}^{(1)}(\widetilde{\pi}_{2}B)^{a_{1}-1}e^{S_{N}f(x_{B})}\delta_{\pi^{(i-1)}(x_{B})}\]
and
\[W_{N}^{(j)}=\sum_{A\in\mathscr{A}^{(j+1)}_{N}}Z_{N}^{(r-1)}(\widetilde{\pi}_{ r}A)^{a_{r-1}-1}\cdots Z_{N}^{(j+1)}(\widetilde{\pi}_{j+2}A)^{a_{j+1}-1}Z_{N}^{( j)}(A)^{a_{j}}\log\Big{(}Z_{N}^{(j)}(A)\Big{)}.\]
**Claim 4.2**.: _We have the following equations:_
\[H_{\sigma_{N}}(\mathscr{A}^{(1)}_{N})=\log Z_{N}-\int_{X_{1}}S_{N}f\,d\sigma_{N}-\sum_{j=1}^{r-1}\frac{a_{j}-1}{Z_{N}}W_{N}^{(j)},\]

\[H_{\sigma^{(i)}_{N}}(\mathscr{A}^{(i)}_{N})=\log Z_{N}-\frac{a_{i-1}}{Z_{N}}W_{N}^{(i-1)}-\sum_{j=i}^{r-1}\frac{a_{j}-1}{Z_{N}}W_{N}^{(j)}\ \ (\text{for }2\leq i\leq r).\]

_Here, \(\sum_{j=r}^{r-1}\frac{a_{j}-1}{Z_{N}}W_{N}^{(j)}\) is defined to be \(0\)._
Proof.: Let \(A\in\mathscr{A}^{(1)}_{N}\). We have
\[\sigma_{N}(A)=\frac{1}{Z_{N}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1} \cdots Z_{N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}.\]
Then
\[H_{\sigma_{N}}(\mathscr{A}_{N}^{(1)})=-\sum_{A\in\mathscr{A}_{N}^{(1)}} \sigma_{N}(A)\log\left(\sigma_{N}(A)\right)\] \[= \log Z_{N}-\underbrace{\sum_{A\in\mathscr{A}_{N}^{(1)}}\sigma_{N} (A)S_{N}f(x_{A})}_{\text{(I)}}\] \[-\sum_{j=1}^{r-1}\frac{a_{j}-1}{Z_{N}}\underbrace{\sum_{A\in \mathscr{A}_{N}^{(1)}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z_ {N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}\log\left(Z_{N}^{(j) }(\widetilde{\pi}_{j+1}A)\right)}_{\text{(II)}}.\]
For (I), we have
\[\int_{X_{1}}S_{N}fd\sigma_{N} =\frac{1}{Z_{N}}\sum_{A\in\mathscr{A}_{N}^{(1)}}Z_{N}^{(r-1)}( \widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z_{N}^{(2)}(\widetilde{\pi}_{3}A)^{a _{2}-1}Z_{N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}S_{N}f(x_{A})\] \[= \text{(I)}.\]
We will show that (II) \(=W_{N}^{(j)}\). Let \(A^{\prime}\in\mathscr{A}_{N}^{(j+1)}\). Then any \(A\in\mathscr{A}_{N}^{(1)}(A^{\prime})\) satisfies \(\widetilde{\pi}_{j+1}A=A^{\prime}\). Hence,
\[\text{(II)} =\sum_{A^{\prime}\in\mathscr{A}_{N}^{(j+1)}}\sum_{A\in\mathscr{A} _{N}^{(1)}(A^{\prime})}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z _{N}^{(1)}(\widetilde{\pi}_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}\log\left(Z_{N}^{(j) }(\widetilde{\pi}_{j+1}A)\right)\] \[= \sum_{A^{\prime}\in\mathscr{A}_{N}^{(j+1)}}Z_{N}^{(r-1)}( \widetilde{\pi}_{r}A^{\prime})^{a_{r-1}-1}\cdots Z_{N}^{(j+1)}(\widetilde{\pi }_{j+2}A^{\prime})^{a_{j+1}-1}Z_{N}^{(j)}(A^{\prime})^{a_{j}-1}\log\left(Z_{N} ^{(j)}(A^{\prime})\right)\] \[\times\underbrace{\sum_{A\in\mathscr{A}_{N}^{(1)}(A^{\prime})}Z_ {N}^{(j-1)}(\widetilde{\pi}_{j}A)^{a_{j-1}-1}\cdots Z_{N}^{(1)}(\widetilde{\pi }_{2}A)^{a_{1}-1}e^{S_{N}f(x_{A})}}_{\text{(II)}^{\prime}}.\]
The term (II)\({}^{\prime}\) can be calculated similarly to how we showed \(\sigma_{N}(X_{1})=1\). Namely,
\[\text{(II)}^{\prime}=\sum_{A_{j}\in\mathscr{A}_{N}^{(j)}(A^{\prime})}Z_{N}^{ (j-1)}(A_{j})^{a_{j-1}-1}\sum_{A_{j-1}\in\mathscr{A}_{N}^{(j-1)}(A_{j})}Z_{N}^ {(j-2)}(A_{j-1})^{a_{j-2}-1}\] \[\cdots\sum_{A_{3}\in\mathscr{A}_{N}^{(3)}(A_{4})}Z_{N}^{(2)}(A_{ 3})^{a_{2}-1}\sum_{A_{2}\in\mathscr{A}_{N}^{(2)}(A_{3})}Z_{N}^{(1)}(A_{2})^{a_ {1}-1}\underbrace{\sum_{A_{1}\in\mathscr{A}_{N}^{(1)}(A_{2})}e^{S_{N}f(x_{A_{1} })}}_{=Z_{N}^{(1)}(A_{2})}\]
\[=\cdots=\sum_{A_{j}\in\mathscr{A}_{N}^{(j)}(A^{\prime})}Z_{N}^{(j-1)}(A_{j})^{a _{j-1}}=Z_{N}^{(j)}(A^{\prime}).\]
Thus, we get
\[(\mathrm{II})=\sum_{A\in\mathscr{A}_{N}^{(j+1)}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z_{N}^{(j+1)}(\widetilde{\pi}_{j+2}A)^{a_{j+1}-1}\cdot Z_{N}^{(j)}(A)^{a_{j}}\log\Big{(}Z_{N}^{(j)}(A)\Big{)}=W_{N}^{(j)}.\]
This completes the proof of the first assertion.
Next, let \(2\leq i\leq r\). For any \(A\in\mathscr{A}_{N}^{(i)}\),
\[\sigma_{N}^{(i)}(A)=\frac{1}{Z_{N}}\sum_{\begin{subarray}{c}B\in\mathscr{A}_{N}^{(1)},\\ \pi^{(i-1)}(x_{B})\in A\end{subarray}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}B)^{a_{r-1}-1}\cdots Z_{N}^{(1)}(\widetilde{\pi}_{2}B)^{a_{1}-1}e^{S_{N}f(x_{B})}\] \[=\frac{1}{Z_{N}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z_{N}^{(i-1)}(\widetilde{\pi}_{i}A)^{a_{i-1}-1}\times\sum_{B\in\mathscr{A}_{N}^{(1)}(A)}Z_{N}^{(i-2)}(\widetilde{\pi}_{i-1}B)^{a_{i-2}-1}\cdots Z_{N}^{(1)}(\widetilde{\pi}_{2}B)^{a_{1}-1}e^{S_{N}f(x_{B})}.\]
As in the evaluation of (II)\({}^{\prime}\), we have
\[\sum_{B\in\mathscr{A}_{N}^{(1)}(A)}Z_{N}^{(i-2)}(\widetilde{\pi}_{i-1}B)^{a_{ i-2}-1}\cdots Z_{N}^{(1)}(\widetilde{\pi}_{2}B)^{a_{1}-1}e^{S_{N}f(x_{B})}=Z_{N}^{ (i-1)}(A)^{a_{i-1}}.\]
Hence,
\[\sigma_{N}^{(i)}(A)=\frac{1}{Z_{N}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r- 1}-1}\cdots Z_{N}^{(i)}(\widetilde{\pi}_{i+1}A)^{a_{i}-1}Z_{N}^{(i-1)}(A)^{a_ {i-1}}.\]
Therefore,
\[H_{\sigma_{N}^{(i)}}(\mathscr{A}_{N}^{(i)})=-\sum_{A\in\mathscr{ A}_{N}^{(i)}}\sigma_{N}^{(i)}(A)\log\sigma_{N}^{(i)}(A)\] \[=\log Z_{N}-\frac{1}{Z_{N}}\sum_{A\in\mathscr{A}_{N}^{(i)}}Z_{N}^ {(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z_{N}^{(i)}(\widetilde{\pi}_{ i+1}A)^{a_{i}-1}Z_{N}^{(i-1)}(A)^{a_{i-1}}\] \[\qquad\qquad\qquad\qquad\times\log\Big{(}Z_{N}^{(r-1)}(\widetilde {\pi}_{r}A)^{a_{r-1}-1}\cdots Z_{N}^{(i)}(\widetilde{\pi}_{i+1}A)^{a_{i}-1}Z_{ N}^{(i-1)}(A)^{a_{i-1}}\Big{)}\] \[=\log Z_{N}-\frac{a_{i-1}}{Z_{N}}\sum_{A\in\mathscr{A}_{N}^{(i)}} Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z_{N}^{(i)}(\widetilde{\pi}_{ i+1}A)^{a_{i}-1}Z_{N}^{(i-1)}(A)^{a_{i-1}}\log\Big{(}Z_{N}^{(i-1)}(A)\Big{)}\] \[\qquad-\sum_{j=i}^{r-1}\frac{a_{j}-1}{Z_{N}}\sum_{A\in\mathscr{A }_{N}^{(i)}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1}\cdots Z_{N}^{(i)}( \widetilde{\pi}_{i+1}A)^{a_{i}-1}Z_{N}^{(i-1)}(A)^{a_{i-1}}\log\Big{(}Z_{N}^{(j )}(\widetilde{\pi}_{j+1}A)\Big{)}.\]
Note that we have
\[\sum_{A\in\mathscr{A}_{N}^{(i)}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A)^{a_{r-1}-1} \cdots Z_{N}^{(i)}(\widetilde{\pi}_{i+1}A)^{a_{i}-1}Z_{N}^{(i-1)}(A)^{a_{i-1}} \log\Big{(}Z_{N}^{(j)}(\widetilde{\pi}_{j+1}A)\Big{)}\]
\[=\sum_{A_{j+1}\in\mathscr{A}_{N}^{(j+1)}}Z_{N}^{(r-1)}(\widetilde{\pi}_{r}A_{j+ 1})^{a_{r-1}-1}\cdots Z_{N}^{(j+1)}(\widetilde{\pi}_{j+2}A_{j+1})^{a_{j+1}-1}Z_{ N}^{(j)}(A_{j+1})^{a_{j}-1}\log\Big{(}Z_{N}^{(j)}(A_{j+1})\Big{)}\]
\[\times\sum_{A_{j}\in\mathscr{A}_{N}^{(j)}(A_{j+1})}Z_{N}^{(j-1)}(A_{j})^{a_{j- 1}-1}\cdots\sum_{A_{i+1}\in\mathscr{A}_{N}^{(i+1)}(A_{i+2})}Z_{N}^{(i)}(A_{i+ 1})^{a_{i}-1}\underbrace{\sum_{A_{i}\in\mathscr{A}_{N}^{(i)}(A_{i+1})}Z_{N} ^{(i-1)}(A_{i})^{a_{i-1}}}_{=Z_{N}^{(i)}(A_{i+1})}\]
\[=\cdots=\sum_{A_{j+1}\in\mathscr{A}_{N}^{(j+1)}}Z_{N}^{(r-1)}(\widetilde{\pi}_ {r}A_{j+1})^{a_{r-1}-1}\cdots Z_{N}^{(j+1)}(\widetilde{\pi}_{j+2}A_{j+1})^{a_ {j+1}-1}Z_{N}^{(j)}(A_{j+1})^{a_{j}}\log\Big{(}Z_{N}^{(j)}(A_{j+1})\Big{)}.\]
We conclude that
\[H_{\sigma_{N}^{(i)}}(\mathscr{A}_{N}^{(i)})=\log Z_{N}-\frac{a_{i-1}}{Z_{N}}W_ {N}^{(i-1)}-\sum_{j=i}^{r-1}\frac{a_{j}-1}{Z_{N}}W_{N}^{(j)}.\]
This completes the proof of the claim.
By this claim,
\[\sum_{i=1}^{r}w_{i}H_{\sigma_{N}^{(i)}}(\mathscr{A}_{N}^{(i)})+w_{1}\int_{X_{ 1}}\!S_{N}fd\sigma_{N}=\log Z_{N}-\sum_{i=2}^{r}\frac{w_{i}a_{i-1}}{Z_{N}}W_{N}^{(i-1 )}-\sum_{i=1}^{r-1}\sum_{j=i}^{r-1}\frac{w_{i}(a_{j}-1)}{Z_{N}}W_{N}^{(j)}.\]
However, we have
\[\sum_{i=2}^{r}w_{i}a_{i-1}W_{N}^{(i-1)}+\sum_{i=1}^{r-1}\sum_{j=i}^{r-1}w_{i}( a_{j}-1)W_{N}^{(j)}=0.\]
Indeed, the coefficient of \(W_{N}^{(k)}\) (\(1\leq k\leq r-1\)) is
\[w_{k+1}a_{k}+(a_{k}-1)\sum_{i=1}^{k}w_{i} =w_{k+1}a_{k}+(a_{k}-1)a_{k}a_{k+1}\cdots a_{r-1}\] \[=a_{k}\{w_{k+1}-(1-a_{k})a_{k+1}a_{k+2}\cdots a_{r-1}\}=0.\]
Thus, we have
\[\sum_{i=1}^{r}w_{i}H_{\sigma_{N}^{(i)}}(\mathscr{A}_{N}^{(i)})+w_{1}\int_{X_{1 }}\!S_{N}fd\sigma_{N}=\log Z_{N}. \tag{4.1}\]
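The cancellation used above is elementary but easy to get wrong. The following Python sketch (our addition, not part of the argument) checks it numerically for random exponents, assuming the expressions \(w_{1}=a_{1}\cdots a_{r-1}\) and \(w_{k+1}=(1-a_{k})a_{k+1}\cdots a_{r-1}\) for the weights, which are the relations from (2.1) invoked in the computation above.

```python
import numpy as np

# Numerical check that the coefficient of W_N^(k) vanishes:
#   w_{k+1} a_k + (a_k - 1) * (w_1 + ... + w_k) = 0,
# assuming w_1 = a_1...a_{r-1} and w_{j+1} = (1 - a_j) a_{j+1}...a_{r-1}.
rng = np.random.default_rng(0)
r = 5
a = rng.uniform(0.1, 1.0, size=r - 1)  # a_1, ..., a_{r-1}, stored as a[0..r-2]
w = [np.prod(a)] + [(1 - a[j - 1]) * np.prod(a[j:]) for j in range(1, r)]
for k in range(1, r):  # k = 1, ..., r-1
    coeff = w[k] * a[k - 1] + (a[k - 1] - 1) * sum(w[:k])
    print(k, coeff)  # all outputs are ~0 up to floating-point error
```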
Let \(\mu^{(i)}=\pi^{(i-1)}{}_{*}\mu\) and \(\mu_{N}^{(i)}=\pi^{(i-1)}{}_{*}\mu_{N}.\)
**Lemma 4.3**.: _Let \(N\) and \(M\) be natural numbers. For any \(1\leq i\leq r\),_
\[\frac{1}{M}H_{\mu_{N}^{(i)}}(\mathscr{A}_{M}^{(i)})\geq\frac{1}{N}H_{\sigma_{N }^{(i)}}(\mathscr{A}_{N}^{(i)})-\frac{2M\log|\mathscr{A}^{(i)}|}{N}.\]
_Here, \(|\mathscr{A}^{(i)}|\) is the number of elements in \(\mathscr{A}^{(i)}\)._
Suppose this is true, and let \(N\) and \(M\) be natural numbers. Together with (4.1), we obtain the following evaluation;
\[\sum_{i=1}^{r}\frac{w_{i}}{M}H_{\mu_{N}^{(i)}}(\mathscr{A}_{M}^{(i )})+w_{1}\int_{X_{1}}fd\mu_{N} \geq\sum_{i=1}^{r}\frac{w_{i}}{N}H_{\sigma_{N}^{(i)}}(\mathscr{A}_ {N}^{(i)})-\sum_{i=1}^{r}\frac{2M\log|\mathscr{A}^{(i)}|}{N}+\frac{w_{1}}{N} \int_{X_{1}}S_{N}fd\sigma_{N}\] \[=\frac{\log Z_{N}}{N}-\sum_{i=1}^{r}\frac{2M\log|\mathscr{A}^{(i) }|}{N}.\]
Let \(N=N_{k}\to\infty\) along the sub-sequence \((N_{k})\) for which \(\mu_{N_{k}}\rightharpoonup\mu\). This yields
\[\sum_{i=1}^{r}\frac{w_{i}}{M}H_{\mu^{(i)}}(\mathscr{A}_{M}^{(i)})+w_{1}\int_{ X_{1}}fd\mu\geq\lim_{N\to\infty}\frac{\log Z_{N}}{N}.\]
We let \(M\to\infty\) and get
\[\sum_{i=1}^{r}w_{i}h_{\mu^{(i)}}(T_{i},\mathscr{A}^{(i)})+w_{1}\int_{X_{1}}fd \mu\geq\lim_{N\to\infty}\frac{\log Z_{N}}{N}.\]
Hence,
\[P_{\mathrm{var}}^{\boldsymbol{a}}(f)\geq P^{\boldsymbol{a}}(f).\]
We are left to prove Lemma 4.3.
Proof of Lemma 4.3.: This statement appears in the proof of the variational principle in [22, Theorem 8.6], and Tsukamoto also proves it in [22, Claim 6.3]. The following proof is taken from the latter. We explain the case \(i=1\); the same argument works for all \(i\).
Let \(\mathscr{A}=\mathscr{A}^{(1)}\). Recall that \(\mu_{N}=\frac{1}{N}\sum_{k=0}^{N-1}{T_{1}^{k}}_{*}\sigma_{N}\). Since the entropy function is concave (Lemma 3.5), we have
\[H_{\mu_{N}}(\mathscr{A}_{M})\geq\frac{1}{N}\sum_{k=0}^{N-1}H_{{T_{1}^{k}}_{* }\sigma_{N}}(\mathscr{A}_{M})=\frac{1}{N}\sum_{k=0}^{N-1}H_{\sigma_{N}}(T_{1} ^{-k}\mathscr{A}_{M}).\]
Write \(N=qM+u\) with \(0\leq u<M\). Then
\[\sum_{k=0}^{N-1}H_{\sigma_{N}}(T_{1}^{-k}\mathscr{A}_{M}) =\sum_{s=0}^{q}\sum_{t=0}^{M-1}H_{\sigma_{N}}(T_{1}^{-sM-t} \mathscr{A}_{M})-\sum_{k=N}^{qM+M-1}H_{\sigma_{N}}(T_{1}^{-k}\mathscr{A}_{M})\] \[\geq\sum_{t=0}^{M-1}\sum_{s=0}^{q}H_{\sigma_{N}}(T_{1}^{-sM-t} \mathscr{A}_{M})-M\log|\mathscr{A}_{M}|\] \[\geq\sum_{t=0}^{M-1}\sum_{s=0}^{q}H_{\sigma_{N}}(T_{1}^{-sM-t} \mathscr{A}_{M})-M^{2}\log|\mathscr{A}|. \tag{4.2}\]
We will evaluate \(\sum_{s=0}^{q}H_{\sigma_{N}}(T_{1}^{-sM-t}\mathscr{A}_{M})\) from below for each \(0\leq t\leq M-1\). First, observe that
\[T_{1}^{-sM-t}\mathscr{A}_{M}=\bigvee_{j=0}^{M-1}T_{1}^{-sM-t-j}\mathscr{A}.\]
We have
\[\{sM+t+j\,|\,0\leq s\leq q,0\leq j\leq M-1\}=\{t,t+1,\ldots,t+qM+M-1\}\]
without multiplicity. Therefore,
\[H_{\sigma_{{}_{N}}}(\mathscr{A}_{N}) \leq H_{\sigma_{{}_{N}}}\left(\bigvee_{k=0}^{t+(q+1)M-1}T_{1}^{-k} \mathscr{A}\right)\qquad\text{by }N<t+(q+1)M\] \[\leq\sum_{s=0}^{q}H_{\sigma_{{}_{N}}}(T_{1}^{-sM-t}\mathscr{A}_{M })+\sum_{k=0}^{t-1}H_{\sigma_{{}_{N}}}(T_{1}^{-k}\mathscr{A})\qquad\text{by the subadditivity of entropy.}\]
This implies
\[\sum_{s=0}^{q}H_{\sigma_{{}_{N}}}(T_{1}^{-sM-t}\mathscr{A}_{M}) \geq H_{\sigma_{{}_{N}}}(\mathscr{A}_{N})-\sum_{k=0}^{t-1}H_{ \sigma_{{}_{N}}}(T_{1}^{-k}\mathscr{A})\] \[\geq H_{\sigma_{{}_{N}}}(\mathscr{A}_{N})-M\log|\mathscr{A}| \qquad\text{by }t<M.\]
Now, we sum over \(t\) and obtain
\[\sum_{t=0}^{M-1}\sum_{s=0}^{q}H_{\sigma_{{}_{N}}}(T_{1}^{-sM-t}\mathscr{A}_{M })\geq MH_{\sigma_{{}_{N}}}(\mathscr{A}_{N})-M^{2}\log|\mathscr{A}|.\]
Combining with (4.2), this implies
\[\sum_{k=0}^{N-1}H_{\sigma_{{}_{N}}}(T_{1}^{-k}\mathscr{A}_{M})\geq MH_{\sigma_ {{}_{N}}}(\mathscr{A}_{N})-2M^{2}\log|\mathscr{A}|.\]
It follows that
\[\frac{1}{M}H_{\mu_{{}_{N}}}(\mathscr{A}_{M})\geq\frac{1}{MN}\sum_{k=0}^{N-1}H_ {\sigma_{{}_{N}}}(T_{1}^{-k}\mathscr{A}_{M})\geq\frac{1}{N}H_{\sigma_{{}_{N}}} (\mathscr{A}_{N})-\frac{2M\log|\mathscr{A}|}{N}.\]
This proves Lemma 4.3 and thereby completes the proof of Theorem 4.1.
## 5. Proof of \(P^{\boldsymbol{a}}_{\rm var}(f)\leq P^{\boldsymbol{a}}(f)\).
It seems difficult to implement the zero-dimensional trick to prove \(P^{\boldsymbol{a}}_{\rm var}(f)\leq P^{\boldsymbol{a}}(f)\). Hence, the proof is more complicated.
**Theorem 5.1**.: _Suppose that \((X_{i},T_{i})\) (\(i=1,2,\ldots,r\)) are dynamical systems and \(\pi_{i}:X_{i}\to X_{i+1}\)\((i=1,2,...,r-1)\) are factor maps. Then we have_
\[P^{\boldsymbol{a}}_{\rm var}(f)\leq P^{\boldsymbol{a}}(f)\]
_for any continuous function \(f:X_{1}\to\mathbb{R}\)._
Proof.: Take and fix \(\mu\in\mathscr{M}^{T_{1}}(X_{1})\). Let \(\mu_{i}=\pi^{(i-1)}{}_{*}\mu\). We need to prove
\[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i})+w_{1}\int_{X_{1}}\!fd\mu\leq P^{\boldsymbol {a}}(f,\boldsymbol{T}).\]
However, the following argument shows that an estimate up to an additive constant is sufficient: suppose there is a positive number \(C\), depending neither on \(f\) nor on \((T_{i})_{i}\), satisfying
\[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i})+w_{1}\int_{X_{1}}\!fd\mu\leq P^{ \boldsymbol{a}}(f,\boldsymbol{T})+C. \tag{5.1}\]
Applying this to \(S_{m}f\) and \(\boldsymbol{T}^{m}=(T_{i}{}^{m})_{i}\) for \(m\in\mathbb{N}\) yields
\[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i}{}^{m})+w_{1}\int_{X_{1}}\!S_{m}fd\mu\leq P ^{\boldsymbol{a}}(S_{m}f,\boldsymbol{T}^{m})+C.\]
We employ Lemma 3.3 and get
\[m\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i})+mw_{1}\int_{X_{1}}\!fd\mu\leq mP^{ \boldsymbol{a}}(f,\boldsymbol{T})+C.\]
Divide by \(m\) and let \(m\to\infty\). We obtain the desired inequality
\[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i})+w_{1}\int_{X_{1}}\!fd\mu\leq P^{ \boldsymbol{a}}(f,\boldsymbol{T}).\]
Therefore, we only need to prove (5.1).
Let \(\mathscr{A}^{(i)}=\{A_{1}^{(i)},A_{2}^{(i)},\cdots,A_{m_{i}}^{(i)}\}\) be an arbitrary partition of \(X_{i}\) for each \(i\). We will prove
\[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i},\mathscr{A}^{(i)})+w_{1}\int_{X_{1}}\!fd \mu\leq P^{\boldsymbol{a}}(f,\boldsymbol{T})+C.\]
We start by approximating elements of \(\mathscr{A}^{(i)}\) with compact sets using backward induction. For \(1\leq i\leq r\), let
\[\Lambda_{i}^{0}=\{0,1,\cdots,m_{r}\}\times\{0,1,\cdots,m_{r-1}\}\times\cdots \times\{0,1,\cdots,m_{i+1}\}\times\{0,1,\cdots,m_{i}\},\]
\[\Lambda_{i}=\{0,1,\cdots,m_{r}\}\times\{0,1,\cdots,m_{r-1}\}\times\cdots \times\{0,1,\cdots,m_{i+1}\}\times\{1,2,\cdots,m_{i}\}.\]
We will denote an element \((j_{r},j_{r-1},\cdots,j_{i})\) in \(\Lambda_{i}^{0}\) or \(\Lambda_{i}\) by \(j_{r}j_{r-1}\cdots j_{i}\). For each \(A_{j}^{(r)}\in\mathscr{A}^{(r)}\), take a compact set \(C_{j}^{(r)}\subset A_{j}^{(r)}\) such that
\[\log m_{r}\cdot\sum_{j=1}^{m_{r}}\mu_{r}(A_{j}^{(r)}\setminus C_{j}^{(r)})<1.\]
Define \(C_{0}^{(r)}\) as the remainder of \(X_{r}\), which may not be compact;
\[C_{0}^{(r)}=\bigcup_{j=1}^{m_{r}}A_{j}^{(r)}\setminus C_{j}^{(r)}=X_{r} \setminus\bigcup_{j=1}^{m_{r}}C_{j}^{(r)}.\]
Then \(\mathscr{C}^{(r)}:=\{C_{0}^{(r)},C_{1}^{(r)},\cdots,C_{m_{r}}^{(r)}\}\) is a measurable partition of \(X_{r}\).
Next, consider the partition \(\pi_{r-1}^{-1}(\mathscr{C}^{(r)})\vee\mathscr{A}^{(r-1)}\) of \(X_{r-1}\). For \(j_{r}j_{r-1}\in\Lambda_{r-1}\), let
\[B_{j_{r}j_{r-1}}^{(r-1)}=\pi_{r-1}^{-1}(C_{j_{r}}^{(r)})\cap A_{j_{r-1}}^{(r-1)}.\]
Then
\[\pi_{r-1}^{-1}(\mathscr{C}^{(r)})\vee\mathscr{A}^{(r-1)}=\left\{B_{j_{r}j_{r- 1}}^{(r-1)}\Big{|}\ j_{r}j_{r-1}\in\Lambda_{r-1}\ \right\},\]
and for each \(j_{r}\in\Lambda_{r}^{0}\)
\[\bigcup_{j_{r-1}=1}^{m_{r-1}}B_{j_{r}j_{r-1}}^{(r-1)}=\pi_{r-1}^{-1}(C_{j_{r} }^{(r)}).\]
For each \(j_{r}j_{r-1}\in\Lambda_{r-1}\), take a compact set \(C_{j_{r}j_{r-1}}^{(r-1)}\subset B_{j_{r}j_{r-1}}^{(r-1)}\) (which could be empty) such that
\[\log|\Lambda_{r-1}|\cdot\sum_{j_{r}=0}^{m_{r}}\sum_{j_{r-1}=1}^{m_{r-1}}\mu_{ r-1}(B_{j_{r}j_{r-1}}^{(r-1)}\setminus C_{j_{r}j_{r-1}}^{(r-1)})<1.\]
Define \(C_{j_{r}0}^{(r-1)}\) as the remainder of \(\pi_{r-1}^{-1}(C_{j_{r}}^{(r)})\);
\[C_{j_{r}0}^{(r-1)}=\pi_{r-1}^{-1}(C_{j_{r}}^{(r)})\setminus\bigcup_{j_{r-1}=1 }^{m_{r-1}}C_{j_{r}j_{r-1}}^{(r-1)}.\]
Then \(\mathscr{C}^{(r-1)}=\left\{C_{j_{r}j_{r-1}}^{(r-1)}\Big{|}j_{r}j_{r-1}\in \Lambda_{r-1}^{0}\right\}\) is a measurable partition of \(X_{r-1}\).
Continue in this manner, and suppose we have obtained the partition \(\mathscr{C}^{(k)}=\left\{C_{J}^{(k)}\Big{|}J\in\Lambda_{k}^{0}\right\}\) of \(X_{k}\) for \(k=i+1,i+2,\ldots,r\). We will define \(\mathscr{C}^{(i)}\). Each element in \(\pi_{i}^{-1}(\mathscr{C}^{(i+1)})\vee\mathscr{A}^{(i)}\) can be expressed using \(J^{\prime}\in\Lambda_{i+1}^{0}\) and \(j_{i}\in\{1,2,\ldots,m_{i}\}\) by
\[B_{J^{\prime}j_{i}}^{(i)}=\pi_{i}^{-1}(C_{J^{\prime}}^{(i+1)})\cap A_{j_{i}}^{ (i)}.\]
Choose a compact set \(C_{J}^{(i)}\subset B_{J}^{(i)}\) for each \(J\in\Lambda_{i}\) so that
\[\log|\Lambda_{i}|\cdot\sum_{J^{\prime}\in\Lambda_{i+1}^{0}}\sum_{j_{i}=1}^{m_{ i}}\mu_{i}\left(B_{J^{\prime}j_{i}}^{(i)}\setminus C_{J^{\prime}j_{i}}^{(i)} \right)<1.\]
Finally, for \(J^{\prime}\in\Lambda_{i+1}^{0}\), let
\[C_{J^{\prime}0}^{(i)}=\pi_{i}^{-1}(C_{J^{\prime}}^{(i+1)})\setminus\bigcup_{j _{i}=1}^{m_{i}}C_{J^{\prime}j_{i}}^{(i)}.\]
Set \(\mathscr{C}^{(i)}=\left\{C_{J}^{(i)}\Big{|}J\in\Lambda_{i}^{0}\right\}\). This is a partition of \(X_{i}\).
**Lemma 5.2**.: _For \(\mathscr{C}^{(i)}\) constructed above, we have_
\[h_{\mu_{i}}(T_{i},\mathscr{A}^{(i)})\leq h_{\mu_{i}}(T_{i},\mathscr{C}^{(i)})+1.\]
Proof.: By Lemma 3.5,
\[h_{\mu_{i}}(T_{i},\mathscr{A}^{(i)}) \leq h_{\mu_{i}}\big{(}T_{i},\mathscr{A}^{(i)}\vee\pi_{i}^{-1}( \mathscr{C}^{(i+1)})\big{)}\] \[\leq h_{\mu_{i}}(T_{i},\mathscr{C}^{(i)})+H_{\mu_{i}}\left( \mathscr{A}^{(i)}\vee\pi_{i}^{-1}(\mathscr{C}^{(i+1)})\big{|}\mathscr{C}^{(i )}\right).\]
Since \(C_{J}^{(i)}\subset B_{J}^{(i)}\) for \(J\in\Lambda_{i}\),
\[H_{\mu_{i}}\left(\mathscr{A}^{(i)}\vee\pi_{i}^{-1}(\mathscr{C}^ {(i+1)})\big{|}\mathscr{C}^{(i)}\right)\] \[=-\sum_{\begin{subarray}{c}J\in\Lambda_{i}^{0}\\ \mu_{i}(C_{J}^{(i)})\neq 0\end{subarray}}\mu_{i}(C_{J}^{(i)})\sum_{K\in \Lambda_{i}}\frac{\mu_{i}(B_{K}^{(i)}\cap C_{J}^{(i)})}{\mu_{i}(C_{J}^{(i)})} \log\left(\frac{\mu_{i}(B_{K}^{(i)}\cap C_{J}^{(i)})}{\mu_{i}(C_{J}^{(i)})}\right)\] \[=-\sum_{\begin{subarray}{c}J^{\prime}\in\Lambda_{i+1}^{0}\\ \mu_{i}(C_{J^{\prime}0}^{(i)})\neq 0\end{subarray}}\mu_{i}(C_{J^{\prime}0}^{(i)}) \sum_{j_{i}=1}^{m_{i}}\frac{\mu_{i}(B_{J^{\prime}j_{i}}^{(i)}\cap C_{J^{\prime }0}^{(i)})}{\mu_{i}(C_{J^{\prime}0}^{(i)})}\log\left(\frac{\mu_{i}(B_{J^{ \prime}j_{i}}^{(i)}\cap C_{J^{\prime}0}^{(i)})}{\mu_{i}(C_{J^{\prime}0}^{(i)})} \right).\]
By Lemma 3.4, we have
\[-\sum_{j_{i}=1}^{m_{i}}\frac{\mu_{i}(B_{J^{\prime}j_{i}}^{(i)}\cap C_{J^{ \prime}0}^{(i)})}{\mu_{i}(C_{J^{\prime}0}^{(i)})}\log\left(\frac{\mu_{i}(B_{J^ {\prime}j_{i}}^{(i)}\cap C_{J^{\prime}0}^{(i)})}{\mu_{i}(C_{J^{\prime}0}^{(i)} )}\right)\leq\log|\Lambda_{i}|.\]
Therefore,
\[H_{\mu_{i}}\left(\mathscr{A}^{(i)}\vee\pi_{i}^{-1}(\mathscr{C}^{(i+1)}) \big{|}\mathscr{C}^{(i)}\right)\leq\log|\Lambda_{i}|\sum_{J^{\prime}\in\Lambda_ {i+1}^{0}}\mu_{i}\left(\pi_{i}^{-1}(C_{J^{\prime}}^{(i+1)})\setminus\bigcup_{j _{i}=1}^{m_{i}}C_{J^{\prime}j_{i}}^{(i)}\right)<1.\]
Recall the definition of \(\boldsymbol{w}\) in (2.1). We have
\[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i},\mathscr{C}^{(i)})+w_{1} \int_{X_{1}}fd\mu\] \[=\ \lim_{N\to\infty}\frac{1}{N}\Bigg{\{}H_{\mu_{r}}(\mathscr{C} _{N}^{(r)})+a_{1}a_{2}\cdots a_{r-1}N\int_{X_{1}}fd\mu\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{i=1}^{r-1}a_{i} a_{i+1}\cdots a_{r-1}\left(H_{\mu_{i}}(\mathscr{C}_{N}^{(i)})-H_{\mu_{i+1}}( \mathscr{C}_{N}^{(i+1)})\right)\Bigg{\}}\] \[=\ \lim_{N\to\infty}\frac{1}{N}\Bigg{\{}H_{\mu_{r}}(\mathscr{C}_{N}^ {(r)})+a_{1}a_{2}\cdots a_{r-1}\int_{X_{1}}S_{N}fd\mu\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\sum_{i=1}^{r-1} a_{i}a_{i+1}\cdots a_{r-1}H_{\mu_{i}}\left(\mathscr{C}_{N}^{(i)}\Big{|}\pi_{i}^{-1}( \mathscr{C}_{N}^{(i+1)})\right)\Bigg{\}}.\]
Here, we used the relation
\[H_{\mu_{i}}(\mathscr{C}_{N}^{(i)})-H_{\mu_{i+1}}(\mathscr{C}_{N}^{(i+1)}) =H_{\mu_{i}}(\mathscr{C}_{N}^{(i)})-H_{\mu_{i}}(\pi_{i}^{-1}( \mathscr{C}_{N}^{(i+1)}))\] \[=H_{\mu_{i}}\left(\mathscr{C}_{N}^{(i)}\Big{|}\pi_{i}^{-1}( \mathscr{C}_{N}^{(i+1)})\right).\]
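The relation above is the usual chain rule \(H_{\mu}(\mathscr{A})-H_{\mu}(\mathscr{B})=H_{\mu}(\mathscr{A}\,|\,\mathscr{B})\) for a coarsening \(\mathscr{B}\) of \(\mathscr{A}\). Here is a minimal numerical illustration in Python (a toy discrete example we add; it plays no role in the proof):

```python
import numpy as np

# H(A) - H(B) = H(A | B) when every block of B is a union of blocks of A.
rng = np.random.default_rng(1)
p = rng.dirichlet(np.ones(12))  # a measure on a 12-cell partition A
blocks = [list(range(0, 3)), list(range(3, 7)), list(range(7, 12))]  # coarsening B

def H(q):
    q = q[q > 0]
    return -np.sum(q * np.log(q))

pB = np.array([p[b].sum() for b in blocks])
H_cond = sum(pb * H(p[b] / pb) for b, pb in zip(blocks, pB))
print(H(p) - H(pB), H_cond)  # the two numbers agree up to rounding
```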
We fix \(N\) and evaluate from above the following terms using backward induction:
\[H_{\mu_{r}}(\mathscr{C}_{N}^{(r)})+a_{1}a_{2}\cdots a_{r-1}\int_{X_{1}}S_{N}fd \mu+\sum_{i=1}^{r-1}a_{i}a_{i+1}\cdots a_{r-1}H_{\mu_{i}}\left(\mathscr{C}_{N}^ {(i)}\Big{|}\pi_{i}^{-1}(\mathscr{C}_{N}^{(i+1)})\right). \tag{5.2}\]
First, consider the term
\[a_{1}a_{2}\cdots a_{r-1}\left(H_{\mu}\left(\mathscr{C}_{N}^{(1)}\Big{|}\pi_{1 }^{-1}(\mathscr{C}_{N}^{(2)})\right)+\int_{X_{1}}S_{N}fd\mu\right).\]
For \(C\in\mathscr{C}_{N}^{(i+1)}\), let \(\mathscr{C}_{N}^{(i)}(C)=\{D\in\mathscr{C}_{N}^{(i)}\,|\pi_{i}(D)\subset C\}\), then by Lemma 3.4,
\[H_{\mu}\left(\mathscr{C}_{N}^{(1)}\Big{|}\pi_{1}^{-1}(\mathscr{C}_{N}^{(2)}) \right)+\int_{X_{1}}S_{N}fd\mu\leq\sum_{C\in\mathscr{C}_{N}^{(2)}}\mu_{2}(C) \log\sum_{D\in\mathscr{C}_{N}^{(1)}(C)}e^{\sup_{D}S_{N}f}.\]
Applying this inequality to (5.2), the following term appears:
\[a_{2}a_{3}\cdots a_{r-1}\left(H_{\mu_{2}}\left(\mathscr{C}_{N}^{(2)}\Big{|}\pi _{2}^{-1}(\mathscr{C}_{N}^{(3)})\right)+a_{1}\sum_{C\in\mathscr{C}_{N}^{(2)}} \mu_{2}(C)\log\sum_{D\in\mathscr{C}_{N}^{(1)}(C)}e^{\sup_{D}S_{N}f}\right). \tag{5.3}\]
This can be evaluated similarly using Lemma 3.4 as
\[H_{\mu_{2}}\left(\mathscr{C}_{N}^{(2)}\Big{|}\pi_{2}^{-1}(\mathscr{C}_{N}^{(3 )})\right)+a_{1}\sum_{C\in\mathscr{C}_{N}^{(2)}}\mu_{2}(C)\log\sum_{D\in \mathscr{C}_{N}^{(1)}(C)}e^{\sup_{D}S_{N}f}\]
\[=\sum_{\begin{subarray}{c}C\in\mathscr{C}_{N}^{(3)}\\ \mu_{3}(C)\neq 0\end{subarray}}\mu_{3}(C)\left\{\sum_{D\in\mathscr{C}_{N}^{(2)}(C) }\left(-\frac{\mu_{2}(D)}{\mu_{3}(C)}\log\frac{\mu_{2}(D)}{\mu_{3}(C)}+\frac{ \mu_{2}(D)}{\mu_{3}(C)}\log\left(\sum_{E\in\mathscr{C}_{N}^{(1)}(D)}e^{\sup_{ E}S_{N}f}\right)^{a_{1}}\right)\right\}\]
\[\leq\sum_{C\in\mathscr{C}_{N}^{(3)}}\mu_{3}(C)\log\sum_{D\in\mathscr{C}_{N}^{(2 )}(C)}\left(\sum_{E\in\mathscr{C}_{N}^{(1)}(D)}e^{\sup_{E}S_{N}f}\right)^{a_{1}}.\]
Continue likewise and obtain the following upper bound for (5.2):
\[\log\sum_{C^{(r)}\in\mathscr{C}_{N}^{(r)}}\left(\sum_{C^{(r-1)}\in\mathscr{C}_{N}^ {(r-1)}(C^{(r)})}\left(\cdots\left(\sum_{C^{(1)}\in\mathscr{C}_{N}^{(1)}(C^{(2) })}e^{\sup_{C^{(1)}}S_{N}f}\right)^{a_{1}}\cdots\right)^{a_{r-2}}\right)^{a_{r- 1}}. \tag{5.4}\]
For \(1\leq i\leq r\), let \(\mathscr{C}_{c}^{(i)}=\{C\in\mathscr{C}^{(i)}\,|\,C\text{ is compact}\}\). There is a positive number \(\varepsilon_{i}\) such that \(d^{(i)}(y_{1},y_{2})>\varepsilon_{i}\) for any two distinct \(C_{1},C_{2}\in\mathscr{C}_{c}^{(i)}\) and any \(y_{1}\in C_{1},y_{2}\in C_{2}\). Fix a positive number \(\varepsilon\) with
\[\varepsilon<\min_{1\leq i\leq r}\varepsilon_{i}. \tag{5.5}\]
Let \((\mathscr{F}^{(i)})_{i}\) be a chain of open \((N,\,\varepsilon)\)-covers of \((X_{i})_{i}\) (see Definition 3.1). Consider
\[\log\mathscr{P}^{\boldsymbol{a}}\left(f,\,N,\varepsilon,\,(\mathscr{F}^{(i)} )_{i}\right)\]
\[=\log\sum_{U^{(r)}\in\mathscr{F}^{(r)}}\left(\sum_{U^{(r-1)}\in\mathscr{F}^{( r-1)}(U^{(r)})}\left(\cdots\left(\sum_{U^{(1)}\in\mathscr{F}^{(1)}(U^{(2)})}e^{ \sup_{U^{(1)}}S_{N}f}\right)^{a_{1}}\cdots\right)^{a_{r-2}}\right)^{a_{r-1}}. \tag{5.6}\]
We will evaluate (5.4) from above by (5.6) up to a constant. We need the next lemma.
**Lemma 5.3**.:
1. _For any_ \(V\subset X_{r}\) _with_ \(\operatorname{diam}(V,d_{N}^{(r)})<\varepsilon\)_,_ \[\left|\left\{D\in\mathscr{C}_{N}^{(r)}\Big{|}\,D\cap V\neq\varnothing\right\} \right|\leq 2^{N}.\]
2. _Let_ \(1\leq i\leq r-1\) _and_ \(C\in\mathscr{C}_{N}^{(i+1)}\)_. For any_ \(V\subset X_{i}\) _with_ \(\operatorname{diam}(V,d_{N}^{(i)})<\varepsilon\)_,_ \[\left|\left\{D\in\mathscr{C}_{N}^{(i)}(C)\Big{|}\,D\cap V\neq\varnothing\right\} \right|\leq 2^{N}.\]
Proof.: (1) \(D\in\mathscr{C}_{N}^{(r)}\) can be expressed using \(C_{k_{s}}^{(r)}\in\mathscr{C}^{(r)}\) (\(s=0,1,\ldots,N-1\)) as
\[D=C_{k_{0}}^{(r)}\cap T_{r}^{-1}C_{k_{1}}^{(r)}\cap T_{r}^{-2}C_{k_{2}}^{(r)} \cap\cdots\cap T_{r}^{-N+1}C_{k_{N-1}}^{(r)}.\]
If \(D\cap V\neq\varnothing\), we have \(T_{r}^{-s}(C_{k_{s}}^{(r)})\cap V\neq\varnothing\) for every \(0\leq s\leq N-1\). Then for each \(s\)
\[\varnothing\neq T_{r}^{s}\left(T_{r}^{-s}(C_{k_{s}}^{(r)})\cap V\right)\subset C _{k_{s}}^{(r)}\cap T_{r}^{s}(V).\]
By (5.5), for each \(s\) the set \(T_{r}^{s}(V)\) has \(d^{(r)}\)-diameter less than \(\varepsilon_{r}\), so it meets at most one compact element of \(\mathscr{C}^{(r)}\); hence \(k_{s}\) is either \(0\) or one particular element of \(\{1,2,\ldots,m_{r}\}\) determined by \(V\) and \(s\). Therefore, there are at most \(2^{N}\) such sets.
(2) The proof works in the same way as in (1). \(C\) can be written using \(J_{k}\in\Lambda_{i+1}^{0}\) (\(k=0,1,\ldots,N-1\)) as
\[C=C_{J_{0}}^{(i+1)}\cap T_{i+1}^{-1}C_{J_{1}}^{(i+1)}\cap T_{i+1}^{-2}C_{J_{2} }^{(i+1)}\cap\cdots\cap T_{i+1}^{-N+1}C_{J_{N-1}}^{(i+1)}.\]
Then any \(D\in\mathscr{C}_{N}^{(i)}(C)\) is of the form
\[D=C_{J_{0}k_{0}}^{(i)}\cap T_{i}^{-1}C_{J_{1}k_{1}}^{(i)}\cap T_{i}^{-2}C_{J_{ 2}k_{2}}^{(i)}\cap\cdots\cap T_{i}^{-N+1}C_{J_{N-1}k_{N-1}}^{(i)}\]
with \(0\leq k_{l}\leq m_{i}\) (\(l=0,1,\ldots,N-1\)). If \(D\cap V\neq\varnothing\), then as before each \(k_{l}\) is either \(0\) or one particular element of \(\{1,2,\ldots,m_{i}\}\). Therefore, there are at most \(2^{N}\) such sets.
For any \(C^{(1)}\in\mathscr{C}_{N}^{(1)}\), there is \(V\in\mathscr{F}^{(1)}\) with \(V\cap C^{(1)}\neq\varnothing\) and
\[e^{\sup_{C^{(1)}}S_{N}f}\leq e^{\sup_{V}S_{N}f}.\]
Let \(C^{(2)}\in\mathscr{C}_{N}^{(2)}\), then by Lemma 5.3,
\[\sum_{C^{(1)}\in\mathscr{C}_{N}^{(1)}(C^{(2)})}e^{\sup_{C^{(1)}}S_{N}f}\leq \sum_{\begin{subarray}{c}U\in\mathscr{F}^{(2)}\\ U\cap C^{(2)}\neq\varnothing\end{subarray}}2^{N}\sum_{V\in\mathscr{F}^{(1)}(U )}e^{\sup_{V}S_{N}f}.\]
By Lemma 3.4,
\[\left(\sum_{C^{(1)}\in\mathscr{C}_{N}^{(1)}(C^{(2)})}e^{\sup_{C^{(1)}}S_{N}f }\right)^{a_{1}}\leq 2^{a_{1}N}\sum_{\begin{subarray}{c}U\in\mathscr{F}^{(2)} \\ U\cap C^{(2)}\neq\varnothing\end{subarray}}\left(\sum_{V\in\mathscr{F}^{(1)}(U )}e^{\sup_{V}S_{N}f}\right)^{a_{1}}.\]
For \(C^{(3)}\in\mathscr{C}_{N}^{(3)}\), we apply Lemma 5.3 and Lemma 3.4 similarly and obtain
\[\left(\sum_{C^{(2)}\in\mathscr{C}_{N}^{(2)}(C^{(3)})}\left(\sum_ {C^{(1)}\in\mathscr{C}_{N}^{(1)}(C^{(2)})}e^{\sup_{C^{(1)}}S_{N}f}\right)^{a_ {1}}\right)^{a_{2}}\\ \leq 2^{a_{1}a_{2}N}2^{a_{2}N}\sum_{\begin{subarray}{c}O\in \mathscr{F}^{(3)}\\ O\cap C^{(3)}\neq\varnothing\end{subarray}}\left(\sum_{U\in\mathscr{F}^{(2)} (O)}\left(\sum_{V\in\mathscr{F}^{(1)}(U)}e^{\sup_{V}S_{N}f}\right)^{a_{1}} \right)^{a_{2}}.\]
We continue this reasoning and get
\[\sum_{C^{(r)}\in\mathscr{C}_{N}^{(r)}}\left(\sum_{C^{(r-1)}\in \mathscr{C}_{N}^{(r-1)}(C^{(r)})}\left(\cdots\left(\sum_{C^{(1)}\in\mathscr{C }_{N}^{(1)}(C^{(2)})}e^{\sup_{C^{(1)}}S_{N}f}\right)^{a_{1}}\cdots\right)^{a_ {r-2}}\right)^{a_{r-1}}\\ \leq 2^{\alpha N}\sum_{U^{(r)}\in\mathscr{F}^{(r)}}\left(\sum_{U ^{(r-1)}\in\mathscr{F}^{(r-1)}(U^{(r)})}\left(\cdots\left(\sum_{U^{(1)}\in \mathscr{F}^{(1)}(U^{(2)})}e^{\sup_{U^{(1)}}S_{N}f}\right)^{a_{1}}\cdots \right)^{a_{r-2}}\right)^{a_{r-1}}.\]
Here \(\alpha\) stands for \(\sum_{i=1}^{r-1}a_{i}a_{i+1}\cdots a_{r-1}\). We take the logarithm of both sides; the left-hand side equals (5.4), which is an upper bound for (5.2). Furthermore, consider the infimum over the chain of open (\(N\), \(\varepsilon\))-covers \((\mathscr{F}^{(i)})_{i}\). By Remark 3.2, this yields
\[H_{\mu_{r}}(\mathscr{C}_{N}^{(r)})+a_{1}a_{2}\cdots a_{r-1}\int_ {X_{1}}S_{N}fd\mu+\sum_{i=1}^{r-1}a_{i}a_{i+1}\cdots a_{r-1}H_{\mu_{i}}\left( \mathscr{C}_{N}^{(i)}\Big{|}\pi_{i}^{-1}(\mathscr{C}_{N}^{(i+1)})\right)\\ \leq\log P_{r}^{\mathbf{a}}(X_{r},\,f,\,N,\,\varepsilon)+\alpha N\log 2.\]
Divide by \(N\), then let \(N\to\infty\) and \(\varepsilon\to 0\). We obtain
\[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i},\mathscr{C}^{(i)})+w_{1}\int_{X_{1}}fd\mu \leq P^{\boldsymbol{a}}(f,\boldsymbol{T})+\alpha\log 2.\]
Lemma 5.2 yields
\[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i},\mathscr{A}^{(i)})+w_{1}\int_{X_{1}}fd\mu \leq P^{\boldsymbol{a}}(f,\boldsymbol{T})+\alpha\log 2+r.\]
We take the supremum over the partitions \((\mathscr{A}^{(i)})_{i}\):
\[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i})+w_{1}\int_{X_{1}}\!fd\mu\leq P^{ \boldsymbol{a}}(f,\boldsymbol{T})+\alpha\log 2+r.\]
By the argument at the beginning of this proof, we conclude that
\[\sum_{i=1}^{r}w_{i}h_{\mu_{i}}(T_{i})+w_{1}\int_{X_{1}}\!fd\mu\leq P^{ \boldsymbol{a}}(f,\boldsymbol{T}).\]
## 6. Example: Sofic Sets
Kenyon-Peres [10] calculated the Hausdorff dimension of sofic sets in \(\mathbb{T}^{2}\). In this section, we will see that we can calculate the Hausdorff dimension of certain sofic sets in \(\mathbb{T}^{d}\) with arbitrary \(d\). We give an example for the case \(d=3\).
### Definition of Sofic Sets
This subsection is based on [10] and [11]. In [12], _sofic systems_ were defined as subshifts which are factors of shifts of finite type. Boyle, Kitchens, and Marcus proved in [1] that this is equivalent to the following definition.
**Definition 6.1** ([10, Proposition 3.6]).: Consider a finite directed graph \(G=\langle V,E\rangle\) in which loops and multiple edges are allowed. Suppose its edges are colored in \(l\) colors in a "right-resolving" fashion: every two edges emanating from the same vertex have different colors. Then the set of color sequences that arise from infinite paths in \(G\) is called a **sofic system**.
Let \(m_{1}\leq m_{2}\leq\cdots\leq m_{r}\) be natural numbers, \(T\) an endomorphism on \(\mathbb{T}^{r}=\mathbb{R}^{r}/\mathbb{Z}^{r}\) represented by the diagonal matrix \(A=\operatorname{diag}(m_{1},m_{2},\ldots,m_{r})\), and \(D=\prod_{i=1}^{r}\{0,1,\ldots,m_{i}-1\}\). Define a map \(R_{r}:D^{\mathbb{N}}\to\mathbb{T}^{r}\) by
\[R_{r}((e^{(n)})_{n=1}^{\infty})=\left(\sum_{k=1}^{\infty}\frac{e_{1}^{(k)}}{{ m_{1}}^{k}},\cdots,\sum_{k=1}^{\infty}\frac{e_{r}^{(k)}}{{m_{r}}^{k}}\right)\]
where \(e^{(k)}=(e_{1}^{(k)},\cdots,e_{r}^{(k)})\in D\) for each \(k\). Suppose the edges in some finite directed graph are labeled by the elements in \(D\) in the right-resolving fashion, and let \(S\subset D^{\mathbb{N}}\) be the resulting sofic system. The image of \(S\) under \(R_{r}\) is called a **sofic set**.
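For concreteness, here is a small Python sketch of the coding map \(R_{r}\) for \(r=3\) and \((m_{1},m_{2},m_{3})=(2,3,4)\), truncating the infinite series at a finite depth (our illustration; the choice of digit sequence is arbitrary):

```python
import numpy as np

# Truncated version of R_3: send a digit sequence in D^N to a point of T^3.
def R3(digits, m=(2, 3, 4)):
    x = np.zeros(3)
    for k, e in enumerate(digits, start=1):
        x += np.array(e, dtype=float) / np.array(m, dtype=float) ** k
    return x % 1.0

# The 2-periodic sequence (1,2,3),(0,0,0),(1,2,3),... maps to (2/3, 3/4, 4/5):
digits = [(1, 2, 3), (0, 0, 0)] * 30
print(R3(digits))  # ~ [0.6667, 0.75, 0.8]
```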
### An example of a sofic set
Here we will look at an example of a sofic set and calculate its Hausdorff dimension via its weighted topological entropy. Let \(D=\{0,1\}\times\{0,1,2\}\times\{0,1,2,3\}\) and consider the directed graph \(G=\langle V,E\rangle\) with \(V=\{1,2,3\}\) and \(D\)-labeled edges in Figure 2.
Let \(Y_{1}\subset D^{\mathbb{N}}\) be the resulting sofic system. Let \(C=\{0,1\}\times\{0,1,2\}\) and \(B=\{0,1\}\). Define \(p_{1}:D\to C\) and \(p_{2}:C\to B\) by
\[p_{1}(i,j,k)=(i,j),\quad p_{2}(i,j)=i.\]
Let \(p_{1}^{\mathbb{N}}:D^{\mathbb{N}}\to C^{\mathbb{N}}\) and \(p_{2}^{\mathbb{N}}:C^{\mathbb{N}}\to B^{\mathbb{N}}\) be the product map of \(p_{1}\) and \(p_{2}\), respectively. Set \(Y_{2}=p_{1}^{\mathbb{N}}(Y_{1})\) and \(Y_{3}=p_{2}^{\mathbb{N}}(Y_{2})\). Note that \(Y_{2}=\{(0,0),(1,0),(0,1)\}^{\mathbb{N}}\) and \(Y_{3}=\{0,1\}^{\mathbb{N}}\), meaning they are full shifts.
The sets \(X_{i}=R_{4-i}(Y_{i})\) \((i=1,2,3)\) are sofic sets, with \(X_{i}\subset\mathbb{T}^{4-i}\). Define \(\pi_{1}:X_{1}\to X_{2}\) and \(\pi_{2}:X_{2}\to X_{3}\) by
\[\pi_{1}(x,y,z)=(x,y),\quad\pi_{2}(x,y)=x.\]
Furthermore, let \(T_{1}\), \(T_{2}\), and \(T_{3}\) be the endomorphisms on \(X_{1}\), \(X_{2}\), and \(X_{3}\) represented by the matrices \(\operatorname{diag}(2,3,4)\), \(\operatorname{diag}(2,3)\), and \(\operatorname{diag}(2)\), respectively. Then \((X_{i},T_{i})_{i}\) and \((\pi_{i})_{i}\) form a sequence of dynamical systems.
Figure 2. Directed graph \(G\)
For a natural number \(N\), denote by \(Y_{i}|_{N}\) the restriction of \(Y_{i}\) to its first \(N\) coordinates, and let \(p_{i,N}:Y_{i}|_{N}\to Y_{i+1}|_{N}\) be the projections for \(i=1,2\). Since \(Y_{2}\) and \(Y_{3}\) are full shifts, we can use the same technique as in Example 1.1. Therefore, we have for any exponent \(\boldsymbol{a}=(a_{1},a_{2})\in[0,1]^{2}\),
\[h^{\boldsymbol{a}}(\boldsymbol{T})=\lim_{N\to\infty}\frac{1}{N}\log\sum_{u\in \{0,1\}^{N}}\left(\sum_{v\in p_{2,N}{}^{-1}(u)}\left|p_{1,N}{}^{-1}(v)\right|^ {a_{1}}\right)^{a_{2}}.\]
Now, let us evaluate \(\left|p_{1,N}{}^{-1}(v)\right|\) using matrix products. This idea of using matrix products is due to Kenyon-Peres [10]. Take \((a,b)\in\left\{0,1\right\}^{2}\) and let
\[a_{ij}=|\{e\in E\!\mid\!e\text{ is from }j\text{ to }i\text{ and the first two coordinates of its label are }(a,b)\}|.\]
Define a \(3\times 3\) matrix by \(A_{(a,b)}=(a_{ij})_{ij}\). Then we have
\[A_{(0,0)}=\begin{pmatrix}0&1&1\\ 0&0&1\\ 1&1&0\end{pmatrix},A_{(0,1)}=\begin{pmatrix}1&1&1\\ 1&1&0\\ 0&1&2\end{pmatrix},A_{(1,0)}=\begin{pmatrix}1&2&2\\ 0&1&2\\ 2&2&1\end{pmatrix},A_{(1,1)}=O.\]
Note that \(A_{(0,0)}{}^{2}=A_{(0,1)}\) and \(A_{(0,0)}{}^{3}=A_{(1,0)}\). For \(v=(v_{1},\cdots,v_{N})\in Y_{2}|_{N}\) we have
\[\left|p_{1,N}{}^{-1}(v)\right|\asymp\|A_{v_{1}}A_{v_{2}}\cdots A_{v_{N}}\|.\]
Here \(A\asymp B\) means there is a constant \(c>0\) independent of \(N\) with \(c^{-1}B\leq A\leq cB\). For \(\alpha=\frac{1+\sqrt{5}}{2}\), we have \(\alpha^{2}=\alpha+1\) and
\[A_{(0,0)}\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix}=\begin{pmatrix}1+\alpha\\ \alpha\\ 1+\alpha\end{pmatrix}=\alpha\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix},\quad A_{(0,1)}\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix}=\alpha^{2}\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix},\quad A_{(1,0)}\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix}=\alpha^{3}\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix}.\]
Therefore,
\[\|A_{v_{1}}A_{v_{2}}\cdots A_{v_{N}}\|\asymp\left\|A_{v_{1}}A_{v_{2}}\cdots A _{v_{N}}\begin{pmatrix}\alpha\\ 1\\ \alpha\end{pmatrix}\right\|\asymp\lambda_{v_{1}}\lambda_{v_{2}}\cdots\lambda_{v_ {N}}\]
where \(\lambda_{(0,0)}=\alpha\), \(\lambda_{(0,1)}=\alpha^{2}\), \(\lambda_{(1,0)}=\alpha^{3}\).
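These identities are quick to verify numerically; the following Python check (our addition) confirms the matrix relations and the common eigenvector \((\alpha,1,\alpha)\):

```python
import numpy as np

# Verify A_(0,0)^2 = A_(0,1), A_(0,0)^3 = A_(1,0), and the eigenvector (alpha, 1, alpha).
A00 = np.array([[0, 1, 1], [0, 0, 1], [1, 1, 0]])
A01 = np.array([[1, 1, 1], [1, 1, 0], [0, 1, 2]])
A10 = np.array([[1, 2, 2], [0, 1, 2], [2, 2, 1]])
alpha = (1 + np.sqrt(5)) / 2
v = np.array([alpha, 1.0, alpha])
print(np.array_equal(A00 @ A00, A01))        # True
print(np.array_equal(A00 @ A00 @ A00, A10))  # True
for A, lam in [(A00, alpha), (A01, alpha**2), (A10, alpha**3)]:
    print(np.allclose(A @ v, lam * v))       # True, True, True
```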
Fix \(u\in\left\{0,1\right\}^{N}\) and suppose that \(u\) contains exactly \(n\) zeros. If \(v=(v_{1},\cdots,v_{N})\in p_{2,N}{}^{-1}(u)\) contains \(k\) copies of \((0,0)\), then it contains \(n-k\) copies of \((0,1)\) and \(N-n\) copies of \((1,0)\). Then
\[\lambda_{v_{1}}{}^{a_{1}}\cdots\lambda_{v_{N}}{}^{a_{1}}=\alpha^{a_{1}k}\alpha ^{2a_{1}(n-k)}\alpha^{3a_{1}(N-n)}.\]
Therefore,
\[\sum_{v\in p_{2,N}{}^{-1}(u)}\left|p_{1,N}{}^{-1}(v)\right|^{a_{1}} =\sum_{(v_{1},\cdots,v_{N})\in p_{2,N}{}^{-1}(u)}\lambda_{v_{1}}{ }^{a_{1}}\cdots\lambda_{v_{N}}{}^{a_{1}}=\sum_{k=0}^{n}\begin{pmatrix}n\\ k\end{pmatrix}\alpha^{a_{1}k}\alpha^{2a_{1}(n-k)}\alpha^{3a_{1}(N-n)}\] \[=\left(\alpha^{a_{1}}+\alpha^{2a_{1}}\right)^{n}\alpha^{3a_{1}(N-n )}.\]
This implies
\[\sum_{u\in\{0,1\}^{N}}\left(\sum_{v\in p_{2,N}{}^{-1}(u)}{|{p_{1,N}} ^{-1}(v)|}^{a_{1}}\right)^{a_{2}} =\sum_{n=0}^{N}\binom{N}{n}\big{(}\alpha^{a_{1}}+\alpha^{2a_{1}} \big{)}^{a_{2}n}\alpha^{3a_{1}a_{2}(N-n)}\] \[=\left\{\big{(}\alpha^{a_{1}}+\alpha^{2a_{1}}\big{)}^{a_{2}}+ \alpha^{3a_{1}a_{2}}\right\}^{N}.\]
We conclude that
\[h^{\boldsymbol{a}}(\boldsymbol{T}) =\lim_{N\to\infty}\frac{1}{N}\log\left\{\big{(}\alpha^{a_{1}}+ \alpha^{2a_{1}}\big{)}^{a_{2}}+\alpha^{3a_{1}a_{2}}\right\}^{N}\] \[=\log\Bigg{\{}\left(\left(\frac{1+\sqrt{5}}{2}\right)^{a_{1}}+ \left(\frac{3+\sqrt{5}}{2}\right)^{a_{1}}\right)^{a_{2}}+\left(2+\sqrt{5} \right)^{a_{1}a_{2}}\Bigg{\}}.\]
As in Example 1.4, the Hausdorff dimension of \(X_{1}\) is obtained by letting \(a_{1}=\log_{4}3\) and \(a_{2}=\log_{3}2\);
\[\dim_{H}(X_{1}) =\frac{1}{\log 2}\log\left\{\left(\left(\frac{1+\sqrt{5}}{2} \right)^{\log_{4}3}+\left(\frac{3+\sqrt{5}}{2}\right)^{\log_{4}3}\right)^{\log _{3}2}+\sqrt{2+\sqrt{5}}\right\}\] \[=2.1061\cdots.\]
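A quick numerical evaluation of these closed formulas (a convenience sketch we add; the division by \(\log 2\) reflects the normalization \(\dim_{H}(X_{1})=h^{\boldsymbol{a}}(\boldsymbol{T})/\log 2\) assumed from Example 1.4):

```python
import numpy as np

# Evaluate h^a(T) and the resulting Hausdorff dimension of X_1.
alpha = (1 + np.sqrt(5)) / 2
a1 = np.log(3) / np.log(4)  # log_4 3
a2 = np.log(2) / np.log(3)  # log_3 2
inner = (alpha**a1 + alpha**(2 * a1))**a2 + (2 + np.sqrt(5))**(a1 * a2)
h = np.log(inner)            # weighted entropy, ~1.4598...
print(h, h / np.log(2))      # dimension h / log 2, ~2.1061...
```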
## Acknowledgement
I am deeply grateful to my mentor, Masaki Tsukamoto, who not only has reviewed this paper several times throughout the writing process but has patiently helped me understand ergodic theory in general with his expertise.
I also want to thank my family and friends for their unconditional support and everyone who has participated in my study for their time and willingness to share their knowledge. This work could not have been possible without their help.
|
2303.17825 | Refinements of Katz-Sarnak theory for the number of points on curves
over finite fields | This paper goes beyond Katz-Sarnak theory on the distribution of curves over
finite fields according to their number of rational points, theoretically,
experimentally and conjecturally. In particular, we give a formula for the
limits of the moments measuring the asymmetry of this distribution for
(non-hyperelliptic) curves of genus $g \geq 3$. The experiments point to a
stronger notion of convergence than the one provided by the Katz-Sarnak
framework for all curves of genus $\geq 3$. However, for elliptic curves and
for hyperelliptic curves of every genus we prove that this stronger convergence
cannot occur. | Jonas Bergström, Everett W. Howe, Elisa Lorenzo García, Christophe Ritzenthaler | 2023-03-31T06:47:41Z | http://arxiv.org/abs/2303.17825v2 | # Refinements of Katz-Sarnak theory for the number of points on curves over finite fields
###### Abstract.
This paper goes beyond Katz-Sarnak theory on the distribution of curves over finite fields according to their number of rational points, theoretically, experimentally and conjecturally. In particular, we give a formula for the limits of the moments measuring the asymmetry of this distribution for (non-hyperelliptic) curves of genus \(g\geq 3\). The experiments point to a stronger notion of convergence than the one provided by the Katz-Sarnak framework for all curves of genus \(\geq 3\). However, for elliptic curves and for hyperelliptic curves of every genus we prove that this stronger convergence cannot occur.
Key words and phrases: Katz-Sarnak theory; distribution; moments; Serre's obstruction
2010 Mathematics Subject Classification: 11G20, 11R45, 14H10, 14H25
\({}^{1}\)Throughout this paper, the word 'curve' will always mean a projective, absolutely irreducible, smooth variety of dimension 1.
\(n\), see for instance [1, Th. 3.4] for \(\mathcal{H}_{g}\) (note that the odd \(n\) values are equal to \(0\) in this case) and [1] for \(\mathcal{M}_{3}^{\mathrm{nhyp}}\). However, it is possible to give an interpretation for
\[\mathfrak{a}_{n}(\mathcal{X}):=\lim_{q\to\infty}\frac{S_{n}(q,\mathcal{X})}{q^ {\dim\mathcal{X}+n/2}}\]
with \(\mathcal{X}=\mathcal{M}_{g}\), \(\mathcal{H}_{g}\) or \(\mathcal{M}_{g}^{\mathrm{nhyp}}\) for every \(g\geq 2\) and even \(n\geq 2\) in terms of representation theory of the compact symplectic group \(\mathrm{USp}_{2g}\). This is achieved in [1, Th. 3.8] using the ideas of Katz and Sarnak.
Our first contributions are gathered in Theorem 2.3. Using the results of Johnson [15] and Hain [14], together with results of [10, 11] about the first cohomology group of symplectic local systems on \(\mathcal{M}_{g}\), we can prove that for even values of \(n>0\) we have
\[\mathfrak{a}_{n}(\mathcal{M}_{g})-\frac{S_{n}(q,\mathcal{M}_{g})}{q^{\dim \mathcal{M}_{g}+n/2}}=O(q^{-1}) \tag{1.1}\]
when \(g\geq 2\), whereas Katz-Sarnak would only give \(O(q^{-1/2})\). Since \(\mathfrak{a}_{n}(\mathcal{M}_{g})=0\) for odd values of \(n\), this suggests replacing the exponent in the power of \(q\) in the denominator of the expression defining \(\mathfrak{a}_{n}(\mathcal{M}_{g})\) with a smaller number. As far as we know this has not been considered previously. We therefore introduce for odd \(n\)
\[\mathfrak{b}_{n}(\mathcal{M}_{g}):=-\lim_{q\to\infty}\frac{S_{n}(q,\mathcal{ M}_{g})}{q^{3g-3+(n-1)/2}}.\]
Theorem 2.3 gives \(\mathfrak{b}_{n}(\mathcal{M}_{g})\) in terms of an explicit integral and in terms of the representation theory of \(\mathrm{USp}_{2g}\). This second description makes it easy to compute.
The deep relations between the sum of traces and Katz-Sarnak theory becomes clearer once we switch to a probabilistic point of view. In Section 3, we introduce the classical probability measure \(\mu_{q,g}\) on the interval \([-2g,2g]\) derived from the numbers of \(\mathbb{F}_{q}\)-isomorphism classes of curves of genus \(g>1\) with given traces of Frobenius. From Katz-Sarnak, we then know that the sequence of measures \((\mu_{q,g})\) weakly converges to a continuous measure \(\mu_{g}\) with an explicit density \(\mathfrak{f}_{g}\) with a convergence rate of \(O(q^{-1/2})\) (see [14, Th. 2.1] for equivalent definitions of weak convergence of measures). In this language, the numbers \(\mathfrak{a}_{n}(\mathcal{M}_{g})\) can be understood as the \(n\)th moments of the measure \(\mu_{g}\) and for these moments we have a faster convergence rate of \(O(q^{-1})\) by (1.1). Notice, however, as explained in Remark 3.2, that this rate of convergence for moments cannot be extended to all continuous functions and therefore improve on the Katz-Sarnak result above.
In Section 4, we investigate whether the Katz-Sarnak limiting distributions can be used to approximate the number of curves over a given finite field \(\mathbb{F}_{q}\) of a given genus and with a given trace of Frobenius; one might hope that integrating that distribution over an interval of length \(1/\sqrt{q}\) around \(t/\sqrt{q}\) would give a value close to the number of genus-\(g\) curves over \(\mathbb{F}_{q}\) having trace \(t\). We show that this does _not_ happen for elliptic curves or for hyperelliptic curves of any genus. For elliptic curves, Proposition 4.4 shows that the number of elliptic curves with a given trace can be an arbitrarily large multiple of this naive Katz-Sarnak prediction (see also Figure 3). For hyperelliptic curves, Proposition 4.1 shows (roughly speaking) that if the number of curves is asymptotically bounded above and below by two multiples of the naive Katz-Sarnak prediction, then the ratio of these two multiples is bounded below by a fixed number strictly greater than \(1\) (see Figure 1).
On the other hand, experimentally, one sees that the elliptic and hyperelliptic cases differ in the sense that it is easy to 'correct' the distribution in the hyperelliptic cases to observe a good approximation by the density function \(\mathfrak{f}_{g}\) (see Figure 2). Even stronger, computations for all non-hyperelliptic curves of
genus 3 (see Figure 4) make us dream that the naive Katz-Sarnak approximation _does_ directly give an accurate estimate for the number of curves with a given number of points. This leads us to claim the bold Conjecture 5.1. The heuristic idea behind this conjecture is that for each trace, one is averaging over many isogeny classes which somehow would allow this stronger convergence as long as there are no obvious arithmetic obstructions. Our attempts to use the better convergence rates of the moments in the case of \(\mathcal{M}_{g}\) for \(g\geq 3\) to prove this conjecture were unfortunately unsuccessful.
Finally, in Section 5 we revisit the work of [11] on the symmetry breaking for the trace distribution of (non-hyperelliptic) genus 3 curves, by looking at the difference between the number of curves with trace \(t\) and the number of curves with trace \(-t\). In probabilistic terms, this asymmetry is given by a signed measure \(\nu_{q,g}\). Although this signed measure weakly converges to \(0\) when \(q\) goes to infinity, by Corollary 5.3, the moments of \(\sqrt{q}\,\nu_{q,g}\) converge to \(-2\mathfrak{b}_{n}(\mathcal{M}_{g})\) when \(n\) is odd (and are trivially \(0\) when \(n\) is even). In particular, this shows that 'zooming in' on the Katz-Sarnak distribution, one can spot a difference between the behaviour for hyperelliptic curves (for which the corresponding signed measures would all be \(0\)) and for non-hyperelliptic curves.
In the same spirit as Section 4, one then introduces a limit measure with density function \(\mathfrak{h}_{g}\) whose \(n\)th moments are \(\mathfrak{b}_{n}(\mathcal{M}_{g})\). The experimental data for \(g=3\) (see Figure 5) and the convergence of moments lead us to conjecture that the sequence of signed measures \((\sqrt{q}\,\nu_{q,g})\) weakly converges to the continuous signed measure with density \(-2\,\mathfrak{h}_{g}\) for all \(g\geq 3\). Notice that in contrast to the case of positive bounded measures, the convergence of moments of signed measures on a compact interval does not directly imply weak convergence; see Example 5.4.
With such a conjecture in hand, one may then improve on the result of [11] which heuristically approximated the limit density of \((\sqrt{q}\,\nu_{q,g})\) by the function \(x(1-x^{2}/3)\cdot\left(\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2}\right)\). Using the first values of \(\mathfrak{b}_{n}(\mathcal{M}_{3})\), we get the better approximation
\[x\left(5/4-x^{2}/2+x^{4}/60\right)\left(\frac{1}{\sqrt{2\pi}}e^{-x^{2}/2} \right).\]
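The first odd moments of this approximating density can be checked against the values \(\mathfrak{b}_{1}=0\), \(\mathfrak{b}_{3}=1\), \(\mathfrak{b}_{5}=9\) of \(\mathfrak{b}_{n}(\mathcal{M}_{3})\) coming from Theorem 2.3(2); they should equal \(-2\mathfrak{b}_{n}\). A small Python sketch (our addition), using the standard Gaussian moments \((2k-1)!!\):

```python
from math import prod

# Odd moments of x (5/4 - x^2/2 + x^4/60) * phi(x), phi the standard Gaussian
# density; they should equal -2 b_n(M_3) for n = 1, 3, 5, i.e. 0, -2, -18.
def gauss_moment(k):  # E[x^k] for x ~ N(0,1), k even
    return prod(range(k - 1, 0, -2)) if k > 0 else 1

for n in (1, 3, 5):
    m = 5/4 * gauss_moment(n + 1) - 1/2 * gauss_moment(n + 3) + gauss_moment(n + 5) / 60
    print(n, m)  # 0.0, -2.0, -18.0
```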
**Acknowledgement.** We thank Dan Petersen for helpful conversations in connection with the Gross-Schoen cycle and Sophie Dabo for discussions on measure theory.
## 2. Limits of sums of powers of traces
Fix a prime power \(q\). Let us start by recalling some definitions and results from [1].
**Definition 2.1**.: _Let \(\mathcal{X}=\mathcal{H}_{g}\), \(\mathcal{M}_{g}\) or \(\mathcal{M}_{g}^{\mathrm{nhyp}}\) for any \(g\geq 2\), or \(\mathcal{X}=\mathcal{M}_{1,1}\)._
* _Recall from Section_ 1 _that one defines_ \[S_{n}(q,\mathcal{X})=\sum_{[C]\in\mathcal{X}(\mathbb{F}_{q})}\sum_{C^{\prime} \in[C]}\frac{(q+1-\#C^{\prime}(\mathbb{F}_{q}))^{n}}{\#\operatorname{Aut}_{ \mathbb{F}_{q}}(C^{\prime})}\] _where if_ \([C]\) _is a point of_ \(\mathcal{X}(\mathbb{F}_{q})\) _representing the_ \(\overline{\mathbb{F}}_{q}\)_-isomorphism class of a curve_ \(C/\mathbb{F}_{q}\)_, the second sum spans the set of representatives of all twists_ \(C^{\prime}\) _of_ \(C\)_._
* _For every_ \(n\geq 1\)_, let_ \[\mathfrak{a}_{n}(\mathcal{X}):=\lim_{q\to\infty}\frac{S_{n}(q,\mathcal{X})}{q^ {\dim\mathcal{X}+n/2}}\] _with_ \(\mathcal{X}=\mathcal{H}_{g}\) _or_ \(\mathcal{M}_{g}\) _or_ \(\mathcal{M}_{g}^{\mathrm{nhyp}}\) _for any_ \(g\geq 2\)_, or with_ \(\mathcal{X}=\mathcal{M}_{1,1}\)_._
Define \(w_{k}:=\sum_{j=1}^{g}2\cos k\theta_{j}\) and \(dm_{g}:=\frac{1}{g!\,\pi^{g}}\prod_{i<j}(2\cos\theta_{i}-2\cos\theta_{j})^{2}\prod_{i}2\sin^{2}\theta_{i}\,d\theta_{1}\ldots d\theta_{g}\), and recall from [1, Th. 2.1] that for every \(g\geq 2\) and \(n\geq 1\),
\[\mathfrak{a}_{n}(\mathscr{X})=\int_{(\theta_{1},\ldots,\theta_{g})\in[0,\pi]^ {g}}w_{1}^{n}\,dm_{g},\]
with \(\mathscr{X}=\mathscr{H}_{g}\) or \(\mathscr{M}_{g}\) or \(\mathscr{M}_{g}^{\text{nhyp}}\). Notice that for a fixed value of \(g\), \(\mathfrak{a}_{n}(\mathscr{X})\) does not depend on \(\mathscr{X}\) and that \(\mathfrak{a}_{n}(\mathscr{X})=0\) for odd \(n\).
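The integral above is easy to evaluate numerically. The following Python sketch (our addition) approximates \(\int w_{1}^{n}\,dm_{g}\) for \(g=2\) by a midpoint rule; the outputs should match the small even moments \(1\), \(3\), \(14\) of the trace of a Haar-random element of \(\operatorname{USp}_{4}\):

```python
import numpy as np

# Midpoint-rule quadrature of a_n = ∫ w_1^n dm_g over [0, pi]^2 for g = 2.
K = 800
t = (np.arange(K) + 0.5) * np.pi / K
T1, T2 = np.meshgrid(t, t, indexing="ij")
density = (1.0 / (2 * np.pi**2)) * (2 * np.cos(T1) - 2 * np.cos(T2))**2 \
          * (2 * np.sin(T1)**2) * (2 * np.sin(T2)**2)
w1 = 2 * np.cos(T1) + 2 * np.cos(T2)
cell = (np.pi / K)**2
print("total mass:", (density * cell).sum())  # ~1.0, sanity check of dm_2
for n in (2, 4, 6):
    print(n, (w1**n * density * cell).sum())  # ~1, 3, 14
```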
In order to go deeper into the limit distribution, we will also look at the 'next term' of the limit of \(\frac{S_{n}(q,\mathscr{X})}{q^{\dim\mathscr{X}+n/2}}\) when \(\mathscr{X}=\mathscr{M}_{g}\).
**Definition 2.2**.: _For every \(g\geq 2\) and \(n\geq 1\), let_
\[\mathfrak{b}_{n}(\mathscr{M}_{g}):=-\lim_{q\to\infty}\sqrt{q}\left(\frac{S_{n} (q,\mathscr{M}_{g})}{q^{3g-3+n/2}}-\mathfrak{a}_{n}(\mathscr{M}_{g})\right).\]
To state our results, we need to recall basic facts about the representations of \(\operatorname{USp}_{2g}\) with coefficients in \(\mathbb{Q}_{\ell}\) where \(\ell\) is a prime distinct from the characteristic of \(\mathbb{F}_{q}\). The irreducible representations \(V_{\lambda}\) of \(\operatorname{USp}_{2g}\) are indexed by the highest weight \(\lambda=(\lambda_{1},\ldots,\lambda_{g})\) with \(\lambda_{1}\geq\ldots\geq\lambda_{g}\geq 0\). The corresponding characters \(\chi_{\lambda}\) are the symplectic Schur polynomials \(\mathbf{s}_{\langle\lambda\rangle}(x_{1},\ldots,x_{g})\in\mathbb{Z}[x_{1}, \ldots,x_{g},x_{1}^{-1},\ldots,x_{g}^{-1}]\) in the sense that if \(A\in\operatorname{USp}_{2g}\) has eigenvalues \(\alpha_{1},\ldots,\alpha_{g},\alpha_{1}^{-1},\ldots,\alpha_{g}^{-1}\) then \(\chi_{\lambda}(A)=\mathbf{s}_{\langle\lambda\rangle}(\alpha_{1},\ldots,\alpha _{g})\), see [1, Prop. 24.22 and (A.45)]. In the notation we will suppress the \(\lambda_{j}\) that are \(0\). Put \(|\lambda|=\lambda_{1}+\ldots+\lambda_{g}\) and note that \(V_{\lambda}^{\vee}\cong V_{\lambda}\). Let \(V=V_{(1)}\) denote the standard representation.
**Theorem 2.3**.:
1. _Let_ \(\mathscr{X}=\mathscr{H}_{g}\)_,_ \(\mathscr{M}_{g}\)_,_ \(\mathscr{M}_{g}^{\text{nhyp}}\) _for any_ \(g\geq 2\) _or_ \(\mathscr{M}_{1,1}\)_. For every_ \(n\geq 1\)_,_ \(\mathfrak{a}_{n}(\mathscr{X})\) _is equal to the number of times the trivial representation appears in the_ \(\operatorname{USp}_{2g}\)_-representation_ \(V^{\otimes n}\) _with_ \(V\) _the standard representation. (This is precisely [1, Th. 3.8], but we will give a different proof.)_
2. _For every_ \(g\geq 3\) _and_ \(n\geq 1\)_,_ \(\mathfrak{b}_{n}(\mathscr{M}_{g})\) _is equal to the number of times the representation_ \(V_{(1,1,1)}\) _appears in the_ \(\operatorname{USp}_{2g}\)_-representation_ \(V^{\otimes n}\) _with_ \(V\) _the standard representation. In particular_ \(\mathfrak{b}_{n}(\mathscr{M}_{g})=0\) _for_ \(n\) _even._
3. _For every_ \(n\geq 1\)_,_ \(\mathfrak{b}_{n}(\mathscr{M}_{2})=0\)_._
4. _For every_ \(g\geq 2\) _and_ \(n\geq 1\)_,_ \[\mathfrak{a}_{n}(\mathscr{M}_{g})-\frac{\mathfrak{b}_{n}(\mathscr{M}_{g})}{ \sqrt{q}}=\frac{S_{n}(q,\mathscr{M}_{g})}{q^{3g-3+n/2}}+O(q^{-1}).\]
5. _For every_ \(g\geq 3\) _and_ \(n\geq 1\) _we have_ \[\mathfrak{b}_{n}(\mathscr{M}_{g})=\int_{(\theta_{1},\ldots,\theta_{g})\in[0,\pi]^ {g}}w_{1}^{n}\Big{(}\frac{1}{6}w_{1}^{3}-\frac{1}{2}w_{1}w_{2}+\frac{1}{3}w_{3} -w_{1}\Big{)}\,dm_{g}.\] (2.1)
Proof.: Poincare duality gives a symplectic pairing on the first \(\ell\)-adic etale cohomology group of a curve. We will be interested in the action of Frobenius on these cohomology groups and since we need to take the size of the eigenvalues of Frobenius into account we will consider representations of \(\operatorname{GSp}_{2g}\). Let \(\mathbb{Q}_{\ell}(-1)\) denote the _multiplier representation_ or _similitude character_; if we identify \(\operatorname{GSp}_{2g}\) as the group of automorphisms of a \(2g\)-dimensional vector space that preserve a symplectic form \(s\) up to scaling, then \(\mathbb{Q}_{\ell}(-1)\) is the representation \(\eta\) that sends an element of \(\operatorname{GSp}_{2g}(\mathbb{Q}_{\ell})\) to the factor by which it scales \(s\). Let \(\mathbb{Q}_{\ell}(1)\) be the inverse (or dual) of \(\mathbb{Q}_{\ell}(-1)\), and for an integer \(j\) put \(\mathbb{Q}_{\ell}(j)=\mathbb{Q}_{\ell}(\operatorname{sgn}j)^{\otimes|j|}\). For a representation \(U\) put \(U(j):=U\otimes\mathbb{Q}_{\ell}(j)\). With the standard representation \(W\) of \(\operatorname{GSp}_{2g}\) we can get irreducible representations \(W_{\lambda}\), for \(\lambda=(\lambda_{1},\ldots,\lambda_{g})\) with \(\lambda_{1}\geq\ldots\geq\lambda_{g}\geq 0\), using the same construction as for \(\operatorname{USp}_{2g}\), see [1, (17.9)]. If we homogenize the polynomial \(s_{\langle\lambda\rangle}(x_{1},\ldots,x_{g},t)\) to degree \(|\lambda|\) using a variable \(t\) of weight \(2\) and with \(x_{i}\) of weight \(1\) for \(i=1,\ldots,g\), then for \(A\in\operatorname{GSp}_{2g}\) with \(\eta(A)=s\) and
eigenvalues \(\alpha_{1},\dots,\alpha_{g},s\alpha_{1}^{-1},\dots,s\alpha_{g}^{-1}\) we have \(\chi_{\lambda}(A)=s_{\langle\lambda\rangle}(\alpha_{1},\dots,\alpha_{g},s)\). Now, for every \(n\), there are integers \(c_{\lambda,n}\geq 0\) such that
\[W^{\otimes n}\cong\bigoplus_{|\lambda|\leq n}W^{\oplus c_{\lambda,n}}_{\lambda} \big{(}(-n+|\lambda|)/2\big{)}. \tag{2.2}\]
Note that if \(n\not\equiv|\lambda|\bmod 2\) then \(c_{\lambda,n}=0\). Note also that (2.2) holds with the same \(c_{\lambda,n}\) when replacing \(\operatorname{GSp}_{2g}\) with \(\operatorname{USp}_{2g}\), i.e. replacing \(W\) by \(V\) and ignoring the multiplier representation. Note also that \(W^{\vee}_{\lambda}\cong W_{\lambda}(|\lambda|)\).
Let \(\mathcal{X}=\mathcal{H}_{g}\), \(\mathcal{M}_{g}\) or \(\mathcal{M}_{g}^{\operatorname{nhyp}}\) for any \(g\geq 2\), or \(\mathcal{X}=\mathcal{M}_{1,1}\). Let \(\pi:\mathcal{Y}\to\mathcal{X}\) be the universal object and define the \(\ell\)-adic local system \(\mathbb{V}=R^{1}\pi_{*}\mathbb{Q}_{\ell}\). To any irreducible representation of \(\operatorname{GSp}_{2g}\) (the symplectic pairing coming as above from the first cohomology group of the curves) corresponding to \(\lambda\) we can then use Schur functors to define a local system \(\mathbb{V}_{\lambda}\). Let \(H^{j}_{c}\) denote compactly supported \(\ell\)-adic cohomology and \(\operatorname{Fr}_{q}\) the geometric Frobenius acting on \(\mathcal{X}\otimes\overline{\mathbb{F}}_{q}\). For general results on etale cohomology of stacks, see for instance [14].
For almost all primes \(p\) we have \(H^{j}_{c}(\mathcal{X}\otimes\mathbb{C},\mathbb{V}_{\lambda})\cong H^{j}_{c}( \mathcal{X}\otimes\overline{\mathbb{Q}}_{p},\mathbb{V}_{\lambda})\cong H^{j} _{c}(\mathcal{X}\otimes\overline{\mathbb{F}}_{p},\mathbb{V}_{\lambda})\). From this we get bounds on \(\dim_{\mathbb{Q}_{\ell}}H^{j}_{c}(\mathcal{X}\otimes\overline{\mathbb{F}}_{p},\mathbb{V}_{\lambda})\) that are independent of \(p\). This will tacitly be used below when we let \(q\) go to infinity.
Put \(\overline{\mathcal{X}}=\mathcal{X}\otimes\overline{\mathbb{F}}_{q}\). The Lefschetz trace formula and (2.2) then tell us that
\[S_{n}(q,\mathcal{X}) =\sum_{j=0}^{2\dim\mathcal{X}}(-1)^{j}\operatorname{Tr}( \operatorname{Fr}_{q},H^{j}_{c}(\overline{\mathcal{X}},\mathbb{V}^{\otimes n}))\] \[=\sum_{\lambda}c_{\lambda,n}\sum_{j=0}^{2\dim\mathcal{X}}(-1)^{j} \operatorname{Tr}(\operatorname{Fr}_{q},H^{j}_{c}(\overline{\mathcal{X}}, \mathbb{V}_{\lambda}))\,q^{(n-|\lambda|)/2}\,;\]
compare [1, §8]. Since \(\mathbb{V}_{\lambda}\) is pure of weight \(|\lambda|\), it follows from Deligne's theory of weights [13, 14] that the trace of Frobenius on \(H^{j}_{c}(\overline{\mathcal{X}},\mathbb{V}_{\lambda})\) is equal (after choosing an embedding of \(\overline{\mathbb{Q}}_{\ell}\) in \(\mathbb{C}\)) to a sum of complex numbers with absolute value at most \(q^{(j+|\lambda|)/2}\).
From this we see that only when \(j=2\dim\mathcal{X}\) can we get a contribution to \(\mathfrak{a}_{n}(\mathcal{X})\). Since \(\mathcal{X}\) is a smooth Deligne-Mumford stack, Poincare duality shows that for every \(i\) with \(0\leq i\leq 2\dim\mathcal{X}\), we have
\[H^{2\dim\mathcal{X}-i}_{c}(\overline{\mathcal{X}},\mathbb{V}_{\lambda})\cong H ^{i}(\overline{\mathcal{X}},\mathbb{V}_{\lambda})^{\vee}(-\dim\mathcal{X}-| \lambda|).\]
The zeroth cohomology group of a local system consists of the global invariants, and among the irreducible local systems, only the constant local system \(\mathbb{V}_{(0)}\cong\mathbb{Q}_{\ell}\) has such. Moreover, \(H^{0}(\overline{\mathcal{X}},\mathbb{Q}_{\ell})\) is one-dimensional, since \(\mathcal{X}\) is irreducible. Finally, since the action of \(\operatorname{Fr}_{q}\) on \(H^{0}(\overline{\mathcal{X}},\mathbb{Q}_{\ell})\) is trivial, we get by Poincare duality that \(\operatorname{Fr}_{q}\) acts on \(H^{2\dim\mathcal{X}}_{c}(\overline{\mathcal{X}},\mathbb{Q}_{\ell})\) by multiplication by \(q^{\dim\mathcal{X}}\). It follows that \(\mathfrak{a}_{n}(\mathcal{X})=c_{(0),n}\). This proves (1).
Assume now that \(g\geq 3\). From the work of Johnson and Hain we know that \(H^{1}(\mathcal{M}_{g},\mathbb{V}_{\lambda})\) is nonzero if and only if \(\lambda=(1,1,1)\); see [10], [11] and [12, Th. 4.1 and Cor. 4.2]. In these references, it is the rational Betti cohomology group of \(\mathcal{M}_{g}\) over the complex numbers that is considered. Furthermore, \(H^{1}(\mathcal{M}_{g}\otimes\overline{\mathbb{F}}_{q},\mathbb{V}_{(1,1,1)})\) is one-dimensional and generated by the Gross-Schoen cycle, see [17, Rem. 12.1], which lives in the second Chow group, see [17, Ex. 6.4]. Since this result also holds in \(\ell\)-adic cohomology, as noted in [17, §1.2], the action of \(\operatorname{Fr}_{q}\) on this cohomology group is by multiplication by \(q^{2}\).
Recall that \(\dim\mathcal{M}_{g}=3g-3\). By Poincare duality we find that the action of \(\operatorname{Fr}_{q}\) on \(H^{6g-7}_{c}(\mathcal{M}_{g}\otimes\overline{\mathbb{F}}_{q},\mathbb{V}_{(1,1,1)})\) is by \(q^{3g-3+3-2}\). We can now conclude the following. If \(n\) is even then \(c_{(1,1,1),n}=0\), and so every eigenvalue of Frobenius contributing to \(q^{3g-3+n/2}c_{(0),n}-S_{n}(q,\mathcal{M}_{g})\) has absolute value at most \(q^{3g-4+n/2}\). If \(n\) is odd then \(c_{(0),n}=0\), and so there are no eigenvalues of Frobenius contributing to
\(S_{n}(q,\mathcal{M}_{g})\) of absolute value \(q^{3g-3+n/2}\) and we can conclude by the above that \(\mathfrak{b}_{n}(\mathcal{M}_{g})=c_{(1,1,1),n}\). This proves (2).
Because of the hyperelliptic involution, \(H^{i}_{c}(\mathcal{M}_{2},\mathbb{V}_{\lambda})=0\) for all \(\lambda\) such that \(|\lambda|\) is odd. Moreover, \(H^{1}(\mathcal{M}_{2},\mathbb{V}_{\lambda})\) is nonzero precisely when \(\lambda=(2,2)\). It is then one-dimensional and \(\mathrm{Fr}_{q}\) acts by multiplication by \(q^{3}\). This follows from results of [14] and will be explained in more detail in forthcoming work by Petersen and Tommasi. By Poincare duality, \(\mathrm{Fr}_{q}\) acts on \(H^{5}_{c}(\mathcal{M}_{2},\mathbb{V}_{2,2})\) by multiplication by \(q^{3+4-3}\). Hence, for all even \(n\), every eigenvalue of Frobenius contributing to \(q^{3+n/2}c_{(0),n}-S_{n}(q,\mathcal{M}_{2})\) has absolute value at most \(q^{3+(n-2)/2}\). This proves (3).
Statement (4) is only a reformulation of the properties of \(\mathfrak{a}_{n}(\mathcal{M}_{g})\) and \(\mathfrak{b}_{n}(\mathcal{M}_{g})\) proven above.
Finally, for every \(k\geq 1\), put \(p_{k}(x_{1},\ldots,x_{g}):=\sum_{i=1}^{g}(x_{i}^{k}+x_{i}^{-k})\). The polynomial \(\mathbf{s}_{\langle(1,1,1)\rangle}(x_{1},\ldots,x_{g})\) equals
\[\frac{1}{6}p_{1}^{3}-\frac{1}{2}p_{1}p_{2}+\frac{1}{3}p_{3}-p_{1}.\]
The irreducible representations of \(\mathrm{USp}_{2g}\) are self-dual. As a consequence, if \(U\) is a representation of \(\mathrm{USp}_{2g}\) then the number of times the representation \(V_{\lambda}\) appears in \(U\) equals the number of times the trivial representation appears in \(V_{\lambda}\otimes U\). If \(A\in\mathrm{USp}_{2g}\) has eigenvalues \(\alpha_{1},\ldots,\alpha_{g},\alpha_{1}^{-1},\ldots,\alpha_{g}^{-1}\), with \(\alpha_{j}=e^{i\theta_{j}}\) for \(j=1,\ldots,g\), then \(p_{k}(\alpha_{1},\ldots,\alpha_{g})=w_{k}(\theta_{1},\ldots,\theta_{g})\). Statement (5) now follows from (2).
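Since \(V=V_{(1)}\) is minuscule for \(\operatorname{USp}_{2g}\), the multiplicity of \(V_{\lambda}\) in \(V^{\otimes n}\) can be computed as the number of length-\(n\) lattice paths from \(0\) to \(\lambda\) with steps \(\pm e_{i}\) staying in the dominant cone \(\lambda_{1}\geq\cdots\geq\lambda_{g}\geq 0\). The following Python sketch (our addition) computes \(\mathfrak{a}_{n}\) and \(\mathfrak{b}_{n}\) for \(g=3\) in this way:

```python
# Multiplicity of V_lambda in V^{tensor n} for USp(2g), counted as lattice
# paths 0 -> lambda with steps +-e_i that stay dominant (V is minuscule).
def mult(lam, n, g):
    counts = {(0,) * g: 1}
    for _ in range(n):
        new = {}
        for state, c in counts.items():
            for i in range(g):
                for s in (1, -1):
                    nxt = list(state)
                    nxt[i] += s
                    nxt = tuple(nxt)
                    if all(nxt[j] >= nxt[j + 1] for j in range(g - 1)) and nxt[-1] >= 0:
                        new[nxt] = new.get(nxt, 0) + c
        counts = new
    return counts.get(tuple(lam), 0)

g = 3
print([mult((0, 0, 0), n, g) for n in (2, 4, 6)])  # a_n: 1, 3, 15
print([mult((1, 1, 1), n, g) for n in (3, 5, 7)])  # b_n: 1, 9, ...
```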
_Remark 2.4_.: Why did we not define \(\mathfrak{b}_{n}\) for \(\mathcal{M}_{1,1}\)? For every prime \(p\) and \(n>0\) it follows from [14] (see also [1, §2]) that
\[\sum_{j=0}^{2}(-1)^{j}\operatorname{Tr}(\mathrm{Fr}_{p},H^{j}_{ c}(\mathcal{M}_{1,1}\otimes\overline{\mathbb{F}}_{p},\mathbb{V}_{(n)})) =-\operatorname{Tr}(\mathrm{Fr}_{p},H^{1}_{c}(\mathcal{M}_{1,1} \otimes\overline{\mathbb{F}}_{p},\mathbb{V}_{(n)}))\] \[=-1-\operatorname{Tr}(T_{p},\mathbf{S}_{n+2}),\]
where \(T_{p}\) is the \(p\)th Hecke operator acting on \(\mathbf{S}_{n+2}\), the (complex) vector space of elliptic modular cusp forms of level \(1\) and weight \(n+2\). Moreover, for every prime power \(q\), the eigenvalues of \(\mathrm{Fr}_{q}\) acting on \(H^{1}_{c}(\mathcal{M}_{1,1}\otimes\overline{\mathbb{F}}_{p},\mathbb{V}_{(n)})\) will have absolute value \(q^{(n+1)/2}\). It is in general not clear that the limit
\[-\lim_{q\to\infty}\sqrt{q}\left(\frac{S_{n}(q,\mathcal{M}_{1,1})}{q^{1+n/2}}- \mathfrak{a}_{n}(\mathcal{M}_{1,1})\right), \tag{2.3}\]
which would be the way to define \(\mathfrak{b}_{n}(\mathcal{M}_{1,1})\), always exists when \(n\) is even. (For odd \(n\), \(S_{n}(q,\mathcal{M}_{1,1})=0\), hence the limit (2.3) will be \(0\).)
For even \(0\leq n\leq 8\), the limit (2.3) is also \(0\) since there are no elliptic cusp forms of level \(1\) and weight less than or equal to \(10\). We then have that \(S_{10}(p,\mathcal{M}_{1,1})=42p^{6}-\operatorname{Tr}(T_{p},\mathbf{S}_{12})+O (p^{5})\) and \(S_{12}(p,\mathcal{M}_{1,1})=132p^{7}-11p\cdot\operatorname{Tr}(T_{p},\mathbf{S }_{12})+O(p^{6})\). The so-called Frobenius angle, \(0\leq\varphi_{p}\leq\pi\), of the Hecke eigenform (the Ramanujan \(\Delta\) function) in the one-dimensional space \(\mathbf{S}_{12}\) is defined by \(a_{p}:=\operatorname{Tr}(T_{p},\mathbf{S}_{12})=2p^{11/2}\cos\varphi_{p}\). The Sato-Tate conjecture for \(\Delta\) (proven in [1]) then tells us that there are sequences of primes \(p^{\prime}_{1},p^{\prime}_{2},\ldots\) and \(p^{\prime\prime}_{1},p^{\prime\prime}_{2},\ldots\) such that the Frobenius angles of \(a_{p^{\prime}_{1}},a_{p^{\prime}_{2}},\ldots\) (respectively \(a_{p^{\prime\prime}_{1}},a_{p^{\prime\prime}_{2}},\ldots\)) are all between \(0\) and \(\pi/3\) (respectively \(2\pi/3\) and \(\pi\)). This implies that the limit (2.3) does not exist for \(n=10\) and \(n=12\). It is unlikely to exist for even \(n>12\), but the limit will then involve an interplay between different Hecke eigenforms.
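For illustration, the first Frobenius angles of \(\Delta\) can be computed from the initial Ramanujan tau values (our numerical sketch, with the normalization \(a_{p}=2p^{11/2}\cos\varphi_{p}\) used above):

```python
import numpy as np

# Frobenius angles phi_p of the Ramanujan Delta function from tau(p).
tau = {2: -24, 3: 252, 5: 4830, 7: -16744, 11: 534612, 13: -577738}
for p, ap in tau.items():
    phi = np.arccos(ap / (2 * p**5.5))
    print(p, round(np.degrees(phi), 1))  # angles spread over (0, 180) degrees
```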
In [1, Th. 3.9] it is shown that for fixed \(g\) we have
\[\lim_{n\to\infty}\mathfrak{a}_{2n}(\mathcal{M}_{g})^{1/(2n)}=2g.\]
In the remainder of this section we prove a similar result for \(\mathfrak{b}_{2n+1}(\mathcal{M}_{g})\).
**Proposition 2.5**.: _For fixed \(g\geq 3\) one has_
\[\lim_{n\to\infty}\mathfrak{b}_{2n+1}(\mathcal{M}_{g})^{1/(2n+1)}=2g.\]
Proof.: Consider the functions \(w_{1}\) and \(f:=\frac{1}{6}w_{1}^{3}-\frac{1}{2}w_{1}w_{2}+\frac{1}{3}w_{3}-w_{1}\) on \(X:=[0,\pi]^{g}\). The maximum value of \(|w_{1}|\) is attained at exactly two points in \(X\), namely the points \(x:=(0,\ldots,0)\) and \(y:=(\pi,\ldots,\pi)\). We have \(w_{1}(x)=2g\) and \(w_{1}(y)=-2g\), and we also have \(f(x)=(2/3)(2g^{3}-3g^{2}-2g)>0\) and \(f(y)=(-2/3)(2g^{3}-3g^{2}-2g)<0\).
Let \(V\) be the (open) subset of \(X\) where \(w_{1}f>0\), so that \(x\) and \(y\) both lie in \(V\), and let \(W=X\setminus V\). Let \(M\) be the supremum of \(|w_{1}|\) on \(W\), so that \(M<2g\). For \(\varepsilon\in(0,2g-M)\) let \(U_{\varepsilon}\) be the subset of \(X\) where \(|w_{1}|>2g-\varepsilon\), so that \(U_{\varepsilon}\subset V\), and let \(V_{\varepsilon}=V\setminus U_{\varepsilon}\).
Let \(\varepsilon\) be an element of \((0,2g-M)\). For every \(n\) we have
\[\mathfrak{b}_{2n+1}(\mathcal{M}_{g}) =\int_{X}w_{1}^{2n+1}f\,dm_{g}\] \[=\int_{U_{\varepsilon}}w_{1}^{2n+1}f\,dm_{g}+\int_{V_{\varepsilon }}w_{1}^{2n+1}f\,dm_{g}+\int_{W}w_{1}^{2n+1}f\,dm_{g}\] \[\geq\int_{U_{\varepsilon}}w_{1}^{2n+1}f\,dm_{g}+\int_{W}w_{1}^{2n +1}f\,dm_{g}\] \[\geq(2g-\varepsilon)^{2n+1}\int_{U_{\varepsilon}}|f|\,dm_{g}-M^{2 n+1}\int_{W}|f|\,dm_{g},\]
where the third line follows from the fact that \(w_{1}^{2n+1}f\) is positive on \(V_{\varepsilon}\) and the fourth follows from the bounds on \(|w_{1}|\) in \(U_{\varepsilon}\) and \(W\). Let \(A:=\int_{U_{\varepsilon}}|f|\,dm_{g}\) and \(B:=\int_{W}|f|\,dm_{g}.\) Then
\[\mathfrak{b}_{2n+1}(\mathcal{M}_{g})^{1/(2n+1)}\geq(2g-\varepsilon)\bigg{(}A- \Big{(}\frac{M}{2g-\varepsilon}\Big{)}^{2n+1}B\bigg{)}^{1/(2n+1)},\]
and the rightmost factor tends to \(1\) as \(n\to\infty\), so \(\liminf\mathfrak{b}_{2n+1}(\mathcal{M}_{g})^{1/(2n+1)}\geq 2g-\varepsilon\). Since \(\varepsilon\in(0,2g-M)\) was arbitrary, we conclude that \(\liminf\mathfrak{b}_{2n+1}(\mathcal{M}_{g})^{1/(2n+1)}\geq 2g\).
We also have
\[\mathfrak{b}_{2n+1}(\mathcal{M}_{g}) =\int_{U_{\varepsilon}}w_{1}^{2n+1}f\,dm_{g}+\int_{X\setminus U_{ \varepsilon}}w_{1}^{2n+1}f\,dm_{g}\] \[\leq(2g)^{2n+1}\int_{U_{\varepsilon}}|f|\,dm_{g}+(2g-\varepsilon )^{2n+1}\int_{X\setminus U_{\varepsilon}}|f|\,dm_{g},\]
so if we let \(C:=\int_{X}|f|\,dm_{g}\) then \(\mathfrak{b}_{2n+1}(\mathcal{M}_{g})\leq(2g)^{2n+1}A+(2g-\varepsilon)^{2n+1}C\), so
\[\mathfrak{b}_{2n+1}(\mathcal{M}_{g})^{1/(2n+1)}\leq 2g\bigg{(}A+\Big{(}\frac{2g- \varepsilon}{2g}\Big{)}^{2n+1}C\bigg{)}^{1/(2n+1)}.\]
Once again the rightmost factor tends to \(1\) as \(n\to\infty\), so \(\limsup\mathfrak{b}_{2n+1}(\mathcal{M}_{g})^{1/(2n+1)}\leq 2g\), and the proposition is proven.
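For a sense of the rate of convergence in Proposition 2.5, one can compute \(\mathfrak{b}_{2n+1}(\mathcal{M}_{3})^{1/(2n+1)}\) from the values tabulated in Section 5 below. A small sketch of ours: the roots increase towards \(2g=6\), but quite slowly.

```python
# Values of b_n(M_3) from the table in Section 5 below
b = {3: 1, 5: 9, 7: 84, 9: 882, 11: 10395, 13: 135564, 15: 1927926,
     17: 29524716, 19: 481835250, 21: 8308361040, 23: 150309679212,
     25: 2836568118720}

for n, bn in b.items():
    print(f"n = {n:2d}: b_n^(1/n) = {bn ** (1.0 / n):.4f}")   # slowly approaches 6
```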
## 3. Convergence of moments of the measures \(\mu_{q,g}\)
Let \(\mathcal{M}_{g}^{\prime}(\mathbb{F}_{q})\) be the set of \(\mathbb{F}_{q}\)-isomorphism classes of curves of genus \(g>1\) over \(\mathbb{F}_{q}\). If \(g=1\), we abuse notation and let \(\mathcal{M}_{1}=\mathcal{M}_{1,1}\) be the moduli space of elliptic curves and \(\mathcal{M}_{1}^{\prime}(\mathbb{F}_{q})\) the set of \(\mathbb{F}_{q}\)-isomorphism classes of elliptic curves over \(\mathbb{F}_{q}\). Define a measure \(\mu_{q,g}\) by
\[\mu_{q,g}:=\frac{1}{\#\mathcal{M}_{g}(\mathbb{F}_{q})}\sum_{C\in\mathcal{M}_{g }^{\prime}(\mathbb{F}_{q})}\frac{\delta_{\tau(C)}}{\#\operatorname{Aut}_{ \mathbb{F}_{q}}(C)}\,,\]
where \(\tau(C):=\operatorname{Tr}(C)/\sqrt{q}\) is the _normalized trace_ of \(C\) and \(\delta_{\tau(C)}\) is the Dirac \(\delta\) measure supported at \(\tau(C)\). We see that \(\mu_{q,g}\) is a discrete probability measure on \(I_{g}=[-2g,2g]\), since
\[\mu_{q,g}(I_{g})=\frac{1}{\#\mathcal{M}_{g}(\mathbb{F}_{q})}\sum_{C\in \mathcal{M}_{g}^{\prime}(\mathbb{F}_{q})}\frac{1}{\#\operatorname{Aut}_{ \mathbb{F}_{q}}(C)}=\frac{1}{\#\mathcal{M}_{g}(\mathbb{F}_{q})}\sum_{C\in \mathcal{M}_{g}(\mathbb{F}_{q})}\underbrace{\sum_{C^{\prime}\in\operatorname{ Twist}(C)}\frac{1}{\#\operatorname{Aut}_{\mathbb{F}_{q}}(C^{\prime})}}_{=1}=1\,,\]

where the inner sum equals \(1\) by the standard mass formula for twists. For \(\tau\in I_{g}\) we write

\[\mathcal{N}_{q,g}(\tau):=\mu_{q,g}(\{\tau\})=\frac{1}{\#\mathcal{M}_{g}( \mathbb{F}_{q})}\sum_{\begin{subarray}{c}C\in\mathcal{M}_{g}^{\prime}( \mathbb{F}_{q})\\ \tau(C)=\tau\end{subarray}}\frac{1}{\#\operatorname{Aut}_{\mathbb{F}_{q}}(C)}\,,\]

which is nonzero only when \(\tau=t/\sqrt{q}\) for an integer \(t\). By the Katz-Sarnak equidistribution results, the measures \(\mu_{q,g}\) converge weakly, as \(q\to\infty\), to the measure \(\mu_{g}\) on \(I_{g}\) obtained as the pushforward of \(m_{g}\) under \(w_{1}\); we denote by \(\mathfrak{f}_{g}\) its continuous density. One may ask whether this convergence holds in a stronger sense, for instance when tested against functions that are allowed to vary with \(q\), such as:
* a plateau function: take a piecewise linear function equal to \(1\) on \((-1/\sqrt{q}+1/q,1/\sqrt{q}-1/q)\) and \(0\) on \((-\infty,-1/\sqrt{q}]\cup[1/\sqrt{q},\infty)\);
* a signal function: zero everywhere except for a small triangle with vertices \((-1/\sqrt{q},0),(0,1)\) and \((1/\sqrt{q},0)\).
Such a stronger convergence would lead to the convergence of \(\sqrt{q}\cdot\mathcal{N}_{q,g}(0)\) to \(2\,\mathfrak{f}_{g}(0)\) in the first case and to \(\mathfrak{f}_{g}(0)\) in the second case. Indeed, in both cases \(\int_{I_{g}}f\,d\mu_{q,g}=\mathcal{N}_{q,g}(0)\), and we can write \(\mathfrak{f}_{g}(\tau)=\mathfrak{f}_{g}(0)+(\mathfrak{f}_{g}(\tau)-\mathfrak{ f}_{g}(0))\) with \(|\mathfrak{f}_{g}(\tau)-\mathfrak{f}_{g}(0)|\leq c|\tau|\) for some constant \(c\geq 0\) when \(|\tau|\) is small enough. For instance, in the second case, rewriting the right-hand side gives
\[\int_{I_{g}}f(\tau)\mathfrak{f}_{g}(\tau)\,d\tau=\mathfrak{f}_{g}(0) \underbrace{\int_{-1/\sqrt{q}}^{1/\sqrt{q}}f(\tau)\,d\tau}_{=1/\sqrt{q}}+\int_ {-1/\sqrt{q}}^{1/\sqrt{q}}f(\tau)(\mathfrak{f}_{g}(\tau)-\mathfrak{f}_{g}(0)) \,d\tau+O\Big{(}\frac{1}{q}\Big{)}.\]
But
\[\left|\int_{-1/\sqrt{q}}^{1/\sqrt{q}}f(\tau)(\mathfrak{f}_{g}(\tau)- \mathfrak{f}_{g}(0))\,d\tau\right|\leq c\int_{-1/\sqrt{q}}^{1/\sqrt{q}}|\tau| \,d\tau=O\Big{(}\frac{1}{q}\Big{)}.\]
Multiplying both sides by \(\sqrt{q}\) gives the announced results.
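The two limits can be checked numerically against the explicit \(g=1\) density \(\mathfrak{f}_{1}(\tau)=(2\pi)^{-1}\sqrt{4-\tau^{2}}\). A minimal numpy sketch (ours; the choice \(q=10^{6}\) is arbitrary):

```python
import numpy as np

def f1(t):                                     # g = 1 Katz-Sarnak density (semicircle)
    return np.sqrt(np.maximum(4 - t**2, 0)) / (2 * np.pi)

q = 10**6
s = 1 / np.sqrt(q)
t = np.linspace(-s, s, 20001)
dt = t[1] - t[0]

plateau = np.clip((s - np.abs(t)) * q, 0, 1)   # 1 on (-s + 1/q, s - 1/q), 0 at +-s
signal = np.maximum(1 - np.abs(t) / s, 0)      # triangle with apex (0, 1)

for name, f, target in [("plateau", plateau, 2 * f1(0)), ("signal", signal, f1(0))]:
    val = np.sqrt(q) * np.sum(f * f1(t)) * dt  # sqrt(q) * int f * f_1
    print(f"{name}: sqrt(q) * integral = {val:.5f}, limit = {target:.5f}")
```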
## 4. The elliptic and hyperelliptic cases: results and experiments
Katz-Sarnak results show that for every interval \(J\subseteq I_{g}\), the probability that a random curve of genus \(g\) over \(\mathbb{F}_{q}\) (or a random hyperelliptic curve of genus \(g\) over \(\mathbb{F}_{q}\)) has normalized trace in \(J\) tends towards a fixed value as \(q\to\infty\), this value being \(\int_{J}\mathfrak{f}_{g}(\tau)\,d\tau\), where \(\mathfrak{f}_{g}\) is the density function for the measure \(\mu_{g}\) defined at the beginning of Section 3. Here the interval \(J\) is fixed, and we let \(q\) tend to infinity. One can wonder how rapid this convergence is. For instance, suppose the interval \(J\) has length \(x\). How large must \(q\) become in order for the actual probability that a normalized trace lies in \(J\) to be well approximated by the Katz-Sarnak prediction? Could it even be the case that the approximation is reasonably good when \(q\) is as large as \(1/x^{2}\), so that \(x\approx 1/\sqrt{q}\) and there is exactly one integer \(t\) with \(t/\sqrt{q}\in J\)? In other words, can we use the Katz-Sarnak distribution to estimate the number of curves over \(\mathbb{F}_{q}\) with a given trace? Since the measures \(\mu_{q,g}\) converge weakly to \(\mu_{g}\), one might hope that for every \(\tau\in I_{g}\), the integral of \(\mu_{q,g}\) over an interval of length \(1/\sqrt{q}\) containing \(\tau\) would be close to the integral of \(\mu_{g}\) over this interval. If we let \(t\) be the unique integer such that \(t/\sqrt{q}\) is contained in this interval, this optimistic approximation then translates to
\[\mathcal{N}_{q,g}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}\approx\frac{1}{\sqrt{q}} \operatorname{f}_{g}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}.\]
Since \(\mathcal{N}_{q,g}(t/\sqrt{q})\) gives us the weighted number of curves with trace \(t\), if this approximation is close to the truth we would have a good estimate for the number of such curves.
For hyperelliptic curves, we can prove that this type of naive approximation cannot hold. To state our result precisely, we introduce a function \(\mathcal{N}_{q,g}^{\text{hyp}}(\tau)\), which we define analogously to how we defined \(\mathcal{N}_{q,g}(\tau)\):
\[\mathcal{N}_{q,g}^{\text{hyp}}(\tau):=\frac{1}{\#\mathcal{H}_{g}(\mathbb{F}_{ q})}\sum_{\begin{subarray}{c}C\in\mathcal{H}_{g}^{\prime}(\mathbb{F}_{q})\\ \tau(C)=\tau\end{subarray}}\frac{1}{\#\operatorname{Aut}_{\mathbb{F}_{q}}(C)}.\]
Here by \(\mathcal{H}_{g}(\mathbb{F}_{q})\) we mean the set of \(\overline{\mathbb{F}}_{q}\)-isomorphism classes of hyperelliptic curves of genus \(g\) over \(\mathbb{F}_{q}\), and by \(\mathcal{H}_{g}^{\prime}(\mathbb{F}_{q})\) we mean the set of \(\mathbb{F}_{q}\)-isomorphism classes of such curves. Note that for an integer \(t\) with \(t/\sqrt{q}\in I_{g}\), the value \(q^{2g-1}\mathcal{N}_{q,g}^{\text{hyp}}(t/\sqrt{q})\) is then the weighted number of genus-\(g\) hyperelliptic curves over \(\mathbb{F}_{q}\) with trace \(t\).
**Proposition 4.1**.: _Fix \(g>1\) and \(\varepsilon\in[0,2g)\), let \(r_{g}:=\sum_{i=0}^{2g+2}(-2)^{i}/i!\), and let \(v=\int_{2g-\varepsilon}^{2g}\mathfrak{f}_{g}(\tau)\,d\tau\). Suppose there are constants \(b_{g}\leq c_{g}\) such that for every sufficiently large prime power \(q\) and for every integer \(t\) in \([-(2g-\varepsilon)\sqrt{q},(2g-\varepsilon)\sqrt{q}\,]\), we have_
\[\frac{b_{g}}{\sqrt{q}}\mathfrak{f}_{g}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}\leq \mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}\leq\frac{ c_{g}}{\sqrt{q}}\mathfrak{f}_{g}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}.\]
_Then \(b_{g}\leq(1-r_{g})/(1-2v)\) and \(c_{g}\geq(1+r_{g}-4v)/(1-2v)\)._
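Both constants are elementary to evaluate. A small exact sketch of ours, specializing to \(\varepsilon=0\) (so \(v=0\)), where the bounds read \(b_{g}\leq 1-r_{g}\) and \(c_{g}\geq 1+r_{g}\):

```python
from fractions import Fraction
from math import factorial

def r(g):
    # r_g = sum_{i=0}^{2g+2} (-2)^i / i!
    return sum(Fraction((-2) ** i, factorial(i)) for i in range(2 * g + 3))

for g in (2, 3, 4):
    rg = r(g)
    print(f"g = {g}: r_g = {rg}, b_g <= {1 - rg}, c_g >= {1 + rg}")
# For g = 2 this gives r_2 = 7/45, hence b_2 <= 38/45 and c_2 >= 52/45,
# the two scaling factors used in Figure 1 below.
```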
The proof is based on the following lemma.
**Lemma 4.2**.: _Fix \(g>1\), and let \(r_{g}\) be as in Proposition 4.1. If \(q\) is an odd prime power then_
\[\sum_{t\,\mathrm{even}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{t}{ \sqrt{q}}\bigg{)}=\frac{1+r_{g}}{2}+O\Big{(}\frac{1}{q}\Big{)}\quad\text{and} \quad\sum_{t\,\mathrm{odd}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{t}{ \sqrt{q}}\bigg{)}=\frac{1-r_{g}}{2}+O\Big{(}\frac{1}{q}\Big{)}.\]
Proof.: Fix an odd prime power \(q\), fix a nonsquare \(n\in\mathbb{F}_{q}\), and consider the set \(H\) consisting of all pairs \((c,f)\), where \(c\in\{1,n\}\) and \(f\in\mathbb{F}_{q}[x]\) is a monic separable polynomial of degree \(2g+1\) or \(2g+2\). A result of Carlitz [10, §6] shows that \(\#H=2q^{2g+2}-2q^{2g}.\) The group \(\mathrm{PGL}_{2}(\mathbb{F}_{q})\) acts on \(H\): Given a matrix \([\begin{smallmatrix}r&s\\ t&u\end{smallmatrix}]\) and an element \((c,f)\) of \(H\), let \((d,g)\) be the unique element of \(H\) such that
\[dg(x)=ce^{2}(tx+u)^{2g+2}f\Big{(}\frac{rx+s}{tx+u}\Big{)}\]
for some \(e\in\mathbb{F}_{q}^{\times}.\) Note that the stabilizer of \((c,f)\) is isomorphic to the reduced automorphism group \(\mathrm{RedAut}(C)\) of the hyperelliptic curve \(C\colon y^{2}=cf\), that is, the quotient of the full automorphism group of \(C\) by the subgroup generated by the hyperelliptic involution.
The map \(\gamma\) that sends \((c,f)\in H\) to the hyperelliptic curve \(y^{2}=cf\) takes \(H\) onto \(\mathcal{H}_{g}^{\prime}(\mathbb{F}_{q})\). Given a curve \(C\in\mathcal{H}_{g}^{\prime}(\mathbb{F}_{q})\), let \((c,f)\in H\) be such that \(\gamma((c,f))=C\). Then
\[\#(\mathrm{PGL}_{2}(\mathbb{F}_{q})\cdot(c,f))=\frac{\#\,\mathrm{PGL}_{2}( \mathbb{F}_{q})}{\#\,\mathrm{RedAut}(C)},\]
so that
\[\frac{\#\gamma^{-1}(C)}{\#\,\mathrm{PGL}_{2}(\mathbb{F}_{q})}=\frac{1}{\#\, \mathrm{RedAut}(C)}=\frac{2}{\#\,\mathrm{Aut}(C)}. \tag{4.1}\]
Let \(H_{\mathrm{even}}\) be the subset of \(H\) consisting of the pairs \((c,f)\) such that the curve \(\gamma(c,f)\) has even trace. Let \(H_{\mathrm{even}}^{\prime}\) be the subset of \(H\) consisting of the pairs \((c,f)\) such that \(f\) has degree \(2g+2\) and has an even number of roots. A direct point count shows that for \(q\) odd and \(\deg f=2g+2\), the trace of \(y^{2}=cf\) has the same parity as the number of roots of \(f\) in \(\mathbb{F}_{q}\); in particular \(H_{\mathrm{even}}^{\prime}\subseteq H_{\mathrm{even}}\), and \(H_{\mathrm{even}}\setminus H_{\mathrm{even}}^{\prime}\) consists of pairs \((c,f)\in H_{\mathrm{even}}\) such that \(f\) has degree \(2g+1\). Therefore
\[\big{|}\#H_{\mathrm{even}}-\#H_{\mathrm{even}}^{\prime}\big{|}\leq 2q^{2g+1}.\]
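The parity fact just used can be checked experimentally. The following minimal Python sketch is our own illustration (not from the original text; the helper names are ours): it counts points on the plane model \(y^{2}=cf(x)\), including points at infinity, for random monic \(f\) of degree \(2g+2\) over a small odd field, and verifies that the trace and the number of roots of \(f\) always have the same parity.

```python
import random

q, g = 13, 2                                    # a small odd prime and a genus
chi = lambda a: pow(a, (q - 1) // 2, q) if a % q else 0   # 1: square, q-1: nonsquare

def parity_check(c, coeffs):
    # f = x^(2g+2) + sum_i coeffs[i] x^i, monic of degree 2g + 2
    f = lambda x: (pow(x, 2 * g + 2, q)
                   + sum(co * pow(x, i, q) for i, co in enumerate(coeffs))) % q
    affine = sum(1 if f(x) == 0 else (2 if chi(c * f(x)) == 1 else 0) for x in range(q))
    infinity = 2 if chi(c) == 1 else 0          # f is monic, so leading coefficient is 1
    trace = q + 1 - (affine + infinity)         # trace of the plane model y^2 = c f(x)
    roots = sum(1 for x in range(q) if f(x) == 0)
    return trace % 2 == roots % 2

nonsquare = next(n for n in range(2, q) if chi(n) == q - 1)
samples = [(c, [random.randrange(q) for _ in range(2 * g + 2)])
           for c in (1, nonsquare) for _ in range(300)]
print(all(parity_check(c, coeffs) for c, coeffs in samples))   # expect True
```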
Leont\({}^{\prime}\)ev [13, Lem. 4, p. 302] gives the generating function for the number of (not necessarily separable) monic polynomials of a fixed degree over \(\mathbb{F}_{q}\) that have a given number of roots. To find the number of such polynomials with an even number of roots, we simply need to take the average of the values of this generating function evaluated at \(-1\) and at \(1\). We find that
\[\#\left\{\begin{aligned} &\text{monic polynomials of degree $2g+2$}\\ &\text{over $\mathbb{F}_{q}$ with an even number of roots}\end{aligned}\right\}=\frac{1+r_{g}}{2}q^{2g+2}+O(q^{2g+1}).\]
The result of Carlitz mentioned earlier shows that
\[\#\left\{\begin{aligned} &\text{non-separable monic polynomials}\\ &\text{of degree $2g+2$ over $\mathbb{F}_{q}$}\end{aligned}\right\}=q^{2g+1}.\]
Therefore \(\#H_{\mathrm{even}}^{\prime}=(1+r_{g})q^{2g+2}+O(q^{2g+1})\), so that \(\#H_{\mathrm{even}}=(1+r_{g})q^{2g+2}+O(q^{2g+1})\) as well.
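One can likewise test the Leont'ev-based count experimentally. The sketch below (an illustration under our own conventions, not from the paper) enumerates all monic quartics over \(\mathbb{F}_{11}\) — i.e., degree \(2g+2\) with \(g=1\), where \(r_{1}=1/3\) and \((1+r_{1})/2=2/3\) — and compares the proportion with an even number of roots to this limit.

```python
from itertools import product

q, d = 11, 4                     # degree d = 2g + 2 with g = 1, to keep the search small
even = 0
for coeffs in product(range(q), repeat=d):       # f = x^4 + c3 x^3 + c2 x^2 + c1 x + c0
    roots = sum(1 for x in range(q)
                if (pow(x, d, q) + sum(c * pow(x, i, q)
                                       for i, c in enumerate(coeffs))) % q == 0)
    even += (roots % 2 == 0)
print(even / q**d, "vs", 2 / 3)   # agreement up to an O(1/q) error
```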
Using (4.1) we see that
\[\begin{split}\sum_{t\,\mathrm{even}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(} \frac{t}{\sqrt{q}}\bigg{)}&=\frac{1}{\#\mathcal{H}_{g}(\mathbb{F}_{q})}\sum_{ \begin{subarray}{c}C\in\mathcal{H}_{g}^{\prime}(\mathbb{F}_{q})\\ \mathrm{Tr}(C)\,\mathrm{even}\end{subarray}}\frac{1}{\#\operatorname{Aut}_{ \mathbb{F}_{q}}(C)}\\ &=\frac{1}{\#\mathcal{H}_{g}(\mathbb{F}_{q})}\sum_{\begin{subarray}{c}C\in \mathcal{H}_{g}^{\prime}(\mathbb{F}_{q})\\ \mathrm{Tr}(C)\,\mathrm{even}\end{subarray}}\frac{\#\gamma^{-1}(C)}{2\# \operatorname{PGL}_{2}(\mathbb{F}_{q})}\\ &=\frac{\#H_{\mathrm{even}}}{2\,\#\mathcal{H}_{g}(\mathbb{F}_{q})\,\# \operatorname{PGL}_{2}(\mathbb{F}_{q})}\\ &=\frac{1}{2q^{2g-1}(q^{3}-q)}\big{(}(1+r_{g})q^{2g+2}+O(q^{2g+1})\big{)}\\ &=\frac{1+r_{g}}{2}+O\Big{(}\frac{1}{q}\Big{)}.\end{split}\]
This gives us the first equality in the conclusion of the lemma. The second follows analogously.
Proof of Proposition 4.1.: Suppose the hypothesis of the proposition holds for a given \(g\) and \(\varepsilon\). For a given \(q\), we let \(m=\lfloor 2\sqrt{q}\rfloor\) and we consider several subintervals of \([-2g\sqrt{q},2g\sqrt{q}]\):
\[J_{0} :=\big{[}-mg,mg\big{]} J_{2} :=\big{[}-2g\sqrt{q},-(2g-\varepsilon)\sqrt{q}\big{)}\] \[J_{1} :=\big{[}-(2g-\varepsilon)\sqrt{q},(2g-\varepsilon)\sqrt{q}\, \big{]} J_{3} :=\big{(}(2g-\varepsilon)\sqrt{q},2g\sqrt{q}\,\big{]}.\]
Now we interpret the sum
\[S_{\mathrm{even}}:=\sum_{t\,\mathrm{even}}\mathcal{N}_{q,g}^{\mathrm{hyp}} \bigg{(}\frac{t}{\sqrt{q}}\bigg{)}\]
in two ways. On the one hand, from Lemma 4.2 we have
\[S_{\mathrm{even}}=\bigg{(}\frac{1+r_{g}}{2}\bigg{)}+O\Big{(}\frac{1}{q}\Big{)}\,.\]
On the other hand, for \(q\) large enough we have
\[S_{\mathrm{even}} =\sum_{\begin{subarray}{c}t\in J_{1}\\ t\,\mathrm{even}\end{subarray}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{ t}{\sqrt{q}}\bigg{)}+\sum_{\begin{subarray}{c}t\in J_{2}\\ t\,\mathrm{even}\end{subarray}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{ t}{\sqrt{q}}\bigg{)}+\sum_{\begin{subarray}{c}t\in J_{3}\\ t\,\mathrm{even}\end{subarray}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{ t}{\sqrt{q}}\bigg{)}\] \[=\sum_{\begin{subarray}{c}t\in J_{1}\\ t\,\mathrm{even}\end{subarray}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{ t}{\sqrt{q}}\bigg{)}+2\sum_{\begin{subarray}{c}t\in J_{3}\\ t\,\mathrm{even}\end{subarray}}\mathcal{N}_{q,g}^{\mathrm{hyp}}\bigg{(}\frac{ t}{\sqrt{q}}\bigg{)}\] \[\leq\frac{c_{g}}{2}\sum_{\begin{subarray}{c}t\in J_{1}\\ t\,\mathrm{even}\end{subarray}}\mathfrak{f}_{g}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)} \bigg{(}\frac{2}{\sqrt{q}}\bigg{)}+2\sum_{t\in J_{3}}\mathcal{N}_{q,g}^{ \mathrm{hyp}}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}\,. \tag{4.2}\]
(For the second equality in (4.2) we used that \(\mathcal{N}_{q,g}^{\mathrm{hyp}}(\tau)=\mathcal{N}_{q,g}^{\mathrm{hyp}}(-\tau)\), since twisting by the hyperelliptic involution negates the trace.) The first sum in (4.2) is a Riemann sum for the integral of \(\mathfrak{f}_{g}(\tau)\,d\tau\) over the interval \([-2g+\varepsilon,2g-\varepsilon]\), so as \(q\to\infty\) the first term in (4.2) approaches \(c_{g}(1-2v)/2\). The second sum is the measure, with respect to the hyperelliptic analogue of \(\mu_{q,g}\), of the interval \((2g-\varepsilon,2g]\). Since these measures also converge weakly to \(\mu_{g}\), the second term of (4.2) approaches \(2v\) as \(q\to\infty\).
Combining these two interpretations of \(S_{\mathrm{even}}\), we find that
\[\bigg{(}\frac{1+r_{g}}{2}\bigg{)}\leq\frac{c_{g}(1-2v)}{2}+2v\]
so that \(c_{g}\geq(1+r_{g}-4v)/(1-2v)\).
Similarly, we can consider the sum
\[S_{\mathrm{odd}}:=\sum_{t\,\mathrm{odd}}\mathcal{N}_{q,g}^{\mathrm{hyp}} \bigg{(}\frac{t}{\sqrt{q}}\bigg{)}.\]
From Lemma 4.2 we see that
\[S_{\rm odd}=\left(\frac{1-r_{g}}{2}\right)+O\Big{(}\frac{1}{q}\Big{)}\,.\]
But we also have
\[S_{\rm odd}\geq\frac{b_{g}}{2}\sum_{\begin{subarray}{c}t\in J_{1}\\ t\ {\rm odd}\end{subarray}}\mathfrak{f}_{g}\Big{(}\frac{t}{\sqrt{q}}\Big{)}\Big{(} \frac{2}{\sqrt{q}}\Big{)},\]
and the expression on the right approaches \(b_{g}(1-2v)/2\) as \(q\to\infty\). This shows that
\[\left(\frac{1-r_{g}}{2}\right)\geq\frac{b_{g}(1-2v)}{2},\]
so we find that \(b_{g}\leq(1-r_{g})/(1-2v)\).
_Remark 4.3_.: In the statement of Proposition 4.1, we only assume that the condition on \(\mathcal{N}_{q,g}^{\rm hyp}(t/\sqrt{q})\) holds for \(t\) more than \(\varepsilon\sqrt{q}\) away from the ends of the interval \([-2g\sqrt{q},2g\sqrt{q}]\) because when \(|t|>g\lfloor 2\sqrt{q}\rfloor\) we have \(\mathcal{N}_{q,g}^{\rm hyp}(t/\sqrt{q})=0\). If we did not exclude the tail ends of the interval, the hypothesis of the proposition would only hold if we took \(b_{g}=0\), which is not an interesting approximation.
Figure 1 shows the value of \(\mathcal{N}_{q,g}^{\text{hyp}}(t/\sqrt{q})\) for all integers \(t\in[-4\sqrt{q},4\sqrt{q}]\), where \(q=1009\), together with the density function \(\mathfrak{f}_{2}\) for the limiting Katz-Sarnak measure, scaled by the two factors \(b=38/45\) and \(c=52/45\) given by Proposition 4.1 for \(g=2\) and \(\varepsilon=0\).
The key to Proposition 4.1 is the imbalance between the likelihood of even versus odd traces for hyperelliptic curves. The obvious workaround would be to scale the counts for the even and odd traces by the factors given in the proposition for \(\varepsilon=0\). One can ask whether the scaled curve counts then better match the limiting Katz-Sarnak distribution. Figure 2 suggests that perhaps this parity factor is the main obstruction to obtaining decent estimates from the naive Katz-Sarnak approximation.
The proof of Proposition 4.1 carries through for elliptic curves exactly as it does for hyperelliptic curves of a given genus \(g>1\). We do not include genus-1 curves in the statement of the proposition, however, because as we will see in Proposition 4.4, for \(g=1\) there is no value of \(c_{1}\) that satisfies the hypothesis of the proposition when \(\varepsilon\leq 1\), while the conclusion of the proposition is trivial when \(\varepsilon>1\) because the resulting upper bound on \(b_{1}\) will be greater than \(1\) and the lower bound on \(c_{1}\) will be less than \(1\).
When \(g=1\), the density function of the limiting Katz-Sarnak measure on \(I_{1}\) is \(\mathfrak{f}_{1}=(2\pi)^{-1}\sqrt{4-\tau^{2}}\). Let \(N_{q,t}\) denote the weighted number of elliptic curves over \(\mathbb{F}_{q}\) with trace \(t\). For some values of \(t\) in \([-2\sqrt{q},2\sqrt{q}\,]\) we have \(N_{q,t}=0\); in addition to those \(t\) with \(|t|>\lfloor 2\sqrt{q}\rfloor\), this happens for most values of \(t\) that are not coprime to \(q\). But even if we exclude these values, and even if we restrict attention to values of \(t\) that are near the center of the interval \([-2\sqrt{q},2\sqrt{q}\,]\), the following proposition shows that we cannot hope to approximate \(N_{q,t}\) by the quantity
\[q^{1/2}\,\mathfrak{f}_{1}\bigg{(}\frac{t}{\sqrt{q}}\bigg{)}=\frac{1}{2\pi} \sqrt{4q-t^{2}}\,.\]
**Proposition 4.4**.: _For every \(c>0\), there are infinitely many values of \(q\) and \(t\) such that \(|t|\leq\sqrt{q}\) and \(N_{q,t}>c\sqrt{4q-t^{2}}\)._
Proof.: Let \(\Delta_{0}\) be a fundamental quadratic discriminant with \(\Delta_{0}<-4\) and let \(\chi\) be the quadratic character modulo \(\Delta_{0}\). For a given value of \(n\), let \(f\) be the product of the first \(n\) primes \(p\) that are inert in \(\mathbb{Q}(\sqrt{\Delta_{0}})\). Since the product over all inert primes of \(1+1/p\) diverges (see [13, Lem. 1.14] and [1, p. 176]), when \(n\) is large enough we have
\[\prod_{p|f}\bigg{(}1+\frac{1}{p}\bigg{)}>\frac{c\pi^{2}}{3}\frac{\sqrt{|\Delta _{0}|}}{h(\Delta_{0})}\,.\]
Choose \(n\) so that this holds, and let \(q_{0}\) be a prime of the form \(x^{2}-f^{2}\Delta_{0}y^{2}\), where \(x\) and \(y\) are positive integers. Note that \(x\) must be coprime to \(q_{0}\) because \(0<x<q_{0}\). Let \(\varpi=x+fy\sqrt{\Delta_{0}}\), viewed as an element of the upper half plane. Since \(x\) is coprime to \(q_{0}\), \(\varpi\) is the Weil number of an isogeny class of ordinary elliptic curves over \(\mathbb{F}_{q_{0}}\).
Let \(\theta\) be the argument of \(\varpi\) and let \(m\) be the smallest integer such that \(\pi/3\leq m\theta<2\pi/3\). Write \(\varpi^{m}=u+fv\sqrt{\Delta_{0}}\) for integers \(u\) and \(v\), let \(q=q_{0}^{m}=u^{2}-f^{2}v^{2}\Delta_{0}\), and let \(t=2u\). Then \(\varpi^{m}\) is the Weil number for an isogeny class \(\mathfrak{I}\) of ordinary elliptic curves over \(\mathbb{F}_{q}\), and the trace of this isogeny class is \(t\). We have \(|t|\leq\sqrt{q}\) because the argument of \(\varpi^{m}\) lies between \(\pi/3\) and \(2\pi/3\).
The number of elliptic curves in the isogeny class \(\mathfrak{I}\) is equal to the Kronecker class number \(H(\Delta)\) of the discriminant \(\Delta:=t^{2}-4q=4f^{2}v^{2}\Delta_{0}\). By [11, p. 696] we have
\[H(\Delta)=h(\Delta_{0})\prod_{p^{e}\parallel F}\left(1+\Big{(}1-\tfrac{\chi(p )}{p}\Big{)}(p+\cdots+p^{e})\right),\]
where \(F=2fv\), so
\[\frac{H(\Delta)}{\sqrt{4q-t^{2}}}=\frac{h(\Delta_{0})}{\sqrt{|\Delta_{0}|}} \prod_{p^{e}\parallel F}\left(p^{-e}+\Big{(}1-\tfrac{\chi(p)}{p}\Big{)}(1+p^{ -1}+\cdots+p^{1-e})\right).\]
Now,
\[p^{-e}+\Big{(}1-\tfrac{\chi(p)}{p}\Big{)}(1+p^{-1}+\cdots+p^{1-e})\geq\begin{cases} 1+1/p&\text{if }\chi(p)=-1;\\ 1-1/p^{2}&\text{if }\chi(p)\neq-1,\end{cases}\]
so we have
\[\frac{H(\Delta)}{\sqrt{4q-t^{2}}} \geq\frac{h(\Delta_{0})}{\sqrt{|\Delta_{0}|}}\prod_{\begin{subarray}{c}p\mid F \\ \chi(p)=-1\end{subarray}}\Big{(}1+\frac{1}{p}\Big{)}\prod_{\begin{subarray}{c} p\mid F\\ \chi(p)\neq-1\end{subarray}}\Big{(}1-\frac{1}{p^{2}}\Big{)}\] \[\geq\frac{h(\Delta_{0})}{\sqrt{|\Delta_{0}|}}\prod_{p\mid f}\Big{(}1+\frac{1} {p}\Big{)}\prod_{p}\Big{(}1-\frac{1}{p^{2}}\Big{)}\] \[\geq\frac{h(\Delta_{0})}{\sqrt{|\Delta_{0}|}}\Big{(}\frac{c\pi^{2}}{3}\frac{ \sqrt{|\Delta_{0}|}}{h(\Delta_{0})}\Big{)}\Big{(}\frac{6}{\pi^{2}}\Big{)}\] \[\geq 2c.\]
Since the curves in \(\mathcal{I}\) are ordinary and the discriminants of their endomorphism rings are neither \(-3\) nor \(-4\), they all have automorphism groups of order \(2\), so \(N_{q,t}=H(\Delta)/2\). It follows that
\[N_{q,t}\geq c\sqrt{4q-t^{2}},\]
as claimed.
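The divergence of the product over inert primes, which drives the construction, is easy to observe numerically. The following sketch is ours (the cutoff \(20000\) is arbitrary); it takes \(\Delta_{0}=-7\) and multiplies \(1+1/p\) over the inert primes, i.e. those odd primes with Legendre symbol \((\Delta_{0}/p)=-1\).

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

D0 = -7                                   # a fundamental discriminant < -4
prod, used = 1.0, 0
for p in range(3, 20000, 2):
    if is_prime(p) and pow(D0 % p, (p - 1) // 2, p) == p - 1:   # p inert in Q(sqrt(D0))
        prod *= 1 + 1 / p
        used += 1
        if used % 250 == 0:
            print(f"product over the first {used} inert primes (up to p = {p}): {prod:.3f}")
```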
Figure 3 shows the weighted number of elliptic curves over \(\mathbb{F}_{100003}\) of each possible trace, as well as the limiting density function \(\mathfrak{f}_{1}(\tau)=(2\pi)^{-1}\sqrt{4-\tau^{2}}\). We see that the plotted points do not appear to be near the density function.
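The experiment behind Figure 3 is easy to reproduce at a smaller prime. The sketch below is our own illustration (we take \(q=101\) instead of \(100003\)): it counts all smooth short Weierstrass equations \(y^{2}=x^{3}+ax+b\), groups them by trace, and uses the standard mass formula \(N_{q,t}=\#\{(a,b):\text{trace }t\}/(q-1)\) to compare the weighted counts with \((2\pi)^{-1}\sqrt{4q-t^{2}}\); the large pointwise fluctuations, in line with Proposition 4.4, are clearly visible.

```python
import math
from collections import Counter

q = 101
sq = [0] * q
for y in range(1, q):
    sq[y * y % q] = 1                           # sq[a] = 1 iff a is a nonzero square mod q
leg = lambda a: 0 if a % q == 0 else (1 if sq[a % q] else -1)

counts = Counter()
for a in range(q):
    for b in range(q):
        if (4 * a**3 + 27 * b**2) % q == 0:     # singular Weierstrass equation: skip
            continue
        t = -sum(leg(x**3 + a * x + b) for x in range(q))   # trace via character sum
        counts[t] += 1

for t in (0, 2, 5, 10, 15, 20):
    N = counts[t] / (q - 1)                     # weighted count, by the mass formula
    pred = math.sqrt(4 * q - t * t) / (2 * math.pi)
    print(f"t = {t:2d}: N_(q,t) = {N:6.2f}   vs   (2pi)^-1 sqrt(4q - t^2) = {pred:5.2f}")
```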
## 5. The non-hyperelliptic case: experiments and conjectures
We consider now the case of non-hyperelliptic curves of genus \(g=3\) (considering all curves of genus \(3\) would certainly show the same pattern). For this purpose, for \(g\geq 3\) we introduce the function \(\mathcal{N}_{q,g}^{\text{nonhyp}}(\tau)\), which we define analogously to how we defined \(\mathcal{N}_{q,g}(\tau)\) and \(\mathcal{N}_{q,g}^{\text{hyp}}(\tau)\):
\[\mathcal{N}_{q,g}^{\text{nonhyp}}(\tau):=\frac{1}{\#\mathcal{M}_{g}^{\text{ nonhyp}}(\mathbb{F}_{q})}\sum_{\begin{subarray}{c}C\in\mathcal{M}_{g}^{\text{ nonhyp}\,\prime}(\mathbb{F}_{q})\\ \tau(C)=\tau\end{subarray}}\frac{1}{\#\operatorname{Aut}_{\mathbb{F}_{q}}(C)}.\]
Here by \(\mathcal{M}_{g}^{\text{nonhyp}}(\mathbb{F}_{q})\) we mean the set of \(\overline{\mathbb{F}}_{q}\)-isomorphism classes of non-hyperelliptic curves of genus \(g\) over \(\mathbb{F}_{q}\), and by \(\mathcal{M}_{g}^{\text{nonhyp}\,\prime}(\mathbb{F}_{q})\) we mean the set of \(\mathbb{F}_{q}\)-isomorphism classes of such curves. The associated measures still converge weakly to the measure \(\mu_{g}\) with density \(\mathfrak{f}_{g}\). Experimentally, however, the behavior looks much smoother than in the elliptic or hyperelliptic cases, as illustrated by Figure 4 for \(g=3\) and \(q=53\).\({}^{2}\) Heuristically, this could be understood as an averaging, for a given trace, over several isogeny classes; but this idea alone cannot be the explanation, since it applies equally well to the hyperelliptic locus, which we have seen in Section 4 to behave differently, so something more is needed for a family of curves to 'behave nicely.' As seen in Remark 3.2, even if the higher convergence rate of moments observed in Theorem 2.3 does not by itself yield a proof of faster weak convergence, it does single out the non-hyperelliptic case. Together with the experimental data in genus \(3\), this leads us to state the following conjecture.
Footnote 2: When using the data of [14] to draw this figure, we noticed that there were some errors in the code when computing the automorphism group of twists for small dimensional strata, giving \(728\) extra ‘weighted’ curves. This is a very small proportion with respect to \(53^{6}+1\) curves and does not affect the general shape of the curve.
**Conjecture 5.1**.: _Let \(g\geq 3\). For all \(\tau\in I_{g}\), for all \(\varepsilon>0\) and for all large enough \(q\), there exists \(t\in\mathbb{Z}\) such that \(|\tau-t/\sqrt{q}|<1/(2\sqrt{q})\) and \(|\sqrt{q}\cdot\mathcal{N}_{q,g}^{\text{nonhyp}}(t/\sqrt{q})-\mathfrak{f}_{g}(t/ \sqrt{q})|<\varepsilon\)._
Another way to phrase this conjecture is to replace the measure \(\mu_{q,g}\) by a measure with density given by the histogram with height \(\sqrt{q}\cdot\mathcal{N}_{q,g}^{\text{nonhyp}}(t/\sqrt{q})\) and base centered at \(t/\sqrt{q}\) of length \(1/\sqrt{q}\) for all integers \(t\in[-2g\sqrt{q},2g\sqrt{q}]\). The conjecture asserts that the densities of these measures converge to the density
\(\mathfrak{f}_{g}\) at each point of \(I_{g}\). This is stronger than weak convergence of the measures [14].
We now conclude by looking at the symmetry breaking for the trace distribution of (non-hyperelliptic) genus \(3\) curves. In general, if \(C\) is a hyperelliptic curve of genus \(g\) over \(\mathbb{F}_{q}\) with trace \(t\), then its quadratic twist with respect to the hyperelliptic involution has trace \(-t\), and therefore the distribution of the number of hyperelliptic curves of genus \(g\) over \(\mathbb{F}_{q}\) as a function of their trace is symmetric. For non-hyperelliptic curves, the distribution has no reason to be symmetric anymore. Actually, if a principally polarized abelian variety over \(\mathbb{F}_{q}\) is the Jacobian (over \(\mathbb{F}_{q}\)) of a non-hyperelliptic curve, then its quadratic twist is never a Jacobian. This obstruction, known as _Serre's obstruction_, is a huge obstacle to finding a closed formula for the maximal number of rational points for \(g=3\)[1], whereas such formulas are known for \(g=1\)[1] and \(g=2\)[13]. Although we cannot improve on the state of the art of this question, we can study the asymmetry from a probabilistic angle, using the results obtained above.
To visualize this asymmetry, let us consider the signed measure \(\nu_{q,g}=\mu_{q,g}-(-1)^{*}\mu_{q,g}\), where \((-1)^{*}\mu_{q,g}\) is the image of \(\mu_{q,g}\) under \(\tau\mapsto-\tau\), namely the discrete measure
\[(-1)^{*}\mu_{q,g}=\frac{1}{\#\mathcal{M}_{g}(\mathbb{F}_{q})}\sum_{C\in \mathcal{M}_{g}^{\prime}(\mathbb{F}_{q})}\frac{\delta_{-\tau(C)}}{\#\operatorname {Aut}_{\mathbb{F}_{q}}(C)}.\]
We get the following consequence of Theorem 2.3.
**Proposition 5.2**.: _The sequence of signed measures \((\nu_{q,g})\) weakly converges to the \(0\) measure._
Proof.: By definition, the even moments of \(\nu_{q,g}\) are zero. By Theorem 2.3 the odd moments of \(\sqrt{q}\,\nu_{q,g}\) are equal to
\[2\frac{S_{n}(q,\mathcal{M}_{g})}{q^{3g-3+(n-1)/2}}=-2\mathfrak{b}_{n}( \mathcal{M}_{g})+O\left(\frac{1}{\sqrt{q}}\right).\]
Hence all moments of \(\nu_{q,g}\) tend to \(0\) as \(q\to\infty\). Now if \(f\) is any continuous function on the compact interval \(I_{g}=[-2g,2g]\), then by the Stone-Weierstrass theorem, for every \(\varepsilon>0\) we can find a polynomial \(P\) such that \(|f(\tau)-P(\tau)|\leq\varepsilon\) for all \(\tau\in I_{g}\). Therefore we have
\[\left|\int_{I_{g}}f\,d\nu_{q,g}\right|\leq\left|\int_{I_{g}}(f-P)\,d\nu_{q,g}+ \int_{I_{g}}P\,d\nu_{q,g}\right|\leq\varepsilon\|\nu_{q,g}\|+\left|\int_{I_{g} }P\,d\nu_{q,g}\right|.\]
The last term is a sum of moments which converges to \(0\) when \(q\) goes to infinity. The total variation of \(\nu_{q,g}\) is also uniformly bounded, since
\[\|\nu_{q,g}\|=|\nu_{q,g}|(I_{g})=\sum_{\tau}\left|\mathcal{N}_{q,g}(\tau)- \mathcal{N}_{q,g}(-\tau)\right|\leq 2\sum_{\tau}\mathcal{N}_{q,g}(\tau)=2\mu_{q,g} (I_{g})=2.\]
Having a \(0\) measure is not very interesting and the proof of Proposition 5.2 shows that it would be much more interesting to study the weak convergence of the sequence of signed measures \((\sqrt{q}\,\nu_{q,g})\). We have from the previous proof the following corollary.
**Corollary 5.3**.: _The even moments of \(\sqrt{q}\,\nu_{q,g}\) are zero and the odd \(n\)th moments of the sequence \((\sqrt{q}\,\nu_{q,g})\) converge to \(-2\mathfrak{b}_{n}(\mathcal{M}_{g})\)._
Unfortunately we cannot prove weak convergence: the rest of the proof fails as we do not know if one can bound \(\sqrt{q}\,\|\nu_{q,g}\|\) uniformly in \(q\) (which is a necessary condition for weak convergence). Moreover, one cannot expect a general result from the convergence of moments alone as in the case of (positive) measures as the following counterexample shows.
_Example 5.4_.: Consider the sequence of signed measures \((\mu_{i})\) with density \(i\sin ix\) on the interval \([0,2\pi]\). The sequence of \(n\)th moments converges to \(-(2\pi)^{n}\) which is the \(n\)th moment of the signed measure \(\mu=-\delta_{2\pi}\). But \(\|\mu_{i}\|=4i\) which is not bounded and therefore the sequence \((\mu_{i})\) does not weakly converge (to \(\mu\)), see for instance [1, Prop. 1.4.7].
The integral interpretation (2.1) of \(\mathfrak{b}_{n}(\mathcal{M}_{g})\) shows that it is equal to the \(n\)th moment of
\[\mathfrak{h}_{g}(\tau)=\int_{A_{\tau}}\Bigl{(}\frac{1}{6}w_{1}^{3}-\frac{1}{2 }w_{1}w_{2}+\frac{1}{3}w_{3}-w_{1}\Bigr{)}\,dm_{g},\]
with \(A_{\tau}=\{(\theta_{1},\ldots,\theta_{g})\in[0,\pi]^{g}\,:\,\sum_{j}2\cos \theta_{j}=\tau\}\). Because of the convergence of the moments, we conjecture the following.
**Conjecture 5.5**.: _For \(g\geq 3\), the sequence of signed measures \((\sqrt{q}\,\nu_{q,g})\) weakly converges to the continuous signed measure with density \(-2\,\mathfrak{h}_{g}\)._
Such a result would for instance imply that \(\sqrt{q}\,\|\nu_{q,g}\|\) is uniformly bounded, hence there exists a constant \(C>0\) such that for all \(q\) and all \(\tau=t/\sqrt{q}\), we have \(|\mathcal{N}_{q,g}(\tau)-\mathcal{N}_{q,g}(-\tau)|\leq C/\sqrt{q}\).
In genus \(3\), in the same spirit as in Section 4, one can run experiments which illustrate how the values
\[\left\{q\,\left(\mathcal{N}_{q,g}\left(\frac{t}{\sqrt{q}}\right)-\mathcal{N}_ {q,g}\left(\frac{-t}{\sqrt{q}}\right)\right)\right\}_{0\leq t\leq g\lfloor 2 \sqrt{q}\rfloor}\]
are close to the values \(-2\,\mathfrak{h}_{3}(t/\sqrt{q})\). See for instance Fig. 5 for \(q=53\). Seeing the data, one may even wonder whether something stronger holds, along the same lines as Conjecture 5.1, at least for \(g=3\).
Under this conjecture, one can use the moments of the density function \(\mathfrak{h}_{3}\) to revisit the result of [12]. Based on results of [1], the authors gave a heuristic explanation for the distribution of the points
\[p_{t,q}=\left(\frac{t}{\sqrt{q}},q\,\left(\mathcal{N}_{q,g}\left(\frac{t}{ \sqrt{q}}\right)-\mathcal{N}_{q,g}\left(\frac{-t}{\sqrt{q}}\right)\right)\right)\]
when \(0\leq t\leq g\lfloor 2\sqrt{q}\rfloor\) by comparing it with the distribution of differences around the mean in the binomial law [12, Cor. 2.3]. With the arguments given there, the distribution is approximated by the function
\[\mathcal{V}^{\lim}(\tau)=\tau(1-\tau^{2}/3)\cdot\left(\frac{1}{\sqrt{2\pi}}e^ {-\tau^{2}/2}\right).\]
Graphically, for \(q=53\), the comparison looks acceptable but not perfect (see Fig. 5). This is to be expected, as the heuristic grew out of a result which is valid when the degree of the plane curves in play is larger than \(2q-1\). As we are presently dealing with non-hyperelliptic curves of genus \(3\), represented as plane curves of degree \(4\), the condition is obviously never fulfilled. It is therefore already stunning that a close, albeit imperfect, match was found in this way.
We now take a different road based on Conjecture 5.5 and approximate the density \(-2\,\mathfrak{h}_{3}\) by a function \(\nu^{\lim}\) using the moments \(\mathfrak{b}_{n}(\mathcal{M}_{3})\). By Theorem 2.3, they can be efficiently computed using any symmetric polynomial package. We used Maple and the package SF [10] to compute \(\mathfrak{b}_{n}(\mathcal{M}_{3})\) for \(n=1,3,5,\ldots,25\), and found the following values:
\[\begin{array}{cc|cc|cc}\hline n&\mathfrak{b}_{n}(\mathcal{M}_{3})&n&\mathfrak{b}_{n}(\mathcal{M}_{3})&n& \mathfrak{b}_{n}(\mathcal{M}_{3})\\ \hline 1&0&11&10395&19&481835250\\ 3&1&13&135564&21&8308361040\\ 5&9&15&1927926&23&150309679212\\ 7&84&17&29524716&25&2836568118720\\ 9&882&&&&\\ \hline\end{array}\]
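These values can be cross-checked numerically. The sketch below is ours; it assumes the standard Weyl eigenvalue density for \(\mathrm{USp}(2g)\) on \([0,\pi]^{g}\), proportional to \(\prod_{j<k}(\cos\theta_{j}-\cos\theta_{k})^{2}\prod_{j}\sin^{2}\theta_{j}\), normalizes it on a grid, and evaluates \(\mathfrak{b}_{n}(\mathcal{M}_{3})=\int w_{1}^{n}f\,dm_{3}\) for \(n=3,5\) by a midpoint rule; the identity \(\int w_{1}^{2}\,dm_{3}=1\) serves as a sanity check.

```python
import numpy as np

g, m = 3, 100                                   # genus and grid points per axis
theta = (np.arange(m) + 0.5) * np.pi / m        # midpoint rule on [0, pi]
T = np.meshgrid(*([theta] * g), indexing="ij")

dens = np.ones_like(T[0])                       # Weyl density for USp(2g), up to scale
for j in range(g):
    dens *= np.sin(T[j]) ** 2
    for k in range(j + 1, g):
        dens *= (np.cos(T[j]) - np.cos(T[k])) ** 2
dens /= dens.sum()                              # numerical normalization of m_g

w = lambda k: sum(2 * np.cos(k * T[j]) for j in range(g))
w1, w2, w3 = w(1), w(2), w(3)
f = w1**3 / 6 - w1 * w2 / 2 + w3 / 3 - w1       # the function f from Section 2

print("check: int w1^2 dm_3 =", (w1**2 * dens).sum())   # expect ~1
for n in (3, 5):
    print(f"b_{n}(M_3) ~", (w1**n * f * dens).sum())    # expect ~1 and ~9
```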
Taking \(\nu^{\lim}(\tau)\) of the form \(P(\tau)\left(\frac{1}{\sqrt{2\pi}}e^{-\tau^{2}/2}\right)\) with \(P\) an odd polynomial of degree \(5\), we want
\[\int_{\mathbb{R}}\tau^{2n+1}\cdot\nu^{\lim}(\tau)\,d\tau=-2\mathfrak{b}_{2n+1 }(\mathcal{M}_{3}),\]
for \(n=0,1\) and \(2\), and one finds that
\[\nu^{\lim}(\tau)=\left(1/60\,\tau^{5}-1/2\,\tau^{3}+5/4\,\tau\right)\left( \frac{1}{\sqrt{2\pi}}e^{-\tau^{2}/2}\right).\]
Remarkably, the moments of \(\nu^{\lim}(\tau)\) still agree with \(-2\mathfrak{b}_{2n+1}(\mathcal{M}_{3})\) for \(n=3,4\) and \(5\). However, for \(n=6\) we find that \(\int_{\mathbb{R}}\tau^{13}\cdot\nu^{\lim}(\tau)\,d\tau=-2\cdot 135135\neq-2\cdot \mathfrak{b}_{13}(\mathcal{M}_{3})\).
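The claimed agreement and the failure at \(n=6\) can be verified exactly with the Gaussian moment formula \(\int_{\mathbb{R}}\tau^{2k}(2\pi)^{-1/2}e^{-\tau^{2}/2}\,d\tau=(2k-1)!!\). A minimal sketch of ours:

```python
from fractions import Fraction

def dfact(m):                                   # double factorial m!!
    out = 1
    while m > 1:
        out, m = out * m, m - 2
    return out

P = {5: Fraction(1, 60), 3: Fraction(-1, 2), 1: Fraction(5, 4)}   # nu_lim = P * gaussian
b = {1: 0, 3: 1, 5: 9, 7: 84, 9: 882, 11: 10395, 13: 135564}

for n in range(7):
    # int tau^(2n+1) * tau^d * gaussian = (2n+d)!! for odd d
    moment = sum(c * dfact(2 * n + d) for d, c in P.items())
    print(f"n = {n}: moment = {moment}, -2*b_(2n+1) = {-2 * b[2 * n + 1]}")
# The two columns agree for n = 0, ..., 5 and first differ at n = 6.
```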
In Figure 5 we see a comparison between the graph of the points \(\{p_{t,53}\}_{0\leq t\leq 42}\) and the functions \(\mathcal{V}^{\lim}(\tau)\) and \(\nu^{\lim}(\tau)\), in favor of the latter.
|
2310.20169 | Plateau borders in soap films and Gauss' capillarity theory | We provide, in the setting of Gauss' capillarity theory, a rigorous
derivation of the equilibrium law for the three dimensional structures known as
Plateau borders which arise in "wet" soap films and foams. A key step in our
analysis is a complete measure-theoretic overhaul of the homotopic spanning
condition introduced by Harrison and Pugh in the study of Plateau's laws for
two-dimensional area minimizing surfaces ("dry" soap films). This new point of
view allows us to obtain effective compactness theorems and energy
representation formulae for the homotopic spanning relaxation of Gauss'
capillarity theory which, in turn, lead to prove sharp regularity properties of
energy minimizers. The equilibrium law for Plateau borders in wet foams is also
addressed as a (simpler) variant of the theory for wet soap films. | Francesco Maggi, Michael Novack, Daniel Restrepo | 2023-10-31T04:37:55Z | http://arxiv.org/abs/2310.20169v1 | # Plateau borders in soap films
###### Abstract.
We provide, in the setting of Gauss' capillarity theory, a rigorous derivation of the equilibrium law for the three dimensional structures known as _Plateau borders_ which arise in "wet" soap films and foams. A key step in our analysis is a complete measure-theoretic overhaul of the homotopic spanning condition introduced by Harrison and Pugh in the study of Plateau's laws for two-dimensional area minimizing surfaces ("dry" soap films). This new point of view allows us to obtain effective compactness theorems and energy representation formulae for the homotopic spanning relaxation of Gauss' capillarity theory which, in turn, lead to prove sharp regularity properties of energy minimizers. The equilibrium law for Plateau borders in wet foams is also addressed as a (simpler) variant of the theory for wet soap films.
###### Contents
* 1 Introduction
* 2 Induced essential partitions (Theorem 1.2)
* 3 Homotopic spanning on generalized soap films (Theorem 1.3)
* 4 The fundamental closure theorem for homotopic spanning conditions
* 5 Direct Method on generalized soap films (Theorem 1.4)
* 6 Existence of minimizers and convergence to Plateau's problem (Theorem 1.5)
* 7 Equilibrium across transition lines in wet soap films (Theorem 1.6)
* 8 Equilibrium across transition lines in wet foams (Theorem 1.7)
* A Equivalence of homotopic spanning conditions
* B Convergence of every minimizing sequence of \(\Psi_{\rm bk}(v)\)
* C An elementary lemma
## 1. Introduction
### Overview
Equilibrium configurations of soap films and foams are governed, at leading order, by the balance between surface tension forces and atmospheric pressure. This balance is expressed by the _Laplace-Young law of pressures_, according to which such systems can be decomposed into smooth interfaces with constant mean curvature equal to the pressure difference across them, and by the _Plateau laws_, which precisely postulate which arrangements of smooth interfaces joined together along lines of "singular" points are stable, and thus observable.
The physics literature identifies two (closely related) classes of soap films and foams, respectively labeled as "dry" and "wet". This difference is either marked in terms of the amount of liquid contained in the soap film/foam [14, Section 1.3], or in terms of the scale at which the soap film/foam is described [1, Chapter 2, Section 3 and 4].
In the dry case, the Plateau laws postulate that (i) interfaces can only meet in threes, forming 120-degree angles along lines of "\(Y\)-points"; and (ii) lines of \(Y\)-points can only
meet in fours at isolated "\(T\)-points", where six interfaces asymptotically form a perfectly symmetric tetrahedral angle; see, e.g. [12, Equilibrium rules A1, A2, page 24].
In the wet case, small but positive amounts of liquid are bounded by negatively curved interfaces, known as _Plateau borders_, and arranged near ideal lines of \(Y\)-points or isolated \(T\)-points; see Figure 1.1 and [13, Fig. 1.8 and Fig. 1.9]. A "third Plateau law" is then postulated to hold across the transition lines between wet and dry parts of soap films/foams, and can be formulated as follows:
\[\textit{the unit normal to a soap film/foam changes continuously across the transition lines between wet and dry interfaces}\,;\tag{1.1}\]
see, e.g., [12, Equilibrium rule B, page 25] and [9, Section 4.1.4]. It is important to recall that Plateau borders play a crucial role in determining the mechanical properties of the many physical and biological systems in which they are observed. As a sample of older and newer papers discussing Plateau borders, we mention here [10, 1, 11, 12, 13, 14, 15]. Postulate (1.1) is assumed in all these works.
The goal of this paper is answering the natural problem of rigorously deriving the equilibrium condition for Plateau borders (1.1) in the context of Gauss' capillarity theory. Since the case of soap films is much harder and interesting from the mathematical viewpoint, we will postpone the discussion of foams until the very last section of this introduction. The main highlight is that, in addressing Plateau borders of soap films, we will develop a new "theory of spanning" for surfaces of geometric measure theory (GMT) which will find further applications in the two companion papers [16, 17]; see the closing of this overview for more details about these additional applications.
We now give an informal description of our approach. The starting point is [18], where the idea is introduced of modeling soap films as regions \(E\) of positive volume \(|E|=v\) contained in the complement \(\Omega=\mathbb{R}^{n+1}\setminus\mathbf{W}\) of a "wire frame" \(\mathbf{W}\) (\(n=2\) is the physical case, although the planar case \(n=1\) is also quite interesting in applications). We
Figure 1.1. (a) A Plateau border develops around a “wet” line of \(Y\)-points. The wet region is bounded by interfaces of _negative_ constant mean curvature. The equilibrium condition which needs to hold across the transition lines (here depicted in bold) between the negatively curved interfaces of a Plateau border and the incoming dry interfaces is that these interfaces meet tangentially. In the case of soap films, where the dry interfaces have zero mean curvature, the jump in the mean curvature across the transition lines implies a discontinuity in the gradient of the unit normal. (b) An arrangement of Plateau borders near a tetrahedral singularity. The transition lines are again depicted in bold. The incoming dry interfaces are omitted for clarity.
associate to \(E\) the surface tension energy \(\mathcal{H}^{n}(\Omega\cap\partial E)\) (where \(\mathcal{H}^{n}\) stands for \(n\)-dimensional (Hausdorff) measure, i.e., area when \(n=2\) and length when \(n=1\)), and minimize \(\mathcal{H}^{n}(\Omega\cap\partial E)\) under the constraints that \(|E|=v\) (for some given \(v>0\)) and
\[\Omega\cap\partial E\text{ is spanning }\mathbf{W}\,. \tag{1.2}\]
From the mathematical viewpoint the meaning assigned to (1.2) is, of course, the crux of the matter. In the informal spirit of this overview, we momentarily leave the concept of "spanning" only intuitively defined.
As proved in [10], this minimization process leads to the identification of _generalized minimizers_ in the form of pairs \((K,E)\) with \(E\subset\Omega\), \(|E|=v\), and such that
\[\Omega\cap\partial E\subset K\text{ and }K\text{ is spanning }\mathbf{W}\,. \tag{1.3}\]
These pairs are minimizing in the sense that
\[\mathcal{H}^{n}(\Omega\cap\partial E)+2\,\mathcal{H}^{n}(K\setminus\partial E )\leq\mathcal{H}^{n}(\Omega\cap\partial E^{\prime})\,, \tag{1.4}\]
whenever \(E^{\prime}\subset\Omega\), \(|E^{\prime}|=v\) and \(\Omega\cap\partial E^{\prime}\) is spanning \(\mathbf{W}\).
If \(K=\Omega\cap\partial E\), then generalized minimizers are of course minimizers in the proper sense. If not, the _collapsed interface_\(K\setminus\partial E\) is a surface whose positive area has to be counted with a multiplicity factor \(2\) (which arises from the asymptotic collapsing along \(K\setminus\partial E\) of oppositely oriented boundaries in minimizing sequences \(\{E_{j}\}_{j}\), see Figure 1.2). We expect collapsing to occur whenever the Plateau problem for \(\mathbf{W}\) admits one minimizer \(S\) with Plateau-type singularities. Whenever this happens, a _wetting conjecture_ is made: sequences \(\{(K_{v_{j}},E_{v_{j}})\}_{j}\) of generalized minimizers with \(|E_{v_{j}}|=v_{j}\to 0^{+}\) as \(j\to\infty\) will be such that \(\sup\{\operatorname{dist}(x,E_{v_{j}}):x\in\Sigma(S)\}\to 0\), where \(\Sigma(S)\) denotes the set of Plateau singularities of \(S\). Thus we expect that Plateau's singularities are never "left dry" in the small volume capillarity approximation of the Plateau problem.
A lot of information about generalized minimizers can be extracted from (1.4), and this is the content of [10, 11, 12]. With reference to the cases when \(n=1\) or \(n=2\), one can deduce from (1.4) that if \(\mathcal{H}^{n}(K\setminus\partial E)>0\), then \(K\setminus\partial E\) is a smooth minimal surface (a union of segments if \(n=1\)) and that \(\partial E\) contains a regular part \(\partial^{*}E\) that is a smooth constant mean curvature surface (a union of circular arcs if \(n=1\)) with _negative_ curvature. This is of course strongly reminiscent of the behavior of Plateau borders, and invites to analyze the validity of (1.1) in this context. A main obstacle is that, due to serious technical issues (described in more detail later on) related to how minimality is
expressed in (1.4), it turns out to be very difficult to say much about the "transition line"
\[\partial E\setminus\partial^{*}E\]
between the zero and the negative constant mean curvature interfaces in \(K\), across which one should check the validity of (1.1). More precisely, all that descends from (1.4) and a direct application of Allard's regularity theorem [1] is that \(\partial E\setminus\partial^{*}E\)_has empty interior in \(K\)_. Far from being a line in dimension \(n=2\), or a discrete set of points when \(n=1\), the transition line \(\partial E\setminus\partial^{*}E\) could very well have positive \(\mathcal{H}^{n}\)-measure and be everywhere dense in \(K\)! With such poor understanding of \(\partial E\setminus\partial^{*}E\), proving the validity of (1.1) - that is, the continuity of the unit normals to \(K\setminus\partial E\) and \(\partial^{*}E\) in passing across \(\partial E\setminus\partial^{*}E\) - is of course out of question.
We overcome these difficulties by performing a major measure-theoretic overhaul of the Harrison-Pugh homotopic spanning condition [11, 12] used in [13, 14, 15, 16] to give a rigorous meaning to (1.2), and thus to formulate the homotopic spanning relaxation of Gauss' capillarity discussed above.
The transformation of this purely topological concept into a measure-theoretic one is particularly powerful. Its most important consequence for the problem discussed in this paper is that it allows us to upgrade the partial minimality property (1.4) of \((K,E)\) into the full minimality property
\[\mathcal{H}^{n}(\Omega\cap\partial E)+2\,\mathcal{H}^{n}(K\setminus\partial E )\leq\mathcal{H}^{n}(\Omega\cap\partial E^{\prime})+2\,\mathcal{H}^{n}(K^{ \prime}\setminus\partial E^{\prime}) \tag{1.5}\]
whenever \(E^{\prime}\subset\Omega\), \(|E^{\prime}|=v\), \(\Omega\cap\partial E^{\prime}\subset K^{\prime}\) and \(K^{\prime}\) is spanning \(\mathbf{W}\). The crucial difference between (1.4) and (1.5) is that the latter is much more efficient than the former when it comes to study the regularity of generalized minimizers \((K,E)\), something that is evidently done by energy comparison with competitors \((K^{\prime},E^{\prime})\). Such comparisons are immediate when working with (1.5), but they are actually quite delicate to set up when we only have (1.4). In the latter case, given a competitor \((K^{\prime},E^{\prime})\), to set up the energy comparison with \((K,E)\) we first need to find a sequence of non-collapsed competitors \(\{E^{\prime}_{j}\}_{j}\) (with \(E^{\prime}_{j}\subset\Omega\), \(|E^{\prime}_{j}|=v\), and \(\Omega\cap\partial E^{\prime}_{j}\) spanning \(\mathbf{W}\)) such that \(\mathcal{H}^{n}(\Omega\cap\partial E^{\prime}_{j})\to\mathcal{H}^{n}(\Omega \cap\partial E^{\prime})+2\,\mathcal{H}^{n}(K^{\prime}\setminus\partial E^{ \prime})\). Intuitively, \(E^{\prime}_{j}\) needs to be a \(\delta_{j}\)-neighborhood of \(K^{\prime}\cup E^{\prime}\) for some \(\delta_{j}\to 0^{+}\) and the energy approximation property has to be deduced from the theory of Minkowski content. But applying the theory of Minkowski content to \((K^{\prime},E^{\prime})\) (which is the approach followed, e.g., in [14]) requires \((K^{\prime},E^{\prime})\) to satisfy rectifiability and uniform density properties that substantially restrict the class of available competitors \((K^{\prime},E^{\prime})\).
In contrast, once the validity of (1.5) is established, a suitable generalization (Theorem 1.2) of the partition theorem of sets of finite perimeter into indecomposable components [1, Theorem 1] combined with a subtle variational argument (see Figure 1.7) allows us to show that, in any ball \(B\subset\!\!\subset\Omega\) with sufficiently small radius and for some sufficiently large constant \(\Lambda\) (both depending just on \((K,E)\)), the connected components \(\{U_{i}\}_{i}\) of \(B\setminus(K\cup E)\) satisfy a perturbed area minimizing property of the form
\[\mathcal{H}^{n}(B\cap\partial U_{i})\leq\mathcal{H}^{n}(B\cap\partial V)+ \Lambda\,|U_{i}\Delta V|\,, \tag{1.6}\]
with respect to _completely arbitrary perturbations_\(V\subset B\), \(V\Delta U_{i}\subset\!\!\subset B\). By a classical theorem of De Giorgi [1, 13], (1.6) implies (away from a closed singular set of codimension at least \(8\), which is thus empty if \(n\leq 6\)) the \(C^{1,\alpha}\)-regularity of \(B\cap\partial U_{i}\) for each \(i\), and thus establishes _the continuity of the normal stated in (1.1)_. In fact, locally at each \(x\) on the transition line, \(K\) is the union of the graphs of two \(C^{1,\alpha}\)-functions \(u_{1}\leq u_{2}\) defined on an \(n\)-dimensional disk, having zero mean curvature above the interior of \(\{u_{1}=u_{2}\}\), and opposite constant mean curvature above \(\{u_{1}<u_{2}\}\). We can thus exploit the regularity theory for double-membrane free boundary problems devised in [17, 18] to deduce
that the transition line \(\partial E\setminus\partial^{*}E\) is indeed \((n-1)\)-dimensional, and to improve the \(C^{1,\alpha}\)-regularity of \(B\cap\partial U_{i}\) to \(C^{1,1}\)-regularity. Given the mean curvature jump across \(\partial E\setminus\partial^{*}E\) we have thus established the _sharp_ degree of regularity for minimizers of the homotopic spanning relaxation of Gauss' capillarity theory.
The measure-theoretic framework for homotopic spanning conditions laid down in this paper provides the starting point for additional investigations that would otherwise seem inaccessible. In two forthcoming companion papers we indeed establish (i) the convergence towards Plateau-type singularities of energy-minimizing diffused interface solutions of the Allen-Cahn equation [14], and (ii) some sharp convergence theorems for generalized minimizers in the homotopic spanning relaxation of Gauss' capillarity theory in the vanishing volume limit, including a proof of the above mentioned wetting conjecture [14].
The rest of this introduction is devoted to a rigorous formulation of the results presented in this overview. We begin in Section 1.2 with a review of the Harrison and Pugh homotopic spanning condition in relation to the classical Plateau problem and to the foundational work of Almgren and Taylor [1, 20]. In Section 1.3 we introduce the new measure-theoretic formulation of homotopic spanning and discuss its relation to the measure-theoretic notion of _essential connectedness_ introduced by Cagnetti, Colombo, De Philippis and the first-named author in the study of symmetrization inequalities [1, 1]. In Section 1.4 we introduce the _bulk_ and _boundary_ spanning relaxations of Gauss' capillarity theory and state a general closure theorem for "generalized soap films" that applies to both relaxed problems (Theorem 1.4). In Section 1.5 we prove the existence of generalized soap film minimizers (Theorem 1.5) and their convergence in energy to solutions to the Plateau problem. A sharp regularity theorem (Theorem 1.6) for these minimizers, which validates (1.1), is stated in Section 1.6. Finally, in Section 1.7 we reformulate the above results in the case of foams, see in particular Theorem 1.7.
### Homotopic spanning: from Plateau's problem to Gauss' capillarity
The theories of currents and of sets of finite perimeter, i.e. the basic distributional theories of surface area at the center of GMT, fall short in the task of modeling Plateau's laws. Indeed, two-dimensional area minimizing currents in \(\mathbb{R}^{3}\) are carried by smooth minimal surfaces, and thus cannot model \(Y\)-type1 and \(T\)-type singularities. This basic issue motivated the introduction of **Almgren minimal sets** as models for soap films in [15]: these are sets \(S\subset\mathbb{R}^{n+1}\) that are relatively closed in a given open set \(\Omega\subset\mathbb{R}^{n+1}\), and satisfy \(\mathcal{H}^{n}(S)\leq\mathcal{H}^{n}(f(S))\) whenever \(f:\Omega\to\Omega\) is a _Lipschitz_ (not necessarily injective) map with \(\{f\neq\mathrm{id}\}\subset\subset\Omega\). Taylor's historical result [20] validates the Plateau laws in this context, by showing that, when\({}^{2}\)\(n=2\), Almgren minimal sets are locally \(C^{1,\alpha}\)-diffeomorphic either to planes, to \(Y\)-cones, or to \(T\)-cones.
Footnote 1: Currents modulo \(3\) are compatible with \(Y\)-type singularities, but not with \(T\)-type singularities.
The issue of proposing and solving a formulation of Plateau's problem whose minimizers are Almgren minimal sets, and indeed admit Plateau-type singularities, is quite elusive, as carefully explained in [13]. In this direction, a major breakthrough has been obtained by Harrison and Pugh in [12] with the introduction of a new spanning condition, which, following the presentation in [10], can be defined as follows:
**Definition A** (Homotopic spanning (on closed sets)).: Given a closed set \(\mathbf{W}\subset\mathbb{R}^{n+1}\) (the "wire frame"), a **spanning class for \(\mathbf{W}\)** is a family \(\mathcal{C}\) of smooth embeddings of \(\mathbb{S}^{1}\) into
\[\Omega=\mathbb{R}^{n+1}\setminus\mathbf{W}\]
that is _closed under homotopies in \(\Omega\)_: if \(\Phi:[0,1]\times\mathbb{S}^{1}\to\Omega\) is a smooth family of embeddings \(\Phi_{t}=\Phi(t,\cdot):\mathbb{S}^{1}\to\Omega\) with \(\Phi_{0}\in\mathcal{C}\), then \(\Phi_{t}\in\mathcal{C}\) for every \(t\in(0,1]\). A set \(S\), contained and relatively closed in \(\Omega\), is said to be \(\mathcal{C}\)**-spanning W** if
\[S\cap\gamma\neq\varnothing\,,\qquad\forall\gamma\in\mathcal{C}\,.\]
Denoting by \(\mathcal{S}(\mathcal{C})\) the class of sets \(S\)\(\mathcal{C}\)-spanning \(\mathbf{W}\), one can correspondingly formulate the **Plateau problem** (with homotopic spanning)
\[\ell=\ell(\mathcal{C}):=\inf\left\{\mathcal{H}^{n}(S):S\in\mathcal{S}( \mathcal{C})\right\}. \tag{1.7}\]
Existence of minimizers of \(\ell\) holds as soon as \(\ell<\infty\), and minimizers \(S\) of \(\ell\) are Almgren minimal sets in \(\Omega\)[14, 15] that are indeed going to exhibit Plateau-type singularities (this is easily seen in the plane, but see also [1] for a higher dimensional example). Moreover, given a same \(\mathbf{W}\), different choices of \(\mathcal{C}\) are possible and can lead to different minimizers, see Figure 1.3. Finally, the approach is robust enough to provide the starting point for several important extensions [1, 1, 1, 10, 11, 12, 13, 14, 15], including higher codimension, anisotropic energies, etc.
The study of soap films as minimizers of Gauss's capillarity energy with small volume and under homotopic spanning conditions has been initiated in [16, 17], with the introduction of the model
\[\psi(v):=\inf\left\{\mathcal{H}^{n}(\Omega\cap\partial E):|E|=v\,,\ \Omega\cap \partial E\text{ is $\mathcal{C}$-spanning $\mathbf{W}$}\right\}, \tag{1.8}\]
where \(E\subset\Omega\) is an open set with smooth boundary. Without the spanning condition, at small volumes, minimizers of \(\mathcal{H}^{n}(\Omega\cap\partial E)\) would be small diffeomorphic images of half-balls [13]. However, the introduction of the \(\mathcal{C}\)-spanning constraint rules out small droplets, and forces the exploration of a different part of the energy landscape of \(\mathcal{H}^{n}(\Omega\cap\partial E)\). As informally discussed in Section 1.1, this leads to the emergence of generalized minimizers \((K,E)\). More precisely, in [17] the existence is proved of \((K,E)\) in the class
\[\mathcal{K}=\left\{(K,E):\,K\text{ is relatively closed and }\mathcal{H}^{n}\text{-rectifiable in }\Omega\,,\ E\text{ is open}\,,\ E\text{ has finite perimeter in }\Omega\,,\text{ and }\Omega\cap\operatorname{cl}\left(\partial^{*}E\right)=\Omega\cap\partial E\subset K\right\} \tag{1.9}\]
(where \(\partial^{*}E\) denotes the reduced boundary of \(E\)) such that, for every competitor \(E^{\prime}\) in \(\psi(v)\), it holds
\[\mathcal{H}^{n}(\Omega\cap\partial^{*}E)+2\,\mathcal{H}^{n}(\Omega\cap(K \setminus\partial^{*}E))\leq\mathcal{H}^{n}(\Omega\cap\partial E^{\prime})\,. \tag{1.10}\]
Starting from (1.10) one can apply Allard's regularity theorem [10] and various _ad hoc_ comparison arguments [17, 18] to prove that \(\Omega\cap\partial^{*}E\) is a smooth hypersurface with constant mean curvature (negative if \(\mathcal{H}^{n}(K\setminus\partial^{*}E)>0\)), that \(\Omega\cap(\partial E\setminus\partial^{*}E)\) has empty interior in \(K\), and that \(K\setminus(\Sigma\cup\partial E)\) is a smooth minimal hypersurface, where \(\Sigma\) is a closed set with codimension at least \(8\).

Figure 1.3. The dashed lines denote the embeddings of \(\mathbb{S}^{1}\) whose homotopy classes relative to \(\Omega\) generate different spanning classes \(\mathcal{C}\), to which there correspond different minimizers of \(\ell\).
### Measure theoretic homotopic spanning
In a nutshell, the idea behind our measure theoretic revision of the Harrison-Pugh homotopic spanning condition is the following. Rather than asking that \(S\cap\gamma(\mathbb{S}^{1})\neq\varnothing\) for every \(\gamma\in\mathcal{C}\), as done in Definition A, we shall replace \(\gamma\) with an open "tube" \(T\) containing \(\gamma(\mathbb{S}^{1})\), and ask that \(S\), with the help of a generic "slice" \(T[s]\) of \(T\), "disconnects" \(T\) itself into two nontrivial regions \(T_{1}\) and \(T_{2}\); see Figure 1.4. The key to making this idea work is, of course, giving a proper meaning to the word "disconnects".
To this end, we recall the notion of **essential connectedness** introduced in [1, 1] in the study of the rigidity of equality cases in Gaussian and Euclidean perimeter symmetrization inequalities. Essential connectedness is the "right" notion to deal with such problems since it leads to the formulation of sharp rigidity theorems, and can indeed be used to address other rigidity problems (see [1, 2, 3]). This said, it seems remarkable that the very same notion of what it means for "one Borel set to disconnect another Borel set" proves to be extremely effective also in the context of the present paper, which is of course very far from the context of symmetrization theory.
Denoting by \(T^{(t)}\) (\(0\leq t\leq 1\)) the **points of density \(t\)** of a Borel set \(T\subset\mathbb{R}^{n+1}\) (i.e., \(x\in T^{(t)}\) if and only if \(|T\cap B_{r}(x)|/\omega_{n+1}\,r^{n+1}\to t\) as \(r\to 0^{+}\), where \(\omega_{k}\) is the Lebesgue measure of the unit ball in \(\mathbb{R}^{k}\)), and by \(\partial^{e}T=\mathbb{R}^{n+1}\setminus(T^{(0)}\cup T^{(1)})\) the **essential boundary** of \(T\), given Borel sets \(S\), \(T\), \(T_{1}\) and \(T_{2}\) in \(\mathbb{R}^{n+1}\), and given \(n\geq 0\), we say that \(S\) **essentially disconnects \(T\) into** \(\{T_{1},T_{2}\}\), if

\[\begin{split}&\{T_{1},T_{2}\}\text{ is a non-trivial Borel partition of }T\,,\\ &\text{and }T^{(1)}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\text{ is }\mathcal{H}^{n}\text{-contained in }S\,.\end{split} \tag{1.11}\]

(For example, if \(K\) is a set of full \(\mathcal{L}^{1}\)-measure in \([-1,1]\), then \(S=K\times\{0\}\) essentially disconnects the unit disk in \(\mathbb{R}^{2}\).) Moreover, we say that \(T\) is **essentially connected\({}^{3}\)** if \(\varnothing\) does not essentially disconnect \(T\). The requirement that \(\{T_{1},T_{2}\}\) is a non-trivial Borel partition of \(T\) means that \(|T\Delta(T_{1}\cup T_{2})|=0\) and \(|T_{1}|\,|T_{2}|>0\). By saying that "\(E\) is \(\mathcal{H}^{n}\)-contained in \(F\)" we mean that \(\mathcal{H}^{n}(E\setminus F)=0\). We also notice that, in (1.11), we have \(T^{(1)}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}=T^{(1)}\cap\partial^{e}T_{i}\) (\(i=1,2\)), a fact that is tacitly and repeatedly used in applications of (1.11) in order to shorten formulas.
Footnote 3: Whenever \(T\) is of locally finite perimeter, being essentially connected is equivalent to being indecomposable.
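To unpack the parenthetical example following (1.11) (a verification added for the reader's convenience): with \(n=1\), \(T=B_{1}^{2}\), \(T_{1}=T\cap\{x_{2}>0\}\) and \(T_{2}=T\cap\{x_{2}<0\}\), the partition \(\{T_{1},T_{2}\}\) is non-trivial and

\[T^{(1)}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}=(-1,1)\times\{0\}\,,\]

which is \(\mathcal{H}^{1}\)-contained in \(S=K\times\{0\}\) because \(\mathcal{H}^{1}\big{(}((-1,1)\setminus K)\times\{0\}\big{)}=\mathcal{L}^{1}((-1,1)\setminus K)=0\). Thus (1.11) holds even though \(S\) need not be closed.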
Figure 1.4. (a) Homotopic spanning according to Harrison–Pugh: \(S\) must intersect every curve \(\gamma\in\mathcal{C}\), in particular, the \(\mathcal{C}\)-spanning property may be lost by removing a single point from \(S\); (b) Homotopic spanning based on essential connectedness: for a.e. section \(T[s]\) of the tube \(T\) around a curve \(\gamma\in\mathcal{C}\), the union \(T[s]\cup S\) (essentially) disconnects \(T\) (i.e., divides \(T\) into two non-trivial parts, depicted here with two different shades of gray).
With this terminology in mind, we introduce the following definition:
**Definition B** (Measure theoretic homotopic spanning).: Given a closed set \(\mathbf{W}\) and a spanning class \(\mathcal{C}\) for \(\mathbf{W}\), the **tubular spanning class \(\mathcal{T}(\mathcal{C})\)** associated to \(\mathcal{C}\) is the family of triples \((\gamma,\Phi,T)\) such that \(\gamma\in\mathcal{C}\), \(T=\Phi(\mathbb{S}^{1}\times B_{1}^{n})\), and\({}^{4}\)
Footnote 4: Here \(B_{1}^{n}=\{x\in\mathbb{R}^{n}:|x|<1\}\) and \(\mathbb{S}^{1}=\{s\in\mathbb{R}^{2}:|s|=1\}\).
\[\Phi:\mathbb{S}^{1}\times\operatorname{cl}B_{1}^{n}\to\Omega\text{ is a diffeomorphism with }\Phi|_{\mathbb{S}^{1}\times\{0\}}=\gamma\,.\]
When \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\), the **slice of \(T\)** defined by \(s\in\mathbb{S}^{1}\) is
\[T[s]=\Phi(\{s\}\times B_{1}^{n})\,.\]
Finally, we say that a Borel set \(S\subset\Omega\) is \(\mathcal{C}\)**-spanning \(\mathbf{W}\)** if for each \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\), \(\mathcal{H}^{1}\)-a.e. \(s\in\mathbb{S}^{1}\) has the following property:
\[\text{for }\mathcal{H}^{n}\text{-a.e. }x\in T[s] \tag{1.12}\] \[\exists\text{ a partition }\{T_{1},T_{2}\}\text{ of }T\text{ s.t. }x\in\partial^{e}T_{1}\cap\partial^{e}T_{2}\] \[\text{and s.t. }S\cup T[s]\text{ essentially disconnects }T\text{ into }\{T_{1},T_{2}\}\,.\]
Before commenting on (1.12), we notice that the terminology of Definition B is coherent with that of Definition A thanks to the following theorem.
**Theorem 1.1**.: _Given a closed set \(\mathbf{W}\subset\mathbb{R}^{n+1}\), a spanning class \(\mathcal{C}\) for \(\mathbf{W}\), and a set \(S\) relatively closed in \(\Omega\), then \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) in the sense of Definition A if and only if \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) in the sense of Definition B._
Theorem 1.1 is proved in Appendix A. There we also comment on the delicate reason why, in formulating (1.12), the partition \(\{T_{1},T_{2}\}\) must be allowed to depend on the specific point \(x\in T[s]\). This would not seem necessary by looking at the simple situation depicted in Figure 1.4, but it is actually necessary when dealing with more complex situations; see Figure A.1.
Homotopic spanning according to Definition B is clearly stable under modifications of \(S\) by \(\mathcal{H}^{n}\)-negligible sets, but there is more to it. Indeed, even a notion like "\(\mathcal{H}^{n}(S\cap T)>0\) for every \(T\in\mathcal{T}(\mathcal{C})\)" would be stable under modifications by \(\mathcal{H}^{n}\)-negligible sets, and would probably look more appealing in its simplicity. The catch, of course, is finding an extension of Definition A for which compactness theorems, like Theorem 1.4 below, hold true: this is evidently not the case for the simpler notion just mentioned.
The first key insight on Definition B is that, when restricted to Borel sets \(S\) that are locally \(\mathcal{H}^{n}\)-finite in \(\Omega\), it can be reformulated in terms of partitions into indecomposable sets of finite perimeter. This is the content of the following theorem, whose case \(S=\varnothing\) corresponds to the standard decomposition theorem for sets of finite perimeter [1, Theorem 1]. For an illustration of this result, see Figure 1.5.

Figure 1.5. An example of induced essential partition. The union of the boundaries of the \(U_{i}\)'s (inside of \(U\)) is contained in \(S\), and the containment may be strict. However, the part of \(S\) not contained in \(U\cap\bigcup_{i}\partial U_{i}\) does not disconnect any of the \(U_{i}\)'s. In particular, each \(U_{i}\) is essentially connected.
**Theorem 1.2** (Induced essential partitions (Section 2)).: _If \(U\subset\mathbb{R}^{n+1}\) is a bounded set of finite perimeter and \(S\subset\mathbb{R}^{n+1}\) is a Borel set with \(\mathcal{H}^{n}(S\cap U^{(1)})<\infty\), then there exists a unique\({}^{5}\) **essential partition** \(\{U_{i}\}_{i}\) of \(U\) induced by \(S\), that is to say, \(\{U_{i}\}_{i}\) is a countable partition of \(U\) modulo Lebesgue negligible sets such that, for each \(i\), \(S\) does not essentially disconnect \(U_{i}\)._
Footnote 5: Uniqueness is meant modulo relabeling and modulo Lebesgue negligible modifications of the \(U_{i}\)’s.
Given \(U\) and \(S\) as in the statement of Theorem 1.2 we can define\({}^{6}\) the **union of the** (reduced) **boundaries** (relative to \(U\)) **of the essential partition** induced by \(S\) on \(U\) by setting\({}^{7}\)
Footnote 6: Uniquely modulo \(\mathcal{H}^{n}\)-null sets thanks to Federer’s theorem recalled in (1.37) below.
Footnote 7: Given a Borel set \(E\), we denote by \(\partial^{*}E\) its reduced boundary relative to the maximal open set \(A\) wherein \(E\) has locally finite perimeter.
\[\operatorname{UBEP}(S;U)=U^{{(1)}}\cap\bigcup_{i} \partial^{*}U_{i}\,. \tag{1.13}\]
Two properties of \(\operatorname{UBEP}\)'s which well illustrate the concept are: first, if \(\mathcal{R}(S)\) denotes the rectifiable part of \(S\), then \(\operatorname{UBEP}(S;U)\) is \(\mathcal{H}^{n}\)-equivalent to \(\operatorname{UBEP}(\mathcal{R}(S);U)\); second, if \(S^{*}\) is \(\mathcal{H}^{n}\)-contained in \(S\), then \(\operatorname{UBEP}(S^{*};U)\) is \(\mathcal{H}^{n}\)-contained in \(\operatorname{UBEP}(S;U)\); both properties are proved in Theorem 2.1 (an expanded restatement of Theorem 1.2).
We can use the concepts just introduced to provide an alternative and technically more workable characterization of homotopic spanning in the measure theoretic setting. This is the content of our first main result, which is illustrated in Figure 1.6.
**Theorem 1.3** (Homotopic spanning for locally \(\mathcal{H}^{n}\)-finite sets (Section 3)).: _If \(\mathbf{W}\subset\mathbb{R}^{n+1}\) is a closed set in \(\mathbb{R}^{n+1}\), \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\), and \(S\subset\Omega\) is locally \(\mathcal{H}^{n}\)-finite in \(\Omega\), then \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) if and only if for every \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) we have that, for \(\mathcal{H}^{1}\)-a.e. \(s\in\mathbb{S}^{1}\),_
\[T[s]\text{ is $\mathcal{H}^{n}$-contained in }\operatorname{UBEP}(S\cup T[s];T)\,. \tag{1.14}\]
Figure 1.6. With \(\mathbf{W}\) consisting of two disks in the plane, and \(T\) a test tube for the \(\mathcal{C}\)-spanning condition: (a) \(S\) consists of a segment with a gap: since the gap is inside of \(T\), the essential partition of \(T\) induced by \(S\cup T[s]\) consists of only one set, \(U_{1}=T\), so that \(T\cap\partial^{*}U_{1}=\varnothing\) and (1.14) cannot hold; (b) \(S\) consists of a full segment and in this case (with the possible exception of a choice of \(s\) such that \(T[s]\) is contained in \(S\)), the essential partition of \(T\) induced by \(S\cup T[s]\) consists of two sets \(\{U_{1},U_{2}\}\), such that \(T[s]\subset T\cap\partial^{*}U_{1}\cap\partial^{*}U_{2}\); in this case (1.14) holds.
### Direct Method on generalized soap films and Gauss' capillarity
The most convenient setting for addressing the existence of minimizers in Gauss' capillarity theory is of course that of sets of finite perimeter [12, 13]. However, if the notion of homotopic spanning is limited to closed sets, as is the case when working with Definition A, then one cannot directly impose homotopic spanning conditions on sets of finite perimeter, and this is the reason behind the specific formulation (1.8) of \(\psi(v)\) used in [19, 18]. Equipped with Definition B we can now formulate Gauss' capillarity theory with homotopic spanning conditions directly on sets of finite perimeter. We shall actually consider _two_ different possible formulations
\[\psi_{\rm bk}(v) =\inf\left\{\mathcal{H}^{n}(\Omega\cap\partial^{*}E):|E|=v\text{ and }\Omega\cap(\partial^{*}E\cup E^{{(1)}})\text{ is $ \mathcal{C}$-spanning $\mathbf{W}$}\right\},\] \[\psi_{\rm bd}(v) =\inf\left\{\mathcal{H}^{n}(\Omega\cap\partial^{*}E):|E|=v\text{ and }\Omega\cap\partial^{*}E\text{ is $\mathcal{C}$-spanning $\mathbf{W}$}\right\},\]
where the subscripts "\(\rm bk\)" and "\(\rm bd\)" indicate that the spanning is prescribed via the _bulk_ of \(E\) (that is, in measure theoretic terms, via the set \(\Omega\cap(\partial^{*}E\cup E^{(1)})\)) or via the (reduced) _boundary_ of \(E\). Inspired by the definition of the class \(\mathcal{K}\) introduced in (1.9), we also introduce the class \(\mathcal{K}_{\rm B}\) of **generalized soap films** defined by
\[\mathcal{K}_{\rm B}=\left\{(K,E):\,K\text{ and }E\text{ are Borel subsets of }\Omega\,,\ E\text{ has locally finite perimeter in }\Omega\,,\text{ and }\partial^{*}E\cap\Omega\stackrel{{\mathcal{H}^{n}}}{{\subset}}K\right\} \tag{1.15}\]
Here the subscript "\(\rm B\)" stands for "Borel", and \(\mathcal{K}_{\rm B}\) stands as a sort of measure-theoretic version of \(\mathcal{K}\).
In the companion paper [16] the following relaxation formulas for problems \(\psi_{\rm bk}\) and \(\psi_{\rm bd}\) are proved,
\[\psi_{\rm bk}(v)=\Psi_{\rm bk}(v)\,,\qquad\psi_{\rm bd}(v)=\Psi_{\rm bd}(v)\,, \qquad\forall v>0\,, \tag{1.16}\]
where the following minimization problems on \(\mathcal{K}_{\rm B}\) are introduced
\[\Psi_{\rm bk}(v) =\inf\left\{\mathcal{F}_{\rm bk}(K,E):(K,E)\in\mathcal{K}_{\rm B} \,,|E|=v\,,K\cup E^{{(1)}}\text{ is $\mathcal{C}$-spanning $\mathbf{W}$}\right\}, \tag{1.17}\] \[\Psi_{\rm bd}(v) =\inf\left\{\mathcal{F}_{\rm bd}(K,E):(K,E)\in\mathcal{K}_{\rm B }\,,|E|=v\,,K\text{ is $\mathcal{C}$-spanning $\mathbf{W}$}\right\}. \tag{1.18}\]
Here \(\mathcal{F}_{\rm bk}\) and \(\mathcal{F}_{\rm bd}\) are the relaxed energies defined for \((K,E)\in\mathcal{K}_{\rm B}\) and \(A\subset\Omega\) as
\[\mathcal{F}_{\rm bk}(K,E;A) =2\,\mathcal{H}^{n}(A\cap K\cap E^{{(0)}})+\mathcal{H}^{n}(A \cap\partial^{*}E)\,, \tag{1.19}\] \[\mathcal{F}_{\rm bd}(K,E;A) =2\,\mathcal{H}^{n}(A\cap K\setminus\partial^{*}E)+\mathcal{H}^{ n}(A\cap\partial^{*}E)\,, \tag{1.20}\]
(We also set, for brevity, \(\mathcal{F}_{\rm bk}(K,E):=\mathcal{F}_{\rm bk}(K,E;\Omega)\) and \(\mathcal{F}_{\rm bd}(K,E):=\mathcal{F}_{\rm bd}(K,E;\Omega)\).) We refer to these problems, respectively, as the "bulk-spanning" and "boundary-spanning" Gauss' capillarity models. In this paper we shall directly work with these relaxed models. In particular, the validity of (1.16), although of definite conceptual importance, plays no formal role in our deductions.
A first remark concerning the advantage of working with the relaxed problems \(\Psi_{\rm bk}\) and \(\Psi_{\rm bd}\) rather than with their "classical" counterparts \(\psi_{\rm bk}\) and \(\psi_{\rm bd}\) is that while the latter two with \(v=0\) are trivial (sets with zero volume have zero distributional perimeter), the problems \(\Psi_{\rm bk}(0)\) and \(\Psi_{\rm bd}(0)\) are actually non-trivial, equal to each other, and amount to a measure-theoretic version of the Harrison-Pugh formulation of Plateau's problem \(\ell\) introduced in (1.7): more precisely, if we set
\[\ell_{\rm B}:=\frac{\Psi_{\rm bk}(0)}{2}=\frac{\Psi_{\rm bd}(0)}{2}=\inf\left\{ \mathcal{H}^{n}(S):S\text{ is a Borel set $\mathcal{C}$-spanning $\mathbf{W}$}\right\}, \tag{1.21}\]
then, by Theorem 1.1, we evidently have \(\ell_{\rm B}\leq\ell\); and, as we shall prove in the course of our analysis, we actually have that \(\ell=\ell_{\rm B}\) as soon as \(\ell<\infty\).
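To make the factor \(2\) in (1.21) transparent, we spell out the following elementary computation (which follows directly from (1.17)-(1.20)): if \(|E|=0\), then \(1_{E}=0\) \(\mathcal{L}^{n+1}\)-a.e., so that \(\partial^{*}E=\varnothing\), \(E^{(1)}=\varnothing\), and \(E^{(0)}=\mathbb{R}^{n+1}\); hence, for every admissible \((K,E)\in\mathcal{K}_{\rm B}\),

\[\mathcal{F}_{\rm bk}(K,E)=2\,\mathcal{H}^{n}(K\cap E^{(0)})+\mathcal{H}^{n}(\Omega\cap\partial^{*}E)=2\,\mathcal{H}^{n}(K)\,,\]

while the spanning constraint on \(K\cup E^{(1)}=K\) reduces to "\(K\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\)"; taking the infimum over such \(K\) gives \(\Psi_{\rm bk}(0)=2\,\ell_{\rm B}\), and the same computation applies to \(\Psi_{\rm bd}(0)\).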
Our second main result concerns the applicability of the Direct Method on the competition classes of \(\Psi_{\mathrm{bk}}(v)\) and \(\Psi_{\mathrm{bd}}(v)\).
**Theorem 1.4** (Direct Method for generalized soap films (Sections 4 and 5)).: _Let \(\mathbf{W}\) be a closed set in \(\mathbb{R}^{n+1}\), let \(\mathcal{C}\) be a spanning class for \(\mathbf{W}\), let \(\{(K_{j},E_{j})\}_{j}\) be a sequence in \(\mathcal{K}_{\mathrm{B}}\) such that \(\sup_{j}\mathcal{H}^{n}(K_{j})<\infty\), and let a Borel set \(E\) and Radon measures \(\mu_{\mathrm{bk}}\) and \(\mu_{\mathrm{bd}}\) in \(\Omega\) be such that \(E_{j}\stackrel{{\mathrm{\,loc}}}{{\to}}E\) and_
\[\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E_{j})+2\,\mathcal{H}^{n}\llcorner\big{(}\mathcal{R}(K_{j})\cap E_{j}^{(0)}\big{)}\stackrel{{*}}{{\rightharpoonup}}\mu_{\mathrm{bk}}\,,\]
\[\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E_{j})+2\,\mathcal{H}^{n}\llcorner\big{(}\mathcal{R}(K_{j})\setminus\partial^{*}E_{j}\big{)}\stackrel{{*}}{{\rightharpoonup}}\mu_{\mathrm{bd}}\,,\]
_as \(j\to\infty\). Then:_
**(i) Lower semicontinuity:** _the sets_
\[K_{\mathrm{bk}} :=\;\big{(}\Omega\cap\partial^{*}E\big{)}\cup\Big{\{}x\in\Omega \cap E^{{}^{(0)}}:\theta_{*}^{n}(\mu_{\mathrm{bk}})(x)\geq 2\Big{\}}\,,\] \[K_{\mathrm{bd}} :=\;\big{(}\Omega\cap\partial^{*}E\big{)}\cup\Big{\{}x\in\Omega \setminus\partial^{*}E:\theta_{*}^{n}(\mu_{\mathrm{bd}})(x)\geq 2\Big{\}}\,,\]
_are such that \((K_{\mathrm{bk}},E),(K_{\mathrm{bd}},E)\in\mathcal{K}_{\mathrm{B}}\) and_
\[\mu_{\mathrm{bk}}\geq\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E)+2\,\mathcal{H}^{n}\llcorner\big{(}K_{\mathrm{bk}}\cap E^{(0)}\big{)}\,,\]
\[\mu_{\mathrm{bd}}\geq\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E)+2\,\mathcal{H}^{n}\llcorner\big{(}K_{\mathrm{bd}}\setminus\partial^{*}E\big{)}\,,\]
_with_
\[\liminf_{j\to\infty}\mathcal{F}_{\mathrm{bk}}(K_{j},E_{j})\geq\mathcal{F}_{ \mathrm{bk}}(K_{\mathrm{bk}},E)\,,\qquad\liminf_{j\to\infty}\mathcal{F}_{ \mathrm{bd}}(K_{j},E_{j})\geq\mathcal{F}_{\mathrm{bd}}(K_{\mathrm{bd}},E)\,.\]
**(ii) Closure:** _we have that_
\[\text{if }K_{j}\cup E_{j}^{{}^{(1)}}\text{ is }\mathcal{C} \text{-spanning }\mathbf{W}\text{ for every }j\text{,}\] \[\text{then }K_{\mathrm{bk}}\cup E^{{}^{(1)}}\text{ is }\mathcal{C} \text{-spanning }\mathbf{W}\,,\]
_and that_
\[\text{if }K_{j}\text{ is }\mathcal{C}\text{-spanning }\mathbf{W}\text{ for every }j\text{,}\] \[\text{then }K_{\mathrm{bd}}\text{ is }\mathcal{C} \text{-spanning }\mathbf{W}\,.\]
The delicate part of Theorem 1.4 is proving the closure statements. This will require us first to extend the characterization of homotopic spanning from locally \(\mathcal{H}^{n}\)-finite sets to generalized soap films (Theorem 3.1), and then to discuss the behavior, under weak-star convergence of the associated Radon measures, of the objects appearing in conditions like (1.14) (Theorem 4.1).
### Existence of minimizers in \(\Psi_{\mathrm{bk}}(v)\) and convergence to \(\ell\)
From this point onward, we focus our analysis on the bulk-spanning relaxation \(\Psi_{\mathrm{bk}}(v)\) of Gauss' capillarity. There are a few important reasons for this choice: (i) from the point of view of physical modeling, working with the boundary or with the bulk spanning conditions seems comparable; (ii) the fact that \(\Psi_{\mathrm{bk}}(0)=\Psi_{\mathrm{bd}}(0)\) suggests that, at small values of \(v\), the two problems should actually be equivalent (have the same infima and the same minimizers); (iii) the bulk spanning variant is the one which is relevant for the approximation of Plateau-type singularities with solutions of the Allen-Cahn equations discussed in [14]; (iv) despite their similarities, carrying over the following theorems for both problems would require the repeated introduction of two versions of many arguments, with a significant increase in length, and possibly at the expense of clarity.
The following theorem provides the starting point in the study of \(\Psi_{\mathrm{bk}}(v)\).
**Theorem 1.5** (Existence of minimizers and vanishing volume limit for \(\Psi_{\rm bk}\) (Section 6)).: _If \({\bf W}\) is a compact set in \({\mathbb{R}}^{n+1}\) and \({\mathcal{C}}\) is a spanning class for \({\bf W}\) such that \(\ell<\infty\), then_
\[\ell_{\rm B}=\ell\,, \tag{1.22}\]
_and, moreover:_
**(i) Existence of minimizers and Euler-Lagrange equation:** _for every \(v>0\) there exist minimizers \((K,E)\) of \(\Psi_{\rm bk}(v)\) such that \((K,E)\in{\mathcal{K}}\) and both \(E\) and \(K\) are bounded; moreover, there is \(\lambda\in{\mathbb{R}}\) such that_
\[\lambda\int_{\partial^{*}E}X\cdot\nu_{E}\,d{\mathcal{H}}^{n}=\int_{\partial^{ *}E}{\rm div}^{K}\,X\,d{\mathcal{H}}^{n}+2\int_{K\cap E^{(0)}}{\rm div}^{K}\,X \,d{\mathcal{H}}^{n}\,, \tag{1.23}\]
_for every \(X\in C^{1}_{c}({\mathbb{R}}^{n+1};{\mathbb{R}}^{n+1})\) with \(X\cdot\nu_{\Omega}=0\) on \(\partial\Omega\);_
**(ii) Regularity from the Euler-Lagrange equations:** _if \((K,E)\in{\mathcal{K}}\) is a minimizer of \(\Psi_{\rm bk}(v)\), then there is a closed set \(\Sigma\subset K\), with empty interior in \(K\), such that \(K\setminus\Sigma\) is a smooth hypersurface; moreover, \(K\setminus(\Sigma\cup\partial E)\) is a smooth minimal hypersurface, \(\Omega\cap\partial^{*}E\) is a smooth hypersurface with mean curvature constantly equal to \(\lambda\), and \({\mathcal{H}}^{n}(\Sigma\setminus\partial E)=0\); in particular, \(\Omega\cap(\partial E\setminus\partial^{*}E)\) has empty interior in \(K\);_
**(iii) Convergence to the Plateau problem:** _if \((K_{j},E_{j})\) is a sequence of minimizers for \(\Psi_{\rm bk}(v_{j})\) with \(v_{j}\to 0^{+}\), then there exists a minimizer \(S\) of \(\ell\) such that, up to extracting subsequences, as Radon measures in \(\Omega\),_
\[{\mathcal{H}}^{n}\llcorner(\partial^{*}E_{j}\cap\Omega)+2\,{\mathcal{H}}^{n}\llcorner\big{(}K_{j}\cap E_{j}^{(0)}\big{)}\stackrel{{*}}{{\rightharpoonup}}2\,{\mathcal{H}}^{n}\llcorner S\,, \tag{1.24}\]

_as \(j\to\infty\); in particular, \(\Psi_{\rm bk}(v)\to 2\,\ell=\Psi_{\rm bk}(0)\) as \(v\to 0^{+}\)._
The conclusions of Theorem 1.5 about \(\Psi_{\rm bk}(v)\) can be read in parallel to the conclusions about \(\psi(v)\) obtained in [10]. The crucial difference is that, in place of the "weak" minimality inequality (1.10), which in this context would be equivalent to \({\mathcal{F}}_{\rm bk}(K,E)\leq{\mathcal{H}}^{n}(\Omega\cap\partial^{*}E^{ \prime})\) for every competitor \(E^{\prime}\) in \(\psi_{\rm bk}(v)\), we now have the proper minimality inequality
\[{\mathcal{F}}_{\rm bk}(K,E)\leq{\mathcal{F}}_{\rm bk}(K^{\prime},E^{\prime}) \tag{1.25}\]
for every competitor \((K^{\prime},E^{\prime})\) in \(\Psi_{\rm bk}(v)\). Not only is the final conclusion stronger, but the proof is also entirely different: whereas [10] required the combination of a whole bestiary of specific competitors (like the cup, cone, and slab competitors described therein) with the full force of Preiss' theorem, the approach presented here seems more robust as it does not exploit any specific geometry, and it is squarely rooted in the basic theory of sets of finite perimeter.
### Equilibrium across transition lines in wet soap films
We now formalize the validation of (1.1) for soap films in the form of a sharp regularity theorem for minimizers \((K,E)\) of \(\Psi_{\rm bk}(v)\).
The starting point to obtain this result is the connection between homotopic spanning and partitions into indecomposable sets of finite perimeter established in Theorem 1.3/Theorem 3.1. This connection hints at the possibility of showing that if \((K,E)\) is a minimizer of \(\Psi_{\rm bk}(v)\), then the elements \(\{U_{i}\}_{i}\) of the essential partition of \(\Omega\) induced by \(K\cup E^{(1)}\) are actually \((\Lambda,r_{0})\)-minimizers of the perimeter in \(\Omega\), i.e., there exist positive constants \(\Lambda\) and \(r_{0}\) such that
\[P(U_{i};B_{r}(x))\leq P(V;B_{r}(x))+\Lambda\left|V\Delta U_{i}\right|,\]
whenever \(V\Delta U_{i}\subset\subset\Omega\) and \({\rm diam}\,(V\Delta U_{i})<r_{0}\). The reason why this property is not obvious is that proving the \((\Lambda,r_{0})\)-minimality of \(U_{i}\) requires working with _arbitrary local competitors_ \(V_{i}\) of \(U_{i}\). However, when working with homotopic spanning conditions, checking the admissibility of competitors is the notoriously delicate heart of the matter, as
reflected in the fact that only very special classes of competitors have been considered in the literature (see, e.g., the cup and cone competitors and the Lipschitz deformations considered in [10], the slab competitors and exterior cup competitors of [14], etc.).
The idea used to overcome this difficulty, which is illustrated in Figure 1.7, is the following. By Theorem 1.2, we can locally represent \(\mathcal{F}_{\mathrm{bk}}(K,E;B_{r}(x))\) as the sum of perimeters \(P(U_{i};B_{r}(x))+P(U_{j};B_{r}(x))+P(U_{k};B_{r}(x))\). Given a local competitor \(V_{i}\) for \(U_{i}\) we can carefully define a competitor \((K^{\prime},E^{\prime})\) so that the elements of the essential partition induced by \(K^{\prime}\cup(E^{\prime})^{{(1)}}\) in \(\Omega\), that can be used to represent \(\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};B_{r}(x))\) as the sum \(P(V_{i};B_{r}(x))+P(V_{j};B_{r}(x))+P(V_{k};B_{r}(x))\), are such that
\[\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};B_{r}(x))-\mathcal{F}_{\mathrm{bk}}(K,E;B_{r}(x))=P(V_{i};B_{r}(x))-P(U_{i};B_{r}(x))\,. \tag{1.26}\]
The trick is that by suitably defining \(K^{\prime}\) and \(E^{\prime}\) we can recover the entirety of \(B_{r}(x)\cap\partial^{*}U_{j}\) and \(B_{r}(x)\cap\partial^{*}U_{k}\) by attributing different parts of these boundaries to different terms in the representation of \(\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};B_{r}(x))\). In other words we are claiming that things can be arranged so that we still have
\[B_{r}(x)\cap\left(\partial^{*}U_{j}\cap\partial^{*}U_{k}\right)\overset{ \mathcal{H}^{n}}{\subset}K^{\prime}\cup(E^{\prime})^{{(1)}}\,. \tag{1.27}\]
The fact that we have been able to preserve all but one reduced boundary among those of the elements of the essential partition of \(B_{r}(x)\) induced by \((K,E)\) is enough to show that \(K^{\prime}\cup(E^{\prime})^{(1)}\) is still \(\mathcal{C}\)-spanning \(\mathbf{W}\) by means of Theorem 1.3/Theorem 3.1.
By the regularity theory of \((\Lambda,r_{0})\)-perimeter minimizers (see, e.g. [13, Part III]) we can deduce the \(C^{1,\alpha}\)-regularity of the elements of the partition (away from a closed singular set with area minimizing dimensional bounds). This is already sufficient to prove the continuity of the normal across \(\Omega\cap(\partial E\setminus\partial^{*}E)\), but it also allows us to invoke the regularity theory for free boundaries in the double membrane problem, and to obtain the following sharp regularity result, with which we conclude our introduction.
Figure 1.7. On the left, a minimizer \((K,E)\) of \(\Psi_{\mathrm{bk}}(v)\), and the essential partition induced by \((K,E)\) in a ball \(B_{r}(x)\); the multiplicity \(2\) parts of \(K\cap B_{r}(x)\) are depicted with bold lines, to distinguish them from the multiplicity one parts in \(B_{r}(x)\cap\partial^{*}E\). On the right, a choice of \((K^{\prime},E^{\prime})\) that guarantees both the energy gap identity (1.26) and the \(\mathcal{H}^{n}\)-containment (1.27) needed to preserve homotopic spanning. The volume constraint can of course be restored as a lower order perimeter perturbation by taking a diffeomorphic image of \((K^{\prime},E^{\prime})\), an operation that trivially preserves homotopic spanning.

**Theorem 1.6** (Equilibrium along transition lines for soap films (Section 7)).: _If \(\mathbf{W}\) is a compact set in \(\mathbb{R}^{n+1}\), \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\) such that \(\ell<\infty\), \(v>0\), and \((K_{*},E_{*})\) is a minimizer of \(\Psi_{\rm bk}(v)\), then there is \((K,E)\in\mathcal{K}\) such that \(K\) is \(\mathcal{H}^{n}\)-equivalent to \(K_{*}\), \(E\) is Lebesgue equivalent to \(E_{*}\), \((K,E)\) is a minimizer of \(\Psi_{\rm bk}(v)\), both \(E\) and \(K\) are bounded, \(K\cup E\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), and_
\[K\cap E^{(1)}=\varnothing\,; \tag{1.28}\]
_in particular, \(K\) is the disjoint union of \(\Omega\cap\partial^{*}E\), \(\Omega\cap(\partial E\setminus\partial^{*}E)\), and \(K\setminus\partial E\)._
_Moreover, there is a closed set \(\Sigma\subset K\) with the following properties:_
**(i):**_\(\Sigma=\varnothing\) if \(1\leq n\leq 6\), \(\Sigma\) is locally finite in \(\Omega\) if \(n=7\), and \(\mathcal{H}^{s}(\Sigma)=0\) for every \(s>n-7\) if \(n\geq 8\);_
**(ii):**_\((\Omega\cap\partial^{*}E)\setminus\Sigma\) is a smooth hypersurface with constant mean curvature (denoted by \(\lambda\) if computed with respect to \(\nu_{E}\));_
**(iii):**_\((K\setminus\partial E)\setminus\Sigma\) is a smooth minimal hypersurface;_
**(iv):** _if_ \(\Omega\cap(\partial E\setminus\partial^{*}E)\setminus\Sigma\neq\varnothing\)_, then_ \(\lambda<0\)_; moreover, for every_ \(x\in\Omega\cap(\partial E\setminus\partial^{*}E)\setminus\Sigma\)_,_ \(K\) _is the union of two_ \(C^{1,1}\)_-hypersurfaces that detach tangentially at_ \(x\)_; more precisely, there are_ \(r>0\)_,_ \(\nu\in\mathbb{S}^{n}\)_,_ \(u_{1},u_{2}\in C^{1,1}(\mathbf{D}^{\nu}_{r}(x))\) _such that_
\[u_{1}(x)=u_{2}(x)=0\,,\qquad u_{1}\leq u_{2}\text{ on }\mathbf{D}^{\nu}_{r}(x)\,,\]
_with_ \(\{u_{1}<u_{2}\}\) _and_ \(\operatorname{int}\{u_{1}=u_{2}\}\) _both non-empty, and_
\[\mathbf{C}^{\nu}_{r}(x)\cap K = \cup_{i=1,2}\big{\{}y+u_{i}(y)\,\nu:y\in\mathbf{D}^{\nu}_{r}(x) \big{\}}\,, \tag{1.29}\] \[\mathbf{C}^{\nu}_{r}(x)\cap\partial^{*}E = \cup_{i=1,2}\big{\{}y+u_{i}(y)\nu:y\in\{u_{1}<u_{2}\}\big{\}}\,,\] (1.30) \[\mathbf{C}^{\nu}_{r}(x)\cap E = \big{\{}y+t\,\nu:t\in\big{(}u_{1}(y),u_{2}(y)\big{)}\big{\}}\,. \tag{1.31}\]
_Here,_
\[\mathbf{D}^{\nu}_{r}(x)=x+\{y\in\nu^{\perp}:|y|<r\}\,,\]
\[\mathbf{C}^{\nu}_{r}(x)=x+\{y+t\,\nu:y\in\nu^{\perp}\,,|y|<r\,,|t|<r\}\,.\]
**(v):** _we have_
\[\Gamma:=\Omega\cap(\partial E\setminus\partial^{*}E)=\Gamma_{\rm reg}\cup \Gamma_{\rm sing}\,,\qquad\Gamma_{\rm reg}\cap\Gamma_{\rm sing}=\varnothing\,,\]
_where: \(\Gamma_{\rm reg}\) is relatively open in \(\Gamma\) and for every \(x\in\Gamma_{\rm reg}\) there are \(r>0\) and \(\beta\in(0,1)\) such that \(\Gamma_{\rm reg}\cap B_{r}(x)\) is a \(C^{1,\beta}\)-embedded \((n-1)\)-dimensional manifold; \(\Gamma_{\rm sing}\) is relatively closed in \(\Gamma\) and can be partitioned into a family \(\{\Gamma_{\rm sing}^{k}\}_{k=0}^{n-1}\) where, for each \(k\), \(\Gamma_{\rm sing}^{k}\) is locally \(\mathcal{H}^{k}\)-rectifiable in \(\Omega\)._
### Equilibrium across transition lines in wet foams
Based on the descriptions provided in [10, 11], an effective mathematical model for dry foams at equilibrium in a container is that of locally perimeter minimizing clusters, originating with different terminology in [1], and presented in [15, Part IV] as follows. Given an open set \(\Omega\subset\mathbb{R}^{n+1}\), a locally perimeter minimizing cluster is a finite Lebesgue partition \(\{U_{i}\}_{i}\) of \(\Omega\) into sets of finite perimeter such that, for some \(r_{0}>0\),
\[\sum_{i}P(U_{i};B)\leq\sum_{i}P(V_{i};B) \tag{1.32}\]
whenever \(B\subset\subset\Omega\) is a ball with radius less than \(r_{0}\), and \(\{V_{i}\}_{i}\) is a Lebesgue partition of \(\Omega\) with \(V_{i}\Delta U_{i}\subset\subset B\) and \(|V_{i}|=|U_{i}|\) for every \(i\). The previously cited results of Almgren and Taylor [1, 16] imply that, up to modification of the \(U_{i}\)'s by sets of zero Lebesgue measure, when \(n=2\), \(K=\Omega\cap\bigcup_{i}\partial U_{i}\) is a closed subset of \(\Omega\) that is locally \(C^{1,\alpha}\)-diffeomorphic to a plane, a \(Y\)-cone, or a \(T\)-cone; moreover, the part of \(K\) that is a surface is actually smooth and each of its connected components has constant mean curvature. Similar results hold when \(n=1\) (by elementary methods) and when \(n\geq 3\) (by exploiting [12]).
The theory for the relaxed capillarity energy \(\mathcal{F}_{\rm bk}\) developed in this paper provides an option for modeling wet foams. Again based on the descriptions provided in [11, 13], the following seems to be a reasonable model for wet foams at equilibrium in a container. Given an open set \(\Omega\subset\mathbb{R}^{n+1}\) we model wet foams by introducing the class
\[\mathcal{K}_{\rm foam}\]
of those \((K,E)\in\mathcal{K}_{\rm B}\) such that, for some positive constants \(\Lambda_{0}\) and \(r_{0}\),
\[\mathcal{F}_{\rm bk}(K,E;B)\leq\mathcal{F}_{\rm bk}(K^{\prime},E^{\prime};B)+ \,\Lambda_{0}\,|E\Delta E^{\prime}| \tag{1.33}\]
whenever \(B\) is a ball compactly contained in \(\Omega\) and with radius less than \(r_{0}\), and \((K^{\prime},E^{\prime})\in\mathcal{K}_{\rm B}\) is such that \((K\Delta K^{\prime})\cup(E\Delta E^{\prime})\subset\subset B\) and there are finite Lebesgue partitions \(\{U_{i}\}_{i}\) and \(\{U_{i}^{\prime}\}_{i}\) of \(B\) induced, respectively, by \(K\cup E^{(1)}\) and by \(K^{\prime}\cup(E^{\prime})^{(1)}\), such that \(|U_{i}|=|U_{i}^{\prime}|\) for every \(i\). Notice that the term \(\Lambda_{0}\,|E\Delta E^{\prime}|\) in (1.33) allows for the inclusion of energy perturbations due to gravity or other forces. Lemma 7.1 will clarify that by taking \((K,E)\in\mathcal{K}_{\rm foam}\) with \(|E|=0\) we obtain a slightly more general notion of dry foam than the one proposed in (1.32).
**Theorem 1.7** (Equilibrium along transition lines for wet foams (Section 8)).: _If \(\Omega\subset\mathbb{R}^{n+1}\) is open and \((K_{*},E_{*})\in\mathcal{K}_{\rm foam}\), then there is \((K,E)\in\mathcal{K}\cap\mathcal{K}_{\rm foam}\) such that \(K\) is \(\mathcal{H}^{n}\)-equivalent to \(K_{*}\), \(E\) is Lebesgue equivalent to \(E_{*}\), \(K\cap E^{(1)}=\varnothing\), and such that, for every ball \(B\subset\subset\Omega\), the open connected components \(\{U_{i}\}_{i}\) of \(B\setminus(K\cup E)\) are such that each \(U_{i}\) is (Lebesgue equivalent to an) open set with \(C^{1,\alpha}\)-boundary in \(B\setminus\Sigma\). Here \(\Sigma\) is a closed subset of \(\Omega\) with \(\Sigma=\varnothing\) if \(1\leq n\leq 6\), \(\Sigma\) locally finite in \(\Omega\) if \(n=7\), and \(\mathcal{H}^{s}(\Sigma)=0\) for every \(s>n-7\) if \(n\geq 8\)._
### Organization of the paper
The sections of the paper contain the proofs of the main theorems listed above, as already specified in the statements. To these sections we add three appendices. In Appendix A, as already noted, we prove the equivalence of Definition A and Definition B. In Appendix B we prove that, under some regularity of \(\partial\Omega\), _every_ minimizing sequence of \(\Psi_{\rm bk}(v)\) converges to a minimizer, with no need for modifications at infinity: this is, strictly speaking, not needed to prove Theorem 1.5, but it is a result of independent conceptual interest, it will be crucial for the analysis presented in [12], and it is easily discussed here in light of the proof of Theorem 1.5. Finally, Appendix C contains an elementary lemma concerning the use of homotopic spanning in the plane which, to our knowledge, is not available in the literature.
### Acknowledgements
We thank Guido De Philippis, Darren King, Felix Otto, Antonello Scardicchio, Salvatore Stuvard, and Bozhidar Velichkov for several interesting discussions concerning these problems. FM has been supported by NSF Grant DMS-2247544. FM, MN, and DR have been supported by NSF Grant DMS-2000034 and NSF FRG Grant DMS-1854344. MN has been supported by NSF RTG Grant DMS-1840314.
### Notation
**Sets and measures:** We denote by \(B_{r}(x)\) (resp., \(B_{r}^{k}(x)\)) the open ball of center \(x\) and radius \(r\) in \(\mathbb{R}^{n+1}\) (resp., \(\mathbb{R}^{k}\)), and omit \((x)\) when \(x=0\). We denote by \({\rm cl}\,(X)\), \({\rm int}(X)\), and \(I_{r}(X)\) the closure, interior, and open \(r\)-neighborhood of \(X\subset\mathbb{R}^{k}\). We denote by \(\mathcal{L}^{n+1}\) and \(\mathcal{H}^{s}\) the Lebesgue measure and the \(s\)-dimensional Hausdorff measure on \(\mathbb{R}^{n+1}\), \(s\in[0,n+1]\). If \(E\subset\mathbb{R}^{k}\), we set \(|E|=\mathcal{L}^{k}(E)\) and \(\omega_{k}=|B_{1}^{k}|\). We denote by \(E^{(t)}\), \(t\in[0,1]\), the **points of density \(t\)** of a Borel set \(E\subset\mathbb{R}^{n+1}\), so that \(E\) is \(\mathcal{L}^{n+1}\)-equivalent to \(E^{(1)}\), and, for every pair of Borel sets \(E,F\subset\mathbb{R}^{n+1}\),
\[(E\cup F)^{{(0)}}=E^{{(0)}}\cap F^{{(0)}}\,. \tag{1.34}\]
We denote by \(\partial^{e}E=\mathbb{R}^{n+1}\setminus(E^{(0)}\cup E^{(1)})\) the **essential boundary** of \(E\). Given Borel sets \(E_{j},E\subset\Omega\) we write
\[E_{j}\to E\,,\qquad E_{j}\stackrel{{\rm loc}}{{\to}}E\,,\]
when, respectively, \(|E_{j}\Delta E|\to 0\) or \(|(E_{j}\Delta E)\cap\Omega^{\prime}|\to 0\) for every \(\Omega^{\prime}\subset\subset\Omega\), as \(j\to\infty\). Given a Radon measure \(\mu\) on \(\mathbb{R}^{n+1}\), the \(k\)-dimensional lower density of \(\mu\) is the Borel function \(\theta_{*}^{k}(\mu):\mathbb{R}^{n+1}\to[0,\infty]\) defined by
\[\theta_{*}^{k}(\mu)(x)=\liminf_{r\to 0^{+}}\frac{\mu(\operatorname{cl} \left(B_{r}(x)\right))}{\omega_{k}r^{k}}\,.\]
We repeatedly use the fact that, if \(\theta_{*}^{k}(\mu)\geq\lambda\) on some Borel set \(K\) and for some \(\lambda\geq 0\), then \(\mu\geq\lambda\,\mathcal{H}^{k}\llcorner K\); see, e.g. [13, Theorem 6.4].
**Rectifiable sets:** Given an integer \(0\leq k\leq n+1\), a Borel set \(S\subset\mathbb{R}^{n+1}\) is **locally**\(\mathcal{H}^{k}\)**-rectifiable** in an open set \(\Omega\) if \(S\) is locally \(\mathcal{H}^{k}\)-finite in \(\Omega\) and \(S\) can be covered, modulo \(\mathcal{H}^{k}\)-null sets, by a countable union of Lipschitz images of \(\mathbb{R}^{k}\) in \(\mathbb{R}^{n+1}\). We say that \(S\) is **purely \(\mathcal{H}^{k}\)-unrectifiable** if \(\mathcal{H}^{k}(S\cap M)=0\) whenever \(M\) is a Lipschitz image of \(\mathbb{R}^{k}\) into \(\mathbb{R}^{n+1}\). Finally, we recall that if \(S\) is a locally \(\mathcal{H}^{k}\)-finite set in \(\Omega\), then there is a pair \((\mathcal{R}(S),\mathcal{P}(S))\) of Borel sets, uniquely determined modulo \(\mathcal{H}^{k}\)-null sets, and that are thus called, with a slight abuse of language, _the_**rectifiable part** and _the_**unrectifiable part** of \(S\), so that \(\mathcal{R}(S)\) is locally \(\mathcal{H}^{k}\)-rectifiable in \(\Omega\), \(\mathcal{P}(S)\) is purely \(\mathcal{H}^{k}\)-unrectifiable, and \(S=\mathcal{R}(S)\cup\mathcal{P}(S)\); see, e.g. [10, 13.1].
**Sets of finite perimeter:** If \(E\) is a Borel set in \(\mathbb{R}^{n+1}\) and \(D1_{E}\) is the distributional derivative of the characteristic function of \(E\), then we set \(\mu_{E}=-D1_{E}\). If \(A\) is the _largest open set_ of \(\mathbb{R}^{n+1}\) such that \(\mu_{E}\) is a Radon measure in \(A\) (of course it could be \(A=\varnothing\)), then \(E\) is of locally finite perimeter in \(A\) and the reduced boundary \(\partial^{*}E\) of \(E\) is defined as the set of those \(x\in A\cap\operatorname{spt}\mu_{E}\) such that \(\mu_{E}(B_{r}(x))/|\mu_{E}|(B_{r}(x))\) has a limit \(\nu_{E}(x)\in\mathbb{S}^{n}\) as \(r\to 0^{+}\). Moreover, we have the general identity (see [13, (12.12) & pag. 168])
\[A\cap\operatorname{cl}\left(\partial^{*}E\right)=A\cap\operatorname{spt}\mu_{ E}=\left\{x\in A:0<|E\cap B_{r}(x)|<|B_{r}(x)|\ \forall r>0\right\}\subset A\cap\partial E\,. \tag{1.35}\]
By De Giorgi's rectifiability theorem, \(\partial^{*}E\) is locally \(\mathcal{H}^{n}\)-rectifiable in \(A\), \(\mu_{E}=\nu_{E}\,\mathcal{H}^{n}\llcorner(A\cap\partial^{*}E)\) on \(A\), and \(\partial^{*}E\subset A\cap E^{(1/2)}\subset A\cap\partial^{e}E\); moreover,
\[(E-x)/r\stackrel{{\rm loc}}{{\to}}H_{E,x}:=\left\{y\in\mathbb{R}^ {n+1}:y\cdot\nu_{E}(x)<0\right\},\qquad\text{as $r\to 0^{+}$}\,. \tag{1.36}\]
By a result of Federer,
\[A\text{ is $\mathcal{H}^{n}$-contained in $E^{{{(0)}}}\cup E^{{{(1)}}}\cup \partial^{*}E$}\,; \tag{1.37}\]
in particular, \(\partial^{*}E\) is \(\mathcal{H}^{n}\)-equivalent to \(A\cap\partial^{e}E\), a fact frequently used in the following. By _Federer's criterion for finite perimeter_, if \(\Omega\) is open and \(E\) is a Borel set, then
\[\mathcal{H}^{n}(\Omega\cap\partial^{e}E)<\infty\qquad\Rightarrow\qquad E\text { is of finite perimeter in $\Omega$}\,, \tag{1.38}\]
see [10, 4.5.11]. If \(E\) and \(F\) are of locally finite perimeter in \(\Omega\) open, then so are \(E\cup F\), \(E\cap F\), and \(E\setminus F\), and by [13, Theorem 16.3], we have
\[\Omega\cap\partial^{*}(E\cup F)\stackrel{{\mathcal{H}^{n}}}{{=}}\Omega\cap\left\{\left(E^{(0)}\cap\partial^{*}F\right)\cup\left(F^{(0)}\cap\partial^{*}E\right)\cup\left\{\nu_{E}=\nu_{F}\right\}\right\}, \tag{1.39}\]
\[\Omega\cap\partial^{*}(E\cap F)\stackrel{{\mathcal{H}^{n}}}{{=}}\Omega\cap\left\{\left(E^{(1)}\cap\partial^{*}F\right)\cup\left(F^{(1)}\cap\partial^{*}E\right)\cup\left\{\nu_{E}=\nu_{F}\right\}\right\}, \tag{1.40}\]
\[\Omega\cap\partial^{*}(E\setminus F)\stackrel{{\mathcal{H}^{n}}}{{=}}\Omega\cap\left\{\left(E^{(1)}\cap\partial^{*}F\right)\cup\left(F^{(0)}\cap\partial^{*}E\right)\cup\left\{\nu_{E}=-\nu_{F}\right\}\right\}, \tag{1.41}\]
where \(\left\{\nu_{E}=\pm\nu_{F}\right\}:=\left\{x\in\partial^{*}E\cap\partial^{*}F: \nu_{E}(x)=\pm\nu_{F}(x)\right\}\). By exploiting Federer's theorem (1.37), (1.39), (1.40), and (1.41) we can also deduce (the details are left to the reader)
\[(E\cap F)^{(0)}\stackrel{{\mathcal{H}^{n}}}{{=}}E^{(0)}\cup F^{(0)}\cup\left\{\nu_{E}=-\nu_{F}\right\}, \tag{1.42}\]
\[(E\setminus F)^{(0)}\stackrel{{\mathcal{H}^{n}}}{{=}}E^{(0)}\cup F^{(1)}\cup\left\{\nu_{E}=\nu_{F}\right\}. \tag{1.43}\]
Finally, combining (1.39), (1.41), and (1.43), we find
\[\partial^{*}(E\Delta F)\stackrel{{\mathcal{H}^{n}}}{{=}}(\partial^{* }E)\Delta(\partial^{*}F)\,. \tag{1.44}\]
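Since the details are left to the reader, we record one possible way to verify (1.44) (a sketch under the standing assumptions, using only (1.36) and (1.37)): \(\mathcal{H}^{n}\)-a.e. \(x\in\partial^{*}E\setminus\partial^{*}F\) lies in \(F^{(0)}\cup F^{(1)}\) by (1.37), and correspondingly

\[|(E\Delta F)\cap B_{r}(x)|=|E\cap B_{r}(x)|+{\rm o}(r^{n+1})\quad\text{or}\quad|(E\Delta F)\cap B_{r}(x)|=|B_{r}(x)|-|E\cap B_{r}(x)|+{\rm o}(r^{n+1})\,,\]

so that \(x\in(E\Delta F)^{(1/2)}\subset\partial^{e}(E\Delta F)\); symmetrically for \(\partial^{*}F\setminus\partial^{*}E\). On \(\partial^{*}E\cap\partial^{*}F\) one has \(\nu_{E}=\pm\nu_{F}\) \(\mathcal{H}^{n}\)-a.e. (the approximate tangent planes of \(\partial^{*}E\) and \(\partial^{*}F\) agree \(\mathcal{H}^{n}\)-a.e. on their intersection), and by (1.36) the blow-up of \(E\Delta F\) at such a point is either \(\varnothing\) (if \(\nu_{E}=\nu_{F}\)) or \(\mathbb{R}^{n+1}\) minus a hyperplane (if \(\nu_{E}=-\nu_{F}\)), so that such points do not belong to \(\partial^{e}(E\Delta F)\). Since \(\partial^{e}(E\Delta F)\subset\partial^{e}E\cup\partial^{e}F\), combining these observations with Federer's theorem (1.37) gives (1.44).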
**Partitions:** Given a Radon measure \(\mu\) on \(\mathbb{R}^{n+1}\) and a Borel set \(U\subset\mathbb{R}^{n+1}\) we say that \(\{U_{i}\}_{i}\) is a \(\mu\)**-partition of \(U\)** if \(\{U_{i}\}_{i}\) is an at most countable family of Borel subsets of \(U\) such that
\[\mu\Big{(}U\setminus\bigcup_{i}U_{i}\Big{)}=0\,,\qquad\mu(U_{i}\cap U_{j})=0 \quad\forall i,j\,; \tag{1.45}\]
and we say that \(\{U_{i}\}_{i}\) is a **monotone \(\mu\)-partition** if, in addition to (1.45), we also have \(\mu(U_{i})\geq\mu(U_{i+1})\) for every \(i\). When \(\mu=\mathcal{L}^{n+1}\) we replace "\(\mu\)-partition" with "Lebesgue partition". When \(U\) is a set of finite perimeter in \(\mathbb{R}^{n+1}\), we say that \(\{U_{i}\}_{i}\) is a **Caccioppoli partition** of \(U\) if \(\{U_{i}\}_{i}\) is a Lebesgue partition of \(U\) and each \(U_{i}\) is a set of finite perimeter in \(\mathbb{R}^{n+1}\): in this case we have
\[\partial^{*}U\stackrel{{\mathcal{H}^{n}}}{{\subset}}\bigcup_{i} \partial^{*}U_{i}\,,\qquad 2\,\mathcal{H}^{n}\Big{(}U^{{(1)}} \cap\bigcup_{i}\partial^{*}U_{i}\Big{)}=\sum_{i}\mathcal{H}^{n}(U^{{(1)}} \cap\partial^{*}U_{i})\,, \tag{1.46}\]
see, e.g., [1, Section 4.4]; moreover,
\[1\leq\#\Big{\{}i:x\in\partial^{*}U_{i}\Big{\}}\leq 2\,,\qquad\forall x\in \bigcup_{i}\partial^{*}U_{i}\,, \tag{1.47}\]
thanks to (1.36) and to the fact that there cannot be three disjoint half-spaces in \(\mathbb{R}^{n+1}\).
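As a quick sanity check of (1.46) and (1.47) (an illustration added here, playing no role in the sequel): let \(n=1\), \(U=B_{1}\subset\mathbb{R}^{2}\), and let \(\{U_{1},U_{2}\}\) be the Caccioppoli partition of \(U\) into the open upper and lower half-disks. Then \(U^{(1)}\cap\partial^{*}U_{1}=U^{(1)}\cap\partial^{*}U_{2}=(-1,1)\times\{0\}\), so that

\[\sum_{i=1,2}\mathcal{H}^{1}(U^{(1)}\cap\partial^{*}U_{i})=2+2=4=2\,\mathcal{H}^{1}\Big{(}U^{(1)}\cap\bigcup_{i}\partial^{*}U_{i}\Big{)}\,,\]

in agreement with (1.46), while every point of the open diameter belongs to exactly two reduced boundaries, in agreement with (1.47).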
## 2. Induced essential partitions (Theorem 1.2)
Given a Borel set \(S\), we say that a Lebesgue partition \(\{U_{i}\}_{i}\) of \(U\) is **induced by**\(S\) if, for each \(i\),
\[U^{{(1)}}\cap\partial^{e}U_{i}\text{ is $\mathcal{H}^{n}$-contained in $S$}\,. \tag{2.1}\]
We say that \(\{U_{i}\}_{i}\) is _an_ **essential partition of \(U\) induced by**\(S\) if it is a Lebesgue partition of \(U\) induced by \(S\) such that, for each \(i\),
\[S\text{ does not essentially disconnect $U_{i}$}\,. \tag{2.2}\]
The next theorem, which expands the statement of Theorem 1.2, shows that \(\mathcal{H}^{n}\)-finite sets uniquely determine induced essential partitions on sets of finite perimeter.
**Theorem 2.1** (Induced essential partitions).: _If \(U\subset\mathbb{R}^{n+1}\) is a bounded set of finite perimeter and \(S\subset\mathbb{R}^{n+1}\) is a Borel set with \(\mathcal{H}^{n}(S\cap U^{{(1)}})<\infty\), then there exists an essential partition \(\{U_{i}\}_{i}\) of \(U\) induced by \(S\) such that each \(U_{i}\) is a set of finite perimeter and_
\[\sum_{i}P(U_{i};U^{{(1)}})\leq 2\,\mathcal{H}^{n}(S\cap U^{{(1)}})\,. \tag{2.3}\]
_Moreover:_ **(a):** _if \(S^{*}\) is a Borel set with \(\mathcal{H}^{n}(S^{*}\cap U^{(1)})<\infty\), \(S^{*}\) is \(\mathcal{H}^{n}\)-contained in \(S\), \(\{V_{j}\}_{j}\) is a Lebesgue partition\({}^{8}\) of \(U\) induced by \(S^{*}\), and \(\{U_{i}\}_{i}\) is the essential partition of \(U\) induced by \(S\), then_
Footnote 8: Notice that here we are not requiring that \(S^{*}\) does not essentially disconnect each \(V_{j}\), i.e., we are not requiring that \(\{V_{j}\}_{j}\) is an essential partition induced by \(S^{*}\). This detail will be useful in the applications of this theorem.
\[\bigcup_{j}\,\partial^{*}V_{j}\text{ is $\mathcal{H}^{n}$-contained in $\bigcup_{i}\,\partial^{*}U_{i}$}\,; \tag{2.4}\]
**(b):** _if \(S\) and \(S^{*}\) are \(\mathcal{H}^{n}\)-finite sets in \(U^{(1)}\), and either\({}^{9}\) \(S^{*}=\mathcal{R}(S)\) or \(S^{*}\) is \(\mathcal{H}^{n}\)-equivalent to \(S\), then \(S\) and \(S^{*}\) induce \(\mathcal{L}^{n+1}\)-equivalent essential partitions of \(U\)._
Footnote 9: Here \(\mathcal{R}(S)\) denotes the \(\mathcal{H}^{n}\)-rectifiable part of \(S\).
Proof of Theorem 1.2.: Immediate consequence of Theorem 2.1.
The proof of Theorem 2.1 follows the main lines of the proof of [1, Theorem 1], which is indeed the case \(S=\varnothing\) of Theorem 2.1. We premise to this proof two lemmas that will find repeated applications in later sections too. To introduce the first lemma, we notice that, while it is evident that if \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) and \(S\) is \(\mathcal{H}^{n}\)-contained in some Borel set \(S^{*}\), then \(S^{*}\) is also \(\mathcal{C}\)-spanning \(\mathbf{W}\), it is not immediately clear whether the rectifiable part \(\mathcal{R}(S)\) of \(S\) (which may not be \(\mathcal{H}^{n}\)-equivalent to \(S\)) retains the \(\mathcal{C}\)-spanning property.
**Lemma 2.2**.: _If \(\mathbf{W}\) is compact, \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\), \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), and \(\mathcal{H}^{n}\llcorner S\) is a Radon measure in \(\Omega\), then \(\mathcal{R}(S)\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\). Moreover, the sets \(T_{1}\) and \(T_{2}\) appearing in (1.12) are sets of finite perimeter._
Proof.: We make the following _claim_: if \(T\) is open, \(T^{(1)}\stackrel{{\mathcal{H}^{n}}}{{\subset}}T\), \(\mathcal{H}^{n}\llcorner Z\) is a Radon measure in an open neighborhood of \(T\), and \(Z\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\), then
\[T_{1}\text{ and }T_{2}\text{ are of locally finite perimeter in }T\,, \tag{2.5}\] \[\mathcal{R}(Z)\text{ essentially disconnects }T\text{ into }\{T_{1},T_{2}\}\,. \tag{2.6}\]
Indeed: since \(T\) is open, we trivially have \(T\subset T^{(1)}\), which combined with the assumption \(T^{(1)}\stackrel{{\mathcal{H}^{n}}}{{\subset}}T\) shows that \(T\) is \(\mathcal{H}^{n}\)-equivalent to \(T^{(1)}\). Taking also into account that \(Z\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\), we thus find
\[T\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\stackrel{{\mathcal{H }^{n}}}{{=}}T^{{(1)}}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2} \stackrel{{\mathcal{H}^{n}}}{{\subset}}Z\cap T^{{(1)}} \stackrel{{\mathcal{H}^{n}}}{{\subset}}Z\cap T\,.\]
By Federer's criterion (1.38) and the \(\mathcal{H}^{n}\)-finiteness of \(Z\) in an open neighborhood of \(T\) we deduce (2.5). By Federer's theorem (1.37), \(\partial^{e}T_{i}\) is \(\mathcal{H}^{n}\)-equivalent to \(\partial^{*}T_{i}\) in \(T\), which combined with the \(\mathcal{H}^{n}\)-equivalence of \(T^{(1)}\) and \(T\) gives
\[\partial^{e}T_{1}\cap\partial^{e}T_{2}\cap T^{{(1)}} \stackrel{{\mathcal{H}^{n}}}{{=}}\partial^{*}T_{1}\cap \partial^{*}T_{2}\cap T\,.\]
Since \(\partial^{*}T_{1}\cap\partial^{*}T_{2}\cap T\) is \(\mathcal{H}^{n}\)-rectifiable and \(\partial^{e}T_{1}\cap\partial^{e}T_{2}\cap T^{{(1)}} \stackrel{{\mathcal{H}^{n}}}{{\subset}}Z\), we conclude that \(\mathcal{H}^{n}(\partial^{e}T_{1}\cap\partial^{e}T_{2}\cap T^{{(1)}}\cap \mathcal{P}(Z))=0\). Hence,
\[\partial^{e}T_{1}\cap\partial^{e}T_{2}\cap T^{{(1)}} \stackrel{{\mathcal{H}^{n}}}{{\subset}}\mathcal{R}(Z)\,,\]
and (2.6) follows.
To prove the lemma: let \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\), and let \(J\subset\mathbb{S}^{1}\) be a set of full \(\mathcal{H}^{1}\)-measure such that (A.1) holds for every \(s\in J\), so that, for every \(s\in J\) one finds that for \(\mathcal{H}^{n}\)-a.e. \(x\in T[s]\) there is a partition \(\{T_{1},T_{2}\}\) of \(T\) with \(x\in\partial^{e}T_{1}\cap\partial^{e}T_{2}\) and such that \(S\cup T[s]\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\). By applying the claim with \(Z=S\cup T[s]\), we see that \(\mathcal{R}(S\cup T[s])\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\), and that \(T_{1}\) and \(T_{2}\) have locally finite perimeter in \(T\). On noticing that \(\mathcal{R}(S\cup T[s])\) is \(\mathcal{H}^{n}\)-equivalent to \(\mathcal{R}(S)\cup T[s]\), we conclude the proof.
The second lemma is just a simple compactness statement for finite perimeter partitions.
**Lemma 2.3** (Compactness for partitions by sets of finite perimeter).: _If \(U\) is a bounded open set and \(\{\{U_{i}^{j}\}_{i=1}^{\infty}\}_{j=1}^{\infty}\) is a sequence of Lebesgue partitions of \(U\) into sets of finite perimeter such that_
\[\sup_{j}\,\sum_{i=1}^{\infty}P(U_{i}^{j})<\infty\,, \tag{2.7}\]
_then, up to extracting a subsequence, there exists a Lebesgue partition \(\{U_{i}\}_{i\in\mathbb{N}}\) of \(U\) such that for every \(i\) and every \(A\subset U\) open,_
\[\lim_{j\to\infty}|U_{i}^{j}\Delta U_{i}|=0\,,\qquad P(U_{i};A)\leq\liminf_{j \to\infty}P(U_{i}^{j};A)\,. \tag{2.8}\]
_Moreover,_
\[\lim_{i\to\infty}\limsup_{j\to\infty}\sum_{k=i+1}^{\infty}|U_{k}^{j}|^{s}=0\,, \qquad\forall s\in\Big{(}\frac{n}{n+1},1\Big{)}\,. \tag{2.9}\]
Proof.: Up to a relabeling we can assume each \(\{U_{i}^{j}\}_{i}\) is monotone. By (2.7) and the boundedness of \(U\), a diagonal argument combined with standard lower semicontinuity and compactness properties of sets of finite perimeter implies that we can find a not relabeled subsequence in \(j\) and a family \(\{U_{i}\}_{i}\) of Borel subsets of \(U\) with \(|U_{i}|\geq|U_{i+1}|\) for every \(i\) and \(|U_{i}\cap U_{i^{\prime}}|=0\) for every \(i\neq i^{\prime}\), such that (2.8) holds. We are thus left to prove (2.9) and
\[\Big{|}U\setminus\bigcup_{i=1}^{\infty}U_{i}\Big{|}=0\,. \tag{2.10}\]
We start by noticing that for each \(i\) there is \(J(i)\in\mathbb{N}\) such that \(|U_{k}^{j}|\leq 2\,|U_{k}|\) for every \(j\geq J(i)\) and \(1\leq k\leq i\). Therefore if \(k\geq i+1\) and \(j\geq J(i)\) we find \(|U_{k}^{j}|\leq|U_{i}^{j}|\leq 2\,|U_{i}|\), so that, if \(j\geq J(i)\),
\[\sum_{k=i+1}^{\infty}|U_{k}^{j}|^{s}\leq C(n)\,\sum_{k=i+1}^{ \infty}P(U_{k}^{j})|U_{k}^{j}|^{s-(n/(n+1))}\leq C\,|U_{i}|^{s-(n/(n+1))}\,, \tag{2.11}\]
where we have also used the isoperimetric inequality and (2.7). Since \(|U_{i}|\to 0\) as \(i\to\infty\) (indeed, \(\sum_{i}|U_{i}|\leq|U|<\infty\)), (2.11) implies (2.9). To prove (2.10), we notice that if we set \(M=|U\setminus\cup_{i}U_{i}|\), and we assume that \(M\) is positive, then up to further increasing the value of \(J(i)\) we can require that
\[|U_{k}^{j}|\leq|U_{k}|+\frac{M}{2^{k+2}}\,,\qquad\forall 1\leq k \leq i\,,\,\forall j\geq J(i)\,, \tag{2.12}\]
(in addition to \(|U_{k}^{j}|\leq 2\,|U_{k}|\)). By (2.12) we obtain that, if \(j\geq J(i)\), then
\[|U|-\sum_{k=i+1}^{\infty}|U_{k}^{j}|=\sum_{k=1}^{i}|U_{k}^{j}|\leq\sum_{k=1}^{i}\Big{(}|U_{k}|+\frac{M}{2^{k+2}}\Big{)}\leq|U|-M+\sum_{k=1}^{i}\frac{M}{2^{k+2}}\leq|U|-\frac{M}{4}\,. \tag{2.13}\]
Rearranging (2.13) and using the sub-additivity of \(z\mapsto z^{s}\) we conclude that
\[(M/4)^{s}\leq\sum_{k=i+1}^{\infty}|U_{k}^{j}|^{s}\,.\]
We obtain a contradiction with \(M>0\) by letting \(i\to\infty\) and by using (2.9).
Proof of Theorem 2.1.: Let \(\mathcal{U}(S)\) be the set of all the monotone Lebesgue partitions of \(U\) induced by \(S\). We notice that \(\mathcal{U}(S)\neq\varnothing\), since \(\mathcal{U}(S)\) contains the trivial partition with \(U_{1}=U\) and \(U_{i}=\varnothing\) if \(i\geq 2\). If \(U_{i}\in\{U_{i}\}_{i}\) for \(\{U_{i}\}_{i}\in\mathcal{U}(S)\), then \(\partial^{e}U_{i}\) is \(\mathcal{H}^{n}\)-contained in \(\partial^{e}U\cup(U^{(1)}\cap S)\), which has finite \(\mathcal{H}^{n}\)-measure since \(U\) has finite perimeter and \(\mathcal{H}^{n}(S\cap U^{(1)})<\infty\); it then follows from Federer's criterion (1.38) that \(U_{i}\) is a set of finite perimeter. We now fix \(s\in(n/(n+1),1)\), and consider a maximizing sequence \(\{\{U_{i}^{j}\}_{i}\}_{j}\) for
\[m=\sup\Big{\{}\sum_{i=1}^{\infty}|U_{i}|^{s}:\{U_{i}\}_{i}\in\mathcal{U}(S)\Big{\}}\,.\]
By standard arguments concerning reduced boundaries of disjoint sets of finite perimeter (see, e.g. [14, Chapter 16]), we deduce from (2.1) that for every \(j\),

\[\sum_{i=1}^{\infty}\mathcal{H}^{n}\llcorner\partial^{*}U_{i}^{j}=\sum_{i=1}^{\infty}\mathcal{H}^{n}\llcorner(\partial^{*}U_{i}^{j}\cap U^{(1)})+\sum_{i=1}^{\infty}\mathcal{H}^{n}\llcorner(\partial^{*}U_{i}^{j}\cap\partial^{*}U)\leq 2\,\mathcal{H}^{n}\llcorner(S\cap U^{(1)})+\mathcal{H}^{n}\llcorner\partial^{*}U\,. \tag{2.14}\]
Also, due to the sub-additivity of \(z\mapsto z^{s}\) and the general fact that \(\partial^{e}(A\cap B)\subset\partial^{e}A\cup\partial^{e}B\), we can refine \(\{U_{i}^{j}\}_{i}\) by replacing each \(U_{i}^{j}\) with the disjoint family
\[\left\{U_{i}^{j}\cap U_{k}^{\ell}:k\geq 1\,,1\leq\ell<j\right\},\]
thus obtaining a new sequence in \(\mathcal{U}(S)\) which is still maximizing for \(m\). As a consequence of this remark, we can assume without loss of generality that the considered maximizing sequence \(\{\{U_{i}^{j}\}_{i}\}_{j}\) for \(m\) has the additional property that
\[U\cap\bigcup_{i}\partial^{*}U_{i}^{j}\subset U\cap\bigcup_{i}\partial^{*}U_{i }^{j+1}\,,\qquad\forall j\,. \tag{2.15}\]
Thanks to (2.14) we can apply Lemma 2.3 and, up to extracting a subsequence in \(j\), we can find a Lebesgue partition \(\{U_{i}\}_{i\in\mathbb{N}}\) of \(U\) by sets of finite perimeter which satisfies (2.8) and (2.9). Moreover, after taking a subsequence, we may assume that \(\mathcal{H}^{n}\llcorner\partial^{*}U_{i}^{j}\stackrel{{*}}{{\rightharpoonup}}\mu_{i}\) for some Radon measures \(\mu_{i}\) such that \(\mathcal{H}^{n}\llcorner\partial^{*}U_{i}\leq\mu_{i}\) [13, Prop. 12.15]. Therefore, by (2.8), Federer's theorem for reduced boundaries, and by (2.1) for \(\{U_{i}^{j}\}_{i}\), we see that
\[\mathcal{H}^{n}\mathop{\hbox{\vrule height 6.0pt width 0.4pt depth 0.0pt\vrule heigh t 6.0pt width 0.4pt depth 0.0pt}}\nolimits(\partial^{*}U)+\sum_{i=1}^{\infty}\mathcal{H}^{n} \mathop{\hbox{\vrule height 6.0pt width 0.4pt depth 0.0pt\vrule heigh t 6.0pt width 0.4pt depth 0.0pt}}\nolimits(\partial^{*}U_{i}\cap U^{(1)})=\sum_{i=1}^{\infty} \mathcal{H}^{n}\mathop{\hbox{\vrule height 6.0pt width 0.4pt depth 0.0pt\vrule heigh t 6.0pt width 0.4pt depth 0.0pt}}\nolimits(\partial^{*}U_{i})\leq \mathrm{w}^{*}\lim_{j\to\infty}\sum_{i=1}^{\infty}\mathcal{H}^{n}\mathop{\hbox{ \vrule height 6.0pt width 0.4pt depth 0.0pt\vrule height 6.0pt width 0.4pt depth 0.0pt}} \nolimits(\partial^{*}U_{i}^{j})\]
\[=\mathrm{w}^{*}\lim_{j\to\infty}\mathcal{H}^{n}\mathop{\hbox{\vrule height 6.0pt width 0.4pt depth 0.0pt\vrule heigh t 6.0pt width 0.4pt depth 0.0pt}}\nolimits(\partial^{*}U)+\sum_{i=1}^{\infty}\mathcal{H}^{n} \mathop{\hbox{\vrule height 6.0pt width 0.4pt depth 0.0pt\vrule heigh t 6.0pt width 0.4pt depth 0.0pt}}\nolimits(\partial^{*}U_{i}^{j}\cap U^{(1)})\leq \mathcal{H}^{n}\mathop{\hbox{\vrule height 6.0pt width 0.4pt depth 0.0pt\vrule heigh t 6.0pt width 0.4pt depth 0.0pt}}\nolimits(\partial^{*}U)+2\mathcal{H}^{n}\mathop{\hbox{ \vrule height 6.0pt width 0.4pt depth 0.0pt\vrule heigh t 6.0pt width 0.4pt depth 0.0pt}}\nolimits(S\cap U^{(1)})\,.\]
By subtracting \(\mathcal{H}^{n}\mathop{\hbox{\vrule height 6.0pt width 0.4pt depth 0.0pt\vrule heigh t 6.0pt width 0.4pt depth 0.0pt}}\nolimits(\partial^{*}U)\) from both sides, we deduce (2.3).
We now show, first, that \(\{U_{i}\}_{i}\in\mathcal{U}(S)\) (i.e., we check the validity of (2.1) on \(\{U_{i}\}_{i}\)), and then that \(S\) does not essentially disconnect any of the \(U_{i}\). This will complete the proof of the first part of the statement.
To prove that \(U^{(1)}\cap\partial^{e}U_{i}\stackrel{{\mathcal{H}^{n}}}{{ \subset}}S\), let us introduce the \(\mathcal{H}^{n}\)-rectifiable set \(S_{0}\) defined by
\[S_{0}=U^{(1)}\cap\bigcup_{i,j}\partial^{*}U_{i}^{j}\,. \tag{2.16}\]
Since \(\{U_{i}^{j}\}_{i}\in\mathcal{U}(S)\), \(S_{0}\) is contained in \(S\) modulo \(\mathcal{H}^{n}\)-null sets. Therefore, in order to prove (2.1) it will be enough to show that
\[U^{(1)}\cap\partial^{*}U_{i}\stackrel{{\mathcal{H}^{n}}}{{ \subset}}S_{0}\,,\qquad\forall i\,. \tag{2.17}\]
Should this not be the case, we would have \(\mathcal{H}^{n}(U^{(1)}\cap\partial^{*}U_{i}\setminus S_{0})>0\) for some \(i\). We could thus pick \(x\in U^{(1)}\cap\partial^{*}U_{i}\) such that
\[\theta^{n}\big(\mathcal{H}^{n}\,\mathsf{L}\,(U^{(1)}\cap\partial^{*}U_{i}\setminus S_{0})\big)(x)=1\,. \tag{2.18}\]
Since \(\theta^{n}(\mathcal{H}^{n}\,\mathsf{L}\,\partial^{*}U_{i})(x)=1\) and \(S_{0}\subset U^{(1)}\) this implies \(\mathcal{H}^{n}(S_{0}\cap B_{r}(x))=\mathrm{o}(r^{n})\), while \(\partial^{*}U_{i}\subset U_{i}^{(1/2)}\) gives \(|U_{i}\cap B_{r}(x)|=(\omega_{n+1}/2)\,r^{n+1}+\mathrm{o}(r^{n+1})\). Therefore, given \(\delta>0\) we can find \(r>0\) such that
\[\mathcal{H}^{n}(S_{0}\cap B_{r}(x))<\delta\,r^{n}\,,\qquad\min\left\{|U_{i} \cap B_{r}(x)|,|U_{i}\setminus B_{r}(x)|\right\}\geq\left(\frac{\omega_{n+1}} {2}-\delta\right)r^{n+1}\,,\]
and then exploit the relative isoperimetric inequality and (2.8) to conclude that
\[c(n)\left[\left(\frac{\omega_{n+1}}{2}-\delta\right)r^{n+1}\right]^ {n/(n+1)} \leq P(U_{i};B_{r}(x))\leq\liminf_{j\to\infty}P(U_{i}^{j};B_{r}(x))\] \[\leq \mathcal{H}^{n}(S_{0}\cap B_{r}(x))\leq\delta\,r^{n}\,,\]
where in the next to last inequality we have used the definition (2.16) of \(S_{0}\). Choosing \(\delta>0\) small enough we reach a contradiction, thus deducing that \(\{U_{i}\}_{i}\in\mathcal{U}(S)\).
Taking into account the subadditivity of \(z\mapsto z^{s}\), in order to prove that \(S\) does not essentially disconnect any \(U_{i}\) it is sufficient to show that \(\{U_{i}\}_{i}\) is a maximizer of \(m\). To see this, we notice that \(|U_{i}^{j}\Delta U_{i}|\to 0\) as \(j\to\infty\) implies
\[m=\lim_{j\to\infty}\sum_{i=1}^{k}|U_{i}^{j}|^{s}+\sum_{i=k+1}^{\infty}|U_{i}^{j }|^{s}=\sum_{i=1}^{k}|U_{i}|^{s}+\lim_{j\to\infty}\sum_{i=k+1}^{\infty}|U_{i}^{j }|^{s}\,,\]
so that, letting \(k\to\infty\) and exploiting (2.9), we conclude that
\[m=\sum_{i=1}^{\infty}|U_{i}|^{s}\,. \tag{2.19}\]
This completes the proof of the first part of the statement (existence of essential partitions).
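We also record, as a sanity check not strictly needed above, the mechanism by which the maximality of \(m\) rules out essential disconnection: if \(S\) essentially disconnected some \(U_{i}\) into \(\{A,B\}\) with \(|A|,|B|>0\), then replacing \(U_{i}\) with \(A\) and \(B\) would produce an element of \(\mathcal{U}(S)\) with a strictly larger value of \(\sum_{i}|U_{i}|^{s}\), since for \(s\in(0,1)\) and \(a,b>0\)
\[(a+b)^{s}<a^{s}+b^{s}\,,\]
by strict concavity (\(t^{s}>t\) for \(t\in(0,1)\)), thus contradicting (2.19).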
Let now \(S\), \(S^{*}\), \(\{U_{i}\}_{i}\), and \(\{U_{j}^{*}\}_{j}\) be as in statement (a) - that is, \(S^{*}\) is \(\mathcal{H}^{n}\)-contained in \(S\), \(\{U_{i}\}_{i}\) is an essential partition of \(U\) induced by \(S\), and \(\{U_{j}^{*}\}_{j}\) is a Lebesgue partition of \(U\) induced by \(S^{*}\) - and set \(Z=\cup_{i}\partial^{*}U_{i}\) and \(Z^{*}=\cup_{j}\partial^{*}U_{j}^{*}\). Arguing by contradiction with (2.4), let us assume \(\mathcal{H}^{n}(Z^{*}\setminus Z)>0\). By the definition of Lebesgue partition we have that \(Z\setminus U^{(1)}\) and \(Z^{*}\setminus U^{(1)}\) are both \(\mathcal{H}^{n}\)-equivalent to \(\partial^{*}U\). Therefore we have \(\mathcal{H}^{n}((Z^{*}\setminus Z)\cap U^{(1)})>0\). Since \(U^{(1)}\) is \(\mathcal{H}^{n}\)-equivalent to the union of the \(\{U_{i}^{(1)}\cup\partial^{*}U_{i}\}_{i\in I}\) we can find \(i\in I\) and \(j\in J\) such that \(\mathcal{H}^{n}(U_{i}^{(1)}\cap\partial^{*}U_{j}^{*})>0\). This implies that both \((U_{i}\cap U_{j}^{*})^{(1/2)}\) and \((U_{i}\setminus U_{j}^{*})^{(1/2)}\) are non-empty, and thus that \(\{U_{j}^{*}\cap U_{i},U_{i}\setminus U_{j}^{*}\}\) is a non-trivial Borel partition of \(U_{i}\). Since
\[U_{i}^{{(1)}}\cap\partial^{e}(U_{j}^{*}\cap U_{i})\stackrel{{ \mathcal{H}^{n}}}{{\subset}}U^{{(1)}}\cap \partial^{*}U_{j}^{*}\stackrel{{\mathcal{H}^{n}}}{{\subset}}S^{*}\,,\]
we conclude that \(S^{*}\) essentially disconnects \(U_{i}\); since \(S^{*}\) is \(\mathcal{H}^{n}\)-contained in \(S\), this implies that \(S\) essentially disconnects \(U_{i}\), against the fact that \(S\) does not essentially disconnect any of the \(U_{i}\).
We finally prove statement (b). Let \(\{U_{i}\}_{i\in I}\) and \(\{U_{j}^{*}\}_{j\in J}\) be essential partitions of \(U\) induced by \(S\) and \(S^{*}\) respectively. Given \(i\in I\) such that \(|U_{i}|>0\), there is at least one \(j\in J\) such that \(|U_{i}\cap U_{j}^{*}|>0\). We _claim_ that \(|U_{i}\setminus U_{j}^{*}|=0\). Should this not be the case, \(\partial^{*}U_{j}^{*}\) would be essentially disconnecting \(U_{i}\), thus implying that \(S^{*}\) (which contains \(\partial^{*}U_{j}^{*}\)) is essentially disconnecting \(U_{i}\). Now, either because we are assuming that \(S^{*}\) is \(\mathcal{H}^{n}\)-equivalent to \(S\), or because we are assuming that \(S^{*}=\mathcal{R}(S)\) and we have Lemma 2.2, the fact that \(S^{*}\) is essentially disconnecting \(U_{i}\) implies that \(S\) is essentially disconnecting \(U_{i}\), a contradiction. Having proved the claim, for each \(i\in I\) with \(|U_{i}|>0\) there is a unique \(\sigma(i)\in J\) such that \(|U_{i}\Delta U_{\sigma(i)}^{*}|=0\). This completes the proof.
## 3. Homotopic spanning on generalized soap films (Theorem 1.3)
The goal of this section is to prove Theorem 1.3 and, in fact, to obtain an even more general result. Let us recall that the objective of Theorem 1.3 was to reformulate the homotopic spanning property for a Borel set \(S\), in the case when \(S\) is locally \(\mathcal{H}^{n}\)-finite, in terms of unions of boundaries of induced essential partitions. We shall actually need this kind of characterization also for sets \(S\) of the more general form \(S=K\cup E^{(1)}\), where \((K,E)\in\mathcal{K}_{\rm B}\). For an illustration of the proposed characterization of homotopic spanning on this type of sets, see Figure 3.1.
**Theorem 3.1** (Homotopic spanning for generalized soap films).: _If \(\mathbf{W}\subset\mathbb{R}^{n+1}\) is a closed set in \(\mathbb{R}^{n+1}\), \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\), \(K\) is a Borel set locally \(\mathcal{H}^{n}\)-finite in \(\Omega\), and \(E\) is of locally finite perimeter in \(\Omega\) such that \(\Omega\cap\partial^{*}E\) is \(\mathcal{H}^{n}\)-contained in \(K\), then the set_
\[S=\mathcal{R}(K)\cup E^{{(1)}} \tag{3.1}\]
_is \(\mathcal{C}\)-spanning \(\mathbf{W}\) if and only if, for every \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) and \(\mathcal{H}^{1}\)-a.e. \(s\in\mathbb{S}^{1}\),_
\[T[s]\cap E^{{(0)}}\text{ is $\mathcal{H}^{n}$-contained in $\operatorname{UBEP}(K\cup T[s];T)$}\,. \tag{3.2}\]
**Remark 3.2**.: An immediate corollary of Theorem 3.1 is that if \(K\) is \(\mathcal{H}^{n}\)-finite and \((K,E)\in\mathcal{K}_{\rm B}\) then \(K\cup E^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) if and only if \(\mathcal{R}(K)\cup E^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\). Indeed, \(\mathcal{R}(K\cup T[s])=\mathcal{R}(K)\cup T[s]\), so that, by (1.13), \(\operatorname{UBEP}(K\cup T[s])=\operatorname{UBEP}(\mathcal{R}(K)\cup T[s])\).
Proof of Theorem 1.3.: This is Theorem 3.1 with \(E=\varnothing\).
Proof of Theorem 3.1.: _Step one_: We prove the following claim: If \(S\) essentially disconnects \(G\) into \(\{G_{1},G_{2}\}\) and \(H\subset G\) satisfies
\[\min\{\left|H\cap G_{1}\right|,\,\left|H\cap G_{2}\right|\}>0\,, \tag{3.3}\]
then \(S\) essentially disconnects \(H\) into \(H\cap G_{1}\) and \(H\cap G_{2}\). Indeed, if \(x\in H^{{(1)}}\), then \(x\in\partial^{e}(H\cap G_{i})\) if and only if \(x\in\partial^{e}G_{i}\) (\(i=1,2\)). Hence \(H^{{(1)}}\cap\partial^{e}(G_{1}\cap H)\subset H^{{(1)}}\cap \partial^{e}G_{1}\subset G^{{(1)}}\cap \partial^{e}G_{1}\), which, by (3.3) and our assumption on \(S\) and \(G\), gives the desired conclusion.
_Step two_: Taking from now on \(S\), \(K\) and \(E\) as in the statement, we preliminarily notice that if \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\), \(s\in\mathbb{S}^{1}\), and \(\{U_{i}\}_{i}\) is the essential partition of \(T\) induced by \((\mathcal{R}(K)\cup T[s])\), then
\[T\cap\partial^{*}E\overset{\mathcal{H}^{n}}{\subset}T\cap\bigcup_{i} \partial^{*}U_{i}\,. \tag{3.4}\]
Indeed, since \(\Omega\cap\partial^{*}E\) is \(\mathcal{H}^{n}\)-contained in \(\mathcal{R}(K)\), if a Borel set \(G\) is such that \(\left|G\cap E\right|\left|G\setminus E\right|>0\) then, by step one, \(\mathcal{R}(K)\) essentially disconnects \(G\). In particular, since, for each \(i\), \(\mathcal{R}(K)\cup T[s]\) does not essentially disconnect \(U_{i}\), we find that, for each \(i\),
\[\text{either }U_{i}^{{(1)}}\subset E^{{(0)}} \qquad\text{or }U_{i}^{{(1)}}\subset E^{{ (1)}}\,. \tag{3.5}\]
Clearly, (3.5) immediately implies (3.4).
Figure 3.1. In panel (a) we have depicted a pair \((K,E)\) where \(E\) is a tube inside \(T\) and \(K\) consists of the union of the boundary of \(E\) and the _non_-spanning set \(S\) of Figure 1.6-(a). Notice that \(K\) is not \(\mathcal{C}\)-spanning, if we see things from the point of view of Definition A, since it misses every loop \(\gamma\) contained in the interior of \(E\); while, of course, \(K\cup E\) is \(\mathcal{C}\)-spanning because \(E\) has been added. In panel (b) we have depicted the essential partition \(\{U_{i}\}_{i=1}^{5}\) of \(T\) induced by \(K\cup T[s]\). Notice that \(E=U_{1}\), therefore no \(\partial^{*}U_{i}\cap\partial^{*}U_{j}\) \(\mathcal{H}^{1}\)-contains \(T[s]\cap E\). In particular, \(T[s]\cap E\) (which is \(\mathcal{H}^{1}\)-equivalent to \(T[s]\setminus E^{(0)}\)) is not \(\mathcal{H}^{1}\)-contained in \(\operatorname{UBEP}(K\cup T[s];T)\), and we see again, this time from the point of view of Definition B as reformulated in Theorem 1.3, that \(K\) is not \(\mathcal{C}\)-spanning. As stated in Theorem 3.1, from the viewpoint of Definition B it is only the \(\mathcal{H}^{1}\)-containment of \(T[s]\cap E^{(0)}\) in \(\operatorname{UBEP}(K\cup T[s];T)\) that establishes the \(\mathcal{C}\)-spanning property of \(K\cup E\): and this \(\mathcal{H}^{1}\)-containment indeed holds, since \(T[s]\cap E^{(0)}=T[s]\setminus\operatorname{cl}(E)\) is \(\mathcal{H}^{1}\)-contained in the union of \(\partial^{*}U_{2}\cap\partial^{*}U_{3}\) and \(\partial^{*}U_{4}\cap\partial^{*}U_{5}\).
_Step three_: We prove the "only if" part of the statement, that is, given \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) and \(s\in\mathbb{S}^{1}\), we assume that
\[\text{for $\mathcal{H}^{n}$-a.e. $x\in T[s]$}\,, \tag{3.6}\] \[\exists\text{ a partition $\{T_{1},T_{2}\}$ of $T$ with $x\in\partial^{e}T_{1}\cap\partial^{e}T_{2}$}\,,\] \[\text{and s.t. $\mathcal{R}(K)\cup E^{{(1)}} \cup T[s]$ essentially disconnects $T$ into $\{T_{1},T_{2}\}$}\,,\]
and then prove that
\[T[s]\cap E^{{(0)}}\text{ is $\mathcal{H}^{n}$-contained in $\bigcup_{i}\partial^{*}U_{i}$}\,, \tag{3.7}\]
where \(\{U_{i}\}_{i}\) is the essential partition of \(T\) induced by \(\mathcal{R}(K)\cup T[s]\). To this end, arguing by contradiction, we suppose that for some \(s\in\mathbb{S}^{1}\), there is \(G\subset T[s]\cap E^{{(0)}}\) with \(\mathcal{H}^{n}(G)>0\) and such that \(G\cap_{i}\partial^{*}U_{i}=\varnothing\). In particular, there is an index \(i\) such that \(\mathcal{H}^{n}(G\cap U_{i}^{{(1)}})>0\), which, combined with (3.5) and \(G\subset E^{{(0)}}\), implies
\[U_{i}^{{(1)}}\subset E^{{(0)}}\,. \tag{3.8}\]
Now by (3.6) and \(\mathcal{H}^{n}(G\cap U_{i}^{{(1)}})>0\), we can choose \(x\in G\cap U_{i}^{{(1)}}\) such that \(\mathcal{R}(K)\cup E^{{(1)}}\cup T[s]\) essentially disconnects \(T\) into some \(\{T_{1},T_{2}\}\) such that \(x\in\partial^{e}T_{1}\cap\partial^{e}T_{2}\). Then, \(\{U_{i}\cap T_{1},U_{i}\cap T_{2}\}\) is a non-trivial partition of \(U_{i}\), so that, by step one and (3.8), \(\mathcal{R}(K)\cup T[s]\) essentially disconnects \(U_{i}\) into \(\{U_{i}\cap T_{1},U_{i}\cap T_{2}\}\). This contradicts the defining property (2.2) of essential partitions, and concludes the proof.
_Step four_: We prove the "if" part of the statement. More precisely, given \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) and \(s\in\mathbb{S}^{1}\), we assume that (3.7) holds at \(s\), and then proceed to prove that (3.6) holds at \(s\). We first notice that, since \(\{E^{{(1)}},E^{{(0)}},\partial^{*}E\}\) is a partition of \(\Omega\) modulo \(\mathcal{H}^{n}\), it is enough to prove (3.6) for \(\mathcal{H}^{n}\)-a.e. \(x\in T[s]\cap(E^{{(1)}}\cup E^{{(0)}}\cup\partial^{*}E)\).
If \(x\in T[s]\cap\partial^{*}E\), then by letting \(T_{1}=T\cap E\) and \(T_{2}=T\setminus E\) we obtain a partition of \(T\) such that \(x\in T\cap\partial^{*}E=T\cap\partial^{*}T_{1}\cap\partial^{*}T_{2}\subset \partial^{e}T_{1}\cap\partial^{e}T_{2}\), and such that \(\partial^{*}E\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\). Since \(\Omega\cap\partial^{*}E\) is \(\mathcal{H}^{n}\)-contained in \(\mathcal{R}(K)\), we deduce (3.6).
If \(x\in T[s]\cap E^{(0)}\), then, thanks to (3.7) and denoting by \(\{U_{i}\}_{i}\) the essential partition of \(T\) induced by \((\mathcal{R}(K)\cup T[s])\), there is an index \(i\) such that \(x\in T\cap\partial^{*}U_{i}\). Setting \(T_{1}=U_{i}\) and \(T_{2}=T\setminus U_{i}\), we have that \(T\cap\partial^{*}U_{i}\) (which contains \(x\)) is in turn contained in \(\partial^{e}T_{1}\cap\partial^{e}T_{2}\cap T\). Since the latter set is non-empty, \(\{T_{1},T_{2}\}\) is a non-trivial partition of \(T\). Moreover, by definition of essential partition,
\[T^{{(1)}}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}=T\cap \partial^{e}U_{i}\overset{\mathcal{H}^{n}}{\subset}\mathcal{R}(K)\cup T[s]\,,\]
so that \(\mathcal{R}(K)\cup T[s]\) essentially disconnects \(T\), and (3.6) holds.
Finally, if \(x\in T[s]\cap E^{{(1)}}\), we let \(s_{1}=s\), pick \(s_{2}\neq s\), denote by \(\{I_{1},I_{2}\}\) the partition of \(\mathbb{S}^{1}\) defined by \(\{s_{1},s_{2}\}\), and set
\[T_{1}=\Phi(I_{1}\times B_{1}^{n})\cap E\,,\qquad T_{2}=\Phi(I_{2}\times B_{1}^{ n})\cup\,\left(\Phi(I_{1}\times B_{1}^{n})\setminus E\right).\]
This is a Borel partition of \(T\), and using the fact that \(x\in E^{{(1)}}\), we compute
\[|T_{1}\cap B_{r}(x)|=|\Phi(I_{1}\times B_{1}^{n})\cap E\cap B_{r}(x)|=|\Phi(I_ {1}\times B_{1}^{n})\cap B_{r}(x)|+\mathrm{o}(r^{n+1})=\frac{|B_{r}(x)|}{2}+ \mathrm{o}(r^{n+1})\,.\]
Therefore \(x\in\partial^{e}T_{1}\cap\partial^{e}T_{2}\), and by standard facts about reduced boundaries [13, Chapter 16],
\[\partial^{e}T_{1}\cap\partial^{e}T_{2}\cap T^{{(1)}} \overset{\mathcal{H}^{n}}{\subset}\partial^{*}T_{1}\cap T^{{(1)}} \overset{\mathcal{H}^{n}}{\subset}\left(\partial^{*}E\cup\left((T[s_{1}] \cup T[s_{2}])\cap E^{{(1)}}\right)\right)\cap T^{{(1)}}\,.\]
Since \(\Omega\cap\partial^{*}E\) is \(\mathcal{H}^{n}\)-contained in \(\mathcal{R}(K)\), we have shown (3.6).
## 4. The fundamental closure theorem for homotopic spanning conditions
In Theorem 1.3 and Theorem 3.1 we have presented two reformulations of the homotopic spanning condition in terms of \(\mathcal{H}^{n}\)-containment into unions of boundaries of essential partitions. The goal of this section is to discuss the closure of such reformulations, and to provide a statement (Theorem 4.1 below) which will lie at the heart of the closure theorems proved in Section 5.
**Theorem 4.1** (Basic closure theorem for homotopic spanning).: _Let \(\mathbf{W}\subset\mathbb{R}^{n+1}\) be closed and let \(\mathcal{C}\) be a spanning class for \(\mathbf{W}\). Let us assume that:_
**(a):**: \(K_{j}\) _are_ \(\mathcal{H}^{n}\)_-finite Borel subsets of_ \(\Omega\) _with_ \(\mathcal{H}^{n}\mathop{\mathsf{L}}K_{j}\stackrel{{\ast}}{{\rightharpoonup}}\mu\) _as Radon measures in_ \(\Omega\)_;_
**(b):**: \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\)_,_ \(\{s_{j}\}_{j}\) _is a sequence in_ \(\mathbb{S}^{1}\) _with_ \(s_{j}\to s_{0}\) _as_ \(j\to\infty\)_;_
**(c):**: _if_ \(\{U_{i}^{j}\}_{i}\) _denotes the essential partition of_ \(T\) _induced by_ \(K_{j}\cup T[s_{j}]\)_, then there is a limit partition_ \(\{U_{i}\}_{i}\) _of_ \(\{U_{i}^{j}\}_{i}\) _in the sense of (_2.8_) in Lemma_ 2.3_;_
_Under these assumptions, if \(\mu(T[s_{0}])=0\), \(F_{j},F\subset\Omega\) are sets of finite perimeter with \(F_{j}\to F\) as \(j\to\infty\) and such that, for every \(j\), \(\Omega\cap\partial^{\ast}F_{j}\) is \(\mathcal{H}^{n}\)-contained in \(K_{j}\) and_
\[T[s_{j}]\cap F_{j}^{(0)}\text{ is $\mathcal{H}^{n}$-contained in $K_{j}^{\ast}$}\,, \tag{4.1}\]
_then_
\[T[s_{0}]\cap F^{(0)}\text{ is $\mathcal{H}^{n}$-contained in $K^{\ast}$}\,, \tag{4.2}\]
_where we have set_
\[K_{j}^{\ast}=\operatorname{UBEP}(K_{j}\cup T[s_{j}];T)=T\cap\bigcup_{i} \partial^{\ast}U_{i}^{j}\,,\qquad K^{\ast}=T\cap\bigcup_{i}\partial^{\ast}U_{ i}\,. \tag{4.3}\]
**Remark 4.2**.: Notice that \(\{U_{i}\}_{i}\) may fail to be the essential partition of \(T\) induced by \(K^{\ast}\) (which is the "optimal" choice of a Borel set potentially inducing \(\{U_{i}\}_{i}\) on \(T\)): indeed, some of the sets \(U_{i}\) may fail to be essentially connected, even though \(U_{i}^{j}\to U_{i}\) as \(j\to\infty\) and every \(U_{i}^{j}\), as an element of an essential partition, is necessarily essentially connected; see Figure 4.1.
Figure 4.1. The situation in the proof of Theorem 4.1 in the basic case when \(K_{j}=\Omega\cap\partial^{\ast}F_{j}\). The essential partition of \(T\) induced by \(K_{j}\cup T[s_{j}]\) is denoted by \(\{U_{i}^{j}\}_{i}\). The limit partition \(\{U_{i}\}_{i}\) of \(\{U_{i}^{j}\}_{i}\) may fail to be the essential partition of \(T\) induced by \(K^{\ast}=T\cap\cup_{i}\partial^{\ast}U_{i}\), since some of the \(U_{i}\) may be essentially disconnected. In the picture, denoting by \(\{V_{k}\}_{k}\) the essential partition of \(T\) induced by \(K^{\ast}\), we have \(U_{5}=V_{5}\cup V_{6}=T\cap F\). We also notice, in reference to the notation set in (4.6), that \(X_{1}^{j}=\{5\}\) and \(X_{0}^{j}=\{1,2,3,4\}\).
Proof of Theorem 4.1.: _Step one_: We start by showing that, for each \(j\) and \(i\) such that \(|U_{i}^{j}|>0\), we have
\[\text{either}\quad(U_{i}^{j})^{{}^{(1)}}\subset F_{j}^{{}^{(1)}}\,,\qquad\text{ or}\quad(U_{i}^{j})^{{}^{(1)}}\subset F_{j}^{{}^{(0)}}\,, \tag{4.4}\]
and for each \(i\) such that \(|U_{i}|>0\),
\[\text{either}\quad U_{i}^{{}^{(1)}}\subset F^{{}^{(1)}}\,,\qquad\text{or} \quad U_{i}^{{}^{(1)}}\subset F^{{}^{(0)}}\,. \tag{4.5}\]
Postponing for the moment the proof of (4.4) and (4.5), let us record several consequences of these inclusions. First, if we set
\[X_{1}^{j} =\left\{i:|U_{i}^{j}|>0\,,\,(U_{i}^{j})^{{}^{(1)}} \subset F_{j}^{{}^{(1)}}\right\}, X_{0}^{j} =\left\{i:|U_{i}^{j}|>0\,,\,(U_{i}^{j})^{{}^{(1)}} \subset F_{j}^{{}^{(0)}}\right\}, \tag{4.6}\] \[X_{1} =\left\{i:|U_{i}|>0\,,\,U_{i}^{{}^{(1)}} \subset F^{{}^{(1)}}\right\}, X_{0} =\left\{i:|U_{i}|>0\,,\,U_{i}^{{}^{(1)}} \subset F^{{}^{(0)}}\right\}, \tag{4.7}\]
then, thanks to (4.4) and (4.5), we have
\[X^{j}:=\left\{i:|U_{i}^{j}|>0\right\}=X_{0}^{j}\cup X_{1}^{j}\,,\qquad X:=\left\{ i:|U_{i}|>0\right\}=X_{0}\cup X_{1}\,. \tag{4.8}\]
Combining (4.4) and (4.5) with \(F_{j}\to F\) and \(U_{i}^{j}\to U_{i}\), we find that for every \(i\in X\), there is \(J_{i}\in\mathbb{N}\) such that, for every \(m\in\{0,1\}\),
\[\text{if $i\in X_{m}$, then $i\in X_{m}^{j}$ for all $j\geq J_{i}$}. \tag{4.9}\]
Lastly, \(\{U_{i}^{j}\}_{i\in X_{1}^{j}}\) is a Lebesgue partition of \(T\cap F_{j}\), and thus, by Federer's theorem (1.37),
\[T\cap F_{j}^{(1)}\stackrel{{\mathcal{H}^{n}}}{{\subset}}\bigcup_{i\in X_{1}^{j}}(U_{i}^{j})^{(1)}\cup\partial^{*}U_{i}^{j}\,,\qquad T\cap\partial^{*}F_{j}\stackrel{{\mathcal{H}^{n}}}{{\subset}}T\cap\bigcup_{i\in X_{1}^{j}}\partial^{*}U_{i}^{j}\ \subset\ T\cap K_{j}^{*}\,. \tag{4.10}\]
_To prove_ (4.4) _and_ (4.5): Since \(\{U_{i}^{j}\}_{i}\) is the essential partition of \(T\) induced by \(K_{j}\cup T[s_{j}]\) and \(K_{j}^{*}=\text{UBEP}(K_{j}\cup T[s_{j}];T)\), we have
\[K_{j}^{*}\text{ is $\mathcal{H}^{n}$-contained in $K_{j}\cup T[s_{j}]$}\,, \qquad\forall j\,, \tag{4.11}\] \[K_{j}\cup T[s_{j}]\text{ does not essentially disconnect }U_{i}^{j}\,,\qquad\forall i,j\,. \tag{4.12}\]
Since \(\Omega\cap\partial^{*}F_{j}\) is \(\mathcal{H}^{n}\)-contained in \(K_{j}\cup T[s_{j}]\), the combination of (4.12) with Federer's theorem (1.37) gives (4.4). The combination of \(|U_{i}^{j}\Delta U_{i}|\to 0\) as \(j\to\infty\) with (4.4) gives (4.5).
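For the reader's convenience, we recall the form of Federer's theorem used here and in what follows (the numbering (1.37) refers to its earlier statement; below is its standard formulation): if \(E\) is a set of locally finite perimeter in \(\mathbb{R}^{n+1}\), then
\[\mathcal{H}^{n}\big(\partial^{e}E\setminus\partial^{*}E\big)=0\,,\qquad\text{so that }\{E^{(0)},E^{(1)},\partial^{*}E\}\text{ partitions }\mathbb{R}^{n+1}\text{ modulo }\mathcal{H}^{n}\,.\]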
_Step two_: We reduce the proof of (4.2) to that of
\[\mathcal{H}^{n}(U_{i}^{{}^{(1)}}\cap T[s_{0}])=0\,,\qquad\forall i\in X_{0}\,. \tag{4.13}\]
Indeed, \(\{U_{i}^{{}^{(1)}}:i\in X_{0}\}\cup\{F^{{}^{(0)}} \cap\partial^{*}U_{i}:i\in X_{0}\}\) is an \(\mathcal{H}^{n}\)-partition of \(T\cap F^{{}^{(0)}}\). In particular, \(T\cap F^{{}^{(0)}}\) is \(\mathcal{H}^{n}\)-contained in \(\cup_{i\in X_{0}}U_{i}^{{}^{(1)}}\cup\partial^{*}U_{i}\), so that, should (4.13) hold, then \(T[s_{0}]\cap F^{{}^{(0)}}\) would be \(\mathcal{H}^{n}\)-contained in \(\cup_{i\in X_{0}}\partial^{*}U_{i}\), and thus in \(K^{*}\), thus proving (4.2).
_Step three_: We change variables from \(T\) to\({}^{10}\) \(Y=\Phi^{-1}(T)=\mathbb{S}^{1}\times B_{1}^{n}\). We set \(Y[s]=\Phi^{-1}(T[s])=\{s\}\times B_{1}^{n}\) for the \(s\)-slice of \(Y\), and
Footnote 10: Here we identify \(\mathbb{S}^{1}\) with \(\mathbb{R}/(2\pi\mathbb{Z})\) and, with a slight abuse of notation, denote by \(\mathcal{L}^{n+1}\) the “Lebesgue measure” on \(\mathbb{S}^{1}\times B_{1}^{n}\), which we use to define sets of finite perimeter and points of density in \(\mathbb{S}^{1}\times B_{1}^{n}\).
\[Y_{i}=\Phi^{-1}(U_{i})\,,\qquad Y_{i}^{j}=\Phi^{-1}(U_{i}^{j})\,,\qquad W_{i}=Y \setminus Y_{i}\,,\qquad W_{i}^{j}=Y\setminus Y_{i}^{j}\,, \tag{4.14}\]
Since \(\Phi\) is a diffeomorphism, by [10, Lemma A.1] and the area formula we have that
\[\partial^{*}\Phi^{-1}(H)=\Phi^{-1}(\partial^{*}H)\,,\qquad(\Phi^{-1}(H))^{{}^{ (m)}}=\Phi^{-1}(H^{{}^{(m)}})\,,m\in\{0,1\}\,, \tag{4.15}\]
for every set of finite perimeter \(H\subset T\); in particular, setting
\[M_{j}=\Phi^{-1}(F_{j}\cap T)\,,\qquad M=\Phi^{-1}(F\cap T)\,,\]
by Federer's theorem (1.37), we see that (4.1) is equivalent to
\[Y[s_{j}]\text{ is $\mathcal{H}^{n}$-contained in $\bigcup_{i}\partial^{\ast}Y_{i}^{j}\cup M_{j}^{(1)}\cup\partial^{\ast}M_{j}$}\,. \tag{4.16}\]
By (4.10) and (4.15), we may rewrite (4.16) as
\[Y[s_{j}]\text{ is $\mathcal{H}^{n}$-contained in $\bigcup_{i\in\mathbb{N}}\partial^{\ast}Y_{i}^{j}\cup\bigcup_{i\in X_{1}^{j}}(Y_{i}^{j})^{(1)}$}\,. \tag{4.17}\]
Similarly, \(Y_{i}^{{}^{(1)}}=\Phi^{-1}(U_{i}^{{}^{(1)}})\) for every \(i\), and thus (4.13) is equivalent to
\[\mathcal{H}^{n}(Y_{i}^{{}^{(1)}}\cap Y[s_{0}])=0\,,\qquad \forall i\in X_{0}\,. \tag{4.18}\]
We are thus left to prove that (4.17) implies (4.18).
To this end, let us denote by \(\mathbf{p}\) the projection of \(Y=\mathbb{S}^{1}\times B_{1}^{n}\) onto \(B_{1}^{n}\), and consider the sets
\[G_{i}=\mathbf{p}\big{(}Y_{i}^{{}^{(1)}}\cap Y[s_{0}]\big{)}\,, \qquad G_{i}^{\ast}=G^{\ast}\cap G_{i}\,,\]
corresponding to the set \(G^{\ast}\subset B_{1}^{n}\) with \(\mathcal{H}^{n}(B_{1}^{n}\setminus G^{\ast})=0\) defined as follows:
(i) denoting by \(H_{y}=\{s\in\mathbb{S}^{1}:(s,y)\in H\}\) the "circular slice of \(H\subset Y\) above \(y\)", if \(y\in G^{\ast}\), \(j\in\mathbb{N}\), \(k\) is an index for the partitions \(\{Y_{k}\}_{k}\) and \(\{Y_{k}^{j}\}\), and \(H\in\{Y_{k},W_{k},Y_{k}^{j},W_{k}^{j}\}\), then \(H_{y}\) is a set of finite perimeter in \(\mathbb{S}^{1}\) with
\[H_{y}\stackrel{{\mathcal{H}^{1}}}{{=}}(H_{y})^{(1)_{\mathbb{S}^{1}}}\,,\qquad\partial^{\ast}_{\mathbb{S}^{1}}(H_{y})\stackrel{{\mathcal{H}^{0}}}{{=}}(\partial^{\ast}H)_{y}\,, \tag{4.19}\]
(and thus with \(\partial^{\ast}_{\mathbb{S}^{1}}(H_{y})=(\partial^{\ast}H)_{y}\)); this is a standard consequence of the slicing theory for sets of finite perimeter, see, e.g., [1, Theorem 2.4] or [16, Remark 18.13];
(ii) for every \(y\in G^{\ast}\) and \(j\in\mathbb{N}\),
\[(s_{j},y)\in\bigcup_{k\in\mathbb{N}}\partial^{\ast}Y_{k}^{j}\cup \bigcup_{k\in X_{1}^{j}}(Y_{k}^{j})^{{}^{(1)}}\,; \tag{4.20}\]
this is immediate from (4.17);
(iii) for every \(y\in G^{\ast}\), and \(k\) an index for the partitions \(\{Y_{k}\}_{k}\) and \(\{Y_{k}^{j}\}\),
\[\lim_{j\to\infty}\mathcal{H}^{1}((Y_{k})_{y}\Delta(Y_{k}^{j})_{y})= 0\,; \tag{4.21}\]
this is immediate from Fubini's theorem and \(Y_{k}^{j}\to Y_{k}\) as \(j\to\infty\);
(iv) for every \(y\in G^{\ast}\),
\[\sum_{k}\mathcal{H}^{0}((\partial^{\ast}Y_{k}^{j})_{y})<\infty; \tag{4.22}\]
indeed, by applying, in this order, the coarea formula, the area formula, and (2.3), we find
\[\sum_{k}\int_{B_{1}^{n}}\mathcal{H}^{0}((\partial^{\ast}Y_{k}^{j})_{y})\,d\mathcal{H}^{n} \leq \sum_{k}P(Y_{k}^{j};Y)\leq(\operatorname{Lip}\Phi^{-1})^{n}\,\sum_{k}P(U_{k}^{j};T)\] \[\leq 2\,(\operatorname{Lip}\Phi^{-1})^{n}\,\mathcal{H}^{n}(K_{j}\cup T[s_{j}])<\infty\,,\]
so that, for each fixed \(j\), the integrand is finite for \(\mathcal{H}^{n}\)-a.e. \(y\in B_{1}^{n}\); since \(j\) ranges over a countable set, (4.22) holds for \(\mathcal{H}^{n}\)-a.e. \(y\), and \(G^{*}\) can be chosen accordingly.
Now, let us pick \(y\in G_{i}^{\ast}\). Since \(y\in G_{i}\) implies \((s_{0},y)\in Y_{i}^{{}^{(1)}}\), and \(Y_{i}^{{}^{(1)}}\cap\partial^{\ast}Y_{i}=\varnothing\), we find \((s_{0},y)\not\in\partial^{\ast}Y_{i}\), i.e. \(s_{0}\not\in(\partial^{\ast}Y_{i})_{y}\). By \(y\in G^{\ast}\), we have \((\partial^{\ast}Y_{i})_{y}=\partial^{\ast}_{\mathbb{S}^{1}}(Y_{i})_{y}\), so that
\[s_{0}\not\in\partial^{\ast}_{\mathbb{S}^{1}}(Y_{i})_{y}\,. \tag{4.23}\]
Since \((Y_{i})_{y}\) has finite perimeter, \(\partial^{\ast}_{\mathbb{S}^{1}}(Y_{i})_{y}\) is a finite set, and so (4.23) implies the existence of an open interval \(\mathcal{A}_{y}\subset\mathbb{S}^{1}\), containing \(s_{0}\), \(\mathcal{H}^{1}\)-contained either in \((Y_{i})_{y}\) or in \((W_{i})_{y}\), and such that
\[\partial_{\mathbb{S}^{1}}\mathcal{A}_{y}\subset(\partial^{\ast}Y_{i})_{y}= \partial^{\ast}_{\mathbb{S}^{1}}(W_{i})_{y}\,. \tag{4.24}\]
We claim that there is \(G_{i}^{**}\subset G_{i}^{*}\), with full \(\mathcal{H}^{n}\)-measure in \(G_{i}^{*}\) (and thus in \(G_{i}\)), such that
\[\mathcal{A}_{y}\text{ is $\mathcal{H}^{1}$-contained in $(Y_{i})_{y}$}\,,\qquad \forall y\in G_{i}^{**}\,. \tag{4.25}\]
Indeed, let us consider the countable decomposition \(\{G_{i,m}^{*}\}_{m=1}^{\infty}\) of \(G_{i}^{*}\) given by
\[G_{i,m}^{*}=\Big{\{}y\in G_{i}^{*}:\text{dist}\big{(}\{s_{0}\},\partial_{ \mathbb{S}^{1}}\mathcal{A}_{y}\big{)}\in\big{[}1\big{/}(m+1),1\big{/}m\big{)} \Big{\}}\subset B_{1}^{n}\,,\]
and let
\[Z_{i,m}=\big{\{}y\in G_{i,m}^{*}:\mathcal{A}_{y}\text{ is $\mathcal{H}^{1}$- contained in $(W_{i})_{y}$}\big{\}}\,.\]
If \(\mathcal{H}^{n}(Z_{i,m})>0\), then there is \(y^{*}\in Z_{i,m}^{{}^{(1)}}\), so that \(\mathcal{H}^{n}(Z_{i,m}\cap B_{r}^{n}(y^{*}))=\omega_{n}\,r^{n}+\text{o}(r^{n})\). Therefore, if \(r<1/(m+1)\) and \(B_{r}^{1}(s_{0})\) denotes the open interval of center \(s_{0}\) and radius \(r\) inside \(\mathbb{S}^{1}\), then
\[\mathcal{L}^{n+1}\big{(}Y_{i}\cap\big{(}B_{r}^{1}(s_{0})\times B_ {r}^{n}(y^{*})\big{)}\big{)}=\int_{B_{r}^{n}(y^{*})}\mathcal{H}^{1}(B_{r}^{1} (s_{0})\cap(Y_{i})_{y})\,d\mathcal{H}_{y}^{n}\] \[= \int_{Z_{i,m}\cap B_{r}^{n}(y^{*})}\mathcal{H}^{1}(B_{r}^{1}(s_{ 0})\cap(Y_{i})_{y})\,d\mathcal{H}_{y}^{n}+\text{o}(r^{n+1})=\text{o}(r^{n+1})\]
where in the last identity we have used the facts that \(y\in Z_{i,m}\cap B_{r}^{n}(y^{*})\), \(s_{0}\in\mathcal{A}_{y}\), and \(r<1/(m+1)\) to conclude that \(B_{r}^{1}(s_{0})\) is \(\mathcal{H}^{1}\)-contained in \((W_{i})_{y}\); in particular, \((s_{0},y^{*})\in Y_{i}^{(0)}\), against the fact that \(Z_{i,m}\subset G_{i}\,(=\mathbf{p}(Y[s_{0}]\cap Y_{i}^{(1)}))\). We have thus proved that each \(Z_{i,m}\) is \(\mathcal{H}^{n}\)-negligible, and therefore that there is \(G_{i}^{**}\subset G_{i}^{*}\), \(\mathcal{H}^{n}\)-equivalent to \(G_{i}^{*}\), such that (4.25) holds true.
Having proved (4.25), we now notice that, by (4.20), \(y\in G_{i}^{*}\) implies
\[s_{j}\in\bigcup_{k\in\mathbb{N}}(\partial^{*}Y_{k}^{j})_{y}\cup\bigcup_{k\in X _{1}^{j}}\big{(}(Y_{k}^{j})^{{}^{(1)}}\big{)}_{y}=\bigcup_{k}\partial_{ \mathbb{S}^{1}}^{*}(Y_{k}^{j})_{y}\cup\bigcup_{k\in X_{1}^{j}}\big{(}(Y_{k}^{ j})_{y}\big{)}^{{}^{(1)}{}_{\mathbb{S}^{1}}}\,. \tag{4.26}\]
If (4.26) holds because \(s_{j}\in\partial_{\mathbb{S}^{1}}^{*}(Y_{k}^{j})_{y}\) for some \(k\), then, thanks to (4.22), there must be \(k^{\prime}\neq k\) such that \(s_{j}\in\partial_{\mathbb{S}^{1}}^{*}(Y_{k^{\prime}}^{j})_{y}\) too; since either \(k\) or \(k^{\prime}\) must be different from \(i\), we conclude that \(s_{j}\in\partial_{\mathbb{S}^{1}}^{*}(Y_{k(j)}^{j})_{y}\) for some \(k(j)\neq i\); if, instead, (4.26) holds because \(s_{j}\in\big((Y_{k}^{j})_{y}\big)^{(1)_{\mathbb{S}^{1}}}\) for some \(k\in X_{1}^{j}\), then we can recall that, thanks to (4.9), \(i\in X_{0}^{j}\) for every \(j\geq J_{i}\), and thus \(i\neq k\); in summary, for each \(y\in G_{i}^{*}\),
\[\text{if $j\geq J_{i}$, then $\exists k(j)\neq i$ s.t. $s_{j}\in\partial_{\mathbb{S}^{1}}^{*}(Y_{k(j)}^{j})_{y}\cup\big{(}(Y_{k(j)}^{j})_{ y}\big{)}^{{}^{(1)}{}_{\mathbb{S}^{1}}}$}\,. \tag{4.27}\]
With the goal of obtaining a lower bound on the relative perimeters of the sets \(Y_{i}^{j}\) in a neighborhood of \(G_{i}\) (see (4.31) below), we now consider \(y\in G_{i}^{**}\), and pick \(r>0\) such that \(\text{cl}\,B_{r}^{1}(s_{0})\subset\mathcal{A}_{y}\). Correspondingly, since \(s_{j}\to s_{0}\) and (4.27) holds, we can find \(J^{*}=J^{*}(i,y,r)\geq J_{i}\) such that, for \(j\geq J^{*}\),
\[s_{j}\in B_{r}^{1}(s_{0})\cap\big{[}\partial_{\mathbb{S}^{1}}^{*}(Y_{k(j)}^{j})_ {y}\cup\big{(}(Y_{k(j)}^{j})_{y}\big{)}^{{}^{(1)}{}_{\mathbb{S}^{1}}}\big{]} \subset\mathcal{A}_{y}\cap\big{[}\partial_{\mathbb{S}^{1}}^{*}(Y_{k(j)}^{j})_ {y}\cup\big{(}(Y_{k(j)}^{j})_{y}\big{)}^{{}^{(1)}{}_{\mathbb{S}^{1}}}\big{]}\,. \tag{4.28}\]
Now, by (4.21), \(k(j)\neq i\), and \(\mathcal{A}_{y}\overset{\mathcal{H}^{1}}{\subset}(Y_{i})_{y}\), we have
\[\lim_{j\to\infty}\mathcal{H}^{1}(\mathcal{A}_{y}\cap(Y_{k(j)}^{j})_{y})=0\,. \tag{4.29}\]
Since, by (4.19), \((Y_{k(j)}^{j})_{y}\) is \(\mathcal{H}^{1}\)-equivalent to a finite union of intervals, (4.28) implies the existence of an open interval \(\mathcal{I}_{y}^{j}\) such that
\[s_{j}\in\text{cl}\,_{\mathbb{S}^{1}}\mathcal{I}_{y}^{j}\,,\qquad\mathcal{I}_{y}^{j }\overset{\mathcal{H}^{1}}{\subset}(Y_{k(j)}^{j})_{y}\,,\qquad\partial_{ \mathbb{S}^{1}}\mathcal{I}_{y}^{j}\subset(\partial^{*}Y_{k(j)}^{j})_{y}\subset( \partial^{*}W_{i}^{j})_{y}\,, \tag{4.30}\]
which, due to (4.28) and (4.29), must satisfy
\[\lim_{j\to\infty}\operatorname{diam}\big{(}\mathcal{I}^{j}_{y}\big{)}=0\,.\]
In particular,
\[\partial_{\mathbb{S}^{1}}\,\mathcal{I}^{j}_{y}\subset B^{1}_{r}(s_{0})\,,\qquad \forall j\geq J^{*}\,,\]
and thus, by the last inclusion in (4.30),
\[\mathcal{H}^{0}\big(B^{1}_{r}(s_{0})\cap\partial_{\mathbb{S}^{1}}^{*}(W^{j}_{i})_{y}\big)\geq\mathcal{H}^{0}\big(B^{1}_{r}(s_{0})\cap\partial_{\mathbb{S}^{1}}\mathcal{I}^{j}_{y}\big)\geq 2\,,\]
whenever \(j\geq J^{*}\). Since \(y\in G^{**}_{i}\) and \(r>0\) were arbitrary, by the coarea formula and Fatou's lemma,
\[\liminf_{j\to\infty}P(W^{j}_{i};B^{1}_{r}(s_{0})\times G^{**}_{i}) \geq \liminf_{j\to\infty}\int_{G^{**}_{i}}\mathcal{H}^{0}\big{(}B^{1 }_{r}(s_{0})\cap\partial_{\mathbb{S}^{1}}^{*}(W^{j}_{i})_{y}\big{)}\,d \mathcal{H}^{n}_{y} \tag{4.31}\] \[\geq 2\,\mathcal{H}^{n}(G^{**}_{i})=2\,\mathcal{H}^{n}(G_{i})\,.\]
Now, since \(\partial^{*}W^{j}_{i}=\partial^{*}Y^{j}_{i}=\Phi^{-1}(\partial^{*}U^{j}_{i})\), by (4.11) we have
\[Y\cap\bigcup_{i}\partial^{*}W^{j}_{i}\text{ is $\mathcal{H}^{n}$-contained in $Y[s_{j}]\cup\Phi^{-1}\big{(}T\cap K_{j}\big{)}$}\,,\]
which implies, for every \(j\) large enough to have \(s_{j}\in B^{1}_{r}(s_{0})\),
\[P(W^{j}_{i};B^{1}_{r}(s_{0})\times G^{**}_{i})\] \[\leq\mathcal{H}^{n}(G^{**}_{i})+\mathcal{H}^{n}\big(\Phi^{-1}(T\cap K_{j})\cap(B^{1}_{r}(s_{0})\times B^{n}_{1})\big)\] \[\leq\mathcal{H}^{n}(G_{i})+\operatorname{Lip}(\Phi^{-1})^{n}\,\mathcal{H}^{n}\big(K_{j}\cap\Phi(B^{1}_{r}(s_{0})\times B^{n}_{1})\big)\,. \tag{4.32}\]
By combining (4.31) with (4.32) we conclude that for every \(r>0\)
\[\mathcal{H}^{n}(G_{i})\leq\operatorname{Lip}(\Phi^{-1})^{n}\,\mu\big(\Phi(\operatorname{cl}\left(B^{1}_{r}(s_{0})\right)\times B^{n}_{1})\big)\,. \tag{4.33}\]
By \(\mu(T[s_{0}])=0\), if we let \(r\to 0^{+}\) in (4.33), we conclude that \(\mathcal{H}^{n}(G_{i})=0\). Now, since \(G_{i}=\operatorname{\mathbf{p}}\bigl{(}Y^{{(1)}}_{i}\cap Y [s_{0}]\bigr{)}\), we have
\[\mathcal{H}^{n}\big{(}Y^{{(1)}}_{i}\cap Y [s_{0}]\big{)}=\mathcal{H}^{n}(G_{i})\,, \tag{4.34}\]
thus proving (4.18), and hence the theorem.
## 5. Direct Method on generalized soap films (Theorem 1.4)
In Section 5.1 we prove Theorem 1.4, while in Section 5.2 we describe the changes to that argument needed to prove a different closure theorem, which will be crucial in the companion papers [14, 15]. In particular, Section 5.2 will not be needed for the other main results of this paper (although it is included here since it is definitely easier to understand in this context).
### 5.1. Proof of Theorem 1.4
Let us first of all recall the setting of the theorem. We are given a closed set \(\mathbf{W}\) in \(\mathbb{R}^{n+1}\), a spanning class \(\mathcal{C}\) for \(\mathbf{W}\), and a sequence \(\{(K_{j},E_{j})\}_{j}\) in \(\mathcal{K}_{\mathrm{B}}\) such that
\[\sup_{j}\,\mathcal{H}^{n}(K_{j})<\infty\,, \tag{5.1}\]
and, for some Borel set \(E\) and Radon measures \(\mu_{\mathrm{bk}}\) and \(\mu_{\mathrm{bd}}\) in \(\Omega\), it holds that \(E_{j}\stackrel{{\mathrm{loc}}}{{\to}}E\) and
\[\mathcal{H}^{n}\,\mathsf{L}\,\left(\Omega\cap\partial^{*}E_{j}\right)+2\,\mathcal{H}^{n}\,\mathsf{L}\,\left(\mathcal{R}(K_{j})\cap E^{(0)}_{j}\right)\stackrel{{\ast}}{{\rightharpoonup}}\mu_{\mathrm{bk}}\,, \tag{5.2}\] \[\mathcal{H}^{n}\,\mathsf{L}\,\left(\Omega\cap\partial^{*}E_{j}\right)+2\,\mathcal{H}^{n}\,\mathsf{L}\,\left(\mathcal{R}(K_{j})\setminus\partial^{*}E_{j}\right)\stackrel{{\ast}}{{\rightharpoonup}}\mu_{\mathrm{bd}}\,, \tag{5.3}\]
as \(j\to\infty\). In this setting we want to prove that the sets
\[K_{\rm bk} :=\ \left(\Omega\cap\partial^{*}E\right)\cup\left\{x\in\Omega\cap E^{ {(0)}}:\theta_{*}^{n}(\mu_{\rm bk})(x)\geq 2\right\}, \tag{5.4}\] \[K_{\rm bd} :=\ \left(\Omega\cap\partial^{*}E\right)\cup\left\{x\in\Omega\setminus \partial^{*}E:\theta_{*}^{n}(\mu_{\rm bd})(x)\geq 2\right\}, \tag{5.5}\]
are such that \((K_{\rm bk},E),(K_{\rm bd},E)\in{\mathcal{K}}_{\rm B}\) and
\[\mu_{\rm bk} \geq\ \mathcal{H}^{n}\,\mathsf{L}\,(\Omega\cap\partial^{*}E)+2\,\mathcal{H}^{n}\,\mathsf{L}\,(K_{\rm bk}\cap E^{(0)})\,, \tag{5.6}\] \[\mu_{\rm bd} \geq\ \mathcal{H}^{n}\,\mathsf{L}\,(\Omega\cap\partial^{*}E)+2\,\mathcal{H}^{n}\,\mathsf{L}\,(K_{\rm bd}\setminus\partial^{*}E)\,, \tag{5.7}\]
with
\[\liminf_{j\to\infty}{\mathcal{F}}_{\rm bk}(K_{j},E_{j})\geq{\mathcal{F}}_{ \rm bk}(K_{\rm bk},E)\,,\qquad\liminf_{j\to\infty}{\mathcal{F}}_{\rm bd}(K_{j },E_{j})\geq{\mathcal{F}}_{\rm bd}(K_{\rm bd},E)\,; \tag{5.8}\]
and that the closure statements
\[\text{if }K_{j}\cup E_{j}^{{(1)}}\text{ is }{\mathcal{C}} \text{-spanning }{\mathbf{W}}\text{ for every }j, \tag{5.9}\] \[\text{then }K_{\rm bk}\cup E^{{(1)}}\text{ is }{\mathcal{C}} \text{-spanning }{\mathbf{W}}\,, \tag{5.10}\]
and
\[\text{if }K_{j}\text{ is }{\mathcal{C}} \text{-spanning }{\mathbf{W}}\text{ for every }j, \tag{5.11}\] \[\text{then }K_{\rm bd}\text{ is }{\mathcal{C}} \text{-spanning }{\mathbf{W}}\,, \tag{5.12}\]
hold true.
Proof of Theorem 1.4.: By \(\Omega\cap\partial^{*}E\subset K_{\rm bk}\cap K_{\rm bd}\) we have \((K_{\rm bk},E),(K_{\rm bd},E)\in\mathcal{K}_{\rm B}\). By [13, Theorem 6.4], \(\theta_{*}^{n}(\mu_{\rm bk})\geq 2\) on \(K_{\rm bk}\cap E^{(0)}\) implies \(\mu_{\rm bk}\,\mathsf{L}\,(K_{\rm bk}\cap E^{(0)})\geq 2\,\mathcal{H}^{n}\,\mathsf{L}\,(K_{\rm bk}\cap E^{(0)})\), and, similarly, we have \(\mu_{\rm bd}\,\mathsf{L}\,(K_{\rm bd}\setminus\partial^{*}E)\geq 2\,\mathcal{H}^{n}\,\mathsf{L}\,(K_{\rm bd}\setminus\partial^{*}E)\). Since, by the lower semicontinuity of distributional perimeter, we have \(\min\{\mu_{\rm bk},\mu_{\rm bd}\}\geq\mathcal{H}^{n}\,\mathsf{L}\,(\partial^{*}E\cap\Omega)\), (5.6), (5.7) and (5.8) follow. We are thus left to prove that if either (5.9) or (5.11) holds, then (5.10) or (5.12) holds respectively. We divide the proof into three parts, numbered by Roman numerals.
**I. Set up of the proof:** Fixing from now on a choice of \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) against which we want to test the \(\mathcal{C}\)-spanning properties (5.10) and (5.12), we introduce several key objects related to \((\gamma,\Phi,T)\).
_Introducing \(s_{0}\)_: Up to extracting subsequences, let \(\mu\) be the weak-star limit of \(\mathcal{H}^{n}\,\mathsf{L}\,K_{j}\), and set
\[J=\{s\in{\mathbb{S}}^{1}:\mu(T[s])=0\}\,, \tag{5.13}\]
so that \({\mathcal{H}}^{1}({\mathbb{S}}^{1}\setminus J)=0\). We fix \(s_{0}\in J\).
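(The fact that \(\mathcal{H}^{1}(\mathbb{S}^{1}\setminus J)=0\) is standard, and can be justified as follows: \(\mu\) is a finite measure thanks to (5.1), and the slices \(\{T[s]\}_{s\in\mathbb{S}^{1}}\) are pairwise disjoint, so that, for every \(\varepsilon>0\),
\[\#\big\{s\in\mathbb{S}^{1}:\mu(T[s])>\varepsilon\big\}\leq\frac{\mu(T)}{\varepsilon}<\infty\,;\]
hence \(\mathbb{S}^{1}\setminus J=\{s\in\mathbb{S}^{1}:\mu(T[s])>0\}\) is at most countable.)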
_Introducing \(s_{j}\), \(\{U_{i}^{j}\}_{i}\), and \(K_{j}^{*}\)_: For \({\mathcal{H}}^{1}\)-a.e. \(s\in{\mathbb{S}}^{1}\) it holds that \({\mathcal{H}}^{n}(K_{j}\cap T[s])=0\) for every \(j\) and (thanks to Theorem 1.3/Theorem 3.1) the essential partition \(\{U_{i}^{j}[s]\}_{i}\) induced on \(T\) by \(K_{j}\cup T[s]\) is such that
\[T[s]\cap E_{j}^{(0)}\text{ is $\mathcal{H}^{n}$-contained in }{\rm UBEP}(K_{j}\cup T[s];T)\,,\qquad\text{(if (5.9) holds)}\,,\] \[T[s]\text{ is $\mathcal{H}^{n}$-contained in }{\rm UBEP}(K_{j}\cup T[s];T)\,,\qquad\text{(if (5.11) holds)}\,.\]
Therefore we can find a sequence \(s_{j}\to s_{0}\) as \(j\to\infty\) such that
\[{\mathcal{H}}^{n}(K_{j}\cap T[s_{j}])=0\qquad\forall j\,, \tag{5.14}\]
and, denoting by \(\{U_{i}^{j}\}_{i}\) the essential partition of \(T\) induced by \(K_{j}\cup T[s_{j}]\) (i.e. \(U_{i}^{j}=U_{i}^{j}[s_{j}]\)), and setting for brevity
\[K_{j}^{*}={\rm UBEP}(K_{j}\cup T[s_{j}];T)=T\cap\bigcup_{i}\partial^{*}U_{i}^{j }\,, \tag{5.15}\]
we have
\[T[s_{j}]\cap E^{(0)}_{j} \text{ is $\mathcal{H}^{n}$-contained in $K^{*}_{j}$}\,,\qquad\qquad\text{(if (5.9) holds)}\,, \tag{5.16}\] \[T[s_{j}] \text{ is $\mathcal{H}^{n}$-contained in $K^{*}_{j}$}\,,\qquad\qquad\text{(if (5.11) holds)}\,. \tag{5.17}\]
_Introducing \(\{U_{i}\}_{i}\) and \(K^{*}\):_ By (5.1) and Lemma 2.3, up to extracting a subsequence we can find a Lebesgue partition \(\{U_{i}\}_{i}\) of \(T\) such that
\[\{U_{i}\}_{i}\text{ is the limit of $\{\{U_{i}^{j}\}_{i}\}_{j}$ in the sense specified by (2.8) in Lemma 2.3}\,. \tag{5.18}\]
Correspondingly we set
\[K^{*}=T\cap\bigcup_{i}\partial^{*}U_{i}\,. \tag{5.19}\]
Having introduced \(s_{0}\), \(s_{j}\), \(\{U_{i}^{j}\}_{i}\), \(K^{*}_{j}\), \(\{U_{i}\}_{i}\), and \(K^{*}\), we notice that if (5.9) holds, then we can apply Theorem 4.1 with \(F_{j}=E_{j}\) and find that
\[T[s_{0}]\cap E^{(0)}\text{ is $\mathcal{H}^{n}$-contained in $K^{*}$}\,,\qquad\text{(if (5.9) holds)}\,; \tag{5.20}\]
if, instead, (5.11) holds, then Theorem 4.1 can be applied with \(F_{j}=F=\varnothing\) to deduce
\[T[s_{0}]\text{ is $\mathcal{H}^{n}$-contained in $K^{*}$}\,,\qquad\text{(if (5.11) holds)}\,. \tag{5.21}\]
We now make the following claim:
**Claim:** We have
\[K^{*}\setminus(T[s_{0}]\cup E^{(1)})\text{ is $\mathcal{H}^{n}$-contained in $K_{\rm bk}$}\,, \tag{5.22}\] \[K^{*}\setminus T[s_{0}]\text{ is $\mathcal{H}^{n}$-contained in $K_{\rm bd}$}\,. \tag{5.23}\]
The rest of the proof of the theorem is then divided into two parts: the deduction of the conclusions from the claim, and the proof of the claim itself.
**II. Conclusion of the proof from the claim:**_Proof that_ (5.11) _implies_ (5.12): By \(\mathcal{H}^{1}(\mathbb{S}^{1}\setminus J)=0\), the arbitrariness of \(s_{0}\in J\), and that of \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\), thanks to Theorem 1.3 we can conclude that \(K_{\rm bd}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) by showing that
\[T[s_{0}]\text{ is $\mathcal{H}^{n}$-contained in $\text{UBEP}(K_{\rm bd}\cup T[s_{0}];T)$}. \tag{5.24}\]
Now, since \(\{U_{i}\}_{i}\) is a Lebesgue partition of \(T\) induced by \(K^{*}\) (in the very tautological sense that \(K^{*}\) is defined as \(T\cap\cup_{i}\partial^{*}U_{i}\)!) and, by (5.23) in the claim, \(K^{*}\) is \(\mathcal{H}^{n}\)-contained in \(K_{\rm bd}\cup T[s_{0}]\), by Theorem 2.1-(a) we have that if \(\{Z_{i}\}_{i}\) is the essential partition of \(T\) induced by \(K_{\rm bd}\cup T[s_{0}]\), then \(\cup_{i}\partial^{*}U_{i}\) is \(\mathcal{H}^{n}\)-contained in \(\cup_{i}\partial^{*}Z_{i}\): therefore, by definition of \(K^{*}\) and by definition of UBEP, we have that
\[K^{*}\text{ is $\mathcal{H}^{n}$-contained in $\text{UBEP}\big{(}K_{\rm bd}\cup T[s_{0}];T\big{)}$}\,. \tag{5.25}\]
By combining (5.25) with (5.21) we immediately deduce (5.24) and conclude.
_Proof that_ (5.9) _implies_ (5.10): Thanks to Theorem 3.1 it suffices to prove that
\[T[s_{0}]\cap E^{(0)}\text{ is $\mathcal{H}^{n}$-contained in $\text{UBEP}(K_{\rm bk}\cup T[s_{0}];T)$}\,. \tag{5.26}\]
By (5.20), the proof of (5.26) can be reduced to that of
\[K^{*}\cap E^{(0)}\text{ is $\mathcal{H}^{n}$-contained in $\text{UBEP}(K_{\rm bk}\cup T[s_{0}];T)$}\,. \tag{5.27}\]
Now, let us consider the Lebesgue partition of \(T\) defined by \(\{V_{k}\}_{k}=\{U_{i}\setminus E\}_{i}\cup\{T\cap E\}\). By [10, Theorem 16.3] we easily see that for each \(i\)
\[E^{(0)}\cap\partial^{*}U_{i}\overset{\mathcal{H}^{n}}{\subset}\partial^{*}(U _{i}\setminus E)\overset{\mathcal{H}^{n}}{\subset}\left(E^{(0)}\cap\partial^ {*}U_{i}\right)\cup\partial^{*}E\,, \tag{5.28}\]
which combined with \(T\cap\partial^{*}(T\cap E)=T\cap\partial^{*}E\subset K_{\rm bk}\) and with (5.22) in the claim, gives
\[T\cap\bigcup_{k}\partial^{*}V_{k} = (T\cap\partial^{*}E)\cup\Big{\{}T\cap\bigcup_{i}\partial^{*}(U_{i} \setminus E)\Big{\}}\overset{\mathcal{H}^{n}}{\subset}(T\cap\partial^{*}E) \cup\left(E^{(0)}\cap K^{*}\right)\]
\[\stackrel{{\mathcal{H}^{n}}}{{\subset}}\ \ (T\cap\partial^{*}E)\cup\, \left(K^{*}\setminus E^{{(1)}}\right)\stackrel{{\mathcal{H}^{n}}}{{ \subset}}K_{\rm bk}\cup T[s_{0}]\,. \tag{5.29}\]
By (5.29) we can exploit Theorem 2.1-(a) to conclude that
\[T\cap\bigcup_{k}\partial^{*}V_{k}\ \text{is $\mathcal{H}^{n}$-contained in $\operatorname{UBEP}(K_{\rm bk}\cup T[s_{0}];T)$}\,. \tag{5.30}\]
By the first inclusion in (5.28), \(E^{(0)}\cap K^{*}\) is \(\mathcal{H}^{n}\)-contained in \(T\cap\bigcup_{k}\partial^{*}V_{k}\), and therefore (5.30) implies (5.27), as required. We are thus left to prove the claim.
**III. Proof of the claim:** We finally prove that \(K^{*}\setminus(T[s_{0}]\cup E^{{(1)}})\) is \(\mathcal{H}^{n}\)-contained in \(K_{\rm bk}\) (that is (5.22)), and that \(K^{*}\setminus T[s_{0}]\) is \(\mathcal{H}^{n}\)-contained in \(K_{\rm bd}\) (that is (5.23)).
To this end, repeating the argument in the proof of Theorem 4.1 with \(F_{j}=E_{j}\) and \(F=E\) we see that, if we set \(X_{m}^{j}=\{i:(U_{i}^{j})^{{(1)}}\subset E_{j}^{{ (m)}}\}\) and \(X_{m}=\{i:U_{i}^{{(1)}}\subset E^{{ (m)}}\}\) for \(m\in\{0,1\}\) (see (4.6) and (4.7)), then
\[X^{j}:=\{i:|U_{i}^{j}|>0\}=X_{0}^{j}\cup X_{1}^{j}\,,\qquad X:=\{i:|U_{i}|>0\} =X_{0}\cup X_{1}\,; \tag{5.31}\]
and, moreover, for every \(i\) there is \(j(i)\) such that \(i\in X_{m}\) implies \(i\in X_{m}^{j}\) for every \(j\geq j(i)\). Thanks to (5.31) we easily see that \(K_{j}^{*}=T\cap\cup_{i}\partial^{*}U_{i}^{j}\) can be decomposed as
\[K_{j}^{*}\stackrel{{\mathcal{H}^{n}}}{{=}}\bigcup_{(i,k)\in X_{0}^{j}\times X_{0}^{j}\,,i\neq k}M_{ik}^{j}\cup\bigcup_{(i,k)\in X_{1}^{j}\times X_{1}^{j}\,,i\neq k}M_{ik}^{j}\cup\bigcup_{(i,k)\in X_{0}^{j}\times X_{1}^{j}}M_{ik}^{j}\,, \tag{5.32}\]
where \(M_{ik}^{j}=T\cap\partial^{*}U_{i}^{j}\cap\partial^{*}U_{k}^{j}\) (an analogous decomposition of \(K^{*}\) holds as well, and will be used in the following, but is not explicitly written for the sake of brevity). We now prove that
\[M_{ik}^{j}\subset E_{j}^{{(0)}}\,,\qquad\forall i,k\in X_{0}^{j} \,,i\neq k\,, \tag{5.33}\] \[M_{ik}^{j}\subset\partial^{e}E_{j}\,,\qquad\forall i\in X_{0}^{j }\,,k\in X_{1}^{j}\,,\] (5.34) \[M_{ik}^{j}\subset E_{j}^{{(1)}}\,,\qquad\forall i,k\in X_{1}^{j }\,,i\neq k\,. \tag{5.35}\]
_To prove (5.33) and (5.35)_: if \(i\neq k\), \(i,k\in X_{0}^{j}\), and \(x\in M_{ik}^{j}\), then (by \(|U_{i}^{j}\cap U_{k}^{j}|=0\)) \(U_{i}^{j}\) and \(U_{k}^{j}\) blow up two complementary half-spaces at \(x\); this information, combined with the \(\mathcal{L}^{n+1}\)-inclusion of \(U_{i}^{j}\cup U_{k}^{j}\) in \(\mathbb{R}^{n+1}\setminus E_{j}\), implies
\[|B_{r}(x)|+{\rm o}(r^{n+1})=|B_{r}(x)\cap U_{i}^{j}|+|B_{r}(x)\cap U_{k}^{j}| \leq|B_{r}(x)\setminus E_{j}|\,,\]
that is, \(x\in E_{j}^{{(0)}}\), thus proving (5.33); the proof of (5.35) is analogous.
_To prove (5.34)_: if \(i\in X_{0}^{j}\), \(k\in X_{1}^{j}\), and \(x\in M_{ik}^{j}\), then
\[|B_{r}(x)\cap E_{j}|\geq|B_{r}(x)\cap U_{k}^{j}|=\frac{|B_{r}(x)|}{2}+{\rm o} (r^{n+1})\,,\]
\[|B_{r}(x)\setminus E_{j}|\geq|B_{r}(x)\cap U_{i}^{j}|=\frac{|B_{r}(x)|}{2}+{ \rm o}(r^{n+1})\,,\]
so that \(x\not\in E_{j}^{{(0)}}\) and \(x\not\in E_{j}^{{(1)}}\), i.e. \(x\in\partial^{e}E_{j}\), that is (5.34).
With (5.33)-(5.35) at hand, we now prove that
\[T\cap\partial^{*}E_{j}\stackrel{{\mathcal{H}^{n}}}{{=}}\bigcup_{( i,k)\in X_{0}^{j}\times X_{1}^{j}}M_{ik}^{j}\,, \tag{5.36}\]
\[K_{j}^{*}\cap E_{j}^{{(0)}}\stackrel{{\mathcal{H}^{n}}}{{=}}\bigcup_ {(i,k)\in X_{0}^{j}\times X_{0}^{j}\,,k\neq i}M_{ik}^{j}\,. \tag{5.37}\]
(Analogous relations hold with \(K^{*}\) and \(E\) in place of \(K_{j}^{*}\) and \(E_{j}\).)
_To prove (5.36)_: By \(\partial^{*}E_{j}\subset\partial^{e}E_{j}\) and (4.4) we find \(\partial^{*}E_{j}\cap(U^{j}_{i})^{{}_{(1)}}=\varnothing\) for every \(i,j\); hence, since \(\{(U^{j}_{i})^{{}_{(1)}}\}_{i}\cup\{\partial^{*}U^{j}_{i}\}_{i}\) is an \(\mathcal{H}^{n}\)-partition of \(T\), and by repeatedly applying (5.33), (5.34) and (5.35), we find
\[\bigcup_{(i,k)\in X^{j}_{0}\times X^{j}_{1}}M^{j}_{ik} \stackrel{{\mathcal{H}^{n}}}{{\subset}} T\cap\partial^{*}E_{j}\stackrel{{\mathcal{H}^{n}}}{{=}} \bigcup_{i}\big(T\cap\partial^{*}E_{j}\cap\partial^{*}U^{j}_{i}\big)\stackrel{{\mathcal{H}^{n}}}{{=}}\bigcup_{i,k}M^{j}_{ik}\cap\partial^{*}E_{j}\] \[\stackrel{{\mathcal{H}^{n}}}{{=}} \bigcup_{(i,k)\in X^{j}_{0}\times X^{j}_{1}}M^{j}_{ik}\cap\partial^{*}E_{j}\,,\]
which gives (5.36).
_To prove (5.37)_: By (5.33), (5.34), and (5.35), \(M^{j}_{ik}\) has empty intersection with \(E^{{}_{(0)}}_{j}\) unless \(i,k\in X^{j}_{0}\), in which case \(M^{j}_{ik}\) is \(\mathcal{H}^{n}\)-contained in \(E^{{}_{(0)}}_{j}\): hence,
\[\bigcup_{(i,k)\in X^{j}_{0}\times X^{j}_{0},k\neq i}M^{j}_{ik}\stackrel{{ \mathcal{H}^{n}}}{{\subset}}K^{*}_{j}\cap E^{{}_{(0)}}_{j}= \bigcup_{(i,k)\in X^{j}_{0}\times X^{j}_{0},\,k\neq i}E^{{}_{(0)}}_{j}\cap M^{ j}_{ik}\,,\]
that is (5.37).
With (5.36) and (5.37) at hand, we now prove the following perimeter formulas: for every open set \(A\subset T\) and every \(j\),
\[\sum_{i\in X^{j}_{0}}P(U^{j}_{i};A)=\mathcal{H}^{n}\bigl{(}A\cap \partial^{*}E_{j}\bigr{)}+2\,\mathcal{H}^{n}\bigl{(}A\cap K^{*}_{j}\cap E^{{} _{(0)}}_{j}\bigr{)}\,, \tag{5.38}\] \[\sum_{i\in X^{j}_{1}}P(U^{j}_{i};A)=\mathcal{H}^{n}\bigl{(}A\cap \partial^{*}E_{j}\bigr{)}+2\,\mathcal{H}^{n}\bigl{(}A\cap K^{*}_{j}\cap E^{{} _{(1)}}_{j}\bigr{)}\,. \tag{5.39}\]
Analogously, for \(\alpha=0,1\),
\[\sum_{i\in X_{\alpha}}P(U_{i};A)=\mathcal{H}^{n}\bigl{(}A\cap\partial^{*}E \bigr{)}+2\,\mathcal{H}^{n}\bigl{(}A\cap K^{*}\cap E^{{}_{(\alpha)}}\bigr{)}\,. \tag{5.40}\]
_To prove (5.38) and (5.39)_: Indeed, by (5.36) and (5.37),
\[\sum_{i\in X^{j}_{0}}P(U^{j}_{i};A) = \sum_{(i,k)\in X^{j}_{0}\times X^{j}_{1}}\mathcal{H}^{n}(A\cap M^{j}_{ik})+\sum_{i\in X^{j}_{0}}\sum_{k\in X^{j}_{0}\setminus\{i\}}\mathcal{H}^{n}(A\cap M^{j}_{ik})\] \[= \mathcal{H}^{n}\Big(\bigcup_{(i,k)\in X^{j}_{0}\times X^{j}_{1}}A\cap M^{j}_{ik}\Big)+2\,\mathcal{H}^{n}\Big(\bigcup_{(i,k)\in X^{j}_{0}\times X^{j}_{0},\,i\neq k}A\cap M^{j}_{ik}\Big)\] \[= \mathcal{H}^{n}(A\cap\partial^{*}E_{j})+2\,\mathcal{H}^{n}\big(A\cap K^{*}_{j}\cap E^{(0)}_{j}\big)\,,\]
that is (5.38). The proof of (5.39) is analogous (since (5.39) is (5.38) applied to the complements of the \(E_{j}\)'s - recall indeed that \(\Omega\cap\partial^{*}E_{j}=\Omega\cap\partial^{*}(\Omega\setminus E_{j})\)).
_Conclusion of the proof of (5.22) in the claim_: We want to prove that \(K^{*}\setminus(T[s_{0}]\cup E^{{}_{(1)}})\) is \(\mathcal{H}^{n}\)-contained in \(K_{\rm bk}\). Since \(\{E^{{}_{(0)}},E^{{}_{(1)}},\partial^{*}E\}\) is an \(\mathcal{H}^{n}\)-partition of \(\Omega\), and \(\Omega\cap\partial^{*}E\) is contained in \(K_{\rm bk}\), looking back at the definition (5.4) of \(K_{\rm bk}\) it is enough to show that
\[\theta^{n}_{*}(\mu_{\rm bk})(x)\geq 2\ \text{for}\ \mathcal{H}^{n}\text{-a.e.}\ x \in(K^{*}\cap E^{{}_{(0)}})\setminus T[s_{0}]\,. \tag{5.41}\]
To this end, we begin noticing that, if \(Y_{0}\) is an arbitrary finite subset of \(X_{0}\), then there is \(j(Y_{0})\) such that \(Y_{0}\subset X^{j}_{0}\) for every \(j\geq j(Y_{0})\); correspondingly,
\[\sum_{i\in Y_{0}}P(U_{i};A)\leq\liminf_{j\to\infty}\sum_{i\in Y_{0}}P(U^{j}_{i} ;A)\leq\liminf_{j\to\infty}\sum_{i\in X^{j}_{0}}P(U^{j}_{i};A)\,.\]
By the arbitrariness of \(Y_{0}\), (5.40) with \(\alpha=0\), (5.38), and (4.11) (notice that the \(\mathcal{H}^{n}\)-containment of the \(\mathcal{H}^{n}\)-rectifiable set \(K_{j}^{*}\) in \(K_{j}\cup T[s_{j}]\) is equivalent to its \(\mathcal{H}^{n}\)-containment in \(\mathcal{R}(K_{j}\cup T[s_{j}])=\mathcal{R}(K_{j})\cup T[s_{j}]\)) we conclude that, if \(A\subset T\) is open and such that \(\operatorname{cl}\left(A\right)\cap T[s_{0}]=\varnothing\), so that \(A\cap T[s_{j}]=\varnothing\) for \(j\) large enough, then
\[\mathcal{H}^{n}\big{(}A\cap\partial^{*}E\big{)}+2\,\mathcal{H}^{ n}\big{(}A\cap K^{*}\cap E^{{(0)}}\big{)}\] \[=\sum_{i\in X_{0}}P(U_{i};A)\leq\liminf_{j\to\infty}\sum_{i\in X _{0}^{j}}P(U_{i}^{j};A)\] \[=\liminf_{j\to\infty}\mathcal{H}^{n}\big{(}A\cap\partial^{*}E_{ j}\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap K_{j}^{*}\cap E_{j}^{{(0)}}\big{)}\] \[\leq\liminf_{j\to\infty}\mathcal{H}^{n}\big{(}A\cap\partial^{*}E_ {j}\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap\big{(}\mathcal{R}(K_{j})\cup T[s_{j} ]\big{)}\cap E_{j}^{{(0)}}\big{)}\] \[=\liminf_{j\to\infty}\mathcal{H}^{n}\big{(}A\cap\partial^{*}E_{ j}\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap\mathcal{R}(K_{j})\cap E_{j}^{{(0)}} \big{)}\leq\mu_{\mathrm{bk}}(\operatorname{cl}\left(A\right)), \tag{5.42}\]
where we have used the definition (5.2) of \(\mu_{\mathrm{bk}}\). Now, if \(x\in(K^{*}\cap E^{(0)})\setminus T[s_{0}]\), then we can apply (5.42) with \(A=B_{s}(x)\) and \(s>0\) such that \(\operatorname{cl}\left(B_{s}(x)\right)\cap T[s_{0}]=\varnothing\), together with the fact that \(x\in E^{(0)}\) implies \(\mathcal{H}^{n}(B_{s}(x)\cap\partial^{*}E)=\operatorname{o}(s^{n})\) as \(s\to 0^{+}\), to conclude that
\[\mu_{\mathrm{bk}}(\operatorname{cl}\left(B_{s}(x)\right))\geq 2\,\mathcal{H}^{n} \big{(}B_{s}(x)\cap K^{*}\cap E^{{(0)}}\big{)}+ \operatorname{o}(s^{n})\,,\qquad\text{as }s\to 0^{+}\,. \tag{5.43}\]
Since \(K^{*}\cap E^{{(0)}}\) is an \(\mathcal{H}^{n}\)-rectifiable set, and thus \(\mathcal{H}^{n}\big{(}B_{s}(x)\cap K^{*}\cap E^{{(0)}} \big{)}=\omega_{n}\,s^{n}+\operatorname{o}(s^{n})\) for \(\mathcal{H}^{n}\)-a.e. \(x\in K^{*}\cap E^{{(0)}}\), we deduce (5.41) from (5.43).
_Conclusion of the proof of (5.23) in the claim_: We want to prove the \(\mathcal{H}^{n}\)-containment of \(K^{*}\setminus T[s_{0}]\) in \(K_{\mathrm{bd}}\). As in the proof of (5.22), combining Federer's theorem (1.37) with the definition (5.5) of \(K_{\mathrm{bd}}\), we are left to prove that
\[\theta_{*}^{n}(\mu_{\mathrm{bd}})(x)\geq 2\text{ for }\mathcal{H}^{n}\text{-a.e. }x\in K^{*} \setminus(T[s_{0}]\cup\partial^{*}E)\,. \tag{5.44}\]
As proved in (5.42), if \(A\subset T\) is open and such that \(\operatorname{cl}\left(A\right)\cap T[s_{0}]=\varnothing\), then by exploiting (5.38) and (5.40) with \(\alpha=0\) we have
\[\mathcal{H}^{n}\big{(}A\cap\partial^{*}E\big{)}+2\,\mathcal{H}^{ n}\big{(}A\cap K^{*}\cap E^{{(0)}}\big{)} \tag{5.45}\] \[\leq\liminf_{j\to\infty}\mathcal{H}^{n}\big{(}A\cap\partial^{*}E_ {j}\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap\mathcal{R}(K_{j})\cap E_{j}^{{(0)}} \big{)}\,;\]
the same argument, this time based on (5.39) and (5.40) with \(\alpha=1\), also gives
\[\mathcal{H}^{n}\big{(}A\cap\partial^{*}E\big{)}+2\,\mathcal{H}^{ n}\big{(}A\cap K^{*}\cap E^{{(1)}}\big{)} \tag{5.46}\] \[\leq\liminf_{j\to\infty}\mathcal{H}^{n}\big{(}A\cap\partial^{*}E_ {j}\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap\mathcal{R}(K_{j})\cap E_{j}^{{(1)}} \big{)}\,;\]
and, finally, since \(\Omega\setminus\partial^{*}E\) is \(\mathcal{H}^{n}\)-equivalent to \(\Omega\cap(E^{{(0)}}\cup E^{{(1)}})\), the combination of (5.45) and (5.46) gives
\[\mathcal{H}^{n}\big{(}A\cap\partial^{*}E\big{)}+2\,\mathcal{H}^{ n}\big{(}A\cap K^{*}\setminus\partial^{*}E\big{)} \tag{5.47}\] \[\leq\liminf_{j\to\infty}\mathcal{H}^{n}\big{(}A\cap\partial^{*}E_ {j}\big{)}+2\,\mathcal{H}^{n}\big{(}A\cap\mathcal{R}(K_{j})\setminus\partial^{* }E_{j}\big{)}\leq\mu_{\mathrm{bd}}(\operatorname{cl}\left(A\right)),\]
where we have used the definition (5.3) of \(\mu_{\mathrm{bd}}\). Now, for \(\mathcal{H}^{n}\)-a.e. \(x\in K^{*}\setminus(T[s_{0}]\cup\partial^{*}E)\) we have \(\mathcal{H}^{n}(B_{r}(x)\cap\partial^{*}E)=\operatorname{o}(r^{n})\) and \(\mathcal{H}^{n}(B_{r}(x)\cap K^{*}\setminus\partial^{*}E)=\omega_{n}\,r^{n}+ \operatorname{o}(r^{n})\) as \(r\to 0^{+}\), as well as \(\operatorname{cl}\left(B_{r}(x)\right)\cap T[s_{0}]=\varnothing\) for \(r\) small enough, so that (5.47) with \(A=B_{r}(x)\) readily implies (5.44). The proof of the claim, and thus of the theorem, is now complete.
### 5.2. A second closure theorem
We now present a variant of the main arguments of this section, leading to an alternative closure theorem to Theorem 1.4. As already noticed, this second closure theorem, Theorem 5.1 below, will play a role only in the companion paper [14], where Plateau's laws will be studied in relation to the Allen-Cahn equation, so that this section can be omitted on a first reading focused on Gauss' capillarity theory alone.
To introduce Theorem 5.1, let us consider the following question: given an \(\mathcal{H}^{n}\)-finite set \(S\) which is \(\mathcal{C}\)-spanning \(\mathbf{W}\), _what parts of \(S\) are essential to its \(\mathcal{C}\)-spanning property_? We already know from Lemma 2.2 that the unrectifiable part of \(S\) is not necessary, since \(\mathcal{R}(S)\) is also \(\mathcal{C}\)-spanning. However, some parts of \(\mathcal{R}(S)\) could be discarded too - indeed rectifiable sets can be "porous at every scale", and thus completely useless from the point of view of achieving \(\mathcal{C}\)-spanning. To give an example, consider the rectifiable set \(P\subset\mathbb{R}^{2}\) obtained by removing from \([0,1]\) all the intervals \((q_{i}-\varepsilon_{i},q_{i}+\varepsilon_{i})\), where \(\{q_{i}\}_{i}\) are the rational numbers in \([0,1]\) and \(2\sum_{i}\varepsilon_{i}=\varepsilon\) for some given \(\varepsilon\in(0,1)\): it is easily seen that \(P\) is a rectifiable set with positive \(\mathcal{H}^{1}\)-measure in \(\mathbb{R}^{2}\), contained in \(\mathbb{R}\times\{0\}\), which fails to essentially disconnect any strip of the form \((a,b)\times\mathbb{R}\) with \((a,b)\subset\subset(0,1)\). Intuitively, if a set like \(P\) stands as an isolated portion of \(S\), then \(\mathcal{R}(S)\setminus P\) should still be \(\mathcal{C}\)-spanning.
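For the reader's convenience, here is a quick check of the measure assertion, under the normalization \(2\sum_{i}\varepsilon_{i}=\varepsilon\) fixed above: by countable subadditivity of \(\mathcal{H}^{1}\),

\[\mathcal{H}^{1}(P)\geq\mathcal{H}^{1}([0,1])-\sum_{i}\mathcal{H}^{1}\big{(}(q_{i}-\varepsilon_{i},q_{i}+\varepsilon_{i})\big{)}=1-2\,\sum_{i}\varepsilon_{i}=1-\varepsilon>0\,,\]

while the density of \(\{q_{i}\}_{i}\) in \([0,1]\) shows that \(P\) contains no non-trivial interval.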
We can formalize this idea as follows. Denoting as usual \(\Omega=\mathbb{R}^{n+1}\setminus\mathbf{W}\), we consider the open covering \(\{\Omega_{k}\}_{k}\) of \(\Omega\) defined by
\[\{\Omega_{k}\}_{k}=\{B_{r_{mh}}(x_{m})\}_{m,h}\,, \tag{5.48}\]
where \(\{x_{m}\}_{m}=\mathbb{Q}^{n+1}\cap\Omega\) and \(\{r_{mh}\}_{h}=\mathbb{Q}\cap(0,\operatorname{dist}(x_{m},\partial\Omega))\). For every \(\mathcal{H}^{n}\)-finite set \(S\) we define the **essential spanning part of \(S\) in \(\Omega\)** as the Borel set
\[\operatorname{ESP}(S)=\bigcup_{k}\,\operatorname{UBEP}(S;\Omega_{k})=\bigcup _{k}\,\left\{\Omega_{k}\cap\bigcup_{i}\partial^{*}U_{i}[\Omega_{k}]\right\},\]
where \(\{U_{i}[\Omega_{k}]\}_{i}\) denotes the essential partition of \(\Omega_{k}\) induced by \(S\). Since each \(\operatorname{UBEP}(S;\Omega_{k})\) is a countable union of reduced boundaries and is \(\mathcal{H}^{n}\)-contained in the \(\mathcal{H}^{n}\)-finite set \(S\), we see that \(\operatorname{ESP}(S)\) is always \(\mathcal{H}^{n}\)-rectifiable. The idea is that by following the unions of boundaries of essential partitions induced by \(S\) over smaller and smaller balls we are capturing all the parts of \(S\) that may potentially contribute to achieving a spanning condition with respect to \(\mathbf{W}\). Thinking about Figure 1.5: the tendrils of \(S\) appearing in panel (a) and not captured by \(\operatorname{UBEP}(S;U)\) will eventually be included into \(\operatorname{ESP}(S)\) by considering \(\operatorname{UBEP}\)'s of \(S\) relative to suitable subsets of \(U\). Another way to visualize the construction of \(\operatorname{ESP}(S)\) is noticing that if \(B_{r}(x)\subset B_{s}(x)\subset\Omega\), then
\[B_{r}(x)\cap\operatorname{UBEP}(S;B_{s}(x))\subset\operatorname{UBEP}(S;B_{r} (x))\,,\]
which points to the monotonicity property behind the construction of \(\operatorname{ESP}(S)\). Intuitively, we expect that
\[\text{if $S$ is $\mathcal{C}$-spanning $\mathbf{W}$, then $\operatorname{ESP}(S)$ is $\mathcal{C}$-spanning $\mathbf{W}$} \tag{5.49}\]
(where \(\mathcal{C}\) is an arbitrary spanning class for \(\mathbf{W}\)). This fact will be proved in a moment as a particular case of Theorem 5.1 below.
Next, we introduce the notion of convergence behind our second closure theorem. Consider a sequence \(\{S_{j}\}_{j}\) of Borel subsets of \(\Omega\) such that \(\sup_{j}\mathcal{H}^{n}(S_{j})<\infty\). If we denote by \(\{U_{i}^{j}[\Omega_{k}]\}_{i}\) the essential partition induced on \(\Omega_{k}\) by \(S_{j}\), then a diagonal argument based on Lemma 2.3 shows the existence of a (not relabeled) subsequence in \(j\), and, for each \(k\), of a Borel partition \(\{U_{i}[\Omega_{k}]\}_{i}\) of \(\Omega_{k}\) such that \(\{U_{i}^{j}[\Omega_{k}]\}_{i}\) converges to \(\{U_{i}[\Omega_{k}]\}_{i}\) as \(j\to\infty\) in the sense specified by (2.8). Since \(\operatorname{UBEP}(S_{j};\Omega_{k})=\Omega_{k}\cap\bigcup_{i}\partial^{*}U _{i}^{j}[\Omega_{k}]\), we call any set \(S\) of
the form
Footnote 11: The limit partition \(\{U_{i}[\Omega_{k}]\}_{i}\) appearing in (5.50) may not be the essential partition induced by \(S\) on \(\Omega_{k}\) since the individual \(U_{i}[\Omega_{k}]\), arising as \(L^{1}\)-limits, may fail to be essentially connected. This said, \(\{U_{i}[\Omega_{k}]\}_{i}\) is automatically a partition of \(\Omega_{k}\) induced by \(S_{0}\).
\[S=\bigcup_{k}\,\Big{\{}\Omega_{k}\cap\bigcup_{i}\partial^{*}U_{i}[\Omega_{k}] \Big{\}}\,, \tag{5.50}\]
a **subsequential partition limit of \(\{S_{j}\}_{j}\) in \(\Omega\)**. Having in mind (5.49), it is natural to ask if the following property holds:
\[\text{if }S_{j}\text{ is }\mathcal{C}\text{-spanning }\mathbf{W}\text{ for each }j\,,\text{ and }S\text{ is a subsequential partition limit of }\{S_{j}\}_{j}\text{ in }\Omega\,,\text{ then }S\text{ is }\mathcal{C}\text{-spanning }\mathbf{W}\,. \tag{5.51}\]
Our next theorem implies both (5.49) and (5.51) as particular cases (corresponding to taking \(E_{j}=\varnothing\) and, respectively, \(K_{j}=S\) and \(K_{j}=S_{j}\) for every \(j\)).
**Theorem 5.1** (Closure theorem for subsequential partition limits).: _Let \(\mathbf{W}\) be a closed set in \(\mathbb{R}^{n+1}\), \(\mathcal{C}\) a spanning class for \(\mathbf{W}\), and \(\{(K_{j},E_{j})\}_{j}\) a sequence in \(\mathcal{K}_{\mathrm{B}}\) such that \(\sup_{j}\mathcal{H}^{n}(K_{j})<\infty\) and \(K_{j}\cup E_{j}^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) for every \(j\)._
_If \(S_{0}\) and \(E_{0}\) are, respectively, a subsequential partition limit of \(\{K_{j}\}_{j}\) in \(\Omega\) and an \(L^{1}\)-subsequential limit of \(\{E_{j}\}_{j}\) (corresponding to a same not relabeled subsequence in \(j\)), and we set_
\[K_{0}=(\Omega\cap\partial^{*}E_{0})\cup S_{0}\,,\]
_then \((K_{0},E_{0})\in\mathcal{K}_{\mathrm{B}}\) and \(K_{0}\cup E_{0}^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\)._
Proof.: Since \(\Omega\cap\partial^{*}E_{0}\subset K_{0}\) by definition of \(K_{0}\), we trivially have \((K_{0},E_{0})\in\mathcal{K}_{\mathrm{B}}\). Aiming to prove that \(K_{0}\cup E_{0}^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), we fix \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\), and define \(s_{0}\), \(s_{j}\), \(\{U_{i}^{j}\}_{i}\) and \(\{U_{i}\}_{i}\) exactly as in part I of the proof of Theorem 1.4. Thanks to Theorem 4.1 and by arguing as in part II of the proof of Theorem 1.4, we are reduced to proving that
\[K^{*}\setminus(T[s_{0}]\cup E^{{(1)}})\text{ is }\mathcal{H}^{n}\text{-contained in }K_{0}\,. \tag{5.52}\]
By Federer's theorem (1.37) and since \(\Omega\cap\partial^{*}E\subset K_{0}\) it is enough to prove
\[(K^{*}\cap E^{({}^{0})})\setminus T[s_{0}]\text{ is }\mathcal{H}^{n}\text{-contained in }S_{0}\,,\]
and, thanks to the construction of \(S_{0}\), we shall actually be able to prove
\[K^{*}\setminus T[s_{0}]\text{ is }\mathcal{H}^{n}\text{-contained in }S_{0}\,. \tag{5.53}\]
To this end let us pick \(k\) such that \(\Omega_{k}\subset\subset T\) and \(\Omega_{k}\cap T[s_{0}]=\emptyset\). Then, for \(j\geq j(k)\), we have \(\Omega_{k}\cap T[s_{j}]=\varnothing\), so that
\[\Omega_{k}\cap\text{UBEP}\big{(}K_{j}\cup T[s_{j}];T\big{)}\subset\text{UBEP }\big{(}K_{j}\cup T[s_{j}];\Omega_{k}\big{)}=\text{UBEP}\big{(}K_{j};\Omega_{ k}\big{)}\,.\]
Since \(\{U_{i}^{j}\}_{i}\) is the essential partition of \(T\) induced by \(K_{j}\cup T[s_{j}]\), if \(\{U_{m}^{j}[\Omega_{k}]\}_{m}\) is the essential partition of \(\Omega_{k}\) induced by \(K_{j}\), we have just shown that, for every \(i\) and \(j\geq j(k)\),
\[\Omega_{k}\cap\partial^{*}U_{i}^{j}\subset\Omega_{k}\cap\bigcup_{m}\partial^{* }U_{m}^{j}[\Omega_{k}]\,. \tag{5.54}\]
Since \(\{U_{m}^{j}[\Omega_{k}]\}_{m}\) is a Lebesgue partition of \(\Omega_{k}\) into essentially connected sets, by (5.54) the indecomposable components of \(\Omega_{k}\cap U_{i}^{j}\) must belong to \(\{U_{m}^{j}[\Omega_{k}]\}_{m}\). In other words, for each \(i\) and each \(j\geq j(k)\) there is \(M(k,i,j)\) such that
\[\Omega_{k}\cap U_{i}^{j}=\bigcup_{m\in M(k,i,j)}U_{m}^{j}[\Omega_{k}]\,.\]
As a consequence of \(U_{i}^{j}\to U_{i}\) and of \(U_{m}^{j}[\Omega_{k}]\to U_{m}[\Omega_{k}]\) as \(j\to\infty\) we find that, for a set of indices \(M(k,i)\), it must be
\[\Omega_{k}\cap U_{i}=\bigcup_{m\in M(k,i)}U_{m}[\Omega_{k}]\,,\]
and therefore
\[\Omega_{k}\cap\partial^{*}U_{i}\stackrel{{\mathcal{H}^{n}}}{{ \subset}}\bigcup_{m\in M(k,i)}\partial^{*}U_{m}[\Omega_{k}]\subset S_{0}\,.\]
Since we have proved this inclusion for every \(i\) and for every \(k\) such that \(\Omega_{k}\subset\subset T\) with \(\Omega_{k}\cap T[s_{0}]=\emptyset\), it follows that \(K^{*}\setminus T[s_{0}]\) is \(\mathcal{H}^{n}\)-contained in \(S_{0}\), that is (5.53).
## 6. Existence of minimizers and convergence to Plateau's problem (Theorem 1.5)
In this section we prove two main results: the first one (Theorem 6.1) concerns the equivalence of the Harrison-Pugh Plateau problem \(\ell\) with its measure-theoretic reformulation \(\ell_{\rm B}\) (see (1.21)); the second one (Theorem 6.2) is a refined version of Theorem 1.5.
**Theorem 6.1** (Existence for \(\ell_{\rm B}\) and \(\ell=\ell_{\rm B}\)).: _If \({\bf W}\subset\mathbb{R}^{n+1}\) is closed, \(\mathcal{C}\) is a spanning class for \({\bf W}\), and the Harrison-Pugh formulation of the Plateau problem_
\[\ell=\inf\big{\{}\mathcal{H}^{n}(S):S\text{ is a closed subset of }\Omega\text{, }S\text{ is }\mathcal{C}\text{-spanning }{\bf W}\big{\}}\]
_is finite, then the problem_
\[\ell_{\rm B}=\inf\big{\{}\mathcal{H}^{n}(S):S\text{ is a Borel subset of }\Omega\text{, }S\text{ is }\mathcal{C}\text{-spanning }{\bf W}\big{\}}\]
_admits minimizers, and given any minimizer \(S\) for \(\ell_{\rm B}\), there exists a relatively closed set \(S^{*}\) which is \(\mathcal{H}^{n}\)-equivalent to \(S\) and a minimizer for \(\ell\). In particular, \(\ell=\ell_{\rm B}\)._
**Theorem 6.2** (Theorem 1.5 refined).: _If \({\bf W}\) is a compact set in \(\mathbb{R}^{n+1}\) and \(\mathcal{C}\) is a spanning class for \({\bf W}\) such that \(\ell<\infty\), then for every \(v>0\) there exist minimizers \((K,E)\) of \(\Psi_{\rm bk}(v)\). Moreover,_
**(i):** _if_ \((K_{*},E_{*})\) _is a minimizer of_ \(\Psi_{\rm bk}(v)\)_, then there is_ \((K,E)\in\mathcal{K}\) _such that_ \(K\) _is_ \(\mathcal{H}^{n}\)_-equivalent to_ \(\mathcal{R}(K_{*})\)_,_ \(E\) _is Lebesgue equivalent to_ \(E_{*}\)_,_ \((K,E)\) _is a minimizer of_ \(\Psi_{\rm bk}(v)\)_, both_ \(E\) _and_ \(K\) _are bounded,_ \(K\cup E\) _is_ \(\mathcal{C}\)_-spanning_ \({\bf W}\)_,_ \(K\cap E^{{(1)}}=\varnothing\)_, and there is_ \(\lambda\in\mathbb{R}\) _such that_
\[\lambda\int_{\Omega\cap\partial^{*}E}X\cdot\nu_{E}\,d\mathcal{H}^{ n}=\int_{\Omega\cap\partial^{*}E}\operatorname{div}^{K}X\,d\mathcal{H}^{n}+2 \int_{K\cap E^{(0)}}\operatorname{div}^{K}X\,d\mathcal{H}^{n}\,, \tag{6.1}\] \[\forall X\in C_{c}^{1}(\mathbb{R}^{n+1};\mathbb{R}^{n+1})\quad \text{with $X\cdot\nu_{\Omega}=0$ on $\partial\Omega$}\,,\]
_and there are positive constants \(c=c(n)\) and \(r_{1}=r_{1}(K,E)\) such that_
\[|E\cap B_{\rho}(y)|\leq(1-c)\,\omega_{n+1}\,\rho^{n+1}\,, \tag{6.2}\]
_for every \(y\in\Omega\cap\partial E\) and \(\rho<\min\{r_{1},\operatorname{dist}(y,{\bf W})\}\); under the further assumption that \(\partial{\bf W}\) is \(C^{2}\), there is a positive \(r_{0}=r_{0}(n,{\bf W},|\lambda|)\) such that_
\[\mathcal{H}^{n}(K\cap B_{r}(x))\geq c\,r^{n} \tag{6.3}\]
_for every \(x\in\operatorname{cl}(K)\) and \(r<r_{0}\);_
**(ii):** _if_ \((K_{j},E_{j})\) _is a sequence of minimizers for_ \(\Psi_{\rm bk}(v_{j})\) _with_ \(v_{j}\to 0^{+}\)_, then there exists a minimizer_ \(S\) _of_ \(\ell\) _such that, up to extracting subsequences, as Radon measures in_ \(\Omega\)_,_
\[\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E_{j})+2\,\mathcal{H}^{n}\llcorner\big{(}\mathcal{R}(K_{j})\cap E_{j}^{{(0)}}\big{)}\ \stackrel{{*}}{{\rightharpoonup}}\ 2\,\mathcal{H}^{n}\llcorner S\,,\qquad\text{as }j\to\infty\,. \tag{6.4}\]
Proof of Theorem 6.1.: By Theorem A.1, if \(\ell<\infty\), then \(\ell_{\rm B}<\infty\). Let now \(\{S_{j}\}_{j}\) be a minimizing sequence for \(\ell_{\rm B}\); then \(\{(S_{j},\varnothing)\}_{j}\) is a sequence in \({\mathcal{K}}_{\rm B}\) satisfying (5.1). By Theorem 1.4, we find a Borel set \(S\) which is \({\mathcal{C}}\)-spanning \({\bf W}\) and is such that
\[2\,\liminf_{j\to\infty}{\mathcal{H}}^{n}(S_{j})=\liminf_{j\to\infty}{ \mathcal{F}}_{\rm bk}(S_{j},\varnothing)\geq{\mathcal{F}}_{\rm bk}(S, \varnothing)=2\,{\mathcal{H}}^{n}(S)\,.\]
This shows that \(S\) is a minimizer of \(\ell_{\rm B}\). By Lemma 2.2, \(S\) is \({\mathcal{H}}^{n}\)-rectifiable, for, otherwise, \({\mathcal{R}}(S)\) would be admissible for \(\ell_{\rm B}\) and have strictly less area than \(S\). We conclude the proof by showing that, up to modifications on an \({\mathcal{H}}^{n}\)-null set, \(S\) is relatively closed in \(\Omega\) (and thus is a minimizer of \(\ell\) too). Indeed the property of being \({\mathcal{C}}\)-spanning \({\bf W}\) is preserved under diffeomorphisms \(f\) with \(\{f\neq{\rm id}\,\}\subset\subset\Omega\). In particular, \({\mathcal{H}}^{n}(S)\leq{\mathcal{H}}^{n}(f(S))\) for every such \(f\), so that the multiplicity one rectifiable varifold \(V_{S}={\bf var}\,(S,1)\) associated to \(S\) is stationary. By a standard application of the monotonicity formula, we can find \(S^{*}\)\({\mathcal{H}}^{n}\)-equivalent to \(S\) such that \(S^{*}\) is relatively closed in \(\Omega\). Since \({\mathcal{H}}^{n}(S)={\mathcal{H}}^{n}(S^{*})\) and \({\mathcal{C}}\)-spanning is preserved under \({\mathcal{H}}^{n}\)-null modifications, we conclude the proof.
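For the reader's convenience, here is a sketch of the standard application just invoked, using only classical facts about stationary integral varifolds: since \(V_{S}\) is stationary in \(\Omega\), for every \(x\in\Omega\) the density ratio

\[r\ \longmapsto\ \frac{\|V_{S}\|(B_{r}(x))}{\omega_{n}\,r^{n}}\qquad\text{is non-decreasing on }(0,\operatorname{dist}(x,\partial\Omega))\,,\]

so that \(\theta^{n}(\|V_{S}\|,x)\) exists at every \(x\in\Omega\), and \(\theta^{n}(\|V_{S}\|,x)\geq 1\) at every \(x\in S^{*}:=\Omega\cap\operatorname{spt}\|V_{S}\|\). By the standard density comparison between measures, \(\|V_{S}\|(A)\geq\mathcal{H}^{n}(A)\) whenever \(A\subset S^{*}\); applying this with \(A=S^{*}\setminus S\), where \(\|V_{S}\|(A)=\mathcal{H}^{n}(A\cap S)=0\), gives \(\mathcal{H}^{n}(S^{*}\setminus S)=0\), while \(\mathcal{H}^{n}(S\setminus S^{*})=\|V_{S}\|(\Omega\setminus\operatorname{spt}\|V_{S}\|)=0\) trivially; hence \(S^{*}\) is relatively closed in \(\Omega\) and \(\mathcal{H}^{n}\)-equivalent to \(S\).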
Proof of Theorem 6.2.: _Step one_: We prove conclusion (i). To this end, let \((K_{*},E_{*})\in{\mathcal{K}}_{\rm B}\) be a minimizer of \(\Psi_{\rm bk}(v)\). Clearly, \(({\mathcal{R}}(K_{*}),E_{*})\in{\mathcal{K}}_{\rm B}\) is such that \({\mathcal{R}}(K_{*})\cup E_{*}^{{(1)}}\) is \({\mathcal{C}}\)-spanning \({\bf W}\) (thanks to Theorem 3.1/Remark 3.2) and \({\mathcal{F}}_{\rm bk}({\mathcal{R}}(K_{*}),E_{*})\leq{\mathcal{F}}_{\rm bk}(K_{*},E_{*})\). In particular, \(({\mathcal{R}}(K_{*}),E_{*})\) is a minimizer of \(\Psi_{\rm bk}(v)\), and energy comparison between \(({\mathcal{R}}(K_{*}),E_{*})\) and \(({\mathcal{R}}(K_{*})\setminus E_{*}^{{(1)}},E_{*})\) (which is also a competitor for \(\Psi_{\rm bk}(v)\)) proves that
\[{\mathcal{H}}^{n}({\mathcal{R}}(K_{*})\cap E_{*}^{{(1)}})=0\,. \tag{6.5}\]
Since "\({\mathcal{C}}\)-spanning \({\bf W}\)" is preserved under diffeomorphisms, by a standard first variation argument (see, e.g. [10, Appendix C]) wee see that \(({\mathcal{R}}(K_{*}),E_{*})\) satisfies (6.1) for some \(\lambda\in{\mathbb{R}}\). In particular, the integer \(n\)-varifold \(V={\rm var}({\mathcal{R}}(K_{*}),\theta)\), with multiplicity function \(\theta=2\) on \({\mathcal{R}}(K_{*})\cap E_{*}^{{(0)}}\) and \(\theta=1\) on \(\Omega\cap\partial^{*}E_{*}\), has bounded mean curvature in \(\Omega\), and thus satisfies \(\|V\|(B_{r}(x))\geq c(n)\,r^{n}\) for every \(x\in K\) and \(r<\min\{r_{0},{\rm dist}(x,{\bf W})\}\), where \(r_{0}=r_{0}(n,|\lambda|)\) and, by definition,
\[K:=\Omega\cap{\rm spt}V\,.\]
In particular, since (6.5) implies \(\|V\|\leq 2\,{\mathcal{H}}^{n}\llcorner{\mathcal{R}}(K_{*})\), we conclude (e.g. by [13, Corollary 6.4]) that \(K\) is \({\mathcal{H}}^{n}\)-equivalent to \({\mathcal{R}}(K_{*})\), and is thus \({\mathcal{H}}^{n}\)-rectifiable and relatively closed in \(\Omega\). Now let
\[E=\big{\{}x\in\Omega:\exists\ r<{\rm dist}(x,{\bf W})\ {\rm s.t.}\ |E_{*}\cap B_{r}(x)|=|B_{r}(x)| \big{\}}\,,\]
so that, trivially, \(E\) is an open subset of \(\Omega\) with \(E\subset E_{*}^{{(1)}}\). By applying (1.35) to \(E_{*}\), and by noticing that if \(x\in\Omega\setminus E\) then \(|E_{*}\cap B_{r}(x)|<|B_{r}(x)|\) for every \(r>0\), and that if \(x\in\Omega\cap{\rm cl}\,(E)\) then \(|E_{*}\cap B_{r}(x)|>0\) for every \(r>0\), we see that
\[\Omega\cap\partial E\ \subset\ \big{\{}x\in\Omega:0<|E_{*}\cap B_{r}(x)|<|B_{r}(x)| \ \forall r>0\big{\}}\ =\ \Omega\cap{\rm cl}\,(\partial^{*}E_{*})\,. \tag{6.6}\]
Since \(\|V\|\geq{\mathcal{H}}^{n}\llcorner(\Omega\cap\partial^{*}E_{*})\) and \({\mathcal{H}}^{n}(B_{r}(x)\cap\partial^{*}E_{*})=\omega_{n}\,r^{n}+{\rm o}(r^{n})\) as \(r\to 0^{+}\) for every \(x\in\Omega\cap\partial^{*}E_{*}\), we see that \(\Omega\cap\partial^{*}E_{*}\subset\Omega\cap{\rm spt}\|V\|=K\), and since \(K\) is relatively closed in \(\Omega\), we have \(\Omega\cap{\rm cl}\,(\partial^{*}E_{*})\subset K\), and so \(\Omega\cap\partial E\subset K\). In particular, \(E\) is of finite perimeter, and thus by applying (1.35) to \(E\),
\[\Omega\cap{\rm cl}\,(\partial^{*}E)\ =\ \big{\{}x\in\Omega:0<|E\cap B_{r}(x)|<|B_{r}(x)| \ \forall r>0\big{\}}\ \subset\ \Omega\cap\partial E\,. \tag{6.7}\]
Finally, if there is \(x\in(\Omega\cap E_{*}^{{(1)}})\setminus E\), then it must be \(0<|E_{*}\cap B_{r}(x)|<|B_{r}(x)|\) for every \(r>0\), and thus \(x\in\Omega\cap{\rm cl}\,(\partial^{*}E_{*})\subset K\). However, we _claim_ that for every \(x\in\Omega\cap{\rm cl}\,(\partial^{*}E_{*})\) and \(r<\min\{r_{*},{\rm dist}(x,{\bf W})\}\) (with \(r_{*}=r_{*}(K_{*},E_{*})\)) it holds
\[|B_{r}(x)\cap E_{*}|\leq(1-c)\,\omega_{n+1}\,r^{n+1}\,, \tag{6.8}\]
in contradiction with \(x\in E_{*}^{{(1)}}\); this proves that \(\Omega\cap E_{*}^{{(1)}}\subset E\), and thus that \(E_{*}\) and \(E\) are Lebesgue equivalent. Combining the latter information with (6.6) and (6.7) we conclude that \(\Omega\cap\operatorname{cl}\left(\partial^{*}E\right)=\Omega\cap\partial E\subset K\), which completes the proof that \((K,E)\in\mathcal{K}\), conditional on proving (6.8).
To prove (6.8), let us fix \(x\in\Omega\cap\operatorname{cl}\left(\partial^{*}E_{*}\right)\) and set \(u(r)=|B_{r}(x)\setminus E_{*}|\), so that, for a.e. \(r>0\) we have
\[u^{\prime}(r)=\mathcal{H}^{n}(E_{*}^{{(0)}}\cap\partial B_{r}(x))\,,\qquad P( B_{r}(x)\setminus E_{*})=u^{\prime}(r)+P(E_{*};B_{r}(x))\,. \tag{6.9}\]
Since \(|E_{*}|=v>0\), we have \(\mathcal{H}^{n}(\Omega\cap\partial^{*}E_{*})>0\), therefore there must be \(y_{1},y_{2}\in\Omega\cap\partial^{*}E_{*}\) with \(|y_{1}-y_{2}|>4r_{*}\) for some \(r_{*}\) depending on \(E_{*}\). In particular there is \(i\in\{1,2\}\) such that \(B_{r_{*}}(x)\cap B_{r_{*}}(y_{i})=\varnothing\), and we set \(y=y_{i}\). Since \(y_{i}\in\Omega\cap\partial^{*}E_{*}\), there are \(w_{*}>0\) and a smooth map \(\Phi:\Omega\times(-w_{*},w_{*})\to\Omega\) such that \(\Phi(\cdot,w)\) is a diffeomorphism of \(\Omega\) with \(\{\Phi(\cdot,w)\neq\operatorname{Id}\}\subset\subset B_{r_{*}}(y)\), and
\[|\Phi(E_{*},w)|=|E_{*}|-w\,,\qquad P(\Phi(E_{*},w);B_{r_{*}}(y))\leq P(E_{*},B _{r_{*}}(y))(1+2\left|\lambda\right|\left|w\right|)\,, \tag{6.10}\]
for every \(|w|<w_{*}\). We then consider \(r_{1}\) such that \(|B_{r_{1}}|<w_{*}\), so that for every \(r<\min\{r_{1},\operatorname{dist}(x,\mathbf{W})\}\) we have \(0\leq u(r)<w_{*}\), and thus we can define
\[(K_{r},E_{r})=\Big{(}\Phi^{u(r)}\big{(}K\cup\partial B_{r}(x)\big{)},\Phi^{u( r)}\big{(}E_{*}\cup B_{r}(x)\big{)}\Big{)}\,.\]
Since \(\Phi^{u(r)}\) is a diffeomorphism, we have \(\Omega\cap\partial^{*}E_{r}\subset K_{r}\), and by the first relation in (6.10) and \(\Phi^{u(r)}=\operatorname{Id}\) on \(\Omega\setminus B_{r_{*}}(y)\), we get
\[|E_{r}|-|E_{*}|=|B_{r}(x)|-|B_{r}(x)\cap E_{*}|+|\Phi^{u(r)}(E_{*})\cap B_{r_{*}}(y)|-|E_{*}\cap B_{r_{*}}(y)|=u(r)-u(r)=0\,.\]
Hence \(\mathcal{F}_{\operatorname{bk}}(K_{*},E_{*})\leq\mathcal{F}_{\operatorname{ bk}}(K_{r},E_{r})\), from which we deduce
\[P(E_{*};B_{r}(x))+P(E_{*};B_{r_{*}}(y))+2\,\mathcal{H}^{n}(K_{*}\cap E_{*}^{{(0)}}\cap B_{r}(x))\] \[\leq\mathcal{H}^{n}\big{(}\partial B_{r}(x)\cap E_{*}^{{(0)}}\big{)}+P(\Phi^{u(r)}(E_{*});B_{r_{*}}(y))\leq u^{\prime}(r)+P(E_{*};B_{r_{*}}(y))\,(1+2\left|\lambda\right|u(r))\,;\]
where we have used (6.9) and (6.10); by adding \(u^{\prime}(r)\) to both sides of the inequality, and using (6.9) again, we find that
\[c(n)\,u(r)^{n/(n+1)}\leq P(B_{r}(x)\setminus E_{*})\leq 2\,u^{\prime}(r)+2 \left|\lambda\right|\Psi_{\operatorname{bk}}(v)\,u(r)\,,\]
for a.e. \(r<\min\{r_{1},\operatorname{dist}(x,\mathbf{W})\}\); since, by (6.6), \(x\in\Omega\cap\operatorname{cl}\left(\partial^{*}E_{*}\right)\) implies \(u(r)>0\) for every \(r>0\), we can apply a standard ODE argument to conclude that (6.8) holds true.
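For the reader's convenience, here is a sketch of that ODE argument, in the notation fixed above: since \(u(r)\to 0\) as \(r\to 0^{+}\), for \(r\) small enough we have \(2\left|\lambda\right|\Psi_{\rm bk}(v)\,u(r)\leq(c(n)/2)\,u(r)^{n/(n+1)}\), so that the previous inequality gives

\[u^{\prime}(r)\geq\frac{c(n)}{4}\,u(r)^{n/(n+1)}\,,\qquad\text{i.e.}\qquad\big{(}u^{1/(n+1)}\big{)}^{\prime}(r)\geq\frac{c(n)}{4\,(n+1)}\,,\]

for a.e. such \(r\); since \(u(r)>0\) for every \(r>0\), an integration yields \(u(r)\geq(c(n)\,r/4(n+1))^{n+1}\), and thus

\[|B_{r}(x)\cap E_{*}|=|B_{r}|-u(r)\leq\Big{(}1-\frac{(c(n)/4(n+1))^{n+1}}{\omega_{n+1}}\Big{)}\,\omega_{n+1}\,r^{n+1}\,,\]

which is (6.8) with a suitable choice of the constant \(c\).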
We now prove the remaining assertions in statement (i). First of all, when \(\partial\mathbf{W}\) is \(C^{2}\), we can argue similarly to [10, Theorem 4.1] to deduce from the modified monotonicity formula of Kagaya and Tonegawa [11] that the area lower bound in (6.3) holds for every \(x\in\operatorname{cl}\left(K\right)\) and every \(r<r_{0}\). The validity of the volume upper bound in (6.2) is immediate from (6.8) and the Lebesgue equivalence of \(E_{*}\) and \(E\). The monotonicity formula for \(V\) combined with \(\mathcal{H}^{n}(\Omega\cap K)<\infty\) implies of course that \(V\) has bounded support. Having proved that \(K\) is bounded, \(|E|<\infty\) and \(\Omega\cap\partial E\subset K\) imply that \(E\) is bounded too. Since \(\mathcal{R}(K_{*})\) and \(K\) are \(\mathcal{H}^{n}\)-equivalent, we have that \(K\cup E_{*}^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\). It turns out that \(K\cup E^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) too, since \(E\) and \(E_{*}\) are Lebesgue equivalent _and_ of finite perimeter, therefore such that \(E^{{(1)}}\) and \(E_{*}^{{(1)}}\) are \(\mathcal{H}^{n}\)-equivalent. In fact, on noticing that \(\Omega\cap(E^{{(1)}}\setminus E)\subset\Omega\cap\partial E\subset K\), we see that \(K\cup E^{{(1)}}=K\cup E\), so that \(K\cup E\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), as claimed.
Finally, we prove that \(K\cap E^{{(1)}}=\varnothing\). We first notice that, since \(E\subset\Omega\) is open and \(K=\Omega\cap\operatorname{spt}V\) with \(\left\|V\right\|\leq 2\,\mathcal{H}^{n}\llcorner\mathcal{R}(K_{*})\), if \(K\cap E\neq\emptyset\), then \(\mathcal{H}^{n}(\mathcal{R}(K_{*})\cap E)>0\); and since \(E\subset E_{*}^{{(1)}}\) by construction, we arrive at a contradiction with (6.5). Hence, \(K\cap E=\varnothing\). Now, if \(x\in K\cap E^{{(1)}}\), then, by (6.2), \(x\not\in\Omega\cap\partial E\); combining this with \(K\cap E=\varnothing\), we find \(K\cap E^{{(1)}}\subset\Omega\setminus\operatorname{cl}\left(E\right)\subset E^{{(0)}}\), and thus \(K\cap E^{{(1)}}=\varnothing\).
_Step two_: For every \(v_{1}\geq 0\) and \(v_{2}>0\) we have
\[\Psi_{\operatorname{bk}}(v_{1}+v_{2})\leq\Psi_{\operatorname{bk}}(v_{1})+(n+1) \,\omega_{n+1}^{1/(n+1)}\,v_{2}^{n/(n+1)}\,. \tag{6.11}\]
Since \(\Psi_{\rm bk}(0)=2\,\ell<\infty\), (6.11) implies in particular that \(\Psi_{\rm bk}(v)<\infty\) for every \(v>0\) (just take \(v_{1}=0\) and \(v_{2}=v\)).
Indeed, let \((K_{1},E_{1})\) be a competitor in \(\Psi_{\rm bk}(v_{1})\) and let \(\{B_{r_{j}}(x_{j})\}_{j}\) be a sequence of balls with \(|x_{j}|\to\infty\) and \(|E_{1}\cup B_{r_{j}}(x_{j})|=v_{1}+v_{2}\) for every \(j\). Setting for the sake of brevity \(B_{j}=B_{r_{j}}(x_{j})\), since \(\partial^{*}(E_{1}\cup B_{j})\) is \({\mathcal{H}}^{n}\)-contained in \((\partial^{*}E_{1})\cup\partial B_{j}\) we have that \((K_{2},E_{2})\), with \(K_{2}=K_{1}\cup\partial B_{j}\) and \(E_{2}=E_{1}\cup B_{j}\), is a competitor of \(\Psi_{\rm bk}(v_{1}+v_{2})\). Since \(\partial B_{j}\cap E_{2}^{{(0)}}=\varnothing\) implies \(E_{2}^{{(0)}}\subset E_{1}^{{(0)}}\setminus\partial B_{j}\), we find that
\[\Psi_{\rm bk}(v_{1}+v_{2}) \leq 2\,{\mathcal{H}}^{n}\big{(}K_{2}\cap E_{2}^{{(0)} }\big{)}+{\mathcal{H}}^{n}(\Omega\cap\partial^{*}E_{2})\] \[\leq 2\,{\mathcal{H}}^{n}(K_{1}\cap E_{1}^{{(0)}} \setminus\partial B_{j})+{\mathcal{H}}^{n}(\Omega\cap\partial^{*}E_{1})+{ \mathcal{H}}^{n}(\partial B_{j})\] \[\leq {\mathcal{F}}_{\rm bk}(K_{1},E_{1})+(n+1)\,\omega_{n+1}^{1/(n+1) }\,|B_{j}|^{n/(n+1)}\,.\]
Since \(|x_{j}|\to\infty\), \(|E_{1}|=v_{1}\), and \(|E_{1}\cup B_{r_{j}}(x_{j})|=v_{1}+v_{2}\) imply \(|B_{j}|\to v_{2}\), we conclude by arbitrariness of \((K_{1},E_{1})\).
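For the reader's convenience, the constant appearing above comes from the elementary identities \(\mathcal{H}^{n}(\partial B_{j})=(n+1)\,\omega_{n+1}\,r_{j}^{n}\) and \(|B_{j}|=\omega_{n+1}\,r_{j}^{n+1}\), which give

\[\mathcal{H}^{n}(\partial B_{j})=(n+1)\,\omega_{n+1}^{1/(n+1)}\,\big{(}\omega_{n+1}\,r_{j}^{n+1}\big{)}^{n/(n+1)}=(n+1)\,\omega_{n+1}^{1/(n+1)}\,|B_{j}|^{n/(n+1)}\,,\]

that is, spheres realize equality in the Euclidean isoperimetric inequality.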
_Step three_: Now let \(\{(K_{j},E_{j})\}_{j}\) be a minimizing sequence for \(\Psi_{\rm bk}(v)\). Since \(\Psi_{\rm bk}(v)<\infty\), assumption (5.1) of Theorem 1.4 holds. Therefore there is \((K,E)\in{\mathcal{K}}_{\rm B}\) such that \(K\cup E^{{(1)}}\) is \({\mathcal{C}}\)-spanning \({\bf W}\) and such that, up to extracting subsequences,
\[\lim_{j\to\infty}|(E_{j}\Delta E)\cap B_{R}|=0\quad\forall R>0\,,\qquad\liminf _{j\to\infty}{\mathcal{F}}_{\rm bk}(K_{j},E_{j})\geq{\mathcal{F}}_{\rm bk}(K,E )\,; \tag{6.12}\]
actually, to be more precise, if \(\mu\) denotes the weak-star limit of \({\mathcal{H}}^{n}\llcorner(\Omega\cap\partial^{*}E_{j})+2\,{\mathcal{H}}^{n}\llcorner({\mathcal{R}}(K_{j})\cap E_{j}^{{(0)}})\) in \(\Omega\), then
\[\mu\geq 2\,{\mathcal{H}}^{n}\llcorner(K\cap E^{{(0)}})+{\mathcal{H}}^{n}\llcorner(\Omega\cap\partial^{*}E)\,. \tag{6.13}\]
We _claim_ that
\[(K,E)\mbox{ is a minimizer of }\Psi_{\rm bk}(|E|)\,.\]
(Notice that, at this stage of the argument, we are not excluding that \(v^{*}:=v-|E|\) is positive, nor that \(|E|=0\).) Taking into account (6.11), to prove the claim it suffices to show that
\[\Psi_{\rm bk}(v)\geq{\mathcal{F}}_{\rm bk}(K,E)+(n+1)\,\omega_{n+1}^{1/(n+1)} \,(v^{*})^{n/(n+1)}\,. \tag{6.14}\]
To see this, we start by noticing that, given any sequence \(\{r_{j}\}_{j}\) with \(r_{j}\to\infty\), by (6.12) and (6.13) we have that
\[E_{j}\cap B_{r_{j}}\stackrel{{\rm loc}}{{\to}}E\,,\qquad|E_{j}\setminus B_{r_{j}}|\to v^{*}\,,\qquad\mbox{as }j\to\infty\,, \tag{6.15}\] \[\liminf_{j\to\infty}\,2\,{\mathcal{H}}^{n}\big{(}{\mathcal{R}}(K_{j})\cap E_{j}^{{(0)}}\cap B_{r_{j}}\big{)}+{\mathcal{H}}^{n}(B_{r_{j}}\cap\partial^{*}E_{j})\geq{\mathcal{F}}_{\rm bk}(K,E)\,. \tag{6.16}\]
Moreover, since \(|E_{j}|<\infty\), we can choose \(r_{j}\to\infty\) so that \({\mathcal{H}}^{n}(E_{j}^{{(1)}}\cap\partial B_{r_{j}})\to 0\), while, taking into account that \(P(E_{j}\setminus B_{r_{j}})={\mathcal{H}}^{n}(E_{j}^{{(1)}} \cap\partial B_{r_{j}})+{\mathcal{H}}^{n}((\partial^{*}E_{j})\setminus B_{r_{j}})\), we have
\[{\mathcal{F}}_{\rm bk}(K_{j},E_{j}) \geq 2\,{\mathcal{H}}^{n}\big{(}{\mathcal{R}}(K_{j})\cap E_{j}^{{ (0)}}\cap B_{r_{j}}\big{)}+{\mathcal{H}}^{n}(B_{r_{j}}\cap\partial^{*}E_{j})\] \[+P(E_{j}\setminus B_{r_{j}})-{\mathcal{H}}^{n}(E_{j}^{{(1)}} \cap\partial B_{r_{j}})\,.\]
By combining these facts with (6.15), (6.16), and the Euclidean isoperimetric inequality, we conclude that
\[\Psi_{\rm bk}(v)=\lim_{j\to\infty}{\mathcal{F}}_{\rm bk}(K_{j},E_{j})\geq{ \mathcal{F}}_{\rm bk}(K,E)+(n+1)\,\omega_{n+1}^{1/(n+1)}\,\lim_{j\to\infty}|E_{j }\setminus B_{r_{j}}|^{n/(n+1)}\,,\]
that is (6.14).
_Step four_: We prove the existence of minimizers in \(\Psi_{\rm bk}(v)\), \(v>0\). By step three, there is \((K,E)\in\mathcal{K}_{\rm B}\) such that \(K\cup E^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), \((K,E)\) is a minimizer of \(\Psi_{\rm bk}(|E|)\) and, combining (6.11) and (6.14),
\[\Psi_{\rm bk}(v)=\Psi_{\rm bk}(|E|)+(n+1)\,\omega_{n+1}^{1/(n+1)}\,(v-|E|)^{n/( n+1)}\,. \tag{6.17}\]
Since \((K,E)\) is a minimizer in \(\Psi_{\rm bk}(|E|)\), by step one we can assume that \(K\) is \(\mathcal{H}^{n}\)-rectifiable and that both \(K\) and \(E\) are bounded. We can thus find \(B_{r}(x_{0})\subset\subset\Omega\) such that \(|B_{r}(x_{0})|=v-|E|\), \(|B_{r}(x_{0})\cap E|=0\), and \(\mathcal{H}^{n}(K\cap B_{r}(x_{0}))=0\). In this way \((K_{*},E_{*})=(K\cup\partial B_{r}(x_{0}),E\cup B_{r}(x_{0}))\in\mathcal{K}_{\rm B}\) is such that \(K_{*}\cup E_{*}^{{(1)}}\) is trivially \(\mathcal{C}\)-spanning \(\mathbf{W}\) and \(|E_{*}|=v\), and thus is a competitor for \(\Psi_{\rm bk}(v)\). At the same time,
\[\mathcal{F}_{\rm bk}(K_{*},E_{*})=\mathcal{F}_{\rm bk}(K,E)+(n+1)\,\omega_{n+ 1}^{1/(n+1)}\,(v-|E|)^{n/(n+1)}\]
so that, by (6.17), \((K_{*},E_{*})\) is a minimizer of \(\Psi_{\rm bk}(v)\). Having proved that minimizers of \(\Psi_{\rm bk}(v)\) do indeed exist, a further application of step one completes the proof of statement (i).
_Step five_: We finally prove statement (ii). Let us consider a sequence \(v_{j}\to 0^{+}\) and corresponding minimizers \((K_{j},E_{j})\) of \(\Psi_{\rm bk}(v_{j})\). By (6.11) with \(v_{1}=0\) and \(v_{2}=v_{j}\) we see that \(\{(K_{j},E_{j})\}_{j}\) satisfies the assumptions of Theorem 1.4. Since \(|E_{j}|=v_{j}\to 0\), setting \(\mu_{j}=\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E_{j})+2\,\mathcal{H}^{n}\llcorner(\mathcal{R}(K_{j})\cap E_{j}^{{(0)}})\), the conclusion of Theorem 1.4 is that there are a Radon measure \(\mu\) in \(\Omega\) and a Borel set \(K\) such that \(K\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) and \(\mu_{j}\stackrel{{*}}{{\rightharpoonup}}\mu\) as \(j\to\infty\).
with \(K^{*}=\bigcup_{i}\partial^{*}U_{i}\). Now, thanks to (1.40), (1.41), and the inclusion in (1.46), we have
\[U^{(1)}\cap\partial^{*}(U\cap E)\stackrel{{\mathcal{H}^{n}}}{{=}}U ^{(1)}\cap\partial^{*}E\stackrel{{\mathcal{H}^{n}}}{{\subset}}U^{ (1)}\cap K^{*}\,,\]
which combined with (7.2) gives
\[2\,\mathcal{H}^{n}(U^{(1)}\cap K^{*})=\mathcal{H}^{n}\big{(}U^{(1)}\cap \partial^{*}E\big{)}+\sum_{i}\mathcal{H}^{n}(U^{(1)}\cap\partial^{*}U_{i})\,. \tag{7.3}\]
Therefore, using in order
\[U^{(1)}\cap\partial^{*}E\stackrel{{\mathcal{H}^{n}}}{{\subset}}U ^{(1)}\cap K^{*}\,,\qquad K^{*}\stackrel{{\mathcal{H}^{n}}}{{ \subset}}K\,,\qquad\mathcal{H}^{n}(K^{*}\cap E^{(1)})=0\,,\]
and Federer's theorem (1.37), we obtain
\[\mathcal{F}_{\rm bk}(K,E;U^{(1)}) = \mathcal{H}^{n}(U^{(1)}\cap\partial^{*}E)+2\,\mathcal{H}^{n}(U^ {(1)}\cap K\cap E^{(0)})\] \[= 2\,\mathcal{H}^{n}(U^{(1)}\cap K^{*}\cap\partial^{*}E)-\mathcal{ H}^{n}(U^{(1)}\cap\partial^{*}E)\] \[+2\,\mathcal{H}^{n}(U^{(1)}\cap K^{*}\cap E^{(0)})+2\,\mathcal{H} ^{n}(U^{(1)}\cap(K\setminus K^{*})\cap E^{(0)})\] \[= 2\,\mathcal{H}^{n}(U^{(1)}\cap K^{*})-\mathcal{H}^{n}(U^{(1)} \cap\partial^{*}E)+2\,\mathcal{H}^{n}(U^{(1)}\cap(K\setminus K^{*})\cap E^{ (0)})\] \[= \sum_{i}\mathcal{H}^{n}(U^{(1)}\cap\partial^{*}U_{i})+2\, \mathcal{H}^{n}(U^{(1)}\cap(K\setminus K^{*})\cap E^{(0)})\,,\]
where in the last identity we have used (7.3).
The next lemma is a slight reformulation of [13, Lemma 10] and [13, Lemma 4.1].
**Lemma 7.2**.: _If \(\mathbf{W}\) is closed, \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\), \(S\) is relatively closed in \(\Omega\) and \(\mathcal{C}\)-spanning \(\mathbf{W}\), and \(B\subset\Omega\) is an open ball, then for any \(\gamma\in\mathcal{C}\) we either have \(\gamma(\mathbb{S}^{1})\cap(S\setminus B)\neq\varnothing\), or \(\gamma(\mathbb{S}^{1})\) has non-empty intersection with at least two connected components of \(B\setminus S\). In particular, it intersects the boundaries of both components._
We are now ready for the proof of Theorem 1.6.
Proof of Theorem 1.6.: The opening part of the statement of Theorem 1.6 is Theorem 6.2-(i), therefore we can directly consider a minimizer \((K,E)\in\mathcal{K}\) of \(\Psi_{\rm bk}(v)\) such that both \(E\) and \(K\) are bounded, \(K\cup E\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), and
\[K\cap E^{(1)}=\varnothing\,, \tag{7.4}\]
and begin by proving the existence of a closed set \(\Sigma\subset K\) such that (i): \(\Sigma=\varnothing\) if \(1\leq n\leq 6\), \(\Sigma\) is locally finite in \(\Omega\) if \(n=7\), and \(\mathcal{H}^{s}(\Sigma)=0\) for every \(s>n-7\) if \(n\geq 8\); (ii): \((\partial^{*}E)\setminus\Sigma\) is a smooth hypersurface with constant mean curvature; (iii): \(K\setminus(\operatorname{cl}(E)\cup\Sigma)\) is a smooth minimal hypersurface; (iv)\({}_{\alpha}\): if \(x\in[\Omega\cap(\partial E\setminus\partial^{*}E)]\setminus\Sigma\), then there are \(r>0\), \(\nu\in\mathbb{S}^{n}\), \(u_{1},u_{2}\in C^{1,\alpha}({\bf D}^{\nu}_{r}(x);(-r/4,r/4))\) (\(\alpha\in(0,1/2)\) arbitrary) such that \(u_{1}(x)=u_{2}(x)=0\), \(u_{1}\leq u_{2}\) on \({\bf D}^{\nu}_{r}(x)\), \(\{u_{1}<u_{2}\}\) and \(\operatorname{int}\{u_{1}=u_{2}\}\) are both non-empty, and

\[{\bf C}^{\nu}_{r}(x)\cap K = \cup_{i=1,2}\bigl{\{}y+u_{i}(y)\,\nu:y\in{\bf D}^{\nu}_{r}(x)\bigr{\}}\,, \tag{7.5}\] \[{\bf C}^{\nu}_{r}(x)\cap\partial^{*}E = \cup_{i=1,2}\bigl{\{}y+u_{i}(y)\,\nu:y\in\{u_{1}<u_{2}\}\bigr{\}}\,, \tag{7.6}\] \[{\bf C}^{\nu}_{r}(x)\cap E = \bigl{\{}y+t\,\nu:y\in\{u_{1}<u_{2}\}\,,u_{1}(y)<t<u_{2}(y)\bigr{\}}\,. \tag{7.7}\]

Figure 7.1. The situation in Lemma 7.1: (a) a depiction of the left hand side of (7.1), where \(K\setminus\partial^{*}E\) is drawn with a bold line to indicate that, in the computation of \(\mathcal{F}_{\rm bk}(K,E;U^{(1)})=\mathcal{H}^{n}(U^{(1)}\cap\partial^{*}E)+2\,\mathcal{H}^{n}(U^{(1)}\cap K\setminus\partial^{*}E)\), it is counted with multiplicity \(2\); (b) a depiction of the right hand side of (7.1), where \(K\setminus K^{*}\) is drawn with a bold line to indicate that it has to be counted with multiplicity \(2\).
(The sharp version of conclusion (iv), that is conclusion (iv)\({}_{\alpha}\) with \(\alpha=1\), and conclusion (v), will be proved in the final step five of this proof.) The key step to prove conclusions (i)-(iv)\({}_{\alpha}\) is showing the validity of the following claim.
_Claim_: There exist positive constants \(\Lambda\) and \(r_{0}\) such that if \(B_{2r}(x)\subset\subset\Omega\), then, denoting by \(\{U_{j}\}_{j}\) the open connected components of \(B_{2r}(x)\setminus(E\cup K)\),
\[B_{r}(x)\cap K=B_{r}(x)\cap\cup_{j}\partial U_{j}\,, \tag{7.8}\] \[\#\bigl{\{}j:B_{r}(x)\cap U_{j}\neq\varnothing\bigr{\}}<\infty\,, \tag{7.9}\] \[B_{2\,r}(x)\cap\operatorname{cl}\left(\partial^{*}U_{j}\right)=B_{2\,r}(x)\cap\partial U_{j}\,, \tag{7.10}\] \[P(U_{j};B_{r}(x))\leq P(V_{j};B_{r}(x))+\Lambda\left|U_{j}\Delta V_{j}\right|, \tag{7.11}\]
whenever \(V_{j}\) satisfies \(V_{j}\Delta U_{j}\subset\subset B_{r}(x)\) and \(\operatorname{diam}\left(U_{j}\Delta V_{j}\right)<r_{0}\).
_Deduction of (i)-(iv) from the claim_: Let \(\{B_{2r_{i}}(x_{i})\}_{i\in\mathbb{N}}\) be a countable family of balls, locally finite in \(\Omega\), such that \(B_{2r_{i}}(x_{i})\subset\subset\Omega\) and \(\Omega=\cup_{i}B_{r_{i}}(x_{i})\). Setting for brevity
\[\Omega_{i}=B_{r_{i}}(x_{i})\,,\]
by (7.9) there are finitely many connected components \(\{U^{i}_{j}\}_{j=1}^{J_{i}}\) of \(B_{2r_{i}}(x_{i})\setminus(E\cup K)\) such that \(U^{i}_{j}\cap\Omega_{i}\neq\varnothing\). Thanks to (7.11), we deduce from [10, Theorem 28.1] that, if we set \(\Sigma^{i}_{j}=\Omega_{i}\cap(\partial U^{i}_{j}\setminus\partial^{*}U^{i}_{ j})\), then \(\Omega_{i}\cap\partial^{*}U^{i}_{j}\) is a \(C^{1,\alpha}\)-hypersurface for every \(\alpha\in(0,1/2)\), and \(\Sigma^{i}_{j}\) is a closed set that satisfies the dimensional estimates listed in conclusion (i). In particular, if we set
\[\Sigma=\cup_{i\in\mathbb{N}}\cup_{j=1}^{J_{i}}\Sigma^{i}_{j}\,, \tag{7.12}\]
then \(\Sigma\subset K\) thanks to \(\Sigma^{i}_{j}\subset\Omega_{i}\cap\partial U^{i}_{j}\) and to (7.8), and conclusion (i) holds by the local finiteness of the covering \(\{B_{2r_{i}}(x_{i})\}_{i}\) of \(\Omega\) and the fact that \(J_{i}<\infty\) for every \(i\). Before moving on to prove the remaining conclusions, we first notice that (7.8) gives
\[\Omega_{i}\cap K\setminus\Sigma = \Omega_{i}\cap\cup_{j=1}^{J_{i}}\partial U^{i}_{j}\setminus\Sigma \tag{7.13}\] \[\subset \Omega_{i}\cap\cup_{j=1}^{J_{i}}(\partial U^{i}_{j}\setminus \Sigma^{i}_{j})\ =\ \Omega_{i}\cap\cup_{j=1}^{J_{i}}\partial^{*}U^{i}_{j}\,;\]
second, we notice that, since \(K\) is \({\mathcal{H}}^{n}\)-finite,
\[\{E\cap\Omega_{i},U^{i}_{j}\cap\Omega_{i}\}_{j=1}^{J_{i}}\mbox{ is a Caccioppoli partition of }\Omega_{i}\,; \tag{7.14}\]
finally, we recall that, by (1.23), for every \(X\in C^{1}_{c}(\Omega;\mathbb{R}^{n+1})\) it holds
\[\lambda\,\int_{\partial^{*}E}X\cdot\nu_{E}\,d{\mathcal{H}}^{n}=\int_{\partial^ {*}E}\operatorname{div}^{K}X\,d{\mathcal{H}}^{n}+2\,\int_{K\cap E^{(0)}} \operatorname{div}^{K}X\,d{\mathcal{H}}^{n}\,. \tag{7.15}\]
_To prove conclusion (ii)_: Given \(x\in\Omega\cap\partial^{*}E\setminus\Sigma\), there is \(i\in\mathbb{N}\) such that \(x\in\Omega_{i}\cap\partial^{*}E\). By \(\Omega\cap\partial^{*}E\subset K\) and by (7.13) there is \(j(x)\in\{1,...,J_{i}\}\) such that \(x\in\partial^{*}U^{i}_{j(x)}\). By (7.14), we can use (1.47) and \(x\in\Omega\cap\partial^{*}E\cap\partial^{*}U^{i}_{j(x)}\) to deduce that
\[x\not\in\cup_{j\neq j(x)}\partial^{*}U^{i}_{j}\,. \tag{7.16}\]
Let \(r>0\) be such that \(B_{r}(x)\cap\partial^{*}U^{i}_{j(x)}\) is a \(C^{1}\)-hypersurface. Since \(\Sigma\) contains each \(\Sigma^{i}_{j}=\Omega_{i}\cap(\partial U^{i}_{j}\setminus\partial^{*}U^{i}_{j})\) and (7.10) holds, (7.16) implies that, up to decreasing the value of \(r\),
\[B_{r}(x)\subset\subset\Omega_{i}\setminus\Sigma\,,\qquad B_{r}(x)\cap\cup_{j} \partial U^{i}_{j}=B_{r}(x)\cap\partial U^{i}_{j(x)}=B_{r}(x)\cap\partial^{*} U^{i}_{j(x)}\,. \tag{7.17}\]
Since \(B_{r}(x)\cap\cup_{j\neq j(x)}\partial U^{i}_{j}=\varnothing\) and \(B_{r}(x)\cap U^{i}_{j(x)}\neq\varnothing\), we also have that
\[B_{r}(x)\cap\cup_{j}U^{i}_{j}=B_{r}(x)\cap U^{i}_{j(x)}\,,\]
and thus, by (7.14), that \(\{E\cap B_{r}(x),U^{i}_{j(x)}\cap B_{r}(x)\}\) is an \(\mathcal{H}^{n}\)-partition of \(B_{r}(x)\). In particular, \(B_{r}(x)\cap\partial^{*}E=B_{r}(x)\cap\partial^{*}U^{i}_{j(x)}\): intersecting with \(B_{r}(x)\) in (7.13) and taking into account (7.17), we conclude that
\[B_{r}(x)\cap K = B_{r}(x)\cap[\Omega_{i}\cap K\setminus\Sigma]\ \subset\ B_{r}(x)\cap[\Omega_{i}\cap\cup_{j=1}^{J_{i}}\partial^{*}U^{i}_{j}]\ =\ B_{r}(x)\cap\partial^{*}U^{i}_{j(x)} \tag{7.18}\] \[= B_{r}(x)\cap\partial^{*}E\,,\]
and (7.15) implies that, for every \(X\in C^{1}_{c}(B_{r}(x);\mathbb{R}^{n+1})\),
\[\lambda\int_{\partial^{*}E}X\cdot\nu_{E}\,d\mathcal{H}^{n}=\int_{\partial^{*}E }\operatorname{div}^{K}X\,d\mathcal{H}^{n}\,. \tag{7.19}\]
Hence, \(\partial^{*}E\) can be represented, locally in \(B_{r}(x)\), as the graph of distributional solutions of class \(C^{1,\alpha}\) to the constant mean curvature equation. By Schauder's theory, \(B_{r}(x)\cap\partial^{*}E\) is a smooth hypersurface whose mean curvature with respect to \(\nu_{E}\) is equal to \(\lambda\) thanks to (7.19).
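For the reader's convenience, here is a sketch of the standard computation behind this assertion: if, locally in \(B_{r}(x)\), \(\partial^{*}E\) is the graph \(\{y+u(y)\,\nu:y\in{\bf D}\}\) of a \(C^{1,\alpha}\)-function \(u\), then testing (7.19) with vector fields \(X\) supported in the corresponding cylinder shows that \(u\) is a distributional solution of

\[-\operatorname{div}\Big{(}\frac{\nabla u}{\sqrt{1+|\nabla u|^{2}}}\Big{)}=\lambda\]

(up to a sign determined by the orientation of \(\nu_{E}\)); since \(u\in C^{1,\alpha}\), the coefficients of this equation are of class \(C^{0,\alpha}\), so that Schauder theory gives \(u\in C^{2,\alpha}\), and a bootstrap yields \(u\in C^{\infty}\).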
_To prove conclusions (iii) and (iv)_: Let us now pick \(x\in K\setminus(\Sigma\cup\partial^{*}E)\) and let \(i\in\mathbb{N}\) be such that \(x\in\Omega_{i}\cap K\). By (7.13) there is \(j(x)\in\{1,...,J_{i}\}\) such that \(x\in\partial^{*}U^{i}_{j(x)}\). By (7.14) and by (1.47), either \(x\in\partial^{*}E\) (which is excluded from the outset), or there is \(k(x)\neq j(x)\) such that \(x\in\partial^{*}U^{i}_{k(x)}\). We have thus proved that
\[x\in\partial^{*}U^{i}_{j(x)}\cap\partial^{*}U^{i}_{k(x)}\,,\qquad x\not\in \cup_{j\neq j(x),k(x)}\partial^{*}U^{i}_{j}\,. \tag{7.20}\]
To prove conclusion (iii) we notice that if we are in the case when \(x\in K\setminus(\Sigma\cup\partial E)=K\setminus(\Sigma\cup\operatorname{cl} (E))\) (thanks to \(K\cap E=\varnothing\)), then \(x\not\in\operatorname{cl}(E)\) implies that, for some \(r>0\), \(B_{r}(x)\cap(\Sigma\cup\operatorname{cl}(E))=\emptyset\). In particular, by (7.14) and (7.20), \(\{B_{r}(x)\cap U^{i}_{j(x)},B_{r}(x)\cap U^{i}_{k(x)}\}\) is an \(\mathcal{H}^{n}\)-partition of \(B_{r}(x)\), and by (7.13)
\[B_{r}(x)\cap K=B_{r}(x)\cap\partial^{*}U^{i}_{j(x)}=B_{r}(x)\cap\partial^{*}U^ {i}_{k(x)}\,,\]
is a \(C^{1,\alpha}\)-hypersurface. Under these conditions, (7.15) boils down to
\[\int_{K}\operatorname{div}^{K}X\,d\mathcal{H}^{n}=0\,,\qquad\forall X\in C^{1 }_{c}(B_{r}(x);\mathbb{R}^{n+1})\,, \tag{7.21}\]
so that \(K\) can be represented, locally in \(B_{r}(x)\), as the graph of distributional solutions to the minimal surfaces equation of class \(C^{1,\alpha}\). By Schauder's theory, \(B_{r}(x)\cap K\) is a smooth minimal surface.
To finally prove conclusion (iv), let us assume that \(x\in\Omega\cap(\partial E\setminus\partial^{*}E)\setminus\Sigma\). In this case (7.14) and (7.20) do not imply that \(\{B_{r}(x)\cap U^{i}_{j(x)},B_{r}(x)\cap U^{i}_{k(x)}\}\) is an \(\mathcal{H}^{n}\)-partition of \(B_{r}(x)\); actually, by \(\Omega\cap\partial E=\Omega\cap\operatorname{cl}(\partial^{*}E)\), the fact that \(x\in\partial E\) implies that \(B_{s}(x)\cap\partial^{*}E\neq\emptyset\) for every \(s>0\), so that \(|B_{s}(x)\cap E|>0\) for every \(s>0\), and the situation is such that, for every \(s<r\),
\[\{B_{s}(x)\cap U^{i}_{j(x)},B_{s}(x)\cap U^{i}_{k(x)},B_{s}(x)\cap E\}\text{ is an $\mathcal{H}^{n}$-partition of $B_{s}(x)$} \tag{7.22}\]
with all three sets in the partition having positive measure.
Now, by the first part of (7.20), there exists \(\nu\in\mathbb{S}^{n}\) such that, up to further decreasing the value of \(r\) and for some \(u_{1},u_{2}\in C^{1,\alpha}(\mathbf{D}_{r}^{\nu}(x);(-r/4,r/4))\) with \(u_{1}(x)=u_{2}(x)=0\) and \(\nabla u_{1}(x)=\nabla u_{2}(x)=0\) it must hold
\[\mathbf{C}_{r}^{\nu}(x)\cap U_{j(x)}^{i}=\left\{y+t\,\nu:y\in \mathbf{D}_{r}^{\nu}(x)\,,t>u_{2}(y)\right\},\] \[\mathbf{C}_{r}^{\nu}(x)\cap U_{k(x)}^{i}=\left\{y+t\,\nu:y\in \mathbf{D}_{r}^{\nu}(x)\,,t<u_{1}(y)\right\}.\]
By \(U_{j(x)}^{i}\cap U_{k(x)}^{i}=\varnothing\) we have \(u_{1}\leq u_{2}\) on \(\mathbf{D}_{r}^{\nu}(x)\), so that (7.22) gives
\[\mathbf{C}_{r}^{\nu}(x)\cap E=\left\{y+t\,\nu:y\in\left\{u_{1}<u_{2}\right\}, u_{1}(y)<t<u_{2}(y)\right\},\]
and \(\left\{u_{1}<u_{2}\right\}\) is non-empty. Again by (7.22) and (7.13) we also have that
\[\mathbf{C}_{r}^{\nu}(x)\cap K = \cup_{k=1}^{2}\left\{y+u_{k}(y)\,\nu:y\in\mathbf{D}_{r}^{\nu}(x) \right\},\] \[\mathbf{C}_{r}^{\nu}(x)\cap\partial^{*}U_{j(x)}^{i}\cap\partial^ {*}U_{k(x)}^{i} = \left\{y+u_{1}(y)\,\nu:y\in\mathbf{D}_{r}^{\nu}(x)\cap\left\{u_{1 }=u_{2}\right\}\right\},\] \[\mathbf{C}_{r}^{\nu}(x)\cap\partial^{*}E = \cup_{k=1}^{2}\left\{y+u_{k}(y)\,\nu:y\in\mathbf{D}_{r}^{\nu}(x) \cap\left\{u_{1}<u_{2}\right\}\right\}.\]
This completes the proof of conclusion (iv)\({}_{\alpha}\).
_Proof of the claim_: Assuming without loss of generality that \(x=0\), we want to find \(\Lambda\) and \(r_{0}\) positive such that if \(B_{2r}\subset\subset\Omega\), then, denoting by \(\{U_{j}\}_{j}\) the open connected components of \(B_{2r}\setminus(E\cup K)\), we have
\[B_{r}\cap K=B_{r}\cap\cup_{j}\partial U_{j}\,, \tag{7.22}\] \[\#\big{\{}j:B_{r}\cap U_{j}\neq\varnothing\big{\}}<\infty\,, \tag{7.23}\] \[B_{2\,r}\cap\mathrm{cl}\,(\partial^{*}U_{j})=B_{2\,r}\cap\partial U_{j}\,, \tag{7.24}\]
and that \(P(U_{j};B_{r})\leq P(V_{j};B_{r})+\Lambda\,|U_{j}\Delta V_{j}|\) whenever \(V_{j}\) satisfies \(V_{j}\Delta U_{j}\subset\subset B_{r}\) and \(\mathrm{diam}\,(U_{j}\Delta V_{j})<r_{0}\).
_Step one_: We prove that
\[K\cap\mathrm{int}\,U_{j}^{{(1)}}=\varnothing\,,\qquad \mathrm{int}\,U_{j}^{{(1)}}=U_{j}\quad \forall j\,. \tag{7.25}\]
To this end, we begin by noticing that, for every \(j\),
\[B_{2\,r}\cap\partial U_{j} \subset B_{2\,r}\cap K\,, \tag{7.26}\] \[U_{j}\ \subset\ \mathrm{int}(U_{j}^{{(1)}}) \subset B_{2\,r}\cap\mathrm{cl}\,U_{j}\ \subset\ B_{2\,r}\cap(U_{j}\cup K)\,,\] (7.27) \[B_{2\,r}\cap\partial[\mathrm{int}(U_{j}^{{(1)}})] \subset B_{2\,r}\cap K\,. \tag{7.28}\]
Indeed, for every \(k\neq j\), \(U_{k}\cap U_{j}=\varnothing\) with \(U_{k}\) and \(U_{j}\) open gives \(U_{k}\cap\partial U_{j}=\varnothing\), so that \(B_{2r}\cap\partial U_{j}\subset B_{2r}\setminus\cup_{k}U_{k}=B_{2\,r}\cap(E\cup K)=B_{2\,r}\cap K\) thanks to the fact that \(E\cap\partial U_{j}=\varnothing\) (as \(U_{j}\cap E=\varnothing\) and \(E\) is open). Having proved (7.26), one easily deduces the third inclusion in (7.27), while the first two are evident. Finally, from (7.27), and since \(K\) is closed, we find
\[B_{2\,r}\cap\mathrm{cl}\left(\mathrm{int}(U_{j}^{{(1)}}) \right)\subset B_{2\,r}\cap\left(\mathrm{cl}\,(U_{j})\cup K\right),\]
so that subtracting \(\mathrm{int}(U_{j}^{{(1)}})\), and recalling that \(U_{j}\subset\mathrm{int}(U_{j}^{{(1)}})\) we find
\[B_{2\,r}\cap\partial[\mathrm{int}(U_{j}^{{(1)}})] \subset B_{2\,r}\cap(K\cup\partial U_{j})\]
and deduce (7.28) from (7.26).
Next, we claim that,
\[\text{if }K_{*}=K\setminus\bigcup_{j}\mathrm{int}\,U_{j}^{{(1)}}, \text{ then }(K_{*},E)\in\mathcal{K}\text{ and }K_{*}\cup E\text{ is }\mathcal{C}\text{- spanning}\,. \tag{7.29}\]
_To prove that \((K_{*},E)\in\mathcal{K}\), the only assertion that is not immediate is the inclusion \(\Omega\cap\partial E\subset K_{*}\). To prove it we notice that if \(z\in\mathrm{int}\,U_{j}^{{(1)}}\), then \(B_{s}(z)\subset\mathrm{int}\,U_{j}^{{(1)}}\) for some \(s>0\), so that \(U_{j}\cap E=\varnothing\) gives \(|E\cap B_{s}(z)|=0\). Since \(E\) is open this implies \(B_{s}(z)\cap E=\varnothing\), hence \(z\notin\partial E\)._
_To prove that \(E\cup K_{*}\) is \(\mathcal{C}\)-spanning_: Since \(E\cup K_{*}\) is relatively closed in \(\Omega\), it suffices to verify that for arbitrary \(\gamma\in\mathcal{C}\), \((K_{*}\cup E)\cap\gamma(\mathbb{S}^{1})\neq\varnothing\). Since \(K\setminus B_{2r}=K_{*}\setminus B_{2r}\), we directly assume that \((K\cup E)\cap(\gamma(\mathbb{S}^{1})\setminus B_{2r})=\varnothing\). Since \(K\cup E\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), by Lemma 7.2, there are two distinct connected components \(U_{j}\) and \(U_{k}\) of \(B_{2r}\setminus(K\cup E)\) such that \(\gamma(\mathbb{S}^{1})\cap B_{2\,r}\cap(\partial U_{j})\cap(\partial U_{k})\neq\varnothing\). We conclude by showing that
\[B_{2\,r}\cap(\partial U_{j})\cap(\partial U_{k})\subset K_{*}\,,\qquad\forall j \neq k\,. \tag{7.30}\]
Indeed any point in \(B_{2r}\cap(\partial U_{j})\cap(\partial U_{k})\) is an accumulation point for both \(U_{j}\) and \(U_{k}\), and thus, by (7.27), for both \(\mathrm{int}U_{j}^{{(1)}}\) and \(\mathrm{int}U_{k}^{{(1)}}\). Since \(U_{j}\cap U_{k}=\emptyset\) implies \((\mathrm{int}U_{j}^{{(1)}})\cap(\mathrm{int}U_{k}^{{(1)}})=\emptyset\), an accumulation point for both \(\mathrm{int}U_{j}^{{(1)}}\) and \(\mathrm{int}U_{k}^{{(1)}}\) must lie in \([\partial(\mathrm{int}U_{j}^{{(1)}})]\cap[\partial(\mathrm{int}U_{k}^{{(1)}})]\). We thus deduce (7.30) from (7.28), and complete the proof of (7.29).
_To deduce (7.25) from (7.29), and complete step one_: By (7.29), \((K_{*},E)\) is admissible in \(\Psi_{\mathrm{bk}}(v)\). Since \((K,E)\) is a minimizer of \(\Psi_{\mathrm{bk}}(v)\), we conclude that \(\mathcal{H}^{n}(K\setminus K_{*})=0\). If there were \(z\in\mathrm{int}(U_{j}^{{(1)}})\cap K\) for some \(j\), then by (6.3), and with \(\rho>0\) such that \(B_{\rho}(z)\subset\mathrm{int}(U_{j}^{{(1)}})\), we would find
\[c\,\rho^{n}\leq\mathcal{H}^{n}(K\cap B_{\rho}(z))\leq\mathcal{H}^{n}(K\cap \mathrm{int}(U_{j}^{{(1)}}))\leq\mathcal{H}^{n}(K\setminus K_{*})=0\,.\]
This shows that \(K\cap\mathrm{int}(U_{j}^{{(1)}})=\varnothing\). Using this last fact in combination with \(\mathrm{int}(U_{j}^{{(1)}})\subset B_{2\,r}\cap(U_{j}\cup K)\) from (7.27) we conclude that \(\mathrm{int}(U_{j}^{{(1)}})\subset U_{j}\), and thus that \(\mathrm{int}(U_{j}^{{(1)}})=U_{j}\) by the first inclusion in (7.27).
_Step two_: We prove (7.24), i.e. \(B_{2\,r}\cap\mathrm{cl}\,(\partial^{*}U_{j})=B_{2\,r}\cap\partial U_{j}\). The \(\subset\) inclusion is a general fact, see (1.35). To prove the reverse inclusion we recall, again from (1.35), that \(z\in B_{2\,r}\cap\mathrm{cl}\,(\partial^{*}U_{j})\) if and only if \(0<|B_{\rho}(z)\cap U_{j}|<|B_{\rho}|\) for every \(\rho>0\). Now, if \(z\in B_{2\,r}\cap\partial U_{j}\), then clearly, being \(U_{j}\) open, we have \(|U_{j}\cap B_{\rho}(z)|>0\) for every \(\rho>0\); moreover, should \(|B_{\rho}(z)\cap U_{j}|=|B_{\rho}|\) hold for some \(\rho\), then we would have \(z\in\mathrm{int}(U_{j}^{{(1)}})\), and thus \(z\in U_{j}\) by (7.25), a contradiction.
_Step three_: We prove, for each \(j\), the \(\mathcal{H}^{n}\)-equivalence of \(\partial^{*}U_{j}\) and \(\partial U_{j}\), that is
\[\mathcal{H}^{n}(B_{2\,r}\cap\partial U_{j}\setminus\partial^{*}U_{j})=0\,. \tag{7.31}\]
By a standard argument [13, Theorem 21.11] it will suffice to prove the existence of \(r_{0}>0\) and \(\alpha,\beta\in(0,1/2)\) (depending on \(n\)) such that, for each \(j\) and each \(z\in B_{2\,r}\cap\partial U_{j}\), it holds
\[\alpha\,|B_{\rho}|\leq|B_{\rho}(z)\cap U_{j}|\leq(1-\beta)|B_{\rho}|\,, \tag{7.32}\]
for every \(\rho<\min\{r_{0},\mathrm{dist}(z,\partial B_{2\,r})\}\).
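For the reader's convenience, the deduction of (7.31) from (7.32) runs as follows: by Federer's theorem (1.37), \(\mathcal{H}^{n}\)-a.e. point of \(\Omega\setminus\partial^{*}U_{j}\) belongs to \(U_{j}^{{(0)}}\cup U_{j}^{{(1)}}\); on the other hand, (7.32) prevents any \(z\in B_{2\,r}\cap\partial U_{j}\) from having Lebesgue density \(0\) or \(1\) with respect to \(U_{j}\), so that \(\mathcal{H}^{n}(B_{2\,r}\cap\partial U_{j}\setminus\partial^{*}U_{j})=0\).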
_Proof of the lower bound in (7.32)_: Since diffeomorphic images of \(\mathcal{C}\)-spanning sets are \(\mathcal{C}\)-spanning, a standard argument using diffeomorphic volume fixing variations shows the existence of positive constants \(\Lambda\) and \(r_{0}\) such that if \((K^{\prime},E^{\prime})\in\mathcal{K}_{\mathrm{B}}\), \(K^{\prime}\cup(E^{\prime})^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), and \((K^{\prime}\Delta K)\cup(E^{\prime}\Delta E)\subset\subset B_{\rho}(z)\) for some \(\rho<r_{0}\) and \(B_{\rho}(z)\subset\subset B_{2\,r}\), then
\[\mathcal{F}_{\mathrm{bk}}(K,E)\leq\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{ \prime})+\Lambda\,|E\Delta E^{\prime}|\,. \tag{7.33}\]
We claim that we can apply (7.33) with
\[E^{\prime}=E\cup\big{(}B_{\rho}(z)\cap\mathrm{cl}\,U_{j}\big{)}\,,\quad K^{\prime}=\big{(}K\cup(U_{j}^{{(1)}}\cap\partial B_{\rho}(z))\big{)}\setminus(E^{\prime})^{{(1)}}\,, \tag{7.34}\]
where \(\rho<r_{0}\), \(B_{\rho}(z)\subset\subset B_{2\,r}\), and
\[\mathcal{H}^{n}\big{(}\partial B_{\rho}(z)\cap[\partial^{*}E\cup\partial^{*}U_{j }]\big{)}=\mathcal{H}^{n}(K\cap\partial B_{\rho}(z))=0\,. \tag{7.35}\]
Indeed, \(K^{\prime}\cup(E^{\prime})^{{(1)}}\) contains \(K\cup E^{{(1)}}\), and thus contains \(K\cup E\) (\(E\) being open), and is thus \(\mathcal{C}\)-spanning. To check that \((K^{\prime},E^{\prime})\in\mathcal{K}_{\mathrm{B}}\), we argue as follows. First, we notice that \(\mathcal{H}^{n}\big{(}\big{\{}\nu_{E}=\nu_{B_{\rho}(z)\cap\operatorname{cl}(U_{j})}\big{\}}\big{)}=0\), since this set is \(\mathcal{H}^{n}\)-contained in the union of \(\partial B_{\rho}(z)\cap\partial^{*}E\) and \(\{\nu_{E}=\nu_{\operatorname{cl}(U_{j})}\}\), that are \(\mathcal{H}^{n}\)-negligible by (7.35) and by the fact that \(\nu_{E}=-\nu_{\operatorname{cl}(U_{j})}\)\(\mathcal{H}^{n}\)-a.e. on \(\partial^{*}E\cap\partial^{*}\operatorname{cl}(U_{j})\) thanks to \(|E\cap\operatorname{cl}(U_{j})|=0\). By \(\mathcal{H}^{n}(\{\nu_{E}=\nu_{B_{\rho}(z)\cap\operatorname{cl}(U_{j})}\})=0\) and (1.39) we thus have
\[\Omega\cap\partial^{*}E^{\prime}\stackrel{{\mathcal{H}^{n}}}{{= }}\Omega\cap\left\{\bigl{[}E^{{(0)}}\cap\partial^{*}\bigl{(}B_{\rho}(z)\cap \operatorname{cl}U_{j}\bigr{)}\bigr{]}\cup\bigl{[}\bigl{(}B_{\rho}(z)\cap \operatorname{cl}U_{j}\bigr{)}^{{(0)}}\cap\partial^{*}E \bigr{]}\right\}. \tag{7.36}\]
Since \(U_{j}\) is Lebesgue equivalent to \(\operatorname{cl}(U_{j})\) (indeed, \(B_{2\,r}\cap\partial U_{j}\subset K\)), we have \(U_{j}^{{(1)}}=[\operatorname{cl}(U_{j})]^{{(1)}}\) and \(\partial^{*}[\operatorname{cl}(U_{j})]=\partial^{*}U_{j}\), so that (1.40) and (7.35) give
\[\partial^{*}\bigl{(}B_{\rho}(z)\cap\operatorname{cl}(U_{j})\bigr{)}\stackrel{{\mathcal{H}^{n}}}{{=}}\bigl{\{}[\operatorname{cl}(U_{j})]^{{(1)}}\cap\partial B_{\rho}(z)\bigr{\}}\cup\bigl{\{}B_{\rho}(z)\cap\partial^{*}[\operatorname{cl}(U_{j})]\bigr{\}}\,,\] \[=\bigl{(}U_{j}^{{(1)}}\cap\partial B_{\rho}(z)\bigr{)}\cup\bigl{(}B_{\rho}(z)\cap\partial^{*}U_{j}\bigr{)}\subset\bigl{(}U_{j}^{{(1)}}\cap\partial B_{\rho}(z)\bigr{)}\cup K\,, \tag{7.37}\]
by \(B_{2\,r}\cap\partial U_{j}\subset K\). By (7.36) and \(\mathcal{H}^{n}((E^{\prime})^{{(1)}}\cap\partial^{*}E^{\prime})=0\) we thus find that
\[\Omega\cap\partial^{*}E^{\prime}\cap\partial^{*}\bigl{(}B_{\rho}(z)\cap \operatorname{cl}(U_{j})\bigr{)}\stackrel{{\mathcal{H}^{n}}}{{ \subset}}K^{\prime}\,. \tag{7.38}\]
Moreover, by \(\Omega\cap\partial^{*}E\subset\Omega\cap\partial E\subset K\) and
\[(\partial^{*}E)\cap\bigl{(}B_{\rho}(z)\cap\operatorname{cl}U_{j}\bigr{)}^{{ (0)}}\subset E^{{(1/2)}}\cap\bigl{(}B_{\rho}(z)\cap \operatorname{cl}U_{j}\bigr{)}^{{(0)}}\subset\mathbb{R}^{n+1} \setminus(E^{\prime})^{{(1)}}\,,\]
we find \((\partial^{*}E)\cap\bigl{(}B_{\rho}(z)\cap\operatorname{cl}U_{j}\bigr{)}^{{(0)}}\subset K\setminus(E^{\prime})^{{(1)}}\subset K^{\prime}\), which combined with (7.38) finally proves the \(\mathcal{H}^{n}\)-containment of \(\Omega\cap\partial^{*}E^{\prime}\) in \(K^{\prime}\), and thus \((K^{\prime},E^{\prime})\in\mathcal{K}_{\mathrm{B}}\). We have thus proved that \((K^{\prime},E^{\prime})\) as in (7.34) is admissible in (7.33). Since \(\mathcal{F}_{\mathrm{bk}}(K,E;\partial B_{\rho}(z))=0\) by (7.35) and \(\mathcal{F}_{\mathrm{bk}}(K,E;A)=\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};A)\) if \(A=\Omega\setminus\operatorname{cl}(B_{\rho}(z))\), we deduce from (7.33) that
\[\mathcal{F}_{\mathrm{bk}}(K,E;B_{\rho}(z))\leq\mathcal{F}_{\mathrm{bk}}(K^{ \prime},E^{\prime};\operatorname{cl}(B_{\rho}(z)))+\Lambda\,|E\Delta E^{\prime}|\,. \tag{7.39}\]
To exploit (7.39), we first notice that \(\{B_{\rho}(z)\cap U_{k}\}_{k}\) is a Lebesgue partition of \(B_{\rho}(z)\setminus E\) with \(B_{\rho}(z)^{{(1)}}\cap\partial^{*}(B_{\rho}(z)\cap U_{k})=B_{\rho}(z)\cap \partial^{*}U_{k}\) for every \(k\), so that, by Lemma 7.1,
\[\mathcal{F}_{\mathrm{bk}}(K,E;B_{\rho}(z))=2\,\mathcal{H}^{n}\Bigl{(}B_{\rho}(z )\cap E^{{(0)}}\cap\Bigl{(}K\setminus\bigcup_{k}\partial^{*}U_{k}\Bigr{)} \Bigr{)}+\sum_{k}P(U_{k};B_{\rho}(z))\,. \tag{7.40}\]
Similarly, \(\{B_{\rho}(z)\cap U_{k}\}_{k\neq j}\) is a Lebesgue partition of \(B_{\rho}(z)\setminus E^{\prime}\), so that again by Lemma 7.1 we find
\[\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};B_{\rho}(z))=2\, \mathcal{H}^{n}\Bigl{(}B_{\rho}(z)\cap(E^{\prime})^{{(0)}}\cap\Bigl{(}K^{ \prime}\setminus\bigcup_{k\neq j}\partial^{*}U_{k}\Bigr{)}\Bigr{)}+\sum_{k\neq j }P(U_{k};B_{\rho}(z))\] \[=2\,\mathcal{H}^{n}\Bigl{(}B_{\rho}(z)\cap(E^{\prime})^{{ (0)}}\cap\Bigl{(}K\setminus\bigcup_{k}\partial^{*}U_{k}\Bigr{)}\Bigr{)}+\sum_{k \neq j}P(U_{k};B_{\rho}(z)) \tag{7.41}\]
where in the last identity we have used that, by (7.34), we have \(B_{\rho}(z)\cap(E^{\prime})^{{(0)}}\cap\partial^{*}U_{j}=\varnothing\) and \(B_{\rho}(z)\cap K^{\prime}\cap(E^{\prime})^{{(0)}}=B_{\rho}(z)\cap K\cap(E^{ \prime})^{{(0)}}\). Combining (7.39), (7.40), (7.41) and the fact that \((E^{\prime})^{{(0)}}\subset E^{{(0)}}\), we find that
\[P(U_{j};B_{\rho}(z))\leq\mathcal{F}_{\mathrm{bk}}\bigl{(}K^{\prime},E^{\prime}; \partial B_{\rho}(z)\bigr{)}+\Lambda\,|B_{\rho}(z)\cap U_{j}|\,. \tag{7.42}\]
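In detail: subtracting (7.41) from (7.40), and discarding the non-negative term produced by \((E^{\prime})^{{(0)}}\subset E^{{(0)}}\), gives \(\mathcal{F}_{\mathrm{bk}}(K,E;B_{\rho}(z))-\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};B_{\rho}(z))\geq P(U_{j};B_{\rho}(z))\); hence, by (7.39) and the additivity of \(A\mapsto\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};A)\),

\[P(U_{j};B_{\rho}(z))\leq\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};\operatorname{cl}(B_{\rho}(z)))-\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};B_{\rho}(z))+\Lambda\,|E\Delta E^{\prime}|=\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};\partial B_{\rho}(z))+\Lambda\,|E\Delta E^{\prime}|\,,\]

with \(|E\Delta E^{\prime}|=|B_{\rho}(z)\cap\operatorname{cl}(U_{j})\setminus E|=|B_{\rho}(z)\cap U_{j}|\), by (7.34), \(E\cap U_{j}=\varnothing\), and the Lebesgue equivalence of \(U_{j}\) and \(\operatorname{cl}(U_{j})\).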
The first term in \(\mathcal{F}_{\mathrm{bk}}\bigl{(}K^{\prime},E^{\prime};\partial B_{\rho}(z) \bigr{)}\) is \(P(E^{\prime};\partial B_{\rho}(z))\): taking into account \(\mathcal{H}^{n}(\partial^{*}E\cap\partial B_{\rho}(z))=0\), by (7.36) and the second identity in (7.37) we find
\[P(E^{\prime};\partial B_{\rho}(z)) = \mathcal{H}^{n}\bigl{(}\partial B_{\rho}(z)\cap E^{{(0)}} \cap\partial^{*}\bigl{(}B_{\rho}(z)\cap\operatorname{cl}U_{j}\bigr{)}\bigr{)}\] \[= \mathcal{H}^{n}(E^{{(0)}}\cap U_{j}^{{(1)}}\cap\partial B_{\rho}(z) )=\mathcal{H}^{n}(U_{j}^{{(1)}}\cap\partial B_{\rho}(z))\,,\]
while for the second term in \(\mathcal{F}_{\mathrm{bk}}\bigl{(}K^{\prime},E^{\prime};\partial B_{\rho}(z) \bigr{)}\), by \(\mathcal{H}^{n}(K\cap\partial B_{\rho}(z))=0\),
\[\mathcal{H}^{n}(K^{\prime}\cap(E^{\prime})^{{(0)}}\cap\partial B_{\rho}(z))= \mathcal{H}^{n}((E^{\prime})^{{(0)}}\cap U_{j}^{{(1)}}\cap\partial B_{\rho}(z) )=0\]
since \((E^{\prime})^{(0)}\subset(B_{\rho}(z)\cap\operatorname{cl}\,(U_{j}))^{(0)}\) and \(B_{\rho}(z)\cap\operatorname{cl}\,(U_{j})\) has positive Lebesgue density at points in \(U_{j}^{(1)}\cap\partial B_{\rho}(z)\). Having thus proved that \(\mathcal{F}_{\operatorname{bk}}\big{(}K^{\prime},E^{\prime};\partial B_{\rho} (z)\big{)}=\mathcal{H}^{n}(U_{j}^{(1)}\cap\partial B_{\rho}(z))\), we conclude from (7.42) that
\[P(U_{j};B_{\rho}(z))\leq\mathcal{H}^{n}(U_{j}^{(1)}\cap\partial B_{\rho}(z))+ \Lambda\,|B_{\rho}(z)\cap U_{j}|\,,\]
for a.e. \(\rho<r_{0}\). Since \(z\in B_{2\,r}\cap\partial U_{j}=B_{2\,r}\cap\operatorname{cl}\,(\partial^{*}U _{j})\) and (1.35) imply that \(|B_{\rho}(z)\cap U_{j}|>0\) for every \(\rho>0\), a standard argument (see, e.g. [13, Theorem 21.11]) implies that, up to further decreasing the value of \(r_{0}\) depending on \(\Lambda\), and for some constant \(\alpha=\alpha(n)\in(0,1/2)\), the lower bound in (7.32) holds true.
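For the reader's convenience, here is a sketch of the standard argument just invoked (constants are generic and not optimized). Set \(m(\rho)=|B_{\rho}(z)\cap U_{j}|\), so that \(m^{\prime}(\rho)=\mathcal{H}^{n}(U_{j}^{{(1)}}\cap\partial B_{\rho}(z))\) for a.e. \(\rho\); by the isoperimetric inequality and the perimeter bound just obtained,

\[c(n)\,m(\rho)^{n/(n+1)}\leq P\bigl(U_{j}\cap B_{\rho}(z)\bigr)\leq P(U_{j};B_{\rho}(z))+m^{\prime}(\rho)\leq 2\,m^{\prime}(\rho)+\Lambda\,m(\rho)\,,\]

while \(m(\rho)\leq\omega_{n+1}\,\rho^{n+1}\) gives \(\Lambda\,m(\rho)\leq\Lambda\,\omega_{n+1}^{1/(n+1)}\,\rho\,m(\rho)^{n/(n+1)}\leq(c(n)/2)\,m(\rho)^{n/(n+1)}\) once \(r_{0}=r_{0}(n,\Lambda)\) is small enough. Hence \((m^{1/(n+1)})^{\prime}\geq c(n)/(4(n+1))\) for a.e. \(\rho<r_{0}\), and since \(m(\rho)>0\) for every \(\rho>0\), an integration over \((0,\rho)\) gives \(m(\rho)\geq\alpha\,\rho^{n+1}\) with \(\alpha=\alpha(n)\).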
_Proof of the upper bound in (7.32)_: We argue by contradiction, assuming that, no matter how small \(\beta\in(0,1/2)\) is, we can find \(j\), \(z\in B_{2\,r}\cap\partial U_{j}\), and \(\rho<\min\{r_{0},\operatorname{dist}(z,\partial B_{2\,r})\}\), such that
\[|B_{\rho}(z)\cap U_{j}|>(1-\beta)\,|B_{\rho}|\,. \tag{7.43}\]
We first notice that for every \(k\neq j\) it must be \(B_{\rho/2}(z)\cap\partial U_{k}=\varnothing\): indeed if \(w\in B_{\rho/2}(z)\cap\partial U_{k}\) for some \(k\neq j\), then by the lower bound in (6.2) and by (7.43) we find
\[\alpha\,|B_{\rho/2}|\leq|U_{k}\cap B_{\rho/2}(w)|\leq|B_{\rho}(z)\setminus U_ {j}|<\beta\,|B_{\rho}|\]
which gives a contradiction if \(\beta<\alpha/2^{n+1}\). By \(B_{\rho/2}(z)\cap\partial U_{k}=\varnothing\) it follows that
\[B_{\rho/2}(z)\subset\operatorname{cl}\,(U_{j})\cup\operatorname{cl}\,(E)\,. \tag{7.44}\]
Let us now set
\[E^{\prime}=E\setminus B_{\rho/2}(z)\,,\qquad K^{\prime}=\big{(}K\setminus B_ {\rho/2}(z)\big{)}\cup\big{(}E^{(1)}\cap\partial B_{\rho/2}(z)\big{)}\,. \tag{7.45}\]
By (1.41), if \(\mathcal{H}^{n}(\partial^{*}E\cap\partial B_{\rho/2}(z))=0\), then \((K^{\prime},E^{\prime})\in\mathcal{K}\), since \((\Omega\setminus B_{\rho/2}(z))\cap\partial^{*}E\subset K\setminus B_{\rho/2} (z)\subset K^{\prime}\) implies
\[\Omega\cap\partial^{*}E^{\prime}\stackrel{{\mathcal{H}^{n}}}{{=}} \Omega\cap\big{\{}\big{(}(\partial^{*}E)\setminus B_{\rho/2}(z)\big{)}\cup \big{(}E^{(1)}\cap\partial B_{\rho/2}(z)\big{)}\big{\}}\subset K^{\prime}\,.\]
Moreover \(K^{\prime}\cup(E^{\prime})^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) since it contains \((K\cup E)\setminus B_{\rho/2}(z)\), and
\[(K\cup E)\setminus B_{\rho/2}(z)\text{ is $\mathcal{C}$-spanning $\mathbf{W}$}\,. \tag{7.46}\]
Indeed, if \(\gamma\in\mathcal{C}\) and \(\gamma(\mathbb{S}^{1})\cap(K\cup E)\setminus B_{\rho/2}(z)=\varnothing\), then by applying Lemma 7.2 to \(S=K\cup E\) and \(B=B_{2\,r}\) we see that either \(\gamma(\mathbb{S}^{1})\cap(K\cup E)\setminus B_{2\,r}\neq\varnothing\) (and thus \(\gamma(\mathbb{S}^{1})\cap(K\cup E)\setminus B_{\rho/2}(z)\neq\varnothing\) by \(B_{\rho/2}(z)\subset B_{r}\)), or there are \(k\neq h\) such that \(\gamma(\mathbb{S}^{1})\cap\partial U_{k}\neq\varnothing\) and \(\gamma(\mathbb{S}^{1})\cap\partial U_{h}\neq\varnothing\). Up to possibly switching \(k\) and \(h\), we have that \(k\neq j\), so that (7.44) implies that \(\varnothing\neq\gamma(\mathbb{S}^{1})\cap\partial U_{k}=\gamma(\mathbb{S}^{1}) \cap\partial U_{k}\setminus B_{\rho/2}(z)\), where the latter set is contained in \(K\setminus B_{\rho/2}(z)\) by (7.22) and \(B_{\rho/2}(z)\subset B_{r}\). This proves (7.46).
We can thus plug the competitor \((K^{\prime},E^{\prime})\) defined in (7.45) into (7.39), and find
\[\mathcal{F}_{\operatorname{bk}}(K,E;B_{\rho/2}(z))\leq\mathcal{F}_{\operatorname {bk}}\big{(}K^{\prime},E^{\prime};\operatorname{cl}\,(B_{\rho/2}(z))\big{)}+ \Lambda\,|E\cap B_{\rho/2}(z)|\,,\]
for every \(\rho<\min\{r_{0},\operatorname{dist}(z,\partial B_{2\,r})\}\) such that \(\mathcal{H}^{n}(K\cap\partial B_{\rho/2}(z))=0\). Now, by Lemma 7.1 and by (7.44) we have
\[\mathcal{F}_{\operatorname{bk}}(K,E;B_{\rho/2}(z))\geq P(U_{j};B_{\rho/2}(z))=P (E;B_{\rho/2}(z))\,,\]
while (1.40) gives
\[\operatorname{cl}\,(B_{\rho/2}(z))\cap K^{\prime}\stackrel{{ \mathcal{H}^{n}}}{{=}}\operatorname{cl}\,(B_{\rho/2}(z))\cap\partial^{*}E^{ \prime}\stackrel{{\mathcal{H}^{n}}}{{=}}E^{(1)}\cap\partial B_{\rho/ 2}(z)\,,\]
thus proving that, for a.e. \(\rho<\min\{r_{0},\operatorname{dist}(z,\partial B_{2\,r})\}\),
\[P(E;B_{\rho/2}(z))\leq\mathcal{H}^{n}(E^{(1)}\cap\partial B_{\rho/2}(z))+\Lambda\,|E \cap B_{\rho/2}(z)|\,.\]
Since \(z\in B_{2\,r}\cap\partial U_{j}\) and \(B_{\rho/2}(z)\cap\partial^{*}U_{j}=B_{\rho/2}(z)\cap\partial^{*}E\), by (1.35) we see that \(|E\cap B_{\rho/2}(z)|>0\) for every \(\rho<\min\{r_{0},\mathrm{dist}(z,\partial B_{2\,r})\}\). By a standard argument, up to further decreasing the value of \(r_{0}\), we find that for some \(\alpha^{\prime}=\alpha^{\prime}(n)\) it holds
\[|E\cap B_{\rho/2}(z)|\geq\alpha^{\prime}\,|B_{\rho/2}|\,,\qquad\forall\rho< \min\{r_{0},\mathrm{dist}(z,\partial B_{2\,r})\}\,,\]
and since \(|E\cap B_{\rho/2}(z)|=|B_{\rho/2}(z)\setminus U_{j}|\) this gives a contradiction with (7.43) up to further decreasing the value of \(\beta\).
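Here the "standard argument" is the same differential inequality used for the lower bound in (7.32), now applied to \(E\) (a sketch; constants are generic): setting \(m(s)=|E\cap B_{s}(z)|\) with \(s=\rho/2\), so that \(m^{\prime}(s)=\mathcal{H}^{n}(E^{(1)}\cap\partial B_{s}(z))\) for a.e. \(s\), the estimate on \(P(E;B_{\rho/2}(z))\) obtained above and the isoperimetric inequality give

\[c(n)\,m(s)^{n/(n+1)}\leq P(E\cap B_{s}(z))\leq P(E;B_{s}(z))+m^{\prime}(s)\leq 2\,m^{\prime}(s)+\Lambda\,m(s)\,,\]

and, absorbing \(\Lambda\,m(s)\) as before for \(s\) small, an integration yields \(m(s)\geq\alpha^{\prime}\,s^{n+1}\).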
_Step three_: We prove (7.22) and (7.23). The lower bound in (7.32) implies (7.23), i.e., \(J=\#\{j:U_{j}\cap B_{r}\neq\varnothing\}<\infty\). Next, by \(B_{2\,r}\cap\partial U_{j}\subset K\) (last inclusion in (7.27)), to prove (7.22) it suffices to show that
\[K\cap B_{r}\subset\cup_{j=1}^{J}\partial U_{j}\,. \tag{7.47}\]
Now, if \(z\in K\cap B_{r}\), then by \(K\cap E=\varnothing\) we have either \(z\in K\setminus\mathrm{cl}\,(E)\) or \(z\in B_{r}\cap\partial E\), and, in the latter case, \(|E\cap B_{\rho}(z)|\leq(1-c)\,|B_{\rho}|\) for every \(\rho<\min\{r_{0},\mathrm{dist}(z,\partial\mathbf{W})\}\) thanks to (6.2). Therefore, in both cases, \(z\) is an accumulation point for \((\cup_{j=1}^{J}U_{j})^{(1)}\cap B_{r}\). Since \(J\) is finite, there must be at least one \(j\) such that \(z\in\mathrm{cl}\,(U_{j})\) - hence \(z\in\partial U_{j}\) thanks to \(K\cap U_{j}=\varnothing\).
Before moving to the next step, we also notice that
\[\mathcal{F}_{\mathrm{bk}}(K,E;B_{r})=\sum_{j=1}^{J}P(U_{j};B_{r})\,. \tag{7.48}\]
Indeed, by (7.22), (7.23), and (7.31) we have
\[K\cap B_{r}=B_{r}\cap\cup_{j=1}^{J}\partial U_{j}\stackrel{{ \mathcal{H}^{n}}}{{=}}B_{r}\cap\cup_{j=1}^{J}\partial^{*}U_{j}\,, \tag{7.49}\]
so that, in the application of Lemma 7.1, i.e. in (7.40), the multiplicity-\(2\) term vanishes, and we find (7.48).
_Step four_: In this step we consider a set of finite perimeter \(V_{1}\) such that, for some \(B:=B_{\rho}(z)\subset B_{r}\) with \(\rho<r_{0}\) and \(\mathcal{H}^{n}(K\cap\partial B)=0\), we have
\[U_{1}\Delta V_{1}\subset\subset B\,. \tag{7.50}\]
We then define a pair of Borel sets \((K^{\prime},E^{\prime})\) as
\[E^{\prime} = \left(E\setminus B\right)\,\cup\,\left[B\cap\left(V_{1}\Delta(E \cup U_{1})\right)\right], \tag{7.51}\] \[K^{\prime} = \left(K\setminus B\right)\,\cup\,\left[B\cap\left(\partial^{*}V_{ 1}\cup\partial^{*}U_{2}\cup\cdots\cup\partial^{*}U_{J}\right)\right], \tag{7.52}\]
and show that \((K^{\prime},E^{\prime})\in\mathcal{K}_{\mathrm{B}}\), \(K^{\prime}\cup(E^{\prime})^{(1)}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), and
\[\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime})-\mathcal{F}_{\mathrm{bk}}(K,E )\leq P(V_{1};B)-P(U_{1};B)\,. \tag{7.53}\]
As a consequence of (7.53), (7.33) and \(|E\Delta E^{\prime}|=|U_{1}\Delta V_{1}|\), we find of course that \(P(U_{1};\Omega)\leq P(V_{1};\Omega)+\Lambda\,|U_{1}\Delta V_{1}|\), thus showing that \(U_{1}\) is a \((\Lambda,r_{0})\)-perimeter minimizer in \(\Omega\).
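Spelled out, the deduction is as follows (a routine bookkeeping): by (7.53) and by (7.33) applied to the competitor \((K^{\prime},E^{\prime})\),

\[P(U_{1};B)-P(V_{1};B)\leq\mathcal{F}_{\mathrm{bk}}(K,E)-\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime})\leq\Lambda\,|E\Delta E^{\prime}|=\Lambda\,|U_{1}\Delta V_{1}|\,,\]

and adding \(P(U_{1};\Omega\setminus B)=P(V_{1};\Omega\setminus B)\) (which holds since, by (7.50), \(U_{1}\) and \(V_{1}\) coincide in a neighborhood of \(\Omega\setminus B\)) to both sides gives the asserted perimeter inequality.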
Proving that \((K^{\prime},E^{\prime})\in\mathcal{K}_{\mathrm{B}}\) is immediately reduced to showing that \(B\cap\partial^{*}E^{\prime}\) is \(\mathcal{H}^{n}\)-contained in \(B\cap(\partial^{*}V_{1}\cup\partial^{*}U_{2}\cup\cdots\cup\partial^{*}U_{J})\) thanks to \(\mathcal{H}^{n}(K\cap\partial B)=0\). Now, on taking into account that, by (1.39) and (1.41), \(\partial^{*}(X\cup Y)\) and \(\partial^{*}(X\setminus Y)\) are both \(\mathcal{H}^{n}\)-contained in \((\partial^{*}X)\cup(\partial^{*}Y)\), and thus \(\partial^{*}(X\Delta Y)\) is too, we easily see that
\[B\cap\partial^{*}E^{\prime}=B\cap\partial^{*}[V_{1}\Delta(E\cup U_{1})] \stackrel{{\mathcal{H}^{n}}}{{\subset}}(B\cap\partial^{*}V_{1}) \cup(B\cap\partial^{*}(E\cup U_{1}))\,.\]
However, \(B\cap(E\cup U_{1})=B\setminus(\cup_{j=2}^{J}U_{j})\), so that \(\partial^{*}X=\partial^{*}(\mathbb{R}^{n+1}\setminus X)\) gives
\[B\cap\partial^{*}(E\cup U_{1})=B\cap\partial^{*}(\cup_{j=2}^{J}U_{j})\stackrel{{ \mathcal{H}^{n}}}{{\subset}}B\cap\cup_{j\geq 2}\partial^{*}U_{j}\,,\]
where we have used again the \(\mathcal{H}^{n}\)-containment of \(\partial^{*}(X\cup Y)\) in \((\partial^{*}X)\cup(\partial^{*}Y)\). This proves that \((K^{\prime},E^{\prime})\in\mathcal{K}_{\mathrm{B}}\).
To prove that \(K^{\prime}\cup(E^{\prime})^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), we show that the set \(S\) defined by
\[S=\big{(}(K\cup E)\setminus B\big{)}\cup\big{(}\mathrm{cl}\,(B)\cap\cup_{j\geq 2 }\partial U_{j}\big{)}\,,\]
is \(\mathcal{H}^{n}\)-contained in \(K^{\prime}\cup(E^{\prime})^{{(1)}}\) and is \(\mathcal{C}\)-spanning \(\mathbf{W}\).
To prove that \(S\) is \(\mathcal{H}^{n}\)-contained in \(K^{\prime}\cup(E^{\prime})^{{(1)}}\), we start by noticing that \((K\cup E)\setminus\mathrm{cl}\,(B)\) is \(\mathcal{H}^{n}\)-equivalent to \((K\cup E^{{(1)}}\cup\partial^{*}E)\setminus\mathrm{cl}\,(B)\subset K\cup E^{{ (1)}}\) (by \((K,E)\in\mathcal{K}_{\mathrm{B}}\)), whereas \(|(E\Delta E^{\prime})\setminus B|=0\) implies \((E^{{(1)}}\Delta(E^{\prime})^{{(1)}})\setminus\mathrm{cl}\,(B)=\varnothing\): hence \(S\setminus\mathrm{cl}\,(B)\) is \(\mathcal{H}^{n}\)-contained in \(K^{\prime}\cup(E^{\prime})^{{(1)}}\). Next, by (7.31) and by definition of \(K^{\prime}\),
\[S\cap B=B\cap\cup_{j\geq 2}\partial U_{j}\stackrel{{\mathcal{H}^{n} }}{{=}}B\cap\cup_{j\geq 2}\partial^{*}U_{j}\subset K^{\prime}\,.\]
Finally, by \(\mathcal{H}^{n}(K\cap\partial B)=0\), (7.26), and Federer's theorem, \((S\cap\partial B)\setminus K\) is \(\mathcal{H}^{n}\)-equivalent to \((E^{{(1)}}\cap\partial B)\setminus K\), where \(E^{{(1)}}\cap A=(E^{\prime})^{{(1)}}\cap A\) in an open neighborhood \(A\) of \(\partial B\) thanks to \(U_{1}\Delta V_{1}\subset\!\subset B\).
To prove that \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), since \(S\) is relatively closed in \(\Omega\) and thanks to Theorem A.1, we only need to check that \(S\cap\gamma(\mathbb{S}^{1})\neq\varnothing\) for every \(\gamma\in\mathcal{C}\). Since \((K\cup E)\cap\gamma(\mathbb{S}^{1})\neq\varnothing\) for every \(\gamma\in\mathcal{C}\), this is immediate unless \(\gamma\) is such that \(S\cap\gamma(\mathbb{S}^{1})\setminus B=\varnothing\); in that case, however, Lemma 7.2 implies the existence of \(j\neq k\) such that \(\gamma(\mathbb{S}^{1})\cap B\cap\partial U_{j}\) and \(\gamma(\mathbb{S}^{1})\cap B\cap\partial U_{k}\) are both non-empty. Since either \(j\geq 2\) or \(k\geq 2\), we conclude by (7.26) that \(\gamma(\mathbb{S}^{1})\cap B\cap K^{\prime}\neq\varnothing\), thus completing the proof.
We are thus left to prove the validity of (7.53). Taking (7.48) and \(\mathcal{F}_{\mathrm{bk}}(K^{\prime},E^{\prime};B)\leq\mathcal{F}_{\mathrm{bd} }(K^{\prime},E^{\prime};B)\) into account, this amounts to showing that
\[\mathcal{F}_{\mathrm{bd}}(K^{\prime},E^{\prime};B)=\mathcal{H}^{n}(B\cap \partial^{*}E^{\prime})+2\,\mathcal{H}^{n}\big{(}B\cap K^{\prime}\setminus \partial^{*}E^{\prime}\big{)}=P(V_{1};B)+\sum_{j=2}^{J}P(U_{j};B)\,. \tag{7.54}\]
To this end we notice that by (1.44) and \(B\cap E^{\prime}=B\cap[V_{1}\Delta(E\cup U_{1})]\) we have
\[B\cap\partial^{*}E^{\prime} \stackrel{{\mathcal{H}^{n}}}{{=}} B\cap\big{\{}\partial^{*}V_{1}\cup\partial^{*}(E\cup U_{1}) \big{\}}\] \[\stackrel{{\mathcal{H}^{n}}}{{=}} B\cap\big{\{}(\partial^{*}V_{1})\,\cup\,(U_{1}^{{(0)}}\cap \partial^{*}E)\,\cup\,(E^{{(0)}}\cap\partial^{*}U_{1})\big{\}}\,,\]
where we have used (1.39) and \(\mathcal{H}^{n}(\{\nu_{E}=\nu_{U_{1}}\})=0\) (as \(E\cap U_{1}=\varnothing\)). By (1.46) and (1.47), since \(\{B\cap E,B\cap U_{j}\}_{j=1}^{J}\) is a Caccioppoli partition of \(B\), we have
\[U_{1}^{{(0)}}\cap\partial^{*}E=(\partial^{*}E)\cap\bigcup_{j\geq 2}(\partial^{*}U_{j} )\,,\qquad E^{{(0)}}\cap\partial^{*}U_{1}=(\partial^{*}U_{1})\cap \bigcup_{j\geq 2}(\partial^{*}U_{j})\,,\]
so that
\[B\cap\partial^{*}E^{\prime} \stackrel{{\mathcal{H}^{n}}}{{=}} B\cap\Big{\{}(\partial^{*}V_{1})\cup\Big{(}[(\partial^{*}E)\cup( \partial^{*}U_{1})]\cap\bigcup_{j\geq 2}(\partial^{*}U_{j})\Big{)}\Big{\}}\,,\] \[B\cap(K^{\prime}\setminus\partial^{*}E^{\prime}) \stackrel{{\mathcal{H}^{n}}}{{=}} B\cap\Big{(}\bigcup_{j\geq 2}\partial^{*}U_{j}\Big{)}\setminus\big{[}( \partial^{*}E)\cup(\partial^{*}U_{1})\big{]}\,.\]
We thus find
\[\mathcal{H}^{n}(B\cap\partial^{*}E^{\prime})+2\,\mathcal{H}^{n}(B\cap(K^{ \prime}\setminus\partial^{*}E^{\prime}))\] \[=P(V_{1};B)+2\,\mathcal{H}^{n}\Big{(}B\cap\Big{(}\bigcup_{j\geq 2} \partial^{*}U_{j}\Big{)}\setminus(\partial^{*}E\cup\partial^{*}U_{1})\Big{)}+ \mathcal{H}^{n}\Big{(}B\cap\Big{(}\bigcup_{j\geq 2}\partial^{*}U_{j}\Big{)}\cap( \partial^{*}E\cup\partial^{*}U_{1})\Big{)}\] \[=P(V_{1};B)+\sum_{j\geq 2}P(U_{j};B)\,,\]
that is (7.54).
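The last identity in this computation is the usual multiplicity count for Caccioppoli partitions, in the spirit of (1.46) and (1.47) (a sketch): \(\mathcal{H}^{n}\)-a.e. point of \(B\cap\bigcup_{j\geq 2}\partial^{*}U_{j}\) belongs to exactly two reduced boundaries from the family \(\{\partial^{*}E,\partial^{*}U_{1},\dots,\partial^{*}U_{J}\}\), and is thus counted twice in \(\sum_{j\geq 2}P(U_{j};B)\) exactly when it lies outside \(\partial^{*}E\cup\partial^{*}U_{1}\), and once otherwise; that is,

\[\sum_{j\geq 2}P(U_{j};B)=2\,\mathcal{H}^{n}\Bigl(B\cap\Bigl(\bigcup_{j\geq 2}\partial^{*}U_{j}\Bigr)\setminus(\partial^{*}E\cup\partial^{*}U_{1})\Bigr)+\mathcal{H}^{n}\Bigl(B\cap\Bigl(\bigcup_{j\geq 2}\partial^{*}U_{j}\Bigr)\cap(\partial^{*}E\cup\partial^{*}U_{1})\Bigr)\,.\]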
_Step five_: In this final step we prove conclusions (iv) and (v). To this end we fix \(x\in[\Omega\cap(\partial E\setminus\partial^{*}E)]\setminus\Sigma\), and recall that, by conclusion (iv)\({}_{\alpha}\), there are \(r>0\), \(\nu\in\mathbb{S}^{n}\),
\(u_{1},u_{2}\in C^{1,\alpha}({\bf D}^{\nu}_{r}(x);(-r/4,r/4))\) (\(\alpha\in(0,1/2)\) arbitrary) such that \(u_{1}(x)=u_{2}(x)=0\), \(u_{1}\leq u_{2}\) on \({\bf D}^{\nu}_{r}(x)\), \(\{u_{1}<u_{2}\}\) and \(\operatorname{int}\{u_{1}=u_{2}\}\) are both non-empty, and
\[{\bf C}^{\nu}_{r}(x)\cap K = \cup_{i=1,2}\big{\{}y+u_{i}(y)\,\nu:y\in{\bf D}^{\nu}_{r}(x)\big{\}}\,, \tag{7.55}\] \[{\bf C}^{\nu}_{r}(x)\cap\partial^{*}E = \cup_{i=1,2}\big{\{}y+u_{i}(y)\nu:y\in\{u_{1}<u_{2}\}\big{\}}\,,\] (7.56) \[{\bf C}^{\nu}_{r}(x)\cap E = \big{\{}y+t\,\nu:y\in\{u_{1}<u_{2}\}\,,u_{1}(x)<t<u_{2}(x)\big{\}}\,. \tag{7.57}\]
We claim that \((u_{1},u_{2})\) has the minimality property
\[\mathcal{A}(u_{1},u_{2})\leq\mathcal{A}(w_{1},w_{2}):=\int_{{\bf D}^{\nu}_{r} (x)}\sqrt{1+|\nabla w_{1}|^{2}}+\sqrt{1+|\nabla w_{2}|^{2}}\,, \tag{7.58}\]
among all pairs \((w_{1},w_{2})\) with \(w_{1},w_{2}\in\operatorname{Lip}({\bf D}^{\nu}_{r}(x);(-r/2,r/2))\) that satisfy
\[\begin{cases}w_{1}\leq w_{2}\,,&\text{on }{\bf D}^{\nu}_{r}(x)\,,\\ w_{k}=u_{k}\,,&\text{on }\partial{\bf D}^{\nu}_{r}(x),\,k=1,2\,,\qquad\int_{{\bf D }^{\nu}_{r}(x)}w_{2}-w_{1}=\int_{{\bf D}^{\nu}_{r}(x)}u_{2}-u_{1}\,.\end{cases} \tag{7.59}\]
Indeed, starting from a given pair \((w_{1},w_{2})\) as in (7.59), we can define \((K^{\prime}\cap{\bf C}^{\nu}_{r}(x),E^{\prime}\cap{\bf C}^{\nu}_{r}(x))\) by replacing \((u_{1},u_{2})\) with \((w_{1},w_{2})\) in (7.55) and (7.57), and then define \((K^{\prime},E^{\prime})\in\mathcal{K}_{\rm B}\) by setting \(K^{\prime}\setminus{\bf C}^{\nu}_{r}(x)=K\setminus{\bf C}^{\nu}_{r}(x)\) and \(E^{\prime}\setminus{\bf C}^{\nu}_{r}(x)=E\setminus{\bf C}^{\nu}_{r}(x)\). Since \(\partial{\bf C}^{\nu}_{r}\setminus(K^{\prime}\cup E^{\prime})=\partial{\bf C} ^{\nu}_{r}\setminus(K\cup E)\) it is easily seen (by a simple modification of Lemma 7.2 where balls are replaced by cylinders) that \((K^{\prime},E^{\prime})\) is \(\mathcal{C}\)-spanning \({\bf W}\). Since \(|E^{\prime}|=|E|\), the minimality of \((K,E)\) in \(\Psi_{\rm bk}(v)\) implies that \(\mathcal{F}_{\rm bk}(K,E)\leq\mathcal{F}_{\rm bk}(K^{\prime},E^{\prime})\), which readily translates into (7.58).
Recalling that both \(A_{0}=\operatorname{int}\{u_{1}=u_{2}\}\) and \(A_{+}=\{u_{1}<u_{2}\}\) are non-empty open subsets of \({\bf D}^{\nu}_{r}(x)\), and denoting by \(\operatorname{MS}(u)[\varphi]=\int_{{\bf D}^{\nu}_{r}(x)}\nabla\varphi\cdot[( \nabla u)/\sqrt{1+|\nabla u|^{2}}]\) the distributional mean curvature operator, we find that
\[\operatorname{MS}(u_{1})+\operatorname{MS}(u_{2}) =0\,, \text{on }{\bf D}^{\nu}_{r}(x)\,,\] \[\operatorname{MS}(u_{k}) =0\,, \text{on }A_{0}\text{ for each }k=1,2\,,\] \[\operatorname{MS}(u_{2})=-\operatorname{MS}(u_{1}) =\lambda\,, \text{on }A_{+}\,, \tag{7.60}\]
for some constant \(\lambda\in\mathbb{R}\); in particular, \(u_{1},u_{2}\in C^{\infty}(A_{0})\cap C^{\infty}(A_{+})\). We notice that it must be
\[\lambda<0\,. \tag{7.61}\]
Indeed, arguing by contradiction, should it be that \(\lambda\geq 0\), then by (7.60) we find \(\operatorname{MS}(u_{2})\geq 0\) and \(\operatorname{MS}(u_{1})\leq 0\) on \(A_{+}\). Since \(A_{+}\) is open and non-empty, there is an open ball \(B\subset A_{+}\) such that \(\partial B\cap\partial A_{+}=\{y_{0}\}\). Denoting by \(x_{0}\) the center of \(B\) and setting \(\nu=(x_{0}-y_{0})/|x_{0}-y_{0}|\), by \(u_{1}\leq u_{2}\), \(u_{1}(y_{0})=u_{2}(y_{0})\) and \(u_{k}\in C^{1}({\bf D}^{\nu}_{r}(x))\) we find that \(\nabla u_{1}(y_{0})=\nabla u_{2}(y_{0})\). At the same time, by applying Hopf's lemma in \(B\) at \(y_{0}\), we see that since \(\operatorname{MS}(u_{2})\geq 0\) and \(\operatorname{MS}(u_{1})\leq 0\) on \(B\), it must be \(\nu\cdot\nabla u_{2}(y_{0})<0\) and \(\nu\cdot\nabla u_{1}(y_{0})>0\), against \(\nabla u_{1}(y_{0})=\nabla u_{2}(y_{0})\).
By (7.60), (7.61), and \(u_{2}\geq u_{1}\) on \({\bf D}^{\nu}_{r}(x)\) we can apply the sharp regularity theory for the double membrane problem developed in [23, Theorem 5.1] and deduce that \(u_{1},u_{2}\in C^{1,1}({\bf D}^{\nu}_{r}(x))\). Next we notice that, for every \(\varphi\in C^{\infty}_{c}(A_{+})\), and setting \(u_{+}=u_{2}-u_{1}\),
\[2\,\lambda\,\int_{A_{+}}\varphi=\operatorname{MS}(u_{2})[\varphi]-\operatorname{ MS}(u_{1})[\varphi]=\int_{A_{+}}\operatorname{A}(x)[\nabla u_{+}]\cdot\nabla \varphi\,,\]
where we have set, with \(f(z)=\sqrt{1+|z|^{2}}\),
\[\operatorname{A}(x)=\int_{0}^{1}\,\nabla^{2}f\big{(}s\,\nabla u_{2}(x)+(1-s)\, \nabla u_{1}(x)\big{)}\,ds\,.\]
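The identity defining \(\mathrm{A}(x)\) is simply the fundamental theorem of calculus applied to \(\nabla f\) along the segment joining \(\nabla u_{1}(x)\) to \(\nabla u_{2}(x)\); we record the computation for clarity:

\[\nabla f(\nabla u_{2}(x))-\nabla f(\nabla u_{1}(x))=\int_{0}^{1}\frac{d}{ds}\,\nabla f\bigl(s\,\nabla u_{2}(x)+(1-s)\,\nabla u_{1}(x)\bigr)\,ds=\mathrm{A}(x)[\nabla u_{+}(x)]\,,\]

which, once tested against \(\nabla\varphi\) and integrated over \(A_{+}\), gives the previous display.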
In particular, \(u_{+}\in C^{1,1}({\bf D}^{\nu}_{r}(x))\) is a non-negative distributional solution of
\[\operatorname{div}\big{(}\operatorname{A}(x)\nabla u_{+}\big{)}=-2\,\lambda\,, \qquad\text{on }A_{+}\,,\]
with a strictly positive right-hand side (by (7.61)) and with \(\mathrm{A}\in\mathrm{Lip}(A_{+};\mathbb{R}^{n\times n}_{\mathrm{sym}})\) uniformly elliptic. We can thus apply the regularity theory for free boundaries developed in [13, Theorem 1.1, Theorem 4.14] to deduce that
\[\mathrm{FB}=\mathbf{D}_{r}^{\nu}(x)\cap\partial\{u_{+}=0\}=\mathbf{D}_{r}^{\nu }(x)\cap\partial\{u_{2}=u_{1}\}\,,\]
can be partitioned into sets \(\mathrm{Reg}\) and \(\mathrm{Sing}\) such that \(\mathrm{Reg}\) is relatively open in \(\mathrm{FB}\) and such that for every \(z\in\mathrm{Reg}\) there are \(r>0\) and \(\beta\in(0,1)\) such that \(B_{r}(z)\cap\mathrm{FB}\) is a \(C^{1,\beta}\)-embedded \((n-1)\)-dimensional manifold, and such that \(\mathrm{Sing}=\cup_{k=0}^{n-1}\mathrm{Sing}_{k}\) is relatively closed in \(\mathrm{FB}\), with each \(\mathrm{Sing}_{k}\) locally \(\mathcal{H}^{k}\)-rectifiable in \(\mathbf{D}_{r}^{\nu}(x)\).
\[\mathbf{C}_{r}^{\nu}(x)\cap(\partial E\setminus\partial^{*}E)=\left\{y+u_{1}( y)\,\nu:y\in\mathrm{FB}\right\}\]
and \(u_{1}\in C^{1,1}(\mathbf{D}_{r}^{\nu}(x))\), we conclude by a covering argument that \(\Omega\cap(\partial E\setminus\partial^{*}E)\) has all the required properties, and complete the proof of the theorem.
## 8. Equilibrium across transition lines in wet foams (Theorem 1.7)
Proof of Theorem 1.7.: Let \(\Omega\subset\mathbb{R}^{n+1}\) be open and let \((K_{*},E_{*})\in\mathcal{K}_{\mathrm{foam}}\). We can find \((K,E)\in\mathcal{K}\) such that \(K\) is \(\mathcal{H}^{n}\)-equivalent to \(K_{*}\), \(E\) Lebesgue equivalent to \(E_{*}\), and \(K\cap E^{{(1)}}=\varnothing\) by repeating with minor variations the considerations made in step one of the proof of Theorem 6.2 (we do not have to worry about the \(\mathcal{C}\)-spanning condition, but have to keep track of the volume constraint imposed for each \(U_{i}\), which can be done by using the volume-fixing variations for clusters from [15, Part IV]). In proving the regularity part of the statement, thanks to Theorem 2.1-(a) we can directly work with balls \(B\subset\subset\Omega\) having radius less than \(r_{0}\) (with \(r_{0}\) as in (1.33)), and consider the open connected components \(\{U_{i}\}_{i}\) of \(B\) induced by \(K\cup E\). Using Lemma 7.1 and, again, volume-fixing variation techniques in place of the theory of homotopic spanning, we can proceed to prove analogous statements to (7.8), (7.9), (7.10), and (7.11), thus proving the \((\Lambda,r_{0})\)-minimality of each \(U_{i}\) in \(B\). The claimed \(C^{1,\alpha}\)-regularity of each \(U_{i}\) outside of a closed set \(\Sigma\) with the claimed dimensional estimates then follows from De Giorgi's theory of perimeter minimality [1, 13, 14].
## Appendix A Equivalence of homotopic spanning conditions
In Theorem A.1 we prove that, when \(S\) is a closed set, the notion of "\(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\)" introduced in Definition B boils down to the one in Definition A. We then show that the property of being \(\mathcal{C}\)-spanning is stable under reduction to the rectifiable part of a Borel set, see Lemma 2.2.
**Theorem A.1**.: _Given a closed set \(\mathbf{W}\subset\mathbb{R}^{n+1}\), a spanning class \(\mathcal{C}\) for \(\mathbf{W}\), and a set \(S\) relatively closed in \(\Omega\), the following two properties are equivalent:_
**(i):**: _for every_ \(\gamma\in\mathcal{C}\)_, we have_ \(S\cap\gamma(\mathbb{S}^{1})\neq\varnothing\)_;_
**(ii):**: _for every_ \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) _and for_ \(\mathcal{H}^{1}\)_-a.e._ \(s\in\mathbb{S}^{1}\)_, we have_
\[\text{for $\mathcal{H}^{n}$-a.e. $x\in T[s]$}\,,\] (A.1) \[\exists\text{ a partition $\{T_{1},T_{2}\}$ of $T$ with $x\in\partial^{e}T_{1}\cap\partial^{e}T_{2}$}\,,\] \[\text{and s.t. $S\cup T[s]$ essentially disconnects $T$ into $\{T_{1},T_{2}\}$}\,.\]
_In particular, \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) according to Definition A if and only if it does so according to Definition B._
**Remark A.2** (\(x\)-dependency of \(\{T_{1},T_{2}\}\)).: In the situation of Figure 1.4 it is clear that the same choice of \(\{T_{1},T_{2}\}\) can be used to check the validity of (A.1) at every \(x\in T[s]\). One may thus wonder if it could suffice to reformulate (A.1) so that the partition \(\{T_{1},T_{2}\}\) is independent of \(x\). The simplest example we are aware of showing that this simpler definition would not work is the following. In \(\mathbb{R}^{3}\), let \(\mathbf{W}\) be a closed \(\delta\)-neighborhood of a
circle \(\Gamma\), let \(U\) be the open \(\delta\)-neighborhood of a loop with link number _three_ (or higher _odd_ number) with respect to \(\mathbf{W}\), let \(K\) be the disk spanned by \(\Gamma\), and let \(S=\Omega\cap[(K\setminus U)\cup\partial U]\), see Figure A.1. Now consider a "test tube" \(T\) which compactly contains \(U\) and is such that, for every \(s\), \(U\cap T[s]\) consists of three disks \(\{D_{i}\}_{i=1}^{3}\). Since \(U\subset T\), the property "\(S\cup T[s]\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\) in such a way that \(T[s]\subset T\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\)" would immediately imply "\(U\cap(S\cup T[s])=U\cap T[s]\) essentially disconnects \(T\cap U=U\) into \(\{U_{1},U_{2}\}\) with \(U\cap T[s]\subset U\cap\partial^{e}U_{1}\cap\partial^{e}U_{2}\)", where \(U_{i}=T_{i}\cap U\) (see step one in the proof of Theorem 3.1 for a formal proof of this intuitive assertion). However, the latter property does not hold. To see this, denoting by \(\{A_{i}\}_{i=1}^{3}\) the three connected components of \(U\setminus T[s]\), we would have \(U_{1}=A_{i}\cup A_{j}\) and \(U_{2}=A_{k}\) for some choice of \(i\neq j\neq k\neq i\), whereas, independently of the choice made, \(U\cap\partial^{e}U_{1}\cap\partial^{e}U_{2}\) always fails to contain one of the disks \(\{D_{i}\}_{i=1}^{3}\): for example, if \(U_{1}=A_{1}\cup A_{2}\) and \(U_{2}=A_{3}\), then \(U\cap\partial^{e}U_{1}\cap\partial^{e}U_{2}=D_{2}\cup D_{3}\), and \(D_{1}\) is entirely missed. We conclude that the set \(S\) just constructed, although clearly \(\mathcal{C}\)-spanning \(\mathbf{W}\) in terms of Definition A, fails to satisfy the variant of (A.1) where a same partition \(\{T_{1},T_{2}\}\) is required to work for \(\mathcal{H}^{n}\)-a.e. choice of \(x\in T[s]\).
Proof of Theorem A.1.: _Step one_: We prove that (ii) implies (i). Indeed, if there is \(\gamma\in\mathcal{C}\) such that \(S\cap\gamma(\mathbb{S}^{1})=\varnothing\), then, \(S\) being closed, we can find \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) such that \(\operatorname{dist}(S,T)>0\). By (ii), there is \(s\in\mathbb{S}^{1}\) such that \(S\cup T[s]\) essentially disconnects \(T\). By \(\operatorname{dist}(S,T)>0\) we see that \((S\cup T[s])\cap T=T[s]\), so that \(T[s]\) essentially disconnects \(T\), a contradiction.
_Step two_: We now prove that (i) implies (ii). To this end we consider an arbitrary \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) and aim at proving the existence of \(J\) of full \(\mathcal{H}^{1}\)-measure in \(\mathbb{S}^{1}\) such that, if \(s\in J\), then (A.1) holds.
This is trivial, with \(J=\mathbb{S}^{1}\), if \(|S\cap T|=|T|\). Indeed, in this case, we have \(T=S^{(1)}\cap T\), that, combined with \(S\) being closed, implies \(T=S\cap T\). In particular, \(S\cup T[s]=T\) for every \(s\in\mathbb{S}^{1}\), and since, trivially, \(T\) essentially disconnects \(T\), the conclusion follows.
Figure A.1. The situation in Remark A.2. The components \(A_{1}\), \(A_{2}\) and \(A_{3}\) (depicted in purple, yellow, and green respectively) of \(U\setminus T[s]\) are bounded by the three disks \(\{D_{i}\}_{i=1}^{3}\) (depicted as boldface segments).

We thus assume that \(|S\cap T|<|T|\): in particular,

\[U=T\setminus S\]

is a non-empty, open set, whose connected components are denoted by \(\{U_{i}\}_{i\in I}\) (\(I\) a countable set). By the Lebesgue points theorem, \(\mathcal{L}^{n+1}\)-a.e. \(x\in T\) belongs either to \(U^{(0)}\) or to \(U\). Then, by the smoothness of \(\Phi\) and by the area formula, we can find a set \(J\) of full \(\mathcal{H}^{1}\)-measure in \(\mathbb{S}^{1}\) such that
\[\mathcal{H}^{n}\big{(}T[s]\setminus(U^{{(0)}}\cup U)\big{)}=0\,,\qquad\forall s \in J\,.\] (A.2)
In particular, given \(s\in J\), we just need to prove (A.1) when either \(x\in T[s]\cap U^{{(0)}}\) or \(x\in T[s]\cap U\). Before examining these two cases we also notice that we can further impose on \(J\) that
\[\mathcal{H}^{n}\Big{(}T[s]\cap\Big{[}\partial^{e}U\cup\partial^{e}S\cup\big{(} U^{{(1)}}\setminus U\big{)}\cup\bigcup_{i\in I}\big{(}U^{{(1)}}_{i} \setminus U_{i}\big{)}\Big{]}\Big{)}=0\,,\qquad\forall s\in J\,.\] (A.3)
Indeed, again by the Lebesgue points theorem, the sets \(\partial^{e}U\), \(\partial^{e}S\), \(U^{{(1)}}\setminus U\), and \(\cup_{i\in I}U^{{(1)}}_{i}\setminus U_{i}\) are all \(\mathcal{L}^{n+1}\)-negligible.
_Case one, \(x\in T[s]\cap U^{{(0)}}\)_: To fix ideas, notice that \(U^{{(0)}}\neq\varnothing\) implies \(|S\cap T|>0\), and in particular \(S\) has positive Lebesgue measure. Given an arbitrary \(s^{\prime}\in J\setminus\{s\}\) we denote by \(\{I_{1},I_{2}\}\) the partition of \(\mathbb{S}^{1}\) bounded by \(\{s,s^{\prime}\}\), and then consider the Borel sets
\[T_{1}=\Phi(I_{1}\times B_{1}^{n})\cap S\,,\qquad T_{2}=\Phi(I_{2}\times B_{1}^ {n})\cup\,\Big{(}\Phi(I_{1}\times B_{1}^{n})\setminus S\Big{)}\,.\]
We first notice that \(\{T_{1},T_{2}\}\) is a non-trivial partition of \(T\): Indeed \(|T_{1}|>0\) since \(x\) has density \(1/2\) for \(\Phi(I_{1}\times B_{1}^{n})\) and (by \(x\in U^{{(0)}}\)) density \(1\) for \(S\cap T\); at the same time \(|T_{2}|=|T\setminus T_{1}|\geq|T\setminus S|>0\). Next, we claim that
\[T^{{(1)}}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\text{ is $\mathcal{H}^{n}$- contained in $S$}\,.\] (A.4)
Indeed, since \(\Phi(I_{1}\times B_{1}^{n})\) is an open subset of \(T\) with \(T\cap\partial[\Phi(I_{1}\times B_{1}^{n})]=T[s]\cup T[s^{\prime}]\), and since \(\partial^{e}T_{1}\) coincides with \(\partial^{e}S\) inside the open set \(\Phi(I_{1}\times B_{1}^{n})\), we easily see that
\[T^{{(1)}}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2} = T\cap\partial^{e}T_{1}=T\cap\partial^{e}\big{(}\Phi(I_{1}\times B _{1}^{n})\cap S\big{)}\] \[\subset \big{(}\Phi(I_{1}\times B_{1}^{n})\cap\partial^{e}S\big{)}\cup \Big{(}\big{(}T[s]\cup T[s^{\prime}]\big{)}\setminus S^{{(0)}} \Big{)}\,.\]
Now, on the one hand, by \(\mathcal{H}^{n}(\partial^{e}S\cap(T[s]\cup T[s^{\prime}]))=0\) (recall (A.3)), it holds
\[\big{(}T[s]\cup T[s^{\prime}]\big{)}\setminus S^{{(0)}} \text{ is $\mathcal{H}^{n}$-contained in $T\cap S^{{(1)}}$}\,;\]
while, on the other hand, by \(\Omega\cap\partial^{e}S\subset\Omega\cap\partial S\subset\Omega\cap S\) (since \(S\) is closed in \(\Omega\)) and by \(\Phi(I_{1}\times B_{1}^{n})\subset T\subset\Omega\), we also have that \(\Phi(I_{1}\times B_{1}^{n})\cap\partial^{e}S\subset T\cap S\); therefore
\[T^{{(1)}}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\text{ is $\mathcal{H}^{n}$-contained in $T\cap(S\cup S^{{(1)}})=T\cap S$}\,,\]
where we have used that \(S\) is closed to infer \(S^{{(1)}}\subset S\). Having proved (A.4) and the non-triviality of \(\{T_{1},T_{2}\}\), we conclude that \(S\) (and, thus, \(S\cup T[s]\)) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\). We are left to prove that \(x\in T\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\). To this end, we notice that \(x\in T[s]\cap(T\setminus S)^{{(0)}}\) and \(\Phi(I_{1}\times B_{1}^{n})\subset T\) imply
\[|T_{1}\cap B_{r}(x)|=|\Phi(I_{1}\times B_{1}^{n})\cap S\cap B_{r}(x)|=|\Phi(I_ {1}\times B_{1}^{n})\cap B_{r}(x)|+\text{o}(r^{n+1})=\frac{|B_{r}(x)|}{2}+ \text{o}(r^{n+1})\,,\]
so that \(x\in(T_{1})^{{(1/2)}}\subset\partial^{e}T_{1}\); since \(T\cap\partial^{e}T_{1}=T\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}\) and \(x\in T\) we conclude the proof in the case when \(x\in T[s]\cap U^{{(0)}}\).
_Case two, \(x\in T[s]\cap U\)_: In this case there exists \(i\in I\) such that \(x\in U_{i}\), and, correspondingly, we claim that
\[\exists\{V_{1},V_{2}\}\text{ a non-trivial Borel partition of }U_{i}\setminus T[s]\,,\] (A.5) \[\text{ s.t. }x\in\partial^{e}V_{1}\cap\partial^{e}V_{2}\text{ and }T\cap(\partial V_{1}\cup\partial V_{2})\subset S\cup T[s]\,.\]
Given the claim, we conclude by setting \(T_{1}=V_{1}\) and \(T_{2}=V_{2}\cup(T\setminus U_{i})\). Indeed, since \(V_{2}\cap U_{i}=T_{2}\cap U_{i}\) with \(U_{i}\) open implies \(U_{i}\cap\partial^{e}V_{1}=U_{i}\cap\partial^{e}T_{1}\), we deduce from (A.5) that
\[x\in U_{i}\cap\partial^{e}V_{1}\cap\partial^{e}V_{2}=U_{i}\cap\partial^{e}T_{1} \cap\partial^{e}T_{2}\,;\]
at the same time, \(S\cup T[s]\) essentially disconnects \(T\) into \(\{T_{1},T_{2}\}\) since, again by (A.5),
\[T^{{(1)}}\cap\partial^{e}T_{1}\cap\partial^{e}T_{2}=T\cap \partial^{e}T_{1}=T\cap\partial^{e}V_{1}\subset T\cap\partial V_{1}\subset S\cup T [s]\,.\]
We are thus left to prove (A.5). To this end, let us choose \(r(x)>0\) small enough to have that \(B_{r(x)}(x)\subset U_{i}\), and that \(B_{r(x)}(x)\setminus T[s]\) consists of exactly two connected components \(\{V_{1}^{x},V_{2}^{x}\}\); in this way,
\[x\in(V_{1}^{x})^{{(1/2)}}\cap(V_{2}^{x})^{{(1/2)}}\,.\] (A.6)
Next, we define
\[V_{1} =\text{ the connected component of }U_{i}\setminus T[s]\text{ containing }V_{1}^{x}\,,\] \[V_{2} =U_{i}\setminus(T[s]\cup V_{1})\,.\]
Clearly \(\{V_{1},V_{2}\}\) is a partition of \(U_{i}\setminus T[s]\), and, thanks to \(\partial V_{1}\cup\partial V_{2}\subset T[s]\cup\partial U_{i}\), we have
\[T\cap(\partial V_{1}\cup\partial V_{2})\subset T\cap(T[s]\cup\partial U_{i}) \subset S\cup T[s]\,.\]
Therefore (A.5) follows by showing that \(|V_{1}|\,|V_{2}|>0\). Since \(V_{1}\) contains the connected component \(V_{1}^{x}\) of \(B_{r(x)}(x)\setminus T[s]\), which is open and non-empty, we have \(|V_{1}|>0\). Arguing by contradiction, we assume that
\[|V_{2}|=|U_{i}\setminus(T[s]\cup V_{1})|=0\,.\]
Since \(V_{1}\) is a connected component of the open set \(U_{i}\setminus T[s]\) this implies that
\[U_{i}\setminus T[s]=V_{1}\,.\]
Let \(x_{1}\in V_{1}^{x}\) and \(x_{2}\in V_{2}^{x}\) (where \(V_{1}^{x}\) and \(V_{2}^{x}\) are the two connected components of \(B_{r(x)}(x)\setminus T[s]\)). Since \(V_{1}\) is connected and \(\{x_{1},x_{2}\}\subset U_{i}\setminus T[s]=V_{1}\), there is a smooth embedding \(\gamma_{1}\) of \([0,1]\) into \(V_{1}\) with \(\gamma_{1}(0)=x_{1}\) and \(\gamma_{1}(1)=x_{2}\). Arguing as in [5, Step 2] using Sard's theorem, we may modify \(\gamma_{1}\) by composing with a smooth diffeomorphism such that the modified \(\gamma_{1}\) intersects \(\partial B_{r(x)}(x)\) transversally at finitely many points. Thus \(\gamma_{1}([0,1])\setminus\operatorname{cl}B_{r(x)}(x)\) is partitioned into finitely many curves \(\gamma_{1}((a_{i},b_{i}))\) for disjoint arcs \((a_{i},b_{i})\subset[0,1]\). Since \(B_{r(x)}(x)\setminus T[s]\) is disconnected into \(V_{1}^{x}\) and \(V_{2}^{x}\) and \(\gamma_{1}\) is disjoint from \(T[s]\), there exists \(i\) such that, up to interchanging \(V_{1}^{x}\) and \(V_{2}^{x}\), \(\gamma_{1}(a_{i})\in\operatorname{cl}V_{1}^{x}\cap\partial B_{r(x)}(x)\) and \(\gamma_{1}(b_{i})\in\operatorname{cl}V_{2}^{x}\cap\partial B_{r(x)}(x)\). Let us call \(\tilde{\gamma}_{1}\) the restriction of \(\gamma_{1}\) to \([a_{i},b_{i}]\). Next, we choose a smooth embedding \(\gamma_{2}\) of \([0,1]\) into \(\operatorname{cl}B_{r(x)}(x)\) such that \(\gamma_{2}(0)=\tilde{\gamma}_{1}(a_{i})\), \(\gamma_{2}(1)=\tilde{\gamma}_{1}(b_{i})\), and \(\gamma_{2}([0,1])\) intersects \(T[s]\cap B_{r(x)}(x)\) at exactly one point, denoted by \(x_{12}=\gamma_{2}(t_{0})\), with
\[\gamma_{2}^{\prime}(t_{0})\neq 0\,.\] (A.7)
Since \(\tilde{\gamma}_{1}((a_{i},b_{i}))\cap\operatorname{cl}B_{r(x)}(x)=\varnothing\) and \(\gamma_{2}([0,1])\subset\operatorname{cl}B_{r(x)}(x)\), we can choose \(\gamma_{2}\) so that the concatenation of \(\tilde{\gamma}_{1}\) and \(\gamma_{2}\) defines a smooth embedding \(\gamma_{*}\) of \(\mathbb{S}^{1}\) into \(U_{i}\subset T\). Up to reparametrizing we may assume that \(\gamma_{*}(1)=x_{12}\).
\[\gamma_{*}(\mathbb{S}^{1})\cap(S\cup T[s])=\gamma_{2}([0,1])\cap(S\cup T[s])= \{x_{12}\}\subset T[s]\cap B_{r(x)}(x)\,.\] (A.8)
A first consequence of (A.8) is that \(\gamma_{*}(\mathbb{S}^{1})\cap S=\varnothing\). Similarly, the curve \(\gamma_{**}:\mathbb{S}^{1}\to\Omega\) defined via \(\gamma_{**}(t)=\gamma_{*}(\overline{t})\) (\(t\in\mathbb{S}^{1}\)) where the bar denotes complex conjugation, has the same image as \(\gamma_{*}\) and thus satisfies \(\gamma_{**}(\mathbb{S}^{1})\cap S=\varnothing\) as well. Therefore, in order to obtain a contradiction with \(|V_{2}|=0\), it is enough to prove that either \(\gamma_{*}\in\mathcal{C}\) or \(\gamma_{**}\in\mathcal{C}\). To this end we are now going to prove that one of \(\gamma_{*}\) or \(\gamma_{**}\) is homotopic to \(\gamma\) in \(T\) (and thus in \(\Omega\)), where \(\gamma\) is the curve from the tube \((\gamma,\Phi,T)\in\mathcal{T}(\mathcal{C})\) considered at the start of the argument.
Indeed, let \(\mathbf{p}:\mathbb{S}^{1}\times B_{1}^{n}\to\mathbb{S}^{1}\) denote the canonical projection \(\mathbf{p}(t,x)=t\), and consider the curves \(\sigma_{*}=\mathbf{p}\circ\Phi^{-1}\circ\gamma_{*}:\mathbb{S}^{1}\to\mathbb{S}^ {1}\) and \(\sigma_{**}=\mathbf{p}\circ\Phi^{-1}\circ\gamma_{**}\). By (A.8), \(\sigma_{*}^{-1}(\{s\})=\{1\}\), and \(1\) is a regular point of \(\sigma_{*}\) by (A.7) and since \(\Phi\) is a diffeomorphism. Similarly, \(\sigma_{**}^{-1}(\{s\})=\{1\}\)
and \(1\) is a regular point of \(\sigma_{**}\). Now by our construction of \(\gamma_{**}\), exactly one of \(\gamma_{*}\) or \(\gamma_{**}\) is orientation preserving at \(1\) and the other is orientation reversing. So we may compute the winding numbers of \(\sigma_{*}\) and \(\sigma_{**}\) via (see e.g. [10, pg 27]):
\[\deg\sigma_{*}=\operatorname{sgn}\,\det D\sigma_{*}(1)=-\operatorname{sgn}\, \det D\sigma_{**}(1)=-\deg\sigma_{**}\in\{+1,-1\}\,.\]
If we define \(\sigma=\mathbf{p}\circ\Phi^{-1}\circ\gamma\), then \(\sigma\) has winding number \(1\), and so is homotopic in \(\mathbb{S}^{1}\) to whichever of \(\sigma_{*}\) or \(\sigma_{**}\) has winding number \(1\). Since \(\Phi\) is a diffeomorphism of \(\mathbb{S}^{1}\times B^{n}_{1}\) into \(\Omega\), we conclude that \(\gamma\) is homotopic relative to \(\Omega\) to one of \(\gamma_{*}\) or \(\gamma_{**}\), and, thus, that \(\gamma_{*}\in\mathcal{C}\) or \(\gamma_{**}\in\mathcal{C}\) as desired.
## Appendix B Convergence of every minimizing sequence of \(\Psi_{\mathrm{bk}}(v)\)
In proving Theorem 1.5 we have shown that every minimizing sequence \(\{(K_{j},E_{j})\}_{j}\) of \(\Psi_{\mathrm{bk}}(v)\) has a limit \((K,E)\) such that, denoting by \(B^{(w)}\) a ball of volume \(w\), it holds
\[\Psi_{\mathrm{bk}}(v)=\Psi_{\mathrm{bk}}(|E|)+P(B^{(v-|E|)})\,,\qquad\Psi_{ \mathrm{bk}}(|E|)=\mathcal{F}_{\mathrm{bk}}(K,E)\,,\]
with both \(K\) and \(E\) bounded. In particular, minimizers of \(\Psi_{\mathrm{bk}}(v)\) can be constructed in the form \((K\cup\partial B^{(v-|E|)}(x),E\cup B^{(v-|E|)}(x))\) provided \(x\) is such that \(B^{(v-|E|)}(x)\) is disjoint from \(K\cup E\cup\mathbf{W}\). This argument, although sufficient to prove the existence of minimizers of \(\Psi_{\mathrm{bk}}(v)\), is not sufficient to prove the convergence of every minimizing sequence of \(\Psi_{\mathrm{bk}}(v)\), i.e., to exclude the possibility that \(|E|<v\). This is done in the following theorem at the cost of assuming the \(C^{2}\)-regularity of \(\partial\Omega\). This result will be important in the companion paper [11].
**Theorem B.1**.: _If \(\mathbf{W}\) is the closure of a bounded open set with \(C^{2}\)-boundary, \(\mathcal{C}\) is a spanning class for \(\mathbf{W}\), and \(\ell<\infty\), then for every \(v>0\) and every minimizing sequence \(\{(K_{j},E_{j})\}_{j}\) of \(\Psi_{\mathrm{bk}}(v)\) there is a minimizer \((K,E)\) of \(\Psi_{\mathrm{bk}}(v)\) such that \(K\) is \(\mathcal{H}^{n}\)-rectifiable and, up to extracting subsequences and as \(j\to\infty\),_
\[E_{j}\to E\,,\qquad\mu_{j}\stackrel{{\ast}}{{\rightharpoonup}}\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E)+2\,\mathcal{H}^{n}\llcorner(K\cap E^{{(0)}})\,,\] (B.1)
_where \(\mu_{j}=\mathcal{H}^{n}\llcorner(\Omega\cap\partial^{*}E_{j})+2\,\mathcal{H}^{n}\llcorner(\mathcal{R}(K_{j})\cap E^{{(0)}}_{j})\)._
Proof.: By step three in the proof of Theorem 6.2, there is \((K,E)\in\mathcal{K}_{\mathrm{B}}\) satisfying (B.1) and such that \(K\) and \(E\) are bounded, \((K,E)\) is a minimizer of \(\Psi_{\mathrm{bk}}(|E|)\), \(K\) is \(\mathcal{H}^{n}\)-rectifiable, and \(|E|\leq v\); moreover, if \(v>|E|\), then there is \(x\in\mathbb{R}^{n+1}\) such that \(B^{(v-|E|)}(x)\) is disjoint from \(K\cup E\cup\mathbf{W}\) and \((K^{\prime},E^{\prime})=(K\cup\partial B^{(v-|E|)}(x),E\cup B^{(v-|E|)}(x))\) is a minimizer of \(\Psi_{\mathrm{bk}}(v)\). We complete the proof by deriving a contradiction in the case \(v^{*}=v-|E|>0\). The idea is to relocate \(B^{(v^{*})}(x)\) to save perimeter by touching \(\partial\mathbf{W}\) or \(\partial E\); see Figure B.1.
First of all, we claim that \(K=\Omega\cap\partial E\). If not, since \((K,E)\) and \((K^{\prime},E^{\prime})\) respectively are minimizers of \(\Psi_{\mathrm{bk}}(|E|)\) and \(\Psi_{\mathrm{bk}}(v)\), then there are \(\lambda,\lambda^{\prime}\in\mathbb{R}\) such that \((K,E)\) and \((K^{\prime},E^{\prime})\) respectively satisfy (6.1) with \(\lambda\) and \(\lambda^{\prime}\). By localizing (6.1) for \((K^{\prime},E^{\prime})\) at points in \(\Omega\cap\partial^{\ast}E\) we see that it must be \(\lambda=\lambda^{\prime}\); by localizing at points in \(\partial B^{(v-|E|)}(x)\), we see that \(\lambda\) is equal to the mean curvature of \(\partial B^{(v-|E|)}(x)\), so that \(\lambda>0\); by arguing as in the proof of [10, Theorem 2.9] (see [11] for the details), we see that if \(K\setminus(\Omega\cap\partial E)\neq\varnothing\), then \(\lambda\leq 0\), a contradiction.
Having established that \(K=\Omega\cap\partial E\), we move a half-space \(H\) compactly containing \(\operatorname{cl}\,(E)\cup\mathbf{W}\) until the boundary hyperplane \(\partial H\) first touches \(\operatorname{cl}\,(E)\cup\mathbf{W}\). Up to rotation and translation, we can thus assume that \(H=\{x_{n+1}>0\}\) and
\[0\in\operatorname{cl}\,(E)\cup\mathbf{W}\subset\operatorname{cl}\,(H)\,.\] (B.2)
We split (B.2) into two cases, \(0\in\Omega\cap\partial E\) and \(0\in\mathbf{W}\), that are then separately discussed for the sake of clarity. In both cases we set \(x=(x^{\prime},x_{n+1})\in\mathbb{R}^{n}\times\mathbb{R}\equiv\mathbb{R}^{n+1}\), and set
\[\mathbf{C}_{\delta} = \{x:x_{n+1}\in(0,\delta)\,,|x^{\prime}|<\delta\}\,,\]
\[\mathbf{L}_{\delta} = \left\{x:\left|x^{\prime}\right|=\delta,x_{n+1}\in(0,\delta)\right\},\] \[\mathbf{T}_{\delta} = \left\{x:x_{n+1}=\delta\,,\left|x^{\prime}\right|<\delta\right\},\] \[\mathbf{D}_{\delta} = \left\{x:x_{n+1}=0\,,\left|x^{\prime}\right|<\delta\right\},\]
for every \(\delta>0\).
_Case one, \(0\in\Omega\cap\partial E\)_: In this case, by the maximum principle [13, Lemma 3], (6.1), and the Allard regularity theorem, we can find \(\delta_{0}>0\) and \(u\in C^{2}(\mathbf{D}_{\delta_{0}};[0,\delta_{0}])\) with \(u(0)=0\) and \(\nabla u(0)=0\) such that \(\mathbf{C}_{\delta_{0}}\subset\subset\Omega\) and
\[E\cap\mathbf{C}_{\delta_{0}}=\left\{x\in\mathbf{C}_{\delta_{0}}: \delta_{0}>x_{n+1}>u(x^{\prime})\right\},\] (B.3) \[(\partial E)\cap\mathbf{C}_{\delta_{0}}=\left\{x\in\mathbf{C}_{ \delta_{0}}:x_{n+1}=u(x^{\prime})\right\}.\]
Since \(0\leq u(x^{\prime})\leq C\,|x^{\prime}|^{2}\) for some \(C=C(E)\), if we set
\[\Gamma_{\delta}=\left\{x\in\mathbf{C}_{\delta}:0<x_{n+1}<u(x^{\prime})\right\},\qquad\delta\in(0,\delta_{0})\,,\] (B.4)
then we have
\[\left|\Gamma_{\delta}\right| \leq C\,\delta^{n+2}\,,\] (B.5) \[P\big{(}\Gamma_{\delta};\mathbf{L}_{\delta}\big{)} \leq C\,\delta^{n+1}\,.\] (B.6)
We then set
\[E_{\delta}=E\cup\Gamma_{\delta}\cup\left(B_{r_{\delta}}(z_{\delta})\setminus H \right),\] (B.7)
see Figure B.1-(a), where \(r_{\delta}>0\) and \(z_{\delta}\in\mathbb{R}^{n+1}\setminus\operatorname{cl}\left(H\right)\) are uniquely determined by requiring that, first,
\[\operatorname{cl}\left(B_{r_{\delta}}(z_{\delta})\right)\cap\partial H= \partial\mathbf{C}_{\delta}\cap\partial H=\left\{x:x_{n+1}=0\,,\left|x^{ \prime}\right|\leq\delta\right\},\] (B.8)
and, second, that
\[\left|E_{\delta}\right|=v\,.\] (B.9)
To see that this choice is possible, we first notice that, since \(E\cap\Gamma_{\delta}=\varnothing\), (B.9) is equivalent to
\[\left|B_{r_{\delta}}(z_{\delta})\setminus H\right|=v-\left|E\right|-\left| \Gamma_{\delta}\right|=v^{*}-\left|\Gamma_{\delta}\right|.\] (B.10)
Taking (B.5) into account we see that (B.8) and (B.10) uniquely determine \(z_{\delta}\in\mathbb{R}^{n+1}\) and \(r_{\delta}>0\) as soon as \(\delta_{0}\) is small enough to guarantee \(v^{*}-|\Gamma_{\delta_{0}}|>0\). In fact, by (B.5), \(v^{*}-|\Gamma_{\delta}|\to v^{*}>0\) with \(\mathcal{H}^{n}(\partial\mathbf{C}_{\delta}\cap\partial H)\to 0\) as \(\delta\to 0^{+}\), so that, up to further decreasing \(\delta_{0}\), we definitely have \(z_{\delta}\not\in H\), and
\[\Big{|}r_{\delta}-\Big{(}\frac{v^{*}}{\omega_{n+1}}\Big{)}^{1/(n+1)}\Big{|}\leq C \,\delta^{n+2}\,,\] (B.11)
where \(C=C(E,n,v^{*})\).
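For the reader's convenience, we indicate where (B.11) comes from (an elementary sketch; \(C\) denotes a generic constant). By (B.8), \(B_{r_{\delta}}(z_{\delta})\cap H\) is the spherical cap of \(B_{r_{\delta}}(z_{\delta})\) lying above \(\partial H\); its height is \(r_{\delta}-\sqrt{r_{\delta}^{2}-\delta^{2}}\leq C\,\delta^{2}\), so that \(|B_{r_{\delta}}(z_{\delta})\cap H|\leq C\,\delta^{n+2}\). Hence, by (B.10) and (B.5),

\[\bigl|\omega_{n+1}\,r_{\delta}^{n+1}-v^{*}\bigr|=\bigl||B_{r_{\delta}}(z_{\delta})\cap H|-|\Gamma_{\delta}|\bigr|\leq C\,\delta^{n+2}\,,\]

and (B.11) follows.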
We now use the facts that \(K\cup E^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) and that \(E\subset E_{\delta}\) to prove that
\[(K_{\delta},E_{\delta})=((\Omega\cap\partial^{*}E_{\delta})\cup(K\cap E_{ \delta}^{{(0)}}),E_{\delta})\] (B.12)
is such that \(K_{\delta}\cup E_{\delta}^{{(1)}}\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\) (and thus is admissible in \(\Psi_{\mathrm{bk}}(v)\) by (B.9)). To this end, it is enough to show that
\[K\cup E^{{(1)}}\overset{\mathcal{H}^{n}}{\subset}K_{\delta}\cup E _{\delta}^{{(1)}}\,.\] (B.13)
Indeed, by \(E\subset E_{\delta}\) and Federer's theorem (1.37) we have
\[E^{{(1)}}\subset E_{\delta}^{{(1)}}\,,\qquad E_{ \delta}^{{(0)}}\subset E^{{(0)}}\,,\qquad E^{{(1)}}\cup \partial^{*}E\overset{\mathcal{H}^{n}}{\subset}E_{\delta}^{{(1)}} \cup\partial^{*}E_{\delta}\,.\] (B.14)
(Notice indeed that \(\partial^{*}E\subset E^{{(1/2)}}\subset\mathbb{R}^{n+1} \setminus E_{\delta}^{{(0)}}\)). Next, using in order Federer's theorem (1.37), (B.14) and \(K\subset\Omega\), and the definition of \(K_{\delta}\), we have
\[E^{{(1)}}\cup(K\setminus E_{\delta}^{{(0)}}) \overset{\mathcal{H}^{n}}{=}E^{{(1)}}\cup[K\cap( \partial^{*}E_{\delta}\cup E_{\delta}^{{(1)}})] \overset{\mathcal{H}^{n}}{\subset}E_{\delta}^{{(1)}} \cup(\Omega\cap\partial^{*}E_{\delta})\subset E_{\delta}^{{(1)}} \cup K_{\delta}\,.\]
But \(K\cap E_{\delta}^{{(0)}}\subset K_{\delta}\) by definition, which combined with the preceding containment completes the proof of (B.13). Having proved that \((K_{\delta},E_{\delta})\) is admissible in \(\Psi_{\mathrm{bk}}(v)\), we have
\[\mathcal{F}_{\mathrm{bk}}(K,E)+P(B^{{(v^{*})}})=\Psi_{ \mathrm{bk}}(v)\leq\mathcal{F}_{\mathrm{bk}}(K_{\delta},E_{\delta})\,.\] (B.15)
By (B.15), the definition of \(K_{\delta}\), and (B.14), we find
\[P(E;\Omega)+2\,\mathcal{H}^{n}(K\cap E^{{(0)}})+P(B^{{ (v^{*})}}) \leq P(E_{\delta};\Omega)+2\,\mathcal{H}^{n}(K_{\delta}\cap E_{ \delta}^{{(0)}})\] \[\leq P(E_{\delta};\Omega)+2\,\mathcal{H}^{n}(K\cap E_{ \delta}^{{(0)}}) \leq P(E_{\delta};\Omega)+2\,\mathcal{H}^{n}(K\cap E^{{(0)}})\,,\]
from which we deduce
\[P(E;\Omega)+P(B^{{(v^{*})}})\leq P(E_{\delta};\Omega)\,.\] (B.16)
We now notice that \(E_{\delta}\) coincides with \(E\) in the open set \(\Omega\cap H\setminus\operatorname{cl}\left(\mathbf{C}_{\delta}\right)\), and with \(B_{r_{\delta}}(z_{\delta})\) in the open set \(\mathbb{R}^{n+1}\setminus\operatorname{cl}\left(H\right)\), so that
\[\Big{(}\Omega\cap H\setminus\operatorname{cl}\left(\mathbf{C}_ {\delta}\right)\Big{)}\cap\partial^{*}E_{\delta}=\Big{(}\Omega\cap H\setminus \operatorname{cl}\left(\mathbf{C}_{\delta}\right)\Big{)}\cap\partial^{*}E\,,\] \[\big{(}\Omega\setminus\operatorname{cl}\left(H\right)\big{)} \cap\partial^{*}E_{\delta}=\big{(}\partial B_{r_{\delta}}(z_{\delta})\big{)} \setminus\operatorname{cl}\left(H\right),\]
and (B.16) is equivalent to
\[P\big{(}E;\Omega\cap(\partial H\cup\operatorname{cl}(\mathbf{C}_{\delta}))\big{)}+P(B^{{(v^{*})}})\] (B.17) \[\leq P\big{(}E_{\delta};\Omega\cap(\partial H\cup\operatorname{cl}(\mathbf{C}_{\delta}))\big{)}+P\big{(}B_{r_{\delta}}(z_{\delta});\mathbb{R}^{n+1}\setminus\operatorname{cl}(H)\big{)}\,.\]
In fact, it is easily proved that \((\partial^{*}E)\cap(\partial H)\setminus\operatorname{cl}\left(\mathbf{C}_{ \delta}\right)=(\partial^{*}E_{\delta})\cap(\partial H)\setminus\operatorname{ cl}\left(\mathbf{C}_{\delta}\right)\) (which is evident from Figure B.1), so that (B.17) readily implies
\[P(B^{{(v^{*})}})\leq P\big{(}E_{\delta};\Omega\cap \operatorname{cl}\left(\mathbf{C}_{\delta}\right)\big{)}+P(B_{r_{\delta}}(z_{ \delta});\mathbb{R}^{n+1}\setminus\operatorname{cl}\left(H\right))\,.\] (B.18)
Now, \(\mathbf{C}_{\delta}\subset\subset\Omega\). Moreover, by (B.3), we have that \(\mathbf{T}_{\delta}\) (the top part of \(\partial\mathbf{C}_{\delta}\)) is contained in \(E^{{(1)}}\subset E_{\delta}^{{(1)}}\), and is thus \(\mathcal{H}^{n}\)-disjoint from \(\partial^{*}E_{\delta}\). Similarly, again by (B.3) we have \(E\cup\Gamma_{\delta}=\mathbf{C}_{\delta}\), and thus \(\mathbf{D}_{\delta}\subset(E\cup\Gamma_{\delta})^{{(1/2)}}\); at the same time, by (B.8) we have
\(\mathbf{D}_{\delta}\subset(B_{r_{\delta}}(z_{\delta})\setminus H)^{{(1/2)}}\); therefore \(\mathbf{D}_{\delta}\subset E_{\delta}^{{(1)}}\), and thus \(\mathbf{D}_{\delta}\) is \(\mathcal{H}^{n}\)-disjoint from \(\partial^{*}E_{\delta}\). Finally, again by \(E\cup\Gamma_{\delta}=\mathbf{C}_{\delta}\) we see that \(P(E_{\delta};\mathbf{C}_{\delta})=0\). Therefore, in conclusion,
\[P\big{(}E_{\delta};\Omega\cap\operatorname{cl}\left(\mathbf{C}_{\delta} \right)\big{)}=P(E_{\delta};\mathbf{L}_{\delta})=P(\Gamma_{\delta};\mathbf{L}_ {\delta})\leq C\,\delta^{n+1}\,,\] (B.19)
where we have used again first (B.3), and then (B.6). Combining (B.18)-(B.19) we get
\[P(B^{(v^{*})})\leq P(B_{r_{\delta}}(z_{\delta});\mathbb{R}^{n+1} \setminus\operatorname{cl}\left(H\right))+C\,\delta^{n+1}\,.\] (B.20)
Finally, by (B.8), (B.5), and (B.11) we have
\[P(B_{r_{\delta}}(z_{\delta});\mathbb{R}^{n+1}\setminus\operatorname{cl} \left(H\right))\leq P(B^{(v^{*})})-C(n)\,\delta^{n}\,;\]
by combining this estimate with (B.20), we reach a contradiction for \(\delta\) small enough.
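For completeness, here is a sketch of the last estimate: the spherical cap \(\partial B_{r_{\delta}}(z_{\delta})\cap\operatorname{cl}(H)\) projects onto the \(n\)-disk \(\partial\mathbf{C}_{\delta}\cap\partial H\), so that \(\mathcal{H}^{n}(\partial B_{r_{\delta}}(z_{\delta})\cap\operatorname{cl}(H))\geq\omega_{n}\,\delta^{n}\), while (B.11) gives \(P(B_{r_{\delta}}(z_{\delta}))\leq P(B^{(v^{*})})+C\,\delta^{n+2}\); hence

\[P(B_{r_{\delta}}(z_{\delta});\mathbb{R}^{n+1}\setminus\operatorname{cl}(H))=P(B_{r_{\delta}}(z_{\delta}))-\mathcal{H}^{n}\bigl(\partial B_{r_{\delta}}(z_{\delta})\cap\operatorname{cl}(H)\bigr)\leq P(B^{(v^{*})})+C\,\delta^{n+2}-\omega_{n}\,\delta^{n}\,,\]

which is smaller than \(P(B^{(v^{*})})-C(n)\,\delta^{n}\) (with, say, \(C(n)=\omega_{n}/2\)) as soon as \(\delta\) is small enough.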
_Case two, \(0\in\mathbf{W}\)_: In this case, by the \(C^{2}\)-regularity of \(\partial\Omega\) we can find \(\delta_{0}>0\) and \(u\in C^{2}(\mathbf{D}_{\delta_{0}};[0,\delta_{0}])\) with \(u(0)=0\) and \(\nabla u(0)=0\) such that
\[\mathbf{W}\cap\mathbf{C}_{\delta_{0}}=\left\{x\in\mathbf{C}_{ \delta_{0}}:\delta_{0}>x_{n+1}>u(x^{\prime})\right\},\] (B.21) \[(\partial\Omega)\cap\mathbf{C}_{\delta_{0}}=\left\{x\in\mathbf{C}_{ \delta_{0}}:x_{n+1}=u(x^{\prime})\right\}.\]
We have \(0\leq u(x^{\prime})\leq C\,|x^{\prime}|^{2}\) for every \(|x^{\prime}|<\delta_{0}\) (and some \(C=C(\mathbf{W})\)), so that defining \(\Gamma_{\delta}\) as in (B.4) we still obtain (B.5) and (B.6). We then define \(E_{\delta}\), \(r_{\delta}\), and \(z_{\delta}\), as in (B.7), (B.8) and (B.9). Notice that now \(E\) and \(\Gamma_{\delta}\) may not be disjoint (see Figure B.1-(b)), therefore (B.9) is not equivalent to (B.10), but to
\[\big{|}B_{r_{\delta}}(z_{\delta})\setminus H\big{|}=v-|E|-|\Gamma_{\delta} \setminus E|=v^{*}-|\Gamma_{\delta}\setminus E|\,.\]
This is still sufficient to repeat the considerations based on (B.8) and (B.5) proving that \(r_{\delta}\) and \(z_{\delta}\) are uniquely determined, and satisfy (B.11). We can repeat the proof that \((K_{\delta},E_{\delta})\) defined as in (B.12) is admissible in \(\Psi_{\mathrm{bk}}(v)\) (since that proof was based only on the inclusion \(E\subset E_{\delta}\)), and thus obtain (B.16). The same considerations leading from (B.16) to (B.18) apply in the present case too, and so we land on
\[P(B^{(v^{*})})\leq P\big{(}E_{\delta};\Omega\cap\operatorname{cl}\left( \mathbf{C}_{\delta}\right)\big{)}+P(B_{r_{\delta}}(z_{\delta});\mathbb{R}^{n+ 1}\setminus\operatorname{cl}\left(H\right))\,.\] (B.22)
Now, by (B.21), \(\mathbf{T}_{\delta}\) is contained in \(\mathbf{W}\), so that \(P(E_{\delta};\mathbf{T}_{\delta})=0\). At the same time, if \(x=(x^{\prime},0)\in\mathbf{D}_{\delta}\cap\Omega\), then \(u(x^{\prime})>0\), and thus \(x\in(E_{\delta}\cap H)^{{}_{(1/2)}}\); since, by (B.8), we also have \(x\in(E_{\delta}\setminus H)^{{}_{(1/2)}}\), we conclude that \(\mathbf{D}_{\delta}\cap\Omega\subset E_{\delta}^{{}_{(1)}}\), and thus that
\[P\big{(}E_{\delta};\Omega\cap\operatorname{cl}\left(\mathbf{C}_{\delta} \right)\big{)}=P\big{(}E_{\delta};\Omega\cap\mathbf{L}_{\delta}\big{)}\leq \mathcal{H}^{n}(\Omega\cap\mathbf{L}_{\delta})\leq C\,\delta^{n+1}\,,\]
where we have used \(0\leq u(x^{\prime})\leq C\,|x^{\prime}|^{2}\) for every \(|x^{\prime}|<\delta_{0}\) again. We thus deduce from (B.22) that
\[P(B^{(v^{*})})\leq P(B_{r_{\delta}}(z_{\delta});\mathbb{R}^{n+1} \setminus\operatorname{cl}\left(H\right))+C\,\delta^{n+1}\,,\]
and from here we conclude as in case one.
## Appendix C An elementary lemma
In this appendix we provide a proof of Lemma 7.2. The proof is an immediate corollary of a geometric property of closed \(\mathcal{C}\)-spanning sets (see (C.2)-(C.3) below) first proved in \(\mathbb{R}^{n+1}\) for \(n\geq 2\)[20, Lemma 4.1]. Here we extend this property to the plane. The difference between \(\mathbb{R}^{2}\) and \(\mathbb{R}^{n+1}\) for \(n\geq 2\) stems from a part of the argument where one constructs a new admissible spanning curve by modifying an existing one inside a ball. Specifically, ensuring that the new curve does not intersect itself requires an extra argument in \(\mathbb{R}^{2}\).
**Lemma C.1**.: _Let \(n\geq 1\), \(\mathbf{W}\subset\mathbb{R}^{n+1}\) be closed, \(\mathcal{C}\) be a spanning class for \(\mathbf{W}\), \(S\subset\Omega:=\mathbb{R}^{n+1}\setminus\mathbf{W}\) be relatively closed and \(\mathcal{C}\)-spanning \(\mathbf{W}\), and \(B_{r}(x)\subset\subset\Omega\). Let \(\{\Gamma_{i}\}_{i}\) be the countable family of equivalence classes of \(\partial B_{r}(x)\setminus S\) determined by the relation:_
\[y\sim z\iff\exists\tilde{\gamma}\in C^{0}([0,1],\operatorname{cl}B_{r}(x) \setminus S):\tilde{\gamma}(0)=y\text{, }\tilde{\gamma}(1)=z\text{, }\tilde{\gamma}((0,1))\subset B_{r}(x)\,.\] (C.1)
_Then if \(\gamma\in\mathcal{C}\), either_
\[\gamma\cap(S\setminus B_{r}(x))\neq\emptyset\] (C.2)
_or there exists a connected component \(\sigma\) of \(\gamma\cap\operatorname{cl}B_{r}(x)\) which is homeomorphic to an interval and such that_
\[\text{the endpoints of }\sigma\text{ belong to two distinct equivalence classes of }\partial B_{r}(x)\setminus S\text{.}\] (C.3)
_In particular, the conclusion of Lemma 7.2 holds._
**Remark C.2**.: The planar version of Lemma C.1 allows one to extend the main existence result [15, Theorem 2.7] to \(\mathbb{R}^{2}\).
Proof of Lemma C.1.: The proof is divided into two pieces. First we show how to deduce Lemma 7.2 from the fact that at least one of (C.2)-(C.3) holds. Then we show in \(\mathbb{R}^{2}\) that (C.3) must hold whenever (C.2) does not, completing the lemma since the case \(n\geq 2\) is contained in [15, Lemma 4.1].
_Conclusion of Lemma 7.2 from (C.2)-(C.3)_: We must show that either \(\gamma(\mathbb{S}^{1})\setminus B_{r}(x)\neq\varnothing\) or that it intersects at least two open connected components of \(B_{r}(x)\setminus S\). If \(\gamma(\mathbb{S}^{1})\setminus B_{r}(x)\neq\varnothing\) we are done, so suppose that \(\gamma(\mathbb{S}^{1})\setminus B_{r}(x)=\varnothing\). Then (C.3) must be true, so that the endpoints of some arc \(\sigma=\gamma((a,b))\subset B_{r}(x)\) for an interval \((a,b)\subset\mathbb{S}^{1}\) belong to distinct equivalence classes. Choose \(\rho\) small enough so that \(B_{\rho}(\gamma(a))\cup B_{\rho}(\gamma(b))\subset\Omega\setminus S\) and \(a^{\prime}\), \(b^{\prime}\in(a,b)\) such that \(\gamma(a^{\prime})\in B_{\rho}(\gamma(a))\) and \(\gamma(b^{\prime})\in B_{\rho}(\gamma(b))\). If \(\gamma(a^{\prime})\) and \(\gamma(b^{\prime})\) belonged to the same open connected component of \(B_{r}(x)\setminus S\), we would contradict (C.3), so they belong to different components as desired.
_Verification of (C.2)-(C.3) in \(\mathbb{R}^{2}\)_: As in [15, Lemma 10], we may reduce to the case where \(\gamma\) intersects \(\partial B_{r}(x)\) transversally at finitely many points \(\{\gamma(a_{k})\}_{k=1}^{K}\cup\{\gamma(b_{k})\}_{k=1}^{K}\) such that \(\gamma\cap B_{r}(x)=\cup_{k}\gamma((a_{k},b_{k}))\) and \(\{[a_{k},b_{k}]\}_{k}\) are mutually disjoint closed arcs in \(\mathbb{S}^{1}\). If (C.2) holds we are done, so we assume that
\[\gamma\cap S\setminus B_{r}(x)=\varnothing\] (C.4)
and prove (C.3). Note that each pair \(\{\gamma(a_{k}),\gamma(b_{k})\}\) bounds two open arcs in \(\partial B_{r}(x)\); we make a choice now as follows. Choose \(s_{0}\in\partial B_{r}(x)\setminus\cup_{k}\{\gamma(a_{k}),\gamma(b_{k})\}\). Based on our choice of \(s_{0}\), for each \(k\) there is a unique open arc \(\ell_{k}\subset\partial B_{r}(x)\) such that \(\partial_{\partial B_{r}(x)}\ell_{k}=\{\gamma(a_{k}),\gamma(b_{k})\}\) and \(s_{0}\notin\operatorname{cl}\,_{\partial B_{r}(x)}\ell_{k}\). We claim that
\[\text{if }k\neq k^{\prime}\text{, then either }\ell_{k}\subset\subset\ell_{k^{\prime}} \text{ or }\ell_{k^{\prime}}\subset\subset\ell_{k}\,.\] (C.5)
_To prove (C.5)_: We consider simple closed curves \(\gamma_{k}\) with images \(\gamma((a_{k},b_{k}))\cup\operatorname{cl}\,_{\partial B_{r}(x)}\ell_{k}\). By the Jordan curve theorem, each \(\gamma_{k}\) defines a connected open subset \(U_{k}\) of \(B_{r}(x)\) with \(\partial U_{k}\cap\partial B_{r}(x)=\operatorname{cl}\,_{\partial B_{r}(x)} \ell_{k}\). Aiming for a contradiction, if (C.5) were false, then for some \(k\neq k^{\prime}\), either
\[\gamma(a_{k})\in\ell_{k^{\prime}}\subset\operatorname{cl}U_{k^{ \prime}}\text{ and }\gamma(b_{k})\in\partial B_{r}(x)\setminus\operatorname{cl}\,_{\partial B_{r} (x)}\ell_{k^{\prime}}\subset\partial B_{r}(x)\setminus\operatorname{cl}U_{k^{ \prime}}\text{ or}\] \[\gamma(b_{k})\in\ell_{k^{\prime}}\subset\operatorname{cl}U_{k^{ \prime}}\text{ and }\gamma(a_{k})\in\partial B_{r}(x)\setminus\operatorname{cl}\,_{\partial B_{r} (x)}\ell_{k^{\prime}}\subset\partial B_{r}(x)\setminus\operatorname{cl}U_{k^{ \prime}}\,;\]
in particular, \(\gamma((a_{k},b_{k}))\) has non-trivial intersection with both the open sets \(U_{k^{\prime}}\) and \(B_{r}(x)\setminus\operatorname{cl}U_{k^{\prime}}\). By the continuity of \(\gamma\) and the connectedness of \((a_{k},b_{k})\), we thus deduce that \(\gamma((a_{k},b_{k}))\cap\partial U_{k^{\prime}}\neq\varnothing\). Upon recalling that \(\gamma((a_{k},b_{k}))\subset B_{r}(x)\), we find \(\gamma((a_{k},b_{k}))\cap\partial U_{k^{\prime}}\cap B_{r}(x)=\gamma((a_{k},b_{ k}))\cap\gamma((a_{k^{\prime}},b_{k^{\prime}}))\neq\varnothing\). But this contradicts the fact that \(\gamma\) smoothly embeds \(\mathbb{S}^{1}\) into \(\Omega\). The proof of (C.5) is finished.
Returning to the proof of (C.3), let us assume for contradiction that
\[\gamma(a_{k})\sim\gamma(b_{k})\quad\forall 1\leq k\leq K\,.\] (C.6)
We are going to use (C.4), (C.5), and (C.6) to create a piecewise smooth embedding \(\overline{\gamma}:\mathbb{S}^{1}\to\Omega\) which is a homotopic deformation of \(\gamma\) (and thus approximable by elements in \(\mathcal{C}\)) such that \(\overline{\gamma}\cap S=\varnothing\). After reindexing the equivalence classes \(\Gamma_{i}\), we may assume that \(\{\Gamma_{1},\dots,\Gamma_{I_{\gamma}}\}\) are those equivalence classes containing any pair \(\{\gamma(a_{k}),\gamma(b_{k})\}\) for \(1\leq k\leq K\). We will construct \(\overline{\gamma}\) in steps by redefining \(\gamma\) on those \([a_{k},b_{k}]\) with images under \(\gamma\) having endpoints belonging to the same \(\Gamma_{i}\). For future use, let \(\Omega_{i}\) be the equivalence classes of \(B_{r}(x)\setminus S\) determined by the relation (C.1). Note that they are open connected components of \(B_{r}(x)\setminus S\).
_Construction corresponding to \(\Gamma_{1}\)_: Relabelling in \(k\) if necessary, we may assume that \(\{1,\dots,K_{1}\}\) for some \(1\leq K_{1}\leq K\) are the indices such that \(\{\gamma(a_{k}),\gamma(b_{k})\}\subset\Gamma_{1}\). By further relabelling and applying (C.5) we may assume: first, that \(\ell_{1}\) is a "maximal" arc among \(\{\ell_{1},\dots,\ell_{K_{1}}\}\), in other words
\[\text{for given $k\in\{2,\dots K_{1}\}$, either $\ell_{1}\cap\ell_{k}=\varnothing$ or $\ell_{k}\subset\!\!\subset\ell_{1}$}\,;\] (C.7)
and second, that for some \(K_{1}^{1}\leq K_{1}\), \(\{\ell_{2},\dots,\ell_{K_{1}^{1}}\}\) are those arcs contained in \(\ell_{1}\). Since \(\Omega_{1}\) is open and connected, we may connect \(\gamma(a_{1})\) to \(\gamma(b_{1})\) by a smooth embedding \(\overline{\gamma}_{1}:[a_{1},b_{1}]\to\operatorname{cl}B_{r}(x)\setminus S\) with \(\overline{\gamma}_{1}((a_{1},b_{1}))\subset\Omega_{1}\). Also, by the Jordan curve theorem, \(\ell_{1}\cup\overline{\gamma}_{1}\) defines an open connected subset \(W_{1}\) of \(B_{r}(x)\) with \(\partial W_{1}\cap S=\varnothing\). Using (C.5), we now argue towards constructing pairwise disjoint smooth embeddings \(\overline{\gamma}_{k}:[a_{k},b_{k}]\to\Gamma_{1}\cup\Omega_{1}\).
We first claim that
\[W_{1}\setminus S\text{ is path-connected}\,.\] (C.8)
To prove (C.8), consider any \(y,z\in W_{1}\setminus S\). Since \(\Omega_{1}\supset W_{1}\setminus S\) is open and path-connected, we may obtain continuous \(\tilde{\gamma}:[0,1]\to\Omega_{1}\) connecting \(y\) and \(z\). If \(\tilde{\gamma}([0,1])\subset W_{1}\setminus S\), we are done. Otherwise, \(\varnothing\neq\tilde{\gamma}\cap(\Omega_{1}\setminus(W_{1}\setminus S))= \Omega_{1}\setminus W_{1}\), with the equality following from \(\Omega_{1}\cap S=\varnothing\). Combining this information with \(\tilde{\gamma}(\{0,1\})\subset W_{1}\setminus S\), we may therefore choose \([\delta_{1},\delta_{2}]\subset(0,1)\) to be the smallest interval such that \(\tilde{\gamma}([0,1]\setminus[\delta_{1},\delta_{2}])\subset W_{1}\setminus S\). On \((\delta_{1},\delta_{2})\), we redefine \(\tilde{\gamma}\) using the fact that \(\tilde{\gamma}(\{\delta_{1},\delta_{2}\})\subset\partial W_{1}\cap B_{r}(x)= \overline{\gamma}_{1}((a_{1},b_{1}))\) by letting \(\tilde{\gamma}((\delta_{1},\delta_{2}))=\overline{\gamma}_{1}(I)\), where \(\overline{\gamma}_{1}(I)\) has endpoints \(\tilde{\gamma}(\delta_{1})\) and \(\tilde{\gamma}(\delta_{2})\) and \(I\subset(a_{1},b_{1})\). The modified \(\tilde{\gamma}\) is a concatenation of continuous curves and is thus continuous; furthermore, \(\tilde{\gamma}^{-1}(W_{1}\setminus S)=[0,\delta_{1})\cup(\delta_{2},1]\). It only remains to "push" \(\tilde{\gamma}\) entirely inside \(W_{1}\setminus S\), which we may easily achieve by projecting \(\tilde{\gamma}((\delta_{1}-\varepsilon,\delta_{2}+\varepsilon))\) inside \(W_{1}\setminus S\) for small \(\varepsilon\) using the distance function to the smooth curve \(\overline{\gamma}_{1}(a_{1},b_{1})=\partial W_{1}\cap B_{r}(x)\subset B_{r}(x)\setminus S\). This completes (C.8).
But now since \(W_{1}\setminus S\) is path-connected and open, we may connect any two points in it by a smooth embedding of \([0,1]\), which in particular allows us to connect \(\gamma(a_{2})\) and \(\gamma(b_{2})\) by smooth embedding \(\overline{\gamma}_{2}:[a_{2},b_{2}]\to\operatorname{cl}W_{1}\setminus S\) with \(\overline{\gamma}_{2}((a_{2},b_{2}))\subset W_{1}\setminus S\). Let \(W_{2}\) be the connected open subset of \(W_{1}\) determined by the Jordan curve \(\overline{\gamma}_{2}\cup\ell_{2}\). Arguing exactly as in (C.8), \(W_{2}\setminus S\) is open and path-connected, so we can iterate this argument to obtain mutually disjoint embeddings \(\overline{\gamma}_{k}:[a_{k},b_{k}]\to\operatorname{cl}W_{1}\setminus S\subset \Gamma_{1}\cup\Omega_{1}\) with \(\overline{\gamma}_{k}((a_{k},b_{k}))\subset\Omega_{1}\) for \(1\leq k\leq K_{1}^{1}\).
Next, let \(\ell_{K_{1}^{1}+1}\) be another maximal curve with endpoints in \(\Gamma_{1}\). The same argument as in proving (C.8) implies that \(\Omega_{1}\setminus\operatorname{cl}W_{1}\) is path-connected, and so \(\gamma(a_{K_{1}^{1}+1})\), \(\gamma(b_{K_{1}^{1}+1})\) may be connected by a smooth embedding \(\overline{\gamma}_{K_{1}^{1}+1}:[a_{K_{1}^{1}+1},b_{K_{1}^{1}+1}]\to(\Gamma_{1} \cup\Omega_{1})\setminus\operatorname{cl}W_{1}\), which, together with \(\ell_{K_{1}^{1}+1}\), defines a connected domain \(W_{K_{1}^{1}+1}\subset\Omega_{1}\) by the Jordan curve theorem. In addition, \(W_{K_{1}^{1}+1}\cap W_{1}=\varnothing\) since \((\ell_{K_{1}^{1}+1}\cup\overline{\gamma}_{K_{1}^{1}+1})\cap\operatorname{cl}W_{1}=\varnothing\) by (C.7) and
the definition of \(\overline{\gamma}_{K_{1}^{1}+1}\). Repeating the whole iteration procedure for those intervals contained in \(\ell_{K_{1}^{1}+1}\) and then the rest of the maximal arcs, we finally obtain mutually disjoint embeddings \(\overline{\gamma}_{k}:[a_{k},b_{k}]\to\Gamma_{1}\cup\Omega_{1}\) with \(\overline{\gamma}_{k}((a_{k},b_{k}))\subset\Omega_{1}\) as desired for \(1\leq k\leq K_{1}\).
_Conclusion of the proof of (C.3)_: Repeating the \(\Gamma_{1}\) procedure for \(\{\Gamma_{2},\dots,\Gamma_{I_{\gamma}}\}\) and using the pairwise disjointness of the \(\Gamma_{i}\), we obtain mutually disjoint embeddings \(\overline{\gamma}_{k}:[a_{k},b_{k}]\to\operatorname{cl}B_{r}(x)\setminus S\) with \(\overline{\gamma}_{k}((a_{k},b_{k}))\subset B_{r}(x)\setminus S\) for \(1\leq k\leq K\). We define \(\overline{\gamma}:\mathbb{S}^{1}\to\Omega\) by
\[\overline{\gamma}(t)=\begin{cases}\gamma(t)&t\in\mathbb{S}^{1}\setminus \cup[a_{k},b_{k}]\\ \overline{\gamma}_{k}(t)&t\in[a_{k},b_{k}]\,,\ \ 1\leq k\leq K\,.\end{cases}\]
Since \(\overline{\gamma}=\gamma\) outside \(B_{r}(x)\subset\subset\Omega\), \(\overline{\gamma}\) is homotopic to \(\gamma\) relative to \(\Omega\). Furthermore, \(\overline{\gamma}\) is piecewise smooth and homotopic to \(\gamma\), and so it can be approximated in the \(C^{0}\) norm by \(\{\gamma_{j}\}\subset\mathcal{C}\). However, by (C.4) and the construction of \(\overline{\gamma}_{k}\), \(\overline{\gamma}\cap S=\varnothing\), which implies that \(S\cap\gamma_{j}=\varnothing\) for large \(j\). This contradicts the fact that \(S\) is \(\mathcal{C}\)-spanning \(\mathbf{W}\), and so (C.3) is true.
|
2308.16849 | The construction of a $E_7$-like quantum subgroup of $SU(3)$ | In this short note we construct an embedding of the planar algebra for
$\overline{\operatorname{Rep}(U_q(sl_3))}$ at $q = e^{2\pi i \frac{1}{24}}$
into the graph planar algebra of di Francesco and Zuber's candidate graph
$\mathcal{E}_4^{12}$. Via the graph planar algebra embedding theorem we thus
construct a rank 11 module category over
$\overline{\operatorname{Rep}(U_q(sl_3))}$ whose graph for action by the vector
representation is $\mathcal{E}_4^{12}$. This fills a small gap in the
literature on the construction of $\overline{\operatorname{Rep}(U_q(sl_3))}$
module categories. As a consequence of our construction, we obtain the
principal graphs of subfactors constructed abstractly by Evans and Pugh. | Cain Edie-Michell, Lance Marinelli | 2023-08-31T16:30:20Z | http://arxiv.org/abs/2308.16849v2 | # The construction of a \(E_{7}\)-like quantum subgroup of \(Su(3)\)
###### Abstract.
In this short note we construct an embedding of the planar algebra for \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) at \(q=e^{2\pi i\frac{1}{24}}\) into the graph planar algebra of di Francesco and Zuber's candidate graph \(\mathcal{E}_{4}^{12}\). Via the graph planar algebra embedding theorem we thus construct a rank 11 module category over \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) whose graph for action by the vector representation is \(\mathcal{E}_{4}^{12}\). This fills a small gap in the literature on the construction of \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) module categories. As a consequence of our construction, we obtain the principal graphs of subfactors constructed abstractly by Evans and Pugh.
## 1. Introduction
To every module category over a modular tensor category (MTC), there is an associated _modular invariant_. This is a positive integer valued matrix commuting with the \(SL(2,\mathbb{Z})\) representation of the MTC. These modular invariants are a useful tool for studying module categories, and have played a key role in classification efforts. However, the modular invariant is not a complete invariant. There are many examples of modular invariants which do not come from module categories [1], and also of distinct module categories with the same modular invariant [1, Sections 11 and 12]. A modular invariant is referred to as _physical_ if it is realised by a module category. Even in the situation where a modular invariant is known to be physical, it can be difficult to determine the structure of the corresponding module categories.
A large class of MTCs comes from the (semisimplified) representation theory of quantum groups at roots of unity [1, Chapter 7]. These categories are typically denoted \(\overline{\operatorname{Rep}(U_{q}(\mathfrak{g}))}\). In the special case of the Lie algebra \(\mathfrak{sl}_{3}\), the modular invariants were classified by Gannon [1]. In work of Evans and Pugh [1], all of the \(SU(3)\) modular invariants were shown to be physical. For all bar one modular invariant, their proof was via explicit construction of the corresponding module categories (using Ocneanu cell systems). The remaining modular invariant was shown to be physical via a relative tensor product construction. As the relative tensor product of module categories is a difficult construction to work with in practice, the explicit structure of the corresponding module category has not been confirmed. It should also be noted that in [1, Section 5.4] some structure of this module category is deduced based on an assumption on its corresponding algebra object. Further, in [2], an explicit construction of this module category is claimed without detail.
The modular invariant in question can be found in [1] labelled as \(\left(\mathcal{E}_{9}^{(2)}\right)^{c}\). There has been some work on deducing the module fusion graph (the graph representing the action of \(\Lambda_{1}\) on the module) for the module category corresponding to this modular invariant. In [11] Di Francesco and Zuber suggest the following graph (with some supporting physical evidence):
As it will be useful throughout this paper, the Frobenius-Perron eigenvector for this graph is
\[\lambda=\left\{\frac{[5]_{q}}{[3]_{q}},\frac{[5]_{q}}{[3]_{q}},\frac{[2]_{q}[4]_{q} }{[3]_{q}},\frac{[2]_{q}[4]_{q}}{[3]_{q}},[3]_{q},[5]_{q},[3]_{q},1,[5]_{q},[3]_{ q},1\right\}.\]
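For readers who wish to check such expressions, the quantum integers can be evaluated numerically. The following is a minimal sketch, assuming the standard convention \([n]_{q}=(q^{n}-q^{-n})/(q-q^{-1})\):

```python
import cmath

# Assumes the standard quantum-integer convention [n]_q = (q^n - q^-n)/(q - q^-1),
# which is real-valued here since q = exp(2*pi*i/24) lies on the unit circle.
q = cmath.exp(2j * cmath.pi / 24)

def qint(n: int) -> float:
    return ((q ** n - q ** -n) / (q - q ** -1)).real

# The entries of the Frobenius-Perron eigenvector above:
print(qint(5) / qint(3))            # 1.36603...
print(qint(2) * qint(4) / qint(3))  # 2.36603...
print(qint(3), qint(5))             # 2.73205... (= 1 + sqrt 3), 3.73205... (= 2 + sqrt 3)
```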
In this paper, we fix a small gap in the literature by explicitly constructing a module category with module fusion graph \(\mathcal{E}_{4}^{12}\). Our technique for constructing this module category uses the graph planar algebra embedding theorem [1, Theorem 1.3]. This technique has typically been referred to as _cell systems_ in the context of quantum groups [1, 1, 1]. More precisely, we find the following element of \(oGPA(\mathcal{E}_{4}^{12})\). We direct the reader to Subsection 2.2 for the definition of \(oGPA(\mathcal{E}_{4}^{12})\).
**Definition 1.1**.: Let \(q=\zeta_{24}\), and \(z\) the root of the polynomial \(9x^{16}-14x^{8}+9\) with numerical value closest to \(-0.996393+0.0848571i\). We define \(W\in\text{Hom}_{oGPA(\mathcal{E}_{4}^{12})}(-\to++)\) as the functional defined on basis elements by
\[W_{1,6,9} =\begin{cases}\sqrt{[2]_{q}}&6\xrightarrow{\alpha}9\\ 0&6\xrightarrow{\beta}9\end{cases} W_{2,6,9} =\begin{cases}z^{-1}\sqrt{\frac{1}{[2]_{q}}}&6\xrightarrow{ \alpha}9\\ \zeta_{24}^{19}\sqrt{\frac{[3]_{q}}{[2]_{q}}}&6\xrightarrow{\beta}9\end{cases} W_{3,6,9} =\begin{cases}z\sqrt{\frac{1}{[2]_{q}}}&6\xrightarrow{\alpha}9\\ \zeta_{3}z\sqrt{\frac{[3]_{q}}{[4]_{q}([2]_{q}+[3]_{q})}}&6\xrightarrow{\beta}9 \end{cases}\] \[W_{4,6,9} =\begin{cases}\sqrt{\frac{1}{[2]_{q}}}&6\xrightarrow{\alpha}9\\ \zeta_{8}^{5}\sqrt{\frac{[3]_{q}([2]_{q}+[3]_{q})}{[4]_{q}[5]_{q}}}&6 \xrightarrow{\beta}9\end{cases} W_{5,6,9} =\begin{cases}\mathbf{i}z^{-1}\sqrt{\frac{1}{[2]_{q}}}&6 \xrightarrow{\alpha}9\\ \zeta_{48}^{11}z\sqrt{\frac{[4]_{q}}{[5]_{q}}}&6\xrightarrow{\beta}9\end{cases}\] \[W_{3,6,7} =z\sqrt{\frac{[2]_{q}}{[4]_{q}}} \quad W_{3,10,7} =z\sqrt{\frac{[2]_{q}^{2}}{[4]_{q}([2]_{q}+[3]_{q})}} W_{3,10,9} =z\sqrt{\frac{[3]_{q}(1+[2]_{q})}{[2]_{q}[4]_{q}}}\] \[W_{4,6,7} =\zeta_{8}^{5}\sqrt{\frac{[2]_{q}[3]_{q}}{[4]_{q}(1+[2]_{q})}} W_{4,10,7} =z\sqrt{\frac{[2]_{q}+[3]_{q}}{[4]_{q}}} W_{4,10,9} =z\sqrt{\frac{[3]_{q}^{2}}{[2]_{q}[4]_{q}(1+[2]_{q})}}\] \[W_{5,6,7} =\zeta_{8}\sqrt{\frac{[2]_{q}}{[3]_{q}}} W_{5,8,7} =z\sqrt{\frac{[2]_{q}}{[3]_{q}}} W_{5,10,7} =z\sqrt{\frac{1}{[2]_{q}}}\] \[W_{5,10,9} =z\sqrt{\frac{1}{[2]_{q}}} W_{5,10,11} =z\sqrt{[2]_{q}}\]
with the remaining values on basis elements defined by the rotational formula \(W_{a,b,c}=\sqrt{\frac{\lambda_{b}}{\lambda_{c}}}W_{b,c,a}\). Here we use the notation that \(\zeta_{\ell}:=e^{2\pi i\frac{1}{\ell}}\).
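The rotational formula determines all remaining coefficients from those listed. A minimal sketch of this propagation, under our own illustrative assumptions that edge multiplicities are ignored and that \(W\) and \(\lambda\) are stored as plain dictionaries keyed by vertex triples and vertices respectively:

```python
import math

def close_under_rotation(W: dict, lam: dict) -> dict:
    """Extend W using W[(a,b,c)] = sqrt(lam[b]/lam[c]) * W[(b,c,a)]
    until no new triples are produced."""
    W = dict(W)
    changed = True
    while changed:
        changed = False
        for (b, c, a), val in list(W.items()):
            rotated = (a, b, c)
            if rotated not in W:
                W[rotated] = math.sqrt(lam[b] / lam[c]) * val
                changed = True
    return W
```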
Our main result shows that this distinguished element satisfies the relations required to give an embedding for the planar algebra of \(\overline{\text{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) associated to the object \(\Lambda_{1}\).
**Theorem 1.2**.: _The map_
\[\text{(the trivalent vertex generator)}\;\longmapsto\;W\in\text{Hom}_{oGPA(\mathcal{E}_{4}^{12})}(-\to++)\]
_defines a tensor functor_
\[\mathcal{P}_{\overline{\text{Rep}(U_{q}(\mathfrak{sl}_{3}))};\Lambda_{1}} \to oGPA(\mathcal{E}_{4}^{12}).\]
The graph planar algebra embedding theorem [1, Theorem 1.3] (and [1, Theorem 1.1] for the slight technical alteration needed for our set-up) then gives the construction of the module category.
**Corollary 1.3**.: _There exists a module category \(\mathcal{M}\) over \(\overline{\text{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) such that the action graph for \(\Lambda_{1}\) is \(\mathcal{E}_{4}^{12}\)._
As shown in [1], we obtain several subfactors of the hyperfinite \(\mathrm{II}_{1}\) factor \(\mathcal{R}\) as a consequence of Corollary 1.3. The subfactor with smallest index (\(=24\left(2+\sqrt{3}\right)\)) has principal graph
The above principal graph is obtained from the graph \(\mathcal{E}_{4}^{12}\) via the equations of [1, Section 7].
Our strategy for obtaining the embedding in Definition 1.1 is low-brow, but effective. We begin by numerically approximating a solution for the embedding of the element \([2]_{q}\cdot p_{\Lambda_{2}}\in\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) into \(oGPA(\mathcal{E}_{4}^{12})\). As the element \([2]_{q}\cdot p_{\Lambda_{2}}\) satisfies the Hecke algebra relations, the equations governing its embedding into \(oGPA(\mathcal{E}_{4}^{12})\) are polynomial (of max degree \(3\)), and are amenable to numerical approximation. From this numerical approximation we can then guess exact values for most of the coefficients of the embedding. With many of the coefficients exactly determined, many of the polynomial equations governing the embedding are now linear, and can be solved exactly. This gives us a candidate for the embedding of the element \([2]_{q}\cdot p_{\Lambda_{2}}\). Using the techniques developed in [1], we can then determine the embedding of the trivalent vertex generator.
where we understand \(X^{+}=X\) and \(X^{-}=X^{*}\).
If the object \(X\) Cauchy tensor generates \(\mathcal{C}\) (in the sense of [1]), then \(\mathcal{P}_{\mathcal{C},X}\) contains a projection onto every simple object of \(\mathcal{C}\). Hence the Cauchy completion of \(\mathcal{P}_{\mathcal{C},X}\) is monoidally equivalent to \(\mathcal{C}\). In this sense, the subcategory \(\mathcal{P}_{\mathcal{C},X}\) remembers all the information of the original category \(\mathcal{C}\), while being significantly simpler.
An important example of a planar algebra is the Kazhdan-Wenzl presentation for \(\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N}))\). Let \(X=\Lambda_{1}\) be the vector representation. The planar algebra \(\mathcal{P}_{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{N})),\Lambda_{1}}\) is then described in [11] via generators and relations. The generators of this planar algebra are
The planar algebra is then constructed as the free planar algebra built from the generating morphisms (allowing duality morphisms, along with tensor products, compositions, and sums of these morphisms), modulo the generating relations. We have the following relations between the generators (which are sufficient when \(q=e^{2\pi i\frac{1}{N+k}}\) for some \(k\in\mathbb{N}\) by [1])
Note that for this paper, we will be specialised to the case where \(N=3\) and \(q=e^{2\pi i\frac{1}{24}}\).
### The graph planar algebra
A key planar algebra used in this paper is the graph planar algebra constructed from a graph \(\Gamma\) and a Frobenius-Perron eigenvector \(\lambda\) for \(\Gamma\). The construction of the graph planar algebra is due to Jones [15]. The graph planar algebra can be defined tersely as follows. Let \(\mathcal{M}\) be a semisimple category, i.e. \(\left(\operatorname{Vec}_{\mathbb{C}}^{f.d.}\right)^{\oplus m}\) for some \(m\in\mathbb{N}\), and \(\Gamma\) an endofunctor of \(\mathcal{M}\), which can be fully described by a graph with \(m\) vertices. We define
\[oGPA(\Gamma):=\mathcal{P}_{\operatorname{End}(\mathcal{M}),\Gamma}\]
where a Frobenius-Perron eigenvector \(\lambda\) of \(\Gamma\) is used to define the rigidity maps.
It was shown in [1] (with an adaption made in [1] to allow for non-self-dual objects) that there is a much more explicit way of defining \(oGPA(\Gamma)\). Namely let \(s\) and \(t\) be two strings in \(\{+,-\}\). We then have that
\[\operatorname{Hom}_{oGPA(\Gamma)}(s\to t)\cong\operatorname{span}_{\mathbb{C}}\{(p,q):p\text{ is an }s\text{-path},\ q\text{ is a }t\text{-path},\ s(p)=s(q),\ t(p)=t(q)\}\]
with operations
\[(p^{\prime},q^{\prime})\circ(p,q) =\delta_{q^{\prime},p}(p^{\prime},q)\] \[(p,q)\otimes(p^{\prime},q^{\prime}) =\delta_{t(p),s(p^{\prime})}\delta_{t(q),s(q^{\prime})}(pp^{\prime },qq^{\prime})\]
extended linearly. We then have the distinguished rigidity maps given by
\[\operatorname{ev}_{(+,-)} :=\sum_{(e,\,\overline{e})\text{ a }(+,-)\text{-path}}\sqrt{ \frac{\lambda_{t(e)}}{\lambda_{s(e)}}}((e,\overline{e}),s(e)):(+,-)\to 1\] \[\operatorname{coev}_{(-,+)} :=\sum_{(\overline{e},\,e)\text{ a }(-,+)\text{-path}}\sqrt{\frac{\lambda_{s(e)}}{ \lambda_{t(e)}}}(t(e),(\overline{e},e)):1\to(-,+)\] \[\operatorname{ev}_{(-,+)} :=\sum_{(\overline{e},\,e)\text{ a }(-,+)\text{-path}}\sqrt{\frac{ \lambda_{s(e)}}{\lambda_{t(e)}}}((\overline{e},e),t(e)):(-,+)\to 1\] \[\operatorname{coev}_{(+,-)} :=\sum_{(e,\,\overline{e})\text{ a }(+,-)\text{-path}}\sqrt{\frac{ \lambda_{t(e)}}{\lambda_{s(e)}}}(s(e),(e,\overline{e})):1\to(+,-)\]
These operations give \(oGPA(\Gamma)\) the structure of a pivotal multi-tensor category. This category also has a \(\dagger\) structure given by the anti-linear extension of
\[(p,q)^{\dagger}=(q,p).\]
With this dagger structure, \(oGPA(\Gamma)\) is unitary. We refer the reader to [20, Section 2.2] and [1, Section 2.2] for more details on the category \(oGPA(\Gamma)\).
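The operations above are simple enough to model directly. Below is a toy sketch of composition and tensor product on basis path-pairs, purely to illustrate the delta conditions; the `Path` representation is our own assumption, not notation from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Path:
    source: str
    target: str
    edges: tuple  # sequence of edge labels

def compose(pq2, pq1):
    """(p', q') o (p, q) = delta_{q', p} (p', q); None plays the role of zero."""
    (p2, q2), (p1, q1) = pq2, pq1
    return (p2, q1) if q2 == p1 else None

def tensor(pq1, pq2):
    """(p, q) x (p', q') = (pp', qq') when t(p) = s(p') and t(q) = s(q')."""
    (p1, q1), (p2, q2) = pq1, pq2
    if p1.target == p2.source and q1.target == q2.source:
        return (Path(p1.source, p2.target, p1.edges + p2.edges),
                Path(q1.source, q2.target, q1.edges + q2.edges))
    return None
```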
The graph planar algebra is useful for this paper due to the graph planar algebra embedding theorem [1, Theorem 1.3]. This result shows that module categories over a tensor category are classified by embeddings of the associated planar algebra into graph planar algebras. This allows us to obtain Corollary 1.3 from Theorem 1.2.
## 3. Finding the solution
Our first goal is to find an embedding of the element \([2]_{q}\cdot p_{\Lambda_{2}}\in\overline{\operatorname{Rep}(U_{q}(\mathfrak{sl}_{3}))}\) into \(oGPA(\mathcal{E}_{4}^{12})\). Writing \(U\) for the image of this element, \(U\) decomposes into blocks \(U^{v}{}_{w}\) indexed by pairs of vertices of \(\mathcal{E}_{4}^{12}\). The matrix \(U^{1}{}_{9}\) is a \(2\times 2\) projection satisfying \((U^{1}{}_{9})^{2}=[2]_{q}\cdot U^{1}{}_{9}\) by (Hecke), and with trace \([2]_{q}\) by [3, Lemma 5.6]. This means we can unitarily conjugate \(U^{1}{}_{9}\) by an element of \(U(2)\) to arrange that
\[U^{1}{}_{9}=\begin{bmatrix}[2]_{q}&0\\ 0&0\end{bmatrix},\]
where the rows and columns are indexed by \(6^{\alpha}\) and \(6^{\beta}\), corresponding to the two edges \(6\xrightarrow{\alpha}9\) and \(6\xrightarrow{\beta}9\).
This uses up the \(U(2)\) degree of freedom, up to the \(U(1)\oplus U(1)\) diagonal subgroup of this \(U(2)\). Thus with this fixed choice of \(U^{1}{}_{9}\) as above, we have a gauge group of \(U(1)^{25}\). In particular, this means that the absolute values of the coefficients in our solution are now fixed.
We now numerically approximate a solution for the remaining coefficients. As expected, the phases on these coefficients are unrecognisable (as the numerical approximation picks out a random point in the solution space \(U(1)^{25}\)). However, many of the absolute values (which are invariant under the action of \(U(1)^{25}\)) of our numerical coefficients can be immediately identified. The distinct numerical values in our numerical solution for which we can make guesses for their exact values are as follows:
| Numerical Value | Exact Guess |
| --- | --- |
| 0 | \(0\) |
| 0.175067 | \(\frac{1}{[4]_{q}}\left([2]_{q}+\frac{[3]_{q}}{[5]_{q}}\right)-1\) |
| 0.207107 | \(\frac{[2]_{q}}{[3]_{q}}\left(1+\frac{[2]_{q}}{[3]_{q}}\right)-1\) |
| 0.239146 | \(\frac{[3]_{q}}{[4]_{q}}\left(1+\frac{1}{[2]_{q}}\right)-1\) |
| 0.341081 | \(\frac{[4]_{q}}{[3]_{q}}\left([2]_{q}+\frac{[4]_{q}}{[3]_{q}}\right)-1\) |
| 0.366025 | \(\frac{1}{[3]_{q}}\) |
| 0.393847 | \(\frac{1}{[4]_{q}}\left([2]_{q}+[3]_{q}\right)-1\) |
| 0.439158 | \(\frac{[2]_{q}}{[3]_{q}}+\frac{[3]_{q}}{[5]_{q}}-1\) |
| 0.481717 | \(\sqrt{\frac{[4]_{q}}{[2]_{q}}}\) |
| 0.5 | \(\frac{1}{2}\) |
| 0.517638 | \(\frac{1}{[2]_{q}}\) |
| 0.538005 | \(\frac{1}{[4]_{q}}\left([5]_{q}+\frac{[3]_{q}}{[2]_{q}}\right)-1\) |
| 0.605 | \(\sqrt{\frac{1}{[3]_{q}}}\) |
| 0.68125 | \(\sqrt{\frac{[4]_{q}}{[2]_{q}}}\) |
| 0.707107 | \(\frac{1}{\sqrt{2}}\) |
| 0.745315 | \(\sqrt{\frac{1}{[3]_{q}}\left(1+\frac{1}{[2]_{q}}\right)}\) |
| 0.790471 | \(\sqrt{\frac{1}{[3]_{q}}\left(1+\frac{[2]_{q}}{[3]_{q}}\right)}\) |
| 0.800893 | \(\sqrt{\frac{[3]_{q}}{[2]_{q}}}\) |
| 0.8556 | \(\sqrt{\frac{[3]_{q}}{[5]_{q}}}\) |
| 0.865966 | \(\sqrt{\frac{[2]_{q}}{[4]_{q}}}\) |
| 0.896575 | \(\frac{[4]_{q}}{[5]_{q}}\) |
| 0.975056 | \(\frac{1}{[5]_{q}}+\frac{[2]_{q}}{[3]_{q}}\) |
| 1.020367 | \(\frac{[3]_{q}}{[2]_{q}[4]_{q}}\left(1+\frac{[3]_{q}}{[2]_{q}}\right)\) |
| 1.035276 | \(\frac{1}{[3]_{q}}\left([2]_{q}+\frac{[4]_{q}}{[5]_{q}}\right)\) |
| 1.07313 | \(\sqrt{\frac{1}{[2]_{q}}\left(1+\frac{[4]_{q}}{[3]_{q}}\right)}\) |
| 1.207107 | \(\frac{[2]_{q}}{[3]_{q}}\left(1+\frac{[2]_{q}}{[3]_{q}}\right)\) |
| 1.239146 | \(\frac{[3]_{q}}{[4]_{q}}\left(1+\frac{1}{[2]_{q}}\right)\) |
| 1.393847 | \(\frac{1}{[4]_{q}}\left([2]_{q}+[3]_{q}\right)\) |
| 1.41421 | \(\sqrt{2}\) |
| 1.692705 | \(\frac{[2]_{q}}{[4]_{q}}\left(1+[2]_{q}\right)\) |
| 1.93185 | \([2]_{q}\) |
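This identification step can be mechanised: evaluate the quantum integers numerically and search a pool of candidate expressions for matches. A rough sketch, assuming the standard quantum-integer convention; the candidate pool here is illustrative and far smaller than what the guesses above require:

```python
import itertools, math

s = math.pi / 12
Q = {n: math.sin(n * s) / math.sin(s) for n in range(1, 6)}  # [n]_q at q = zeta_24

# Illustrative candidate pool: ratios [a]/[b], the [n] themselves, and square roots.
candidates = {}
for (a, b) in itertools.permutations(range(1, 6), 2):
    candidates[f"[{a}]/[{b}]"] = Q[a] / Q[b]
for n in Q:
    candidates[f"[{n}]"] = Q[n]
for name, val in list(candidates.items()):
    candidates[f"sqrt({name})"] = math.sqrt(val)

def guess(x: float, tol: float = 1e-5):
    return [name for name, val in candidates.items() if abs(val - x) < tol]

print(guess(0.366025))  # ['[1]/[3]']
print(guess(1.93185))   # several: [2]_q, [5]_q/[2]_q and sqrt([5]_q) coincide here
```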
The potential solution for the embedding of the trivalent vertex in \(\mathrm{Hom}_{oGPA(\mathcal{E}_{4}^{12})}(-\to++)\) is given in Definition 1.1. Here we use a slight alteration of Boltzmann weight notation, with the value \(W_{v_{1},v_{2},v_{3}}\) representing the coefficient of the basis element \((v_{1}\xleftarrow{\gamma_{3}}v_{3},v_{1}\xrightarrow{\gamma_{1}}v_{2} \xrightarrow{\gamma_{2}}v_{3})\), with edge labels suppressed unless needed.
To give the reader some idea of the structure of the solution for the embedding of \([2]_{q}\cdot p_{\Lambda_{2}}\), we include the single \(5\times 5\) block and three \(3\times 3\) blocks.
To get around this computational roadblock, we observe that the coefficients of the embedding of the trivalent vertex are significantly nicer than the coefficients of the embedding of \([2]_{q}\cdot p_{\Lambda_{2}}\). As shown in [13], the category \(\mathcal{P}_{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}));\Lambda_{1}}\) has an alternate presentation given in terms of the single generator (the trivalent vertex). The relations of this presentation are as follows:
The three relations are diagrammatic, and their defining figures are not reproduced here: a rotation relation for the trivalent vertex; a bigon relation (i), under which the bigon evaluates to \([2]_{q}\); and a quartic relation (ii).
Hence if we can verify the above three relations, we will show that our potential solution indeed defines an embedding \(\mathcal{P}_{\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}))};\Lambda_{1}}\to oGPA(\mathcal{E}_{4}^{12})\). While relation (ii) is quartic, the simpler form of the algebraic numbers for the coefficients of the embedding of the trivalent vertex means that these equations are much easier for the computer to verify. Helping our cause is the fact that there are only 171 individual equations to verify for relation (ii). This allows us to give a proof of Theorem 1.2.
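Before the exact verification, each such identity can also be sanity-checked numerically at high precision. A minimal sketch using `mpmath`; the helper `check` and the chosen tolerance are our own illustrative choices:

```python
from mpmath import mp, mpc, exp, pi, polyroots

mp.dps = 60  # work with 60 significant digits

zeta24 = exp(2j * pi / 24)

# z: the root of 9x^16 - 14x^8 + 9 closest to -0.996393 + 0.0848571i,
# as specified in Definition 1.1.
roots = polyroots([9] + [0] * 7 + [-14] + [0] * 7 + [9])
z = min(roots, key=lambda r: abs(r - mpc("-0.996393", "0.0848571")))

def check(lhs, rhs, tol=mp.mpf(10) ** -30):
    # agreement to 30 digits is a strong sanity check, not a proof
    assert abs(lhs - rhs) < tol

check(zeta24 ** 24, 1)
check(9 * z ** 16 - 14 * z ** 8 + 9, 0)
```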
Proof of Theorem 1.2.: We directly verify that the element of \(oGPA(\mathcal{E}_{4}^{12})\) given in Definition 1.1 satisfies the three relations above using a computer. This gives a \(\dagger\)-embedding of \(\mathcal{P}_{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}));\Lambda_{1}}\to oGPA( \mathcal{E}_{4}^{12})\). As \(oGPA(\mathcal{E}_{4}^{12})\) is unitary, we have that the image of \(\mathcal{P}_{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}));\Lambda_{1}}\) in \(oGPA(\mathcal{E}_{4}^{12})\) is a unitary subcategory. In particular all negligible elements of \(\mathcal{P}_{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}));\Lambda_{1}}\) are mapped to zero. Thus we get an embedding \(\mathcal{P}_{\overline{\mathrm{Rep}(U_{q}(\mathfrak{sl}_{3}))};\Lambda_{1}}\to oGPA(\mathcal{E}_{4}^{12})\) as desired.
|
2309.06973 | DNNShifter: An Efficient DNN Pruning System for Edge Computing | Deep neural networks (DNNs) underpin many machine learning applications.
Production quality DNN models achieve high inference accuracy by training
millions of DNN parameters which has a significant resource footprint. This
presents a challenge for resources operating at the extreme edge of the
network, such as mobile and embedded devices that have limited computational
and memory resources. To address this, models are pruned to create lightweight,
more suitable variants for these devices. Existing pruning methods are unable
to provide similar quality models compared to their unpruned counterparts
without significant time costs and overheads or are limited to offline use
cases. Our work rapidly derives suitable model variants while maintaining the
accuracy of the original model. The model variants can be swapped quickly when
system and network conditions change to match workload demand. This paper
presents DNNShifter, an end-to-end DNN training, spatial pruning, and model
switching system that addresses the challenges mentioned above. At the heart of
DNNShifter is a novel methodology that prunes sparse models using structured
pruning. The pruned model variants generated by DNNShifter are smaller in size
and thus faster than dense and sparse model predecessors, making them suitable
for inference at the edge while retaining near similar accuracy as of the
original dense model. DNNShifter generates a portfolio of model variants that
can be swiftly interchanged depending on operational conditions. DNNShifter
produces pruned model variants up to 93x faster than conventional training
methods. Compared to sparse models, the pruned model variants are up to 5.14x
smaller and have a 1.67x inference latency speedup, with no compromise to
sparse model accuracy. In addition, DNNShifter has up to 11.9x lower overhead
for switching models and up to 3.8x lower memory utilisation than existing
approaches. | Bailey J. Eccles, Philip Rodgers, Peter Kilpatrick, Ivor Spence, Blesson Varghese | 2023-09-13T14:05:50Z | http://arxiv.org/abs/2309.06973v1 | # DNNShifter: An Efficient DNN Pruning System for Edge Computing
###### Abstract
Deep neural networks (DNNs) underpin many machine learning applications. Production quality DNN models achieve high inference accuracy by training millions of DNN parameters which has a significant resource footprint. This presents a challenge for resources operating at the extreme edge of the network, such as mobile and embedded devices that have limited computational and memory resources. To address this, models are pruned to create lightweight, more suitable variants for these devices. Existing pruning methods are unable to provide similar quality models compared to their unpruned counterparts without significant time costs and overheads or are limited to offline use cases. Our work rapidly derives suitable model variants while maintaining the accuracy of the original model. The model variants can be swapped quickly when system and network conditions change to match workload demand. This paper presents DNNShifter, an end-to-end DNN training, spatial pruning, and model switching system that addresses the challenges mentioned above. At the heart of DNNShifter is a novel methodology that prunes sparse models using structured pruning - combining the accuracy-preserving benefits of unstructured pruning with runtime performance improvements of structured pruning. The pruned model variants generated by DNNShifter are smaller in size and thus faster than dense and sparse model predecessors, making them suitable for inference at the edge while retaining near similar accuracy as of the original dense model. DNNShifter generates a portfolio of model variants that can be swiftly interchanged depending on operational conditions. DNNShifter produces pruned model variants up to 93x faster than conventional training methods. Compared to sparse models, the pruned model variants are up to 5.14x smaller and have a 1.67x inference latency speedup, with no compromise to sparse model accuracy. In addition, DNNShifter has up to 11.9x lower overhead for switching models and up to 3.8x lower memory utilisation than existing approaches. DNNShifter is available for public use from [https://github.com/blessonvar/DNNShifter](https://github.com/blessonvar/DNNShifter).
Deep neural networks, Machine learning, Internet of things, Edge computing, Model compression, Model pruning
## I Introduction
Deep neural networks (DNNs) are machine learning (ML) models comprising a sequence of layers, such as convolution and linear. Such models find application in object detection and image classification due to their high accuracy [1]. Production quality DNN models trained on standard datasets contain a large number of parameters. For example, VGG-16 [2] trained on the ImageNet [3] dataset contains 138M parameters. Such models have significant CPU or memory resource requirements and, consequently, high energy consumption for training and inference. Hence, they are suited for resource-rich environments like cloud or high-performance computing sites. These DNNs cannot be adopted for relatively resource-constrained environments, such as the (extreme) network edge dominated by mobile and embedded devices [4].
Edge environments cannot support production quality DNNs due to compute [4], memory [5] and energy [6] constraints. Therefore, approaches for deriving lightweight DNN model variants from production quality DNNs using (i) neural architecture search (NAS) [7] and (ii) pre-trained model compression [5] have been proposed. These approaches have a two-fold _limitation_. Firstly, they are time-consuming and costly [7]. For example, the NasNet [8] search requires four days of computation on 500 data centre-grade GPUs to find optimal model variants.
Secondly, the model variants obtained from these approaches are static. The models are optimised against specific objectives, such as accuracy, inference latency, or model size [7]. Therefore, they cannot be used on the edge to meet the requirements of varying operational conditions, such as changing resource utilisation levels [9, 10].
Existing NAS and compression approaches cannot be used for rapidly producing models, and the models produced by these approaches cannot be adapted to suit changing operational conditions on the edge. The research reported in this paper is therefore focused towards addressing the above limitations and surmounts the following challenges:
_Challenge 1 - Rapidly generating a range of DNNs suited for different operational conditions and heterogeneous edge resources:_ ML applications that run on the edge will need to execute pre-trained DNNs. Training a DNN tailored to the edge resource using approaches, such as NAS, is not suitable as they are time and energy-consuming [7]. Alternatively, compressing a pre-trained DNN using knowledge distillation [11] is based on trial and error, or quantisation [12] that requires specialised hardware or libraries that may not be available in edge environments.
_Challenge 2 - Spatial compression of DNN models while maintaining accuracy:_ DNN compression methods, such as structured pruning [13] or re-parameterisation [14], can significantly reduce the size of a model. However, these methods remove parameters essential to model accuracy. For example, convolutional layers are sensitive to pruning, and even small degrees of pruning can negatively impact accuracy [13]. Consequently, the compressed model is fine-tuned after pruning using computationally expensive methods to regain accuracy, which can take up to 3 times the original training time [5].
_Challenge 3 - On-demand switching of compressed DNNs to adapt to dynamic operational environments:_ DNNs used at the edge will need to seamlessly adapt to changing conditions in real time by switching models on-demand to match model inference performance thresholds. However, existing approaches will incur a downtime in the order of minutes [7] to hours [15] for identifying and deploying a suitable model that meets the desired performance [10].
This paper presents DNNShifter, a framework that utilises production quality sparse models to generate a portfolio of spatially compressed model variants with high accuracy in real time. This is achieved by proposing a novel method that uses structured pruning of highly sparse models. This results in pruned models with a smaller resource footprint and the same model accuracy as the original sparse model. This method fundamentally differs from the commonly reported structured pruning methods that prune pre-trained dense models and negatively impact accuracy. The portfolio of models that are generated by our method can be used to adapt to match a range of operational runtime requirements. This is achieved by low overhead switching from one model to another in the portfolio at runtime. DNNShifter makes the following three research _contributions_:
1) A time and resource-efficient guided DNN model-training, pruning, and runtime switching pipeline that creates a portfolio of spatially pruned model variants comparable to a typical DNN model training routine using NAS. DNNShifter generates a portfolio of pruned model variants up to 93x faster than state-of-the-art methods.
2) A novel pruning method to compress highly sparse DNN models, resulting in accurate and spatially compact pruned model variants suited for edge resources with low inference latency. DNNShifter pruned model variants are up to 5.14x smaller and have up to 1.67x and 1.45x faster CPU and GPU inference latencies, respectively. In addition, the pruned model variants can be obtained orders of magnitude faster when compared to existing structured pruning methods and have higher accuracy for a given model size.
3) A low-overhead method that switches from one model variant to another on-demand at the edge to match a range of operational runtime requirements. DNNShifter has up to 11.9x lower overhead for switching model variants with up to 3.8x lower memory utilisation than existing approaches.
The remainder of this paper is organised as follows. Section II discusses related work. Section III presents the DNNShifter framework. Section IV presents experimental results. Section V concludes the paper by discussing system limitations.
## II Related work
Approaches for DNN compression aim to improve the resource efficiency of models by reducing their computational and memory utilisations while preserving the model's accuracy. Techniques such as model pruning, quantisation, and knowledge distillation leverage different properties of a DNN that inherently lend towards compression. As a result, a compressed model that is either smaller, faster or both compared to the original model, is produced. These approaches typically produce a single compressed model.
Techniques such as NAS, on the other hand, generate a portfolio of compressed models from a search space that suits the requirements of constrained resources [8, 16, 17, 18]. However, NAS is computationally expensive because it trains and evaluates many candidate models (up to thousands) before identifying the optimal set of compressed models. _Our work is positioned at the intersection of DNN compression and NAS, where a more time and resource-efficient compression pipeline than NAS is developed to generate a portfolio of highly compressed models, which serves a range of resource requirements as seen in edge environments_. This section considers the key contributions of each DNN compression method by providing an overview of their strengths and weaknesses and comparing their features. More recent work is considered to highlight the novelty of the DNNShifter framework we propose in addition to presenting baseline methods to compare DNNShifter in the experiments presented in Section IV.
### _Unstructured pruning and sparse models_
Unstructured pruning masks selected individual parameters of the DNN by setting their weights to zero [19, 20, 21, 22]. Existing methods such as the Lottery Ticket Hypothesis (LTH) [23] demonstrate that introducing sparsity via unstructured pruning early in training can lead to final accuracy similar to or higher than that of a dense model. In addition, with suitable hardware and sparse matrix libraries [24], sparse models can accelerate model training, thus reducing time and energy costs [25]. The concept of LTH has motivated a large collection of unstructured pruning methods [20, 22, 26, 27]. However, resource-constrained environments usually do not support the libraries needed to leverage any performance benefits of sparse models [28]. Furthermore, the zeroed weights within the sparse model do not reduce the memory footprint but create irregular memory accesses that degrade inference performance on conventional CPUs. Unstructured pruning research typically focuses on improving sparse model accuracy rather than compute performance [26]. _Sparse models are the starting point for our work._ The DNNShifter framework removes sparse data structures within these models in a disciplined manner. In other words, we spatially remove the zeroed weights, thereby reducing the model size and inference latency while maintaining the same accuracy as the original sparse model.
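As a concrete illustration (using PyTorch's built-in magnitude pruning, not DNNShifter's own ranking method), the following sketch shows how unstructured pruning zeroes weights without shrinking the tensor:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(3, 64, kernel_size=3)

# Mask the 90% smallest-magnitude weights. They become exact zeros but the
# tensor keeps its shape, so the layer is sparse rather than smaller.
prune.l1_unstructured(conv, name="weight", amount=0.9)

sparsity = (conv.weight == 0).float().mean().item()
print(f"weight sparsity: {sparsity:.1%}")   # ~90.0%
print(conv.weight.shape)                    # unchanged: (64, 3, 3, 3)
```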
### _Structured pruning and re-parameterisation_
As shown in Figure 1, structured pruning spatially removes groups of parameters, such as convolutional filters [13, 15, 29, 30, 31]. A ranking algorithm is employed to identify filters that contribute the least to accuracy loss if removed. Structured pruning removes these filters, resulting in models with lower memory, energy, and inference footprint. However, structured pruning is time-consuming because: (i) the filters that can be removed need to be identified given the thousands of pruning combinations [15, 29], and (ii) the parameters that remain after pruning are fine-tuned to recover the accuracy lost
while pruning [5, 20]. Thus, on-demand compression cannot be achieved using structured pruning.
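For contrast, the conventional approach can be sketched with PyTorch's filter-level pruning variant; here an L2 ranking zeroes whole filters, and in a standard pipeline this is the step followed by costly accuracy fine-tuning:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(3, 64, kernel_size=3)

# Zero the 25% of output filters (dim=0) with the smallest L2 norm.
prune.ln_structured(conv, name="weight", amount=0.25, n=2, dim=0)

filter_norms = conv.weight.detach().flatten(1).norm(dim=1)
print(int((filter_norms == 0).sum()), "of 64 filters zeroed")  # 16 of 64
```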
However, DNNShifter achieves structured pruning in (near) real-time by leveraging the following observations of sparse models: (i) zeroed weights are inherently ranked as prunable, and (ii) pruning of zeroed weights does not reduce model accuracy. Furthermore, structured re-parameterisation can be combined with structured pruning to further optimise a model by modifying the underlying architecture for a target device. For example, RepVGG [14] restructures ResNets into a VGG-like architecture and improves GPU utilisation.
### _Dynamic DNNs_
Dynamic DNNs improve inference efficiency by adapting a DNN for different operational conditions [32]. Methods such as skipping layers [33] and connections [34] or early-exiting [32] decrease inference latency at the cost of inference accuracy. Although dynamic DNNs offer the advantage of using any sub-model from within a single model, there are no spatial benefits since the entire model runs in memory, even for a small sub-model [32]. Alternatively, DNNShifter provides both inference and spatial benefits and leverages in-memory compression of multiple sparse models to facilitate on-demand switching of models to suit runtime requirements.
### _Other compression methods_
Other compression methods, namely quantisation and knowledge distillation, are presented in the literature for DNN compression. Quantisation reduces the bit precision of parameters in DNNs to reduce the model size and to accelerate inference [12]. However, quantisation is usually applied to all parameters of the DNN, which incurs an accuracy loss. Furthermore, quantised models may require dedicated hardware to carry out inference at a lower precision. Knowledge distillation transfers training knowledge from a more extensive teacher to a smaller student model [11]. The student model achieves similar accuracy to the teacher model and is spatially smaller. However, knowledge distillation is not easily automated to serve various model architectures and produces only a single student model rather than a portfolio of models suited for different operational conditions, such as specific memory budgets. Therefore, knowledge distillation does not scale for the varying resource requirements of deployments seen in heterogeneous edge environments.
### _Addressing open gaps with our contribution_
Although existing compression methods have a range of benefits, they present one or more significant limitations that prohibit their use for on-demand deployment of production quality DNNs to edge devices. _DNNShifter leverages the accuracy-preserving benefits of unstructured pruning with the runtime performance improvements of structured pruning across various model sizes to suit different operational conditions seen in edge environments. This combination has not been previously explored in the literature_. DNNShifter creates an efficient training, pruning, and inference pipeline, which is highlighted in comparison to other DNN compression methods in Table I. The DNNShifter framework and the models generated by the framework meet the requirements for deploying DNNs in edge environments. However, existing methods have one or more limitations making them less suited for edge systems. The next section explores the underlying methodology and implementation of DNNShifter.
## III DNNShifter
DNNShifter is a framework that can be employed in resource-constrained environments, such as the network edge or the extreme edge, where computational capabilities are relatively limited. The framework prunes production quality DNN models on-demand and provides automated runtime model switching for inference on constrained resources. DNNShifter can be employed by system administrators to manage the life cycle of ML application development, deployment, and simulation environments, addressing the following challenges:
**Rapidly obtaining production quality models:** In real-time, DNNShifter offers structured pruning of large sparse DNN models that cannot be run on hardware-limited resources. The framework derives pruned model variants for the target resource without a significant accuracy loss while achieving this on a small monetary, computation, and energy budget. This contrasts existing approaches that employ NAS [7] or parameter fine-tuning [29].
**Hardware agnostic automated model pruning:** DNNShifter creates a portfolio of hardware-independent pruned model variants with different performance characteristics (e.g. model size, speed, and accuracy) to
Fig. 1: Obtaining sparse and pruned models from pruning a dense model.
suit all deployment conditions. The model variants can be deployed across different resource tiers based on operational conditions, such as resource availability and performance targets. The approach adopted by DNNShifter is hardware agnostic and is not specific to specialised hardware such as a GPU [25].
**Real-time model switching at runtime:** Once a model portfolio has been deployed on the target hardware, DNNShifter utilises the portfolio of pruned model variants to select a model variant suited for a given operational condition. The framework facilitates the adaptation of the model to suit variations in the operational conditions with low overheads. The underlying method in DNNShifter switches the active model for inference from the portfolio via inflation (which activates the pruned model) and deflation (which further compresses and deactivates the pruned model) to match operational demand.
DNNShifter is envisioned to be a holistic end-to-end solution for ML edge applications that reduces human administrator interventions or domain-specific knowledge for creating pruned models. DNNShifter can also benchmark different pruning algorithms on heterogeneous hardware and make informed decisions in the life cycle of edge-based ML applications.
This section will further present the observations that motivated the design of DNNShifter and provides an overview of the framework.
### _Motivation_
A variety of model pruning methods have been presented in the literature for reducing the complexity of production models to suit resource-constrained environments while maintaining accuracy [20]. Traditional pruning methods are limited in multiple ways: (a) many require further time-consuming fine-tuning after the initial pruned model is obtained [29], (b) many rely on hardware accelerators [25], and (c) pruning often requires a costly trial and error process to determine the optimal pruned model for a given target resource [32]. Current pruning methods are unsuitable for real-time execution in critical scenarios, such as on-device video analytics, that require sub-second latency to preserve optimal service quality [35].
DNNShifter was developed to address the above limitations by leveraging the following two observations:
#### III-A1 Aggregating and pruning unstructured sparsity
As unstructured pruning progresses, zeroed parameters aggregate into fully zeroed data structures, which can then be pruned outright, directly reducing the model size and inference latency. Figure 2 highlights this observation. During unstructured pruning, the parameters of a convolutional kernel are set to zero values. The data structures (matrices) representing the kernels may be sparse (not all values are zero) and, therefore, cannot be pruned without compromising accuracy (shown as an unprunable data structure). As pruning progresses, the parameter ranking algorithm of unstructured pruning yields zero matrices. A data structure whose values are all zero is prunable
Fig. 2: Structured pruning zero-valued data structures obtained from unstructured pruning.
and, by employing structured pruning, can be removed from the model. This results in reducing the model size and, thereby, inference latency.
\(DNNShifter\) leverages this observation to prune sparse models using structured pruning without degrading model accuracy. Since model accuracy is preserved, \(DNNShifter\) does not require fine-tuning after pruning.
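A minimal sketch of this idea: find convolutional filters that unstructured pruning has zeroed entirely and rebuild the layer without them. Cross-layer bookkeeping (matching input channels of the next layer, any BatchNorm in between) is deliberately omitted here, and a full implementation must handle it:

```python
import torch
import torch.nn as nn

def drop_zero_filters(conv: nn.Conv2d) -> nn.Conv2d:
    """Rebuild conv keeping only output filters with nonzero weights."""
    w = conv.weight.detach()
    keep = w.flatten(1).abs().sum(dim=1) != 0   # True where a filter survives
    new = nn.Conv2d(conv.in_channels, int(keep.sum()), conv.kernel_size,
                    stride=conv.stride, padding=conv.padding,
                    bias=conv.bias is not None)
    with torch.no_grad():
        new.weight.copy_(w[keep])
        if conv.bias is not None:
            new.bias.copy_(conv.bias.detach()[keep])
    return new  # spatially smaller layer; kept filters compute identical outputs
```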
#### Iii-A2 Further compression of remaining model sparsity
During runtime, inactive models from the portfolio can be further compressed to reduce overheads. After structured pruning, the remaining unprunable data structures contain sparse matrices (Figure 2). Sparse matrices have repeating and compressible data patterns (of zeroed weights). Therefore, the model can be encoded (deflated) into smaller representations while the model is inactive. For example, such deflation may be applied when downloading the model portfolio from a cloud/edge server to target device hardware or, during runtime, to models in a portfolio that are not actively inferring.
DNNShifter uses this observation to load the entire portfolio of deflated models into memory during runtime. When a specific model is required for inference, it is decoded (inflated) in (near) real-time. This allows model switching in response to varying operational conditions on the edge and is significantly faster than existing methods.
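As a minimal illustration of the deflate/inflate cycle, the sketch below uses Python's zlib, which implements DEFLATE; the framework's actual packaging format may differ, and the layer and sparsity level are our toy choices.
```python
import io
import zlib
import torch
import torch.nn as nn

def deflate(model):
    """Serialise a model's parameters and DEFLATE-compress the bytes;
    highly sparse models compress well because zeroed weights repeat."""
    buf = io.BytesIO()
    torch.save(model.state_dict(), buf)
    return zlib.compress(buf.getvalue(), level=9)

def inflate(blob, skeleton):
    """Decompress a deflated blob and load it into a model skeleton."""
    skeleton.load_state_dict(torch.load(io.BytesIO(zlib.decompress(blob))))
    return skeleton

# A ~90% sparse layer deflates far better than its dense counterpart.
layer = nn.Linear(512, 512)
with torch.no_grad():
    layer.weight[torch.rand_like(layer.weight) < 0.9] = 0.0
blob = deflate(layer)
print(len(blob), "compressed bytes")
active = inflate(blob, nn.Linear(512, 512))  # ready for inference
```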
### _Framework overview_
This section presents an overview of the DNNShifter framework. It operates in three phases, as shown in Figure 3:
_Phase 1: Offline training of production quality DNNs with unstructured ranking -_ In this phase, model training and parameter ranking are combined into a single iterative training process. A production-quality DNN model is taken as input by an unstructured pruning method. Then, the insignificant parameters of the model are masked between each training iteration by an unstructured ranking method to produce a portfolio of model variants (one per iteration) with sparse data structures (referred to as sparse models).
_Phase 2: On-demand conversion from sparse models to pruned models -_ The portfolio of sparse models is pruned via structured pruning to obtain pruned model variants that can be deployed on a target hardware resource.
_Phase 3: Runtime model switching -_ The portfolio is deployed, monitored, and adapts to varying operational conditions by switching the active pruned model variant at runtime.
Phase 1 builds on an existing technique, while Phases 2 and 3 comprise nine modules; the next sections discuss these.
### _Phase 1: Model training and parameter ranking_
This phase uses an unstructured ranking algorithm to produce highly sparse models from production-quality DNNs (dense models) while maintaining viable model accuracy. It is to be noted that the sparse models obtained from this phase will include zero values in the model parameters. However, since they are not removed from the data structures until the next phase, sparse models are not smaller in size than the dense model. Choosing an unstructured ranking algorithm over structured pruning eliminates the need for fine-tuning after training to recover model accuracy [13, 29]. In addition, such a ranking approach between training iterations has two advantages.
Firstly, DNNShifter simplifies model parameter ranking so that a user does not require expert knowledge of ranking algorithms and no additional hyperparameters need be configured.
Secondly, DNNShifter improves the model pruning pipeline efficiency. A conventional pruning pipeline consists of training the model, compressing using structured pruning methods, profiling, and iteratively fine-tuning the pruned model for the target hardware. While training and compression can occur offline on large-scale computational resources, fine-tuning will need to be carried out on the target hardware that may be resource-limited. The final accuracy of the model cannot be determined until this time-consuming pipeline is completed. If the desired accuracy is not obtained, then the entire sequence of the pruning pipeline must be repeated with a different set of pruning hyperparameters. In addition, only a single pruned model will be obtained at the end of the pipeline that meets specific operational conditions. If the operational condition changes, the entire pruning pipeline must be repeated.
DNNShifter improves the efficiency of the pruning pipeline in three ways by integrating ranking within the training iterations: (i) The final model accuracy that can be achieved is known before the pipeline completes. Therefore, the pipeline can be reinitialised in advance with new hyperparameters if the target accuracy cannot be achieved. (ii) Fine-tuning, a computationally intensive task, is eliminated on the target hardware resource that may be relatively resource constrained. Therefore, rapid and on-demand deployments of DNN models are feasible since fine-tuning does not need to be carried out. (iii) A portfolio of pruned models can be generated that will suit a range of operational conditions on the target hardware resource by running the pruning pipeline once. This allows for adapting a model that is deployed at runtime.
DNNShifter implements a modified version of the Open Lottery Ticket Hypothesis (OpenLTH) framework1. No modifications were made to the training process. Instead, DNNShifter adds the structured pruning and model switching phases (Phase 2 and Phase 3), which will be discussed later. The Lottery Ticket Hypothesis (LTH) articulates that highly accurate sparse models can be obtained from dense models [23] (shown in Figure 4) and is underpinned by the Iterative Magnitude Pruning (IMP) with weight rewinding method [36] that DNNShifter also employs. IMP with rewinding is chosen since it performs well across all available models and datasets. Alternatives, such as SynFlow [27], only perform well for specific models or datasets [26].
Footnote 1: [https://github.com/facebookresearch/open_lth](https://github.com/facebookresearch/open_lth)
Figure 4 illustrates Phase 1 of DNNShifter to generate sparse models using LTH. The model is trained and then ranked by the IMP algorithm in each iteration. The resulting sparse model from each iteration is saved into an intermediate portfolio of sparse models that will be pruned in the next phase. The sparse model from each iteration is provided as input for
the next iteration. Model sparsity increases with the number of iterations up to a user-defined limit.
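The loop below is a compressed sketch of this phase; `train_fn`, the rewind snapshot, and the layer-wise ranking are our simplifications of IMP with weight rewinding [36], not the framework's exact routine.
```python
import copy
import torch

def imp_with_rewinding(model, train_fn, n_iterations=8, frac=0.5):
    """Iterative magnitude pruning with weight rewinding (illustrative).

    Per iteration: train, zero the lowest-magnitude half of the surviving
    weights (layer-wise here, for brevity), rewind the survivors to an
    earlier checkpoint, and record the sparse variant; sparsity roughly
    doubles each round, giving compression ratio 2^n after n rounds."""
    rewind_state = copy.deepcopy(model.state_dict())  # early-training snapshot
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    portfolio = []
    for _ in range(n_iterations):
        train_fn(model, masks)  # user routine; re-applies masks after each step
        for name, p in model.named_parameters():
            survivors = p.detach().abs()[masks[name].bool()]
            threshold = torch.quantile(survivors, frac)
            masks[name] *= (p.detach().abs() > threshold).float()
        model.load_state_dict(rewind_state)            # rewind the weights ...
        with torch.no_grad():
            for name, p in model.named_parameters():
                p.mul_(masks[name])                    # ... and re-apply the mask
        portfolio.append(copy.deepcopy(model))         # one sparse variant per round
    return portfolio
```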
### _Phase 2: Converting sparse models to pruned models_
In this phase, the \(n\) sparse models from the intermediate portfolio are converted into \(m\) pruned models using structured pruning. This phase consists of six processing modules that pre-process sparse models, identify prunable data structures within each sparse model, generate plans for pruning, and then use structured pruning to generate pruned models. Each sparse model in the intermediate portfolio is processed to produce a final portfolio. Each module of this phase is detailed below:
_Module 1 - Model Pre-Processor:_ This pre-processing module simplifies the DNN model architecture by fusing the convolution and batch normalisation layers [37]. This fusion removes batch normalisation parameters, thereby reducing the complexity of generating a pruning plan for the model, as only convolutional layers and their dependants must be considered (further discussed in Module 3).
_Module 2 - Sparsity Analyser:_ This module builds on the method illustrated in Figure 2 that identifies convolutional kernels with entirely zero values. However, these kernels cannot be removed without further planning since a DNN's architecture does not naturally lend itself to the removal of kernels alone. Instead, a kernel can be removed if all kernels in a channel can be removed. To this end, channels that have all kernels with zero values are further indexed to create prunable convolutional channels.
_Module 3 - Prune Planner:_ In the existing literature, convolution channels are removed from a model iteratively to minimise accuracy loss. However, this is inefficient for two reasons. Firstly, pruning channels is computationally intensive since entire convolutional layers comprising large multi-dimensional parameter arrays will be rebuilt when prunable channels are removed. Secondly, each prunable channel depends on the channel of the next convolutional layer, which will also be rebuilt. Therefore, pruning sequentially incurs overheads. DNNShifter breaks this dependency and removes all prunable channels at the same time. This is achieved by the prune planner module, which creates a concurrent data structure of prunable channel indices.
This module uses Algorithm 1, where each zero channel \(c_{zero}\) (a channel with all weights set to zero) in the set of all zero channels \(C_{Zero}\) (indexed in Module 2) is mapped to a convolutional layer \(L_{n}\), where \(0\leq n<D_{conv}\) (the model's convolutional layer depth). Each convolutional layer receives two sets of zero channels. The first set, \(C_{in}\), is the set of prunable _out_ channels from the previous convolutional layer \(L_{n-1}\): these indices correspond to the prunable _in_ channels of \(L_{n}\). The second set, \(C_{out}\), is the set of prunable _out_ channels of \(L_{n}\). When \(n=0\) (the first convolutional layer), there is no \(C_{in}\), so this layer receives an empty set. The returned prune plan (\((C_{in},C_{out})\)) contains all zero channels that are to be pruned in Module 4 for a given convolutional layer.
Fig. 3: Overview of the DNNShifter framework.
Fig. 4: The unstructured pruning method incorporated in DNNShifter uses the combined approach of repetitive training and model ranking between training iterations.
```
Data: Prunable channel indices \((C_{in},C_{out})\) in \(L_{n}\)
Result: Pruned convolutional layer \(L^{\prime}_{n}\)
1: \(|C^{\prime}_{in}|\leftarrow|L_{n}(C_{in})|-|C_{in}|\)
2: \(|C^{\prime}_{out}|\leftarrow|L_{n}(C_{out})|-|C_{out}|\)
3: \(L^{\prime}_{n}\leftarrow\) create new layer(\(|C^{\prime}_{in}|\), \(|C^{\prime}_{out}|\))
4: \(L^{\prime}_{n}(C_{out})\leftarrow L_{n}(C_{out})\setminus C_{out}\)
5: \(L^{\prime}_{n}(bias)\leftarrow L_{n}(bias)\setminus C_{out}\)
6: if \(n>0\) then
7:   \(L^{\prime}_{n}(C_{in})\leftarrow L_{n}(C_{in})\setminus C_{in}\)
return \(L^{\prime}_{n}\)
```
**Algorithm 2** DNNShifter Model Pruner
_Module 4 - Model Pruner:_ The pruning plan from Module 3 is used to prune a sparse model from the intermediate portfolio in real time. This module executes the pruning plan by rebuilding each convolutional layer without the prunable channels and the biases of the channels. As all prunable channels are made available from Module 3, this module prunes all in/out channels in a single batch operation, significantly reducing computational overhead and enabling real-time pruning. After prune planning, this module is executed in parallel to concurrently prune each convolutional layer, forming a series of pruned layers \(L^{\prime}\) that replaces the original unpruned layers \(L\).
This module uses Algorithm 2, where a pruned layer \(L^{\prime}_{n}\) is created with the smaller channel sizes \(|C^{\prime}_{in}|\) and \(|C^{\prime}_{out}|\) (Lines 1-3). Afterwards, the pruned set of remaining channels and bias are transferred from \(L_{n}\) to \(L^{\prime}_{n}\) (Lines 4-7).
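A sketch of this batched rebuild for a plain (ungrouped) convolution follows; it is our PyTorch rendering of Algorithm 2, omitting cross-layer dependency rewiring and branching architectures.
```python
import torch
import torch.nn as nn

def prune_conv(layer, c_in, c_out):
    """Rebuild a conv layer with the channels in c_in/c_out removed.

    c_in / c_out are the prunable in/out channel indices from the prune
    plan; the slice over both channel axes happens in one batch, with no
    per-channel loop."""
    keep_out = [i for i in range(layer.out_channels) if i not in set(c_out)]
    keep_in = [i for i in range(layer.in_channels) if i not in set(c_in)]
    new = nn.Conv2d(len(keep_in), len(keep_out),
                    kernel_size=layer.kernel_size, stride=layer.stride,
                    padding=layer.padding, bias=layer.bias is not None)
    with torch.no_grad():
        new.weight.copy_(layer.weight[keep_out][:, keep_in])
        if layer.bias is not None:
            new.bias.copy_(layer.bias[keep_out])
    return new

pruned = prune_conv(nn.Conv2d(8, 16, 3), c_in=[2, 5], c_out=[0, 7, 9])
print(pruned)  # Conv2d(6, 13, kernel_size=(3, 3), stride=(1, 1))
```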
_Module 5 - Model Profiler:_ This module benchmarks the pruned model to obtain metrics: accuracy, inference latency, model size, and the maximum memory required to run the model. This is achieved using a test dataset. The metrics relevant to each model are stored as metadata, which the next module uses to select a suitable pruned model from a portfolio.
_Module 6 - Portfolio Post-Processor:_ A portfolio of \(n\) pruned models is generated. This module refines the portfolio to eventually include only \(m\) pruned models (\(m\leq n\)) with distinct performance characteristics (pruned models with similar characteristics are removed).
### _Phase 3: Further compression and model switching_
A portfolio of production-quality DNN models is trained in the first phase and then compressed via structured pruning in the second phase. In this third phase, models are further compressed while not being used (inactive). DNNShifter encodes the portfolio of pruned models into a significantly smaller package before deploying it to the storage of the target device using the lossless DEFLATE [38] compression algorithm. On application initialisation, DNNShifter loads the entire portfolio into memory, and then one model is activated (inflated) to enable inference. Encoding models in this manner (deflating) is effective since zero weights repeat in the highly sparse DNN models obtained from training. However, out-of-the-box production-quality models are dense (most of their weights are not set to zero). Therefore, applying this compression algorithm to dense models will not provide any benefit. Each module of this phase is detailed below:
_Module 7 - Model Deflater:_ This module sorts the model portfolio by model size (a proxy for performance characteristics), then shrinks the entire model portfolio into a smaller, sorted, and easily deployable package using DEFLATE before it is transferred to the target devices.
_Module 8 - Application Initialiser:_ This module loads the entire portfolio of deflated models into device memory. First, it selects a model with the median model size. Then, this model is decompressed in memory to enable application inference (we refer to this as inflation). Note that the inflated model is a pruned model variant from Phase 2. It is smaller and faster for inference than an equivalent dense model (Figure 6).
_Module 9 - Model Switcher:_ The available memory and CPU load may vary due to the number of running applications and the workload of each application on the device. For example, inference performance metrics, such as queries per second (QPS), may vary over time for an application [39]. During a low load on the edge device, a higher QPS can be achieved, during which time a larger model from the portfolio can be decompressed in the device memory (inflation); the larger model will improve inference accuracy. Inversely, a decreasing QPS suggests a high load, and a smaller model from the portfolio is inflated to improve inference latency. DNNShifter does not require searching the entire portfolio to switch between models. Instead, this module selects the next or previous model from the portfolio depending on the QPS trend. Therefore, model switching can be achieved rapidly with minimum overheads.
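A hedged sketch of this neighbour-stepping policy follows; the QPS thresholds and the `inflate`/`deflate` callables (e.g. the zlib pair sketched earlier) are illustrative assumptions, not the framework's interface.
```python
class ModelSwitcher:
    """Neighbour-stepping model switcher driven by the QPS trend (sketch).

    `portfolio` holds deflated (compressed) models sorted by size; only
    the active entry is ever inflated into usable form."""

    def __init__(self, portfolio, inflate, deflate):
        self.portfolio = portfolio
        self.inflate, self.deflate = inflate, deflate
        self.idx = len(portfolio) // 2      # start from the median-size model
        self.active = inflate(portfolio[self.idx])

    def on_qps_sample(self, qps, low=20.0, high=60.0):
        # Rising QPS => low load: step to the next-larger, more accurate model.
        # Falling QPS => high load: step to the next-smaller, faster model.
        if qps > high and self.idx + 1 < len(self.portfolio):
            self._switch(self.idx + 1)
        elif qps < low and self.idx > 0:
            self._switch(self.idx - 1)
        return self.active

    def _switch(self, new_idx):
        self.portfolio[self.idx] = self.deflate(self.active)  # re-compress old
        self.idx = new_idx
        self.active = self.inflate(self.portfolio[new_idx])
```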
## IV Experiments
This section first presents the experimental testbed and baseline models in Section IV-A and then considers three key aspects of DNNShifter:
(1) The time to generate a portfolio of models for addressing Challenge 1 (Phase 1 of DNNShifter). We will compare against state-of-the-art NAS methods, namely DARTS [16], RepVGG [14], and PreNAS [17], for evaluating this. We will demonstrate in Section IV-B that DNNShifter will generate a portfolio faster and more efficiently.
(2) The accuracy achieved and the time taken for inference by the models for addressing Challenge 2 (Phase 2 of DNNShifter). Two categories of pruning algorithms, unstructured and structured, are considered here. The unstructured
pruning algorithms considered are random pruning, magnitude pruning [19], SynFlow [27], and NTK-SAP [22]. The structured pruning algorithms considered include similarities-aware [40], \(l^{1}\) norm [40], EasiEdge [15], and ProsPr [30]. We will demonstrate in Section IV-C1 and Section IV-C2 that DNNShifter obtains better accuracy and an improved inference speedup for the pruned models compared to unstructured pruning methods. It will also be demonstrated in Section IV-C3 that when compared to structured pruning methods, the pruned models obtained from DNNShifter have better accuracy for a desired model size. We will show in Section IV-C4 that DNNShifter has overheads that are multiple magnitudes lower than structured pruning methods.
(3) The overheads for dynamically switching a model in memory for addressing Challenge 3 (Phase 3 of DNNShifter). We will demonstrate in Section IV-D that compared to model switching approaches, such as Model Ensemble [41] and Dynamic once-for-all (Dynamic-OFA) [10], DNNShifter has lower model switching overheads and memory utilisation.
### _Experimental setup_
Two production DNN models, trained on the CIFAR-10 [42] and Tiny ImageNet [43] datasets, are considered. The first is VGG-16 [2] trained on CIFAR-10, which represents a feedforward DNN model. The second is ResNet-50 [44] trained on Tiny ImageNet, which is a more complex branching DNN model.
Table II presents the baseline results and hyperparameters:
**Models, Datasets, and Hyperparameters -** VGG-16 is the OpenLTH configuration that has one linear layer [23], and ResNet-50 is the default ImageNet configuration [44]. CIFAR-10 consists of 50,000 training images and 10,000 test images divided equally across 10 classes. Tiny ImageNet is a subset of ImageNet consisting of 100,000 training images and 10,000 test images divided equally across 200 classes. Tiny ImageNet results are reported for both Top-1 and Top-5 as recommended by model pruning literature [20]. The baseline results were obtained using the training routine from OpenLTH3 as defined in Section III-B.
Footnote 3: Using Python 3.8.10, torch 1.13.0+cu116, and torchvision 0.14.0+cu116.
**Testbed -** We use an AMD EPYC 7713P 64-core CPU and an Nvidia RTX A6000 GPU to train the models, as such resources are representative of those in a cloud data centre. Model inference and runtime switching are carried out with an Intel i7-9750H 6-core CPU and an Nvidia RTX 2080 (Max-Q) GPU, comparable to an edge server that may be used in a production setting.
**Trial Counts and Reporting Methods -** All DNN training methodologies and experiments were conducted a minimum of three times, except for those in Section IV-B. In Section IV-B, each NAS approach was executed only once due to computational and time constraints. Unless otherwise specified, model performance indicators like accuracy, memory usage, and latency are presented in tables and figures as the mean from all trials accompanied by confidence intervals spanning one standard deviation. In addition, where possible, experiments are carried out across 8 different compression ratios (2, 4, 8, 16, 32, 64, 128, 256).
### _Model training and portfolio generation (Phase 1)_
This study will demonstrate that DNNShifter will generate a portfolio of models from a base architecture with a higher search efficiency than comparable NAS methods. Search efficiency is the percentage of optimal model variants in the portfolio over the total number of searched variants. An optimal model variant is one that is not outperformed on all performance metrics by another variant and is obtained using Pareto optimality. The performance metrics considered in this article are model size, inference latency, and model accuracy. For example, training a single model that reaches an adequate accuracy has a search efficiency of 100%. However, if training occurs \(N\) times, then the model with the highest accuracy from the \(N\) training rounds is optimal, but search efficiency drops to \(100/N\)%.
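Search efficiency reduces to a Pareto filter over the measured metrics; the sketch below is ours and uses invented metric values, not those of Table III.
```python
def pareto_optimal(variants):
    """Return the variants on the Pareto front over (size, latency,
    accuracy); lower size/latency and higher accuracy are better."""
    def dominates(b, a):
        # b dominates a: no worse on every metric, strictly better on one.
        no_worse = (b["size"] <= a["size"] and b["latency"] <= a["latency"]
                    and b["accuracy"] >= a["accuracy"])
        better = (b["size"] < a["size"] or b["latency"] < a["latency"]
                  or b["accuracy"] > a["accuracy"])
        return no_worse and better
    return [a for a in variants if not any(dominates(b, a) for b in variants)]

# Invented metrics (size in M parameters, latency in ms, accuracy in %).
variants = [
    {"size": 14.7, "latency": 12.0, "accuracy": 93.7},
    {"size": 1.0, "latency": 8.0, "accuracy": 93.1},
    {"size": 1.2, "latency": 9.0, "accuracy": 92.5},  # dominated by the second
]
front = pareto_optimal(variants)
print(f"search efficiency: {100 * len(front) / len(variants):.1f}%")  # 66.7%
```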
DNNShifter creates a model portfolio by iteratively pruning the largest model variant into progressively smaller variants (discussed in Section III-C). Each pruning iteration is equivalent to searching the model architecture once for a variant. We compare DNNShifter against three different NAS-based methods, which search a model architecture for optimal model variants.
The first is DARTS, an accelerated NAS method that generates model variants from a continuous search space via gradient descent. Compared to older NAS methods such as NasNet, DARTS is 500x faster. In addition, DARTS is a NAS approach that automatically generates a portfolio of models.
The second is RepVGG, which constructs a family of VGG-like model variants from a set of discrete hyperparameters that scale various model architecture properties. In total, 228 model variants are individually trained to identify the optimal set of model variants based on the hyperparameters presented in the RepVGG literature [14].
The third is PreNAS, a modern NAS that generates models using the emerging vision transformer model architecture [45]. PreNAS is a one-shot NAS that decides on a set of optimal model variants and only trains those candidates, significantly reducing computational requirements.
DNNShifter has one hyperparameter, \(n\), which specifies how many model variants should be generated where each variant is twice as compressed as the previous. For example,
\(n=8\) generates a portfolio of model variants up to the compression ratio 256 (\(2^{n}\)). The first variant is the original dense model with no sparsity.
Table III contrasts DNNShifter against DARTS, RepVGG, and PreNAS. DNNShifter generates 4 optimal model variants out of a portfolio of 9, resulting in a search efficiency of 44.44%. This is more efficient than the NAS-based methods. The number of parameters trained using DARTS is divided across a more extensive portfolio of models. These models are not sufficiently diverse, resulting in a low search efficiency for DARTS. The DARTS search method requires over 6 hours to create the portfolio. Then, each variant is individually trained and evaluated, totalling a training time that is 6x longer than DNNShifter's. RepVGG and PreNAS achieve a higher model accuracy than DNNShifter, but each model variant is up to one order of magnitude larger in parameter count. As this study evaluates training time as a function of parameter count, the trend seen in Table III generalises to model architectures of all sizes.
**Observation 1:** The DNNShifter method for generating a portfolio of models via iterative pruning is more resource and time-efficient than NAS-based methods.
### _Performance of sparse and pruned models (Phase 2)_
This study will demonstrate that DNNShifter produces pruned models of the same or better accuracy than other unstructured (Section IV-C1) and structured (Section IV-C3) pruning methods. In addition, it is demonstrated that the pruned models generated by DNNShifter are smaller and faster than sparse models, which is quantified for various compression ratios (Section IV-C2).
#### Iv-C1 Comparing accuracy against sparse models
We will first contrast the choice of the unstructured pruning method in DNNShifter against other unstructured pruning methods. Unstructured pruning methods produce a sparse model variant where parameters are set to zero. In this paper, a sparse model with a compression ratio of \(C\) has, for every \(C\) parameters, \(C-1\) parameters set to zero; this convention is also used in the literature [27, 20]. As seen in Section III-C, DNNShifter utilises IMP with rewinding as its unstructured pruning method. This study evaluates DNNShifter against random pruning (a naive baseline), magnitude pruning, SynFlow, and NTK-SAP. For each method, the baseline models in Table II are iteratively pruned up to eight times, where a compression ratio of \(2^{n}\) is achieved per iteration \(n\), as described in Section IV-B.
Figure 5 shows the change in test accuracy as the compression ratio increases for VGG-16 and ResNet-50. For all methods, accuracy decreases as the compression ratio increases. However, the rate of decline varies per method, and DNNShifter maintains the highest accuracy in all cases. For example, for VGG-16 on CIFAR-10 at a compression ratio of 256, DNNShifter accuracy dropped by 1.57%, whereas SynFlow, NTK-SAP, and random pruning dropped by 6.21%, 5.15%, and 31.09%, respectively. Magnitude pruning compromises almost all of its accuracy by a compression ratio of 256, rendering the model unusable at high compression ratios due to layer collapse (when an entire layer is pruned) [27]. However, SynFlow was designed to avoid layer collapse and maintain usable accuracy at high compression ratios. DNNShifter also does not encounter layer collapse as it uses the rewinding approach from IMP that stabilises the sparse model [36].
For ResNet-50 on Tiny ImageNet, for both top-1 and top-5 test accuracy, the accuracy reduction of DNNShifter is less than half that of SynFlow at higher compression ratios. In contrast, magnitude pruning and NTK-SAP undergo layer collapse beyond compression ratios of 32 and 64, respectively.
**Observation 2:** DNNShifter preserves the highest accuracy for sparse models compared to existing methods. Furthermore, this enables DNNShifter to generate sparse models at extreme compression ratios, providing the opportunity for structured pruning.
The benefits of using structured pruning at extreme model sparsities are explored in the next subsection.
#### Iv-C2 Comparing runtime performance against sparse models
Unstructured pruning methods do not provide inference acceleration or spatially reduce the model size at runtime. This is because the parameters are not spatially pruned but are merely set to zero in the model. DNNShifter removes pruned parameters via structured pruning. This study highlights the performance benefits of using pruned models from DNNShifter compared to the sparse models generated by unstructured pruning methods.
Figure 6 shows the CPU and GPU inference speed up and spatial compression achieved for increasing compression ratios. For DNNShifter, inference speed-up is defined as the inference latency of the baseline model over the pruned model for a given compression ratio. For the other unstructured pruning methods, we consider inference speed-up as the inference latency of the baseline model over the sparse model for a
given compression ratio. Similarly, spatial compression is the in-memory size of the baseline model over the in-memory size of the pruned model (for DNNShifter) or of the sparse model (for unstructured pruning methods) for a given compression ratio. For both metrics, DNNShifter provides improvements at all compression ratios, whereas unstructured pruning methods alone provide little improvement or, in some cases, perform worse than the baseline model.
DNNShifter achieves inference speedups of up to 1.67x and 1.32x for VGG-16 and ResNet-50 on CPU and GPU, respectively, at no cost to the accuracy of the sparse model. The other unstructured pruning methods achieve a small speedup. However, the speedup varies due to irregular memory access in sparse models [46]. DNNShifter spatially prunes the sparse parameters and thus is not affected by structural irregularity. As the compression ratio increases, more sparse parameters are removed, resulting in a smaller model with lower CPU and GPU inference times.
DNNShifter at a compression ratio of 256 achieves a spatial compression on the sparse model of 5.14x and 1.87x for VGG-16 and ResNet-50, respectively. ResNet-50 has a lower spatial compression ratio as DNNShifter only removes linear connections using structural pruning. As such, any skip connections or downsampling layers in ResNet-50 are not pruned as it will impact model accuracy [47].
**Observation 3:** DNNShifter reduces inference latency and sparse model sizes in memory without losing accuracy. This is in contrast to sparse models obtained from unstructured pruning, which are unsuitable for edge environments since they have poor inference performance and no spatial compression.
#### Iv-A3 Comparing accuracy against pruned models
This study contrasts DNNShifter against other structured pruning methods. Specifically, DNNShifter is demonstrated to have comparable accuracy to the original model while producing similarly sized or smaller pruned models than other structured pruning methods. DNNShifter is compared against RepVGG, EasiEdge, ProsPr, and two classic structured pruning methods: similarities-aware and \(l^{1}\) norm.
RepVGG, as described in Section IV-B, creates VGG-like architectures by re-parameterising ResNet. In this study, RepVGG-A0 is the pruned VGG-16 model obtained from the baseline RepVGG-B0 [14]. EasiEdge is a recent structured pruning method that creates pruned models for edge deployment. ProsPr is another modern pruning method that learns which weights to prune within the first few steps of optimisation [30]. In this study, we use the structured pruning variation that prunes channels. Pruning using \(l^{1}\) norm is a classic structured pruning method that ranks the importance of each channel using \(l^{1}\) norm and then prunes the lowest value channels [40]. Similarities-aware is another classic structured pruning method that removes channels with similar outputs [40].
Table IV shows the accuracy change of the pruned model and the total parameter count after pruning the baseline VGG-16 on CIFAR-10 using different structured pruning methods. The table is organised in descending order of parameter count, with the baseline VGG-16 model first and increasingly pruned models towards the bottom. EasiEdge and ProsPr models are denoted using a prune degree as a percentage. The prune degree is the percentage of parameters pruned from the baseline. For example, EasiEdge-25% prunes VGG-16 by 25%. DNNShifter models are denoted using the compression ratio. For example, DNNShifter-2x has a compression ratio of 2, equivalent to a prune degree of 50%.
Both classic structured pruning methods showed more than 5% accuracy reduction with a pruning degree of 35% or less. Combining the two methods allows for a similar accuracy loss but up to a pruning degree of 45%. RepVGG-A0 achieves the same pruning degree as the combined classic methods while only dropping 0.4% model accuracy. However, RepVGG does not have a smaller model variant than RepVGG-A0. DNNShifter and EasiEdge produce smaller models with better accuracy than the baseline model. DNNShifter-16x has the best accuracy improvement with a 0.4% gain, where a similarly sized EasiEdge-80% lost 0.22% accuracy. The smallest EasiEdge variant, namely EasiEdge-85%, has 0.46M parameters and a 0.51% loss in accuracy, whereas DNNShifter-64x is over twice as small, with 0.21M parameters, and gains 0.33% accuracy. ProsPr maintains a positive accuracy change up until models of size 2M parameters; however, its accuracy remains lower than DNNShifter's at all model sizes.
Fig. 5: Accuracy of unstructured pruning in DNNShifter against other methods as compression ratio increases; dashed line is baseline model accuracy.
**Observation 4:** DNNShifter produces smaller and more accurate pruned models than other structured pruning methods.
#### Iv-C4 Pruning time against structured pruning methods
Figure 7 shows the pruning time in seconds of various structured pruning methods to prune a ResNet model. \(l^{1}\) norm requires 3,923 seconds to prune and fine-tune the model. EasiEdge does not require fine-tuning, but the ranking process it employs using Taylor approximations is exhaustive, thus requiring 4,740 seconds. RepVGG does not require ranking. Instead, it re-parameterises the model, which only requires 8 seconds. Although this is a relatively small cost, an 8-second overhead per training round may equate to a substantial overhead for certain use cases; consider, for example, pruning during the rounds of federated learning [48]. NTK-SAP requires 20 epochs of pre-training to generate an unstructured pruning mask, resulting in 544 seconds of overhead. DNNShifter prunes a model in a sub-second time frame. For ResNet, it averages 120 ms, or less than 3 frames for a 30 frames/second real-time edge video analytics application [35], as opposed to the tens of seconds to minutes of downtime with existing approaches.
**Observation 5:** DNNShifter enables (near) real-time structured pruning of DNN models and is at least one order of magnitude faster than other structured pruning methods.
### _Performance of model switching (Phase 3)_
Model switching enables an application to respond to changing runtime conditions by selecting a suitable model for inference from a pruned model portfolio. This study compares the in-memory compression and model switching method of DNNShifter against two classic methods, namely model ensemble and Dynamic-OFA. The model ensemble method hosts simultaneous models in memory, and Dynamic-OFA uses a smaller sub-network within a single DNN to match operational demands.
Fig. 6: Performance of DNNShifter against other unstructured pruning methods as compression ratio increases; dashed line is baseline model performance. Each plot is the mean of five runs with confidence intervals of one standard deviation.
Table V compares runtime switching of DNNShifter against model ensemble and Dynamic-OFA. DNNShifter has a portfolio of four pruned VGG-16 models obtained in Section IV-B with an accuracy range of 91.64-93.71%, a model portfolio size of 30.4-66.1MB, and a CPU inference speedup of 1.20-1.67x. A model ensemble of the same four VGG-16 models and a Dynamic-OFA model are also noted. Memory utilisation is the size of the model portfolio in memory. DNNShifter's memory utilisation is variable since inactive models are further compressed in memory (Section III). However, in the model ensemble method, all models are uncompressed and hosted in memory. Similarly, Dynamic-OFA maintains the entire model in memory, even though only a smaller sub-network may be used during inference. DNNShifter uses up to 3.8x less memory for its model portfolio compared to the model ensemble method.
Decision overhead is the wall-clock time for a model-switching method to select a model from the portfolio. For example, the model ensemble method runs inference on all models in the portfolio and then chooses the output with the highest confidence. On the other hand, Dynamic-OFA selects one DNN configuration to run inference and then reconfigures the DNN to that selection. DNNShifter inflates the appropriate model from memory and has an average decision overhead of 43 ms, which is up to 11.9x faster than Dynamic-OFA.
## V Discussion and Conclusion
Deploying production-quality DNNs in resource-constrained environments is essential for facilitating edge machine learning. Model compression offers techniques to derive model variants from a production-quality model suited for resource-constrained environments. However, obtaining model variants that preserve accuracy and can be compressed to reduce the resource footprint and achieve low inference latencies is challenging. Moreover, existing research has limited focus on adapting model variants to changing runtime conditions.
DNNShifter addresses the above concerns by developing an end-to-end framework that incorporates a novel pruning method and a time- and resource-efficient pipeline for model training, compression, and runtime switching. DNNShifter prepares model variants orders of magnitude faster than state-of-the-art neural architecture search, thus facilitating rapid and on-demand model deployments at the edge. The pruned model variants maintain the same accuracy as their production-quality counterparts. They are suited for edge deployments since they are lightweight and adaptable to runtime conditions.
DNNShifter was designed to accommodate existing ML training and inference pipelines. DNNShifter does not introduce any extra hyperparameters or dependencies other than requiring a user-specified maximum portfolio size. The structured pruning method of DNNShifter can easily be used: (1) for one-time optimisation of pre-existing pre-trained DNN models, (2) in conjunction with the other phases to create a training and inference pipeline from scratch, or (3) in any combination of DNNShifter's phases. Thus, DNNShifter is easily transferable to existing ML applications and products.
DNNShifter is primarily limited by the high computation cost of training sparse models. There is potential for structured pruning to be conducted at the initialisation of the model (before training) with minimal accuracy loss [49, 50]. This will be explored in the future.
## Acknowledgements
This research is funded by Rakuten Mobile, Japan.
Fig. 7: Average pruning time of a ResNet model using structured pruning. |
2309.08792 | An entropy-based approach for a robust least squares spline
approximation | We consider the weighted least squares spline approximation of a noisy
dataset. By interpreting the weights as a probability distribution, we maximize
the associated entropy subject to the constraint that the mean squared error is
prescribed to a desired (small) value. Acting on this error yields a robust
regression method that automatically detects and removes outliers from the data
during the fitting procedure, by assigning them a very small weight. We discuss
the use of both spline functions and spline curves. A number of numerical
illustrations have been included to disclose the potentialities of the
maximal-entropy approach in different application fields. | Luigi Brugnano, Domenico Giordano, Felice Iavernaro, Giorgia Rubino | 2023-09-15T22:20:48Z | http://arxiv.org/abs/2309.08792v1 | # An entropy-based approach for a robust least squares spline approximation
###### Abstract
We consider the weighted least squares spline approximation of a noisy dataset. By interpreting the weights as a probability distribution, we maximize the associated entropy subject to the constraint that the mean squared error is prescribed to a desired (small) value. Acting on this error yields a robust regression method that automatically detects and removes outliers from the data during the fitting procedure, by assigning them a very small weight. We discuss the use of both spline functions and spline curves. A number of numerical illustrations have been included to disclose the potentialities of the maximal-entropy approach in different application fields.
keywords: Weighted least squares approximation, B-splines, Entropy
MSC [2010]: 65D10, 94A17
## 1 Introduction
With the advent of computer-aided modern technology, sheer volumes of data need to be pre-processed in order to make them suitable for the subsequent data-driven tasks they are intended for. Real data are often affected by various imperfections, including noise, poor sampling, missing values and outliers. The automatic identification and removal of these inconsistencies has become of paramount importance during the preprocessing phase of data, since they may significantly affect the predictive accuracy and efficiency of models such as those based upon single and multivariate regression, as well as of pattern recognition procedures resulting from machine learning and deep learning processes [1; 2; 3; 4].
Identification of corrupted data also plays a fundamental role in automatic anomaly detection, meant as the appearance of events or observations which are inconsistent with the pattern underlying a given dataset. Anomaly detection has become increasingly important in many application areas, ranging from statistics, cyber security, and medicine to event detection in sensor networks, financial fraud, and machine learning [5].
Outliers may be thought of as extreme values that deviate significantly from the trend defined by the majority of the data points, possibly due to errors or rare events, and that can consequently worsen the performance of many data analysis algorithms. Classical outlier detection methods often rely on specific assumptions about the data's distribution. However, in many real-world scenarios, estimating such a distribution beforehand can be challenging due to the data's dependence on various unknown or complex factors and the presence of highly noisy sources. This limitation becomes apparent in vast collections of time series data, especially within environmental investigations. Such a topic has recently garnered extensive research attention, particularly in understanding the correlation between climate change and the increasing severity of natural disasters [6, 7].
Extending the study addressed in [8] for the polynomial case, the present paper introduces a robust regression technique for spline approximation of both univariate and multivariate time series, considering scenarios where observations exhibit varying degrees of reliability (see [9] for a related study). In statistics, robust regression tries to overcome the limitations of ordinary least squares when its underlying assumptions are violated, for example, due to the presence of outliers [10, 11, 12, 13].
The proposed procedure tackles the challenges posed by outliers and noise by formulating a weighted least squares problem that leverages the statistical concept of entropy. To this end, we adopt the normalization condition that the weights sum to one, which allows us to interpret them as a probability distribution.
In more detail, to mitigate the negative influence of outliers and noise on the resulting approximating curve, the procedure maximizes the entropy \(H\) associated with the weights distribution, under the constraint that the resulting weighted mean squared error takes a prescribed value lower than the one corresponding to a uniform weights distribution. Such a value may be either provided by the user, on the basis of what he would expect in absence of corrupted data, or automatically detected during the implementation of the procedure.
To better elucidate the role played here by entropy, we quote Jaynes [14, page 97]:
_...the distribution that maximizes \(H\), subject to constraints which represent whatever information we have, provides the most honest description of what we know. The probability is, by this process, spread out as widely as possible without contradicting the available information._
Translating Jaynes' words into our context, we may stress that the proposed approach ensures that as many data points as possible carry non-negligible weights, which results in maximizing the inlier set while adhering to the mean squared error constraint. To achieve this, the strategy assigns smaller weights to points that are more likely to be considered outliers, effectively minimizing their influence on the final shape of the approximating spline curve. It is important to note that this weighting task is seamlessly integrated into the fitting procedure, resulting in a unified methodology that eliminates the need for a preprocessing phase. Similarly to the RANSAC algorithm [15], the entropy-based approach proves particularly effective in handling situations where a substantial portion of the data is corrupted. However, unlike the RANSAC algorithm, it boasts the advantage of being deterministic in nature. Furthermore, by reinterpreting the weights as probabilities, we can readily justify the use of entropy as a mathematical tool for effectively handling corrupted data points.
The paper is structured as follows: In Section 2, we review the fundamental concepts related to weighted least squares spline approximation and introduce the corresponding notations. Section 3 presents a formal definition of the approximation problem using the entropy-based tool and proposes a simple algorithm to obtain the optimal solution for the constrained optimization problem. To demonstrate the functionality of the entropy tool, a few numerical illustrations are provided in
Section 4. In Section 5, three examples involving real-world data are considered. Finally, in Section 6, we draw conclusions based on the findings.
## 2 Background
Consider a parametrized sequence of points \(\{(t_{i},y_{i})\}_{i=1}^{m}\), where \(t=(t_{1},\ldots,t_{m})^{\top}\) is a non-decreasing sequence of real parameters and \(y_{i}\in\mathbb{R}^{s}\) the corresponding data points. In statistical parlance, the sequence \(\{(t_{i},y_{i})\}\) is often referred to as a multivariate time series. As is usual in this context, we introduce a change of variable that normalizes the data in \([0,1]\times[0,1]^{s}\):1
Footnote 1: In the sequel, all the operations and functions evaluations involving vectors are meant componentwise. For example, for a given vector \(z=(z_{1},\ldots,z_{k})^{\top}\) and a function \(g:\mathbb{R}\to\mathbb{R}\), we have \(g(z)=(g(z_{1}),\ldots,g(z_{k}))^{\top}\).
\[t_{i}\to\frac{t_{i}-t_{\min}}{t_{\max}-t_{\min}},\qquad y_{i}\to\frac{y_{i}-y_ {\min}}{y_{\max}-y_{\min}}.\]
where
\[t_{\min}=\min_{1\leq i\leq m}t_{i},\quad t_{\max}=\max_{1\leq i\leq m}t_{i}\]
and, denoting by \(y_{i}(j)\) the \(j\)th entry of the vector \(y_{i}\),
\[y_{\min}(j)=\min_{1\leq i\leq m}y_{i}(j),\quad y_{\max}(j)=\max_{1\leq i\leq m }y_{i}(j),\quad j=1,\ldots,s.\]
Of course, one can revert to the original coordinates by employing the inverse transformations. We wish to fit the given data set by means of a spline curve \(f\) of degree \(d\) expanded along a B-spline basis \(\{B_{j,d}(x)\}_{j=1}^{n}\), namely
\[f(x,c)=\sum_{j=1}^{n}c_{j}B_{j,d}(x). \tag{1}\]
Here, \(c=(c_{1}^{\top},\ldots,c_{n}^{\top})^{\top}\in\mathbb{R}^{sn}\) is a set of \(n\) control points, each of length \(s\), and the B-splines \(B_{j}(x)\) are defined on a non-decreasing sequence of \((d+1)\)-regular knots
\[0=x_{1}=\cdots=x_{d+1}<x_{d+2}\leq\ldots\leq x_{n}<x_{n+1}=\cdots=x_{n+d+1}=1, \tag{2}\]
via the three-terms recursive relation2
Footnote 2: If a division by zero occurs, the related term is neglected.
\[B_{j,d}(x)=\frac{x-x_{j}}{x_{j+d}-x_{j}}B_{j,d-1}(x)+\frac{x_{j+d+1}-x}{x_{j+ d+1}-x_{j+1}}B_{j+1,d-1}(x),\]
with
\[B_{j,0}(x)=\left\{\begin{array}{ll}1,&\mbox{if }x_{j}\leq x<x_{j+1},\\ 0,&\mbox{otherwise}.\end{array}\right.\]
Besides the conditions at the end points in (2), the \((d+1)\)-regularity of the knot vector also imposes that \(n\geq d+1\) and \(x_{j}<x_{j+d+1}\), for \(j=1,\ldots,n\), which are relevant assumptions for the B-splines linear independence property [16]. In the sequel, for sake of simplicity, we will omit the second subscript in \(B_{j,d}(x)\).
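The recursion translates directly into code. The sketch below is ours, written in Python (whereas the numerical experiments later in the paper use Matlab); it evaluates \(B_{j,d}(x)\) with zero-denominator terms dropped, and checks the partition-of-unity property on a 3-regular knot vector.
```python
import numpy as np

def bspline(j, d, x, knots):
    """Evaluate B_{j,d}(x) by the recursion above (0-based j); terms with
    a zero denominator are dropped, as stated in footnote 2."""
    if d == 0:
        return 1.0 if knots[j] <= x < knots[j + 1] else 0.0
    value = 0.0
    left = knots[j + d] - knots[j]
    if left > 0:
        value += (x - knots[j]) / left * bspline(j, d - 1, x, knots)
    right = knots[j + d + 1] - knots[j + 1]
    if right > 0:
        value += (knots[j + d + 1] - x) / right * bspline(j + 1, d - 1, x, knots)
    return value

# A 3-regular knot vector for d = 2 and n = 5 basis functions on [0, 1].
knots = np.array([0, 0, 0, 1 / 3, 2 / 3, 1, 1, 1])
print(sum(bspline(j, 2, 0.4, knots) for j in range(5)))  # partition of unity: ~1.0
```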
Now, for a given vector \(w=(w_{1},\ldots,w_{m})^{\top}\) of (positive) weights satisfying the normalization condition
\[\sum_{i=1}^{m}w_{i}=1, \tag{3}\]
we consider the weighted mean squared error
\[\overline{\mathrm{E}^{2}}=\sum_{i=1}^{m}w_{i}||f(t_{i},c)-y_{i}||_{2}^{2} \tag{4}\]
as an estimate of the approximation accuracy of a spline function \(f(x,c)\) in the form (1) to the given data set. Denoting by \(I_{s}\) the identity matrix of dimension \(s\) and introducing the generalized Vandermonde matrix
\[A=\begin{pmatrix}B_{1}(t_{1})&\cdots&B_{n}(t_{1})\\ \vdots&&\vdots\\ B_{1}(t_{m})&\cdots&B_{n}(t_{m})\end{pmatrix}\in\mathbb{R}^{m\times n},\]
the vector \(y=(y_{1}^{\top},\ldots,y_{m}^{\top})^{\top}\in\mathbb{R}^{sm}\) and the diagonal matrix \(W=\mathrm{diag}(w_{1},\ldots,w_{m})\), (4) may be cast in two equivalent forms that will be conveniently exploited for calculation and implementation purposes:
\[\begin{array}{rcl}\overline{\mathrm{E}^{2}}&=&(f(t,c)-y)^{\top}(W\otimes I _{s})(f(t,c)-y)\\ &=&\|(\sqrt{W}\otimes I_{s})(f(t,c)-y)\|_{2}^{2}\\ &=&\|(\sqrt{W}\otimes I_{s})((A\otimes I_{s})c-y)\|_{2}^{2}\end{array} \tag{5}\]
and, denoting by \(e_{s}=(1,\ldots,1)^{\top}\) the unit vector of length \(s\),
\[\begin{array}{rcl}\overline{\mathrm{E}^{2}}&=&(w\otimes e_{s})^{\top}(f(t,c )-y)^{2}\\ &=&(w\otimes e_{s})^{\top}((A\otimes I_{s})c-y)^{2}.\end{array} \tag{6}\]
For a prescribed choice of weights, the _least squares approximation problem_ consists in finding the (vector) coefficients \(c_{j}\) such that the corresponding weighted mean squared error (5) is minimized. As is well known, differentiating (5) with respect to \(c\), this requirement leads to the normal system
\[(A^{\top}WA\otimes I_{s})c=(A^{\top}W\otimes I_{s})y, \tag{7}\]
which results from computing the stationary points of \(\overline{\mathrm{E}^{2}}\) regarded as a function of \(c\).
Under the assumption that for any \(j=1,\ldots,n\) a \(t_{i_{j}}\) exists such that \(B_{j}(t_{i_{j}})\neq 0\), the matrix \(A^{\top}WA\) is positive definite and the Cholesky factorization may be employed to transform (7) into a couple of triangular systems. More generally, also to prevent a worsening of the conditioning, one avoids the left multiplication by the matrix \(A^{\top}\) and directly deals with the least squares solution of the overdetermined system
\[(\sqrt{W}A\otimes I_{s})c=\sqrt{W}y. \tag{8}\]
In such a case, application of the \(QR\) factorization algorithm with column pivoting, or the SVD decomposition to the rectangular matrix \(\sqrt{W}A\) may be considered to solve the associated least squares problem.
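As a minimal sketch (NumPy; the array shapes are our convention, with \(y\) stored as an \(m\times s\) array), the scaled system (8) can be solved without ever forming \(A^{\top}WA\):
```python
import numpy as np

def wls_spline_coeffs(A, y, w):
    """Least squares solution of (8) for the control points.

    A is the m-by-n collocation matrix with entries B_j(t_i), y the m-by-s
    data array and w the (normalized) weights. Scaling the rows by
    sqrt(w_i) and calling an SVD-based solver avoids forming the normal
    equations A^T W A explicitly."""
    sw = np.sqrt(w)[:, None]
    c, *_ = np.linalg.lstsq(sw * A, sw * y, rcond=None)
    return c
```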
**Remark 1**.: In the event that the components \(y_{i}(j)\), \(j=1,\ldots,s\) are affected by sources of noise of different size depending on \(j\), one could improve (4) by allowing a different weight for each component of the error \(f(t_{i},c)-y_{i}\). This is tantamount to consider a vector of weights \(w\) of length \(ms\) and the related mean squared error defined as
\[\overline{\mathrm{E}^{2}}=w^{\top}(f(t,c)-y)^{2}\equiv\|\sqrt{W}(f(t,c)-y)\|_{ 2}^{2}, \tag{9}\]
with \(W=\mathrm{diag}(w)\). In the numerical tests discussed in Sections 4 and 5 both approaches showed very similar results, so we only included those relying on (4).
In the sequel, \(\overline{\mathrm{E}^{2}}_{\mathrm{uw}}\) will denote the mean squared error resulting from the ordinary least squares (OLS) approximation defined on the uniform weights distribution \(w_{i}=1/m\), namely
\[\overline{\mathrm{E}^{2}}_{\mathrm{uw}}=\frac{1}{m}\sum_{i=1}^{m}\|f(t_{i},\bar{c})-y_{i}\|_{2}^{2}, \tag{10}\]
where \(\bar{c}\) satisfies the normal linear system (7) with \(W=I_{m}/m\), \(I_{m}\) being the identity matrix of dimension \(m\).
## 3 Maximum entropy weighted least squares spline approximation
The use of a weighted mean squared error is helpful when the data highlight different levels of accuracy, due to the presence of noise and/or outliers. In such a case, it would be appropriate to attach large weights to very accurate data points and small weights to data points which are most likely affected by a high level of inaccuracy. In fact, a weight \(w_{i}\) approaching zero makes the corresponding data point \(y_{i}\) irrelevant for the purpose of the fitting procedure. On the other hand, increasing the size of \(w_{i}\) will make \(f(t_{i},c)\) closer to \(y_{i}\). It turns out that, under the normalization condition (3), the WLS approximation will mimic the OLS one applied to the subset of data carrying relatively large weights.
By exploiting an entropy-based argument, the _maximum entropy weighted least squares_ (MEWLS) approximation tries to devise an automatic, easy-to-understand and effective procedure for assigning the correct weight to each data point during the fitting procedure. The MEWLS approach based on spline approximating functions in the form (1) is defined by the following set of equations (\(e_{m}\) stands for the unit vector of length \(m\)):
\[\begin{array}{ll}\mbox{maximize}&-w^{\top}\log w,\\ \mbox{subject to:}&w^{\top}e_{m}=1,\\ &(w\otimes e_{s})^{\top}(f(t,c)-y)^{2}=\overline{\mathrm{E}^{2}}\,.\end{array} \tag{11}\]
In other words, we wish to maximize the entropy function
\[H(w)=-w^{\top}\log w=-\sum_{i=1}^{m}w_{i}\log w_{i} \tag{12}\]
associated with a weights distribution \(w\) satisfying the normalization condition \(\sum_{i}w_{i}=1\), subject to the constraint that the corresponding mean squared error attains a prescribed value \(\overline{\mathrm{E}^{2}}\).
As is well known, problem (11), deprived of the second constraint, admits the solution \(w_{i}=1/m\), which leads us back to the ordinary least squares problem with uniform weights and associated mean squared error \(\overline{\rm E^{2}}_{\rm uw}\). Clearly, the very same solution is obtained when solving the complete set of equations in (11) under the choice \(\overline{\rm E^{2}}=\overline{\rm E^{2}}_{\rm uw}\), so (11) contains the ordinary least squares problem as a special instance. By setting \(\overline{\rm E^{2}}\) to a suitable value lower than the mean squared error \(\overline{\rm E^{2}}_{\rm uw}\), the weights selection technique based upon the maximal-entropy argument epitomized by (11) is aimed at mitigating the effect of outliers and noise in the data while solving the weighted least squares problem. To highlight the relation between \(\overline{\rm E^{2}}\) and \(\overline{\rm E^{2}}_{\rm uw}\), we assume in the sequel
\[\overline{\rm E^{2}}=\frac{1}{r}\,\overline{\rm E^{2}}_{\rm uw} \tag{13}\]
where \(r>1\) is a suitable reduction factor.
According to the Lagrange multiplier theorem, we compute the stationary points of the Lagrangian function
\[{\cal L}(w,c,\lambda_{1},\lambda_{2})=w^{\top}\log w+\lambda_{1}(w^{\top}e_{ m}-1)+\lambda_{2}\left((w\otimes e_{s})^{\top}(f(t,c)-y)^{2}-\overline{\rm E ^{2}}\right). \tag{14}\]
Differentiating, we get:
\[\frac{\partial{\cal L}}{\partial w} = e_{m}+\log w+\lambda_{1}e_{m}+\lambda_{2}\left((I_{m}\otimes e_{s})^{\top}(f(t,c)-y)^{2}\right), \tag{15}\]
\[\frac{\partial{\cal L}}{\partial c} = 2\lambda_{2}\left((A^{\top}WA\otimes I_{s})c-(A^{\top}W\otimes I _{s})y\right), \tag{16}\]
\[\frac{\partial{\cal L}}{\partial\lambda_{1}} = w^{\top}e_{m}-1,\]
\[\frac{\partial{\cal L}}{\partial\lambda_{2}} = (w\otimes e_{s})^{\top}(f(t,c)-y)^{2}-\overline{\rm E^{2}}\,.\]
The last term in (15) is the vector of length \(m\)
\[\lambda_{2}\left(||f(t_{1},c)-y_{1}||_{2}^{2},\ldots,||f(t_{m},c)-y_{m}||_{2}^ {2}\right)^{\top}, \tag{17}\]
while (16) comes from the equivalence of formulae (5) and (6), after observing that the first two terms in the Lagrangian (14) do not depend on the spline coefficients \(c_{i}\). The stationary points of \({\cal L}\) are the solutions of the following set of \(sn+m+2\) equations in as many unknowns \(c\in\mathbb{R}^{sn}\), \(w\in\mathbb{R}^{m}\), \(\lambda_{1}\) and \(\lambda_{2}\):
\[(A^{\top}WA\otimes I_{s})c-(A^{\top}W\otimes I_{s})y = 0, \tag{18}\] \[(w\otimes e_{s})^{\top}(f(t,c)-y)^{2}-\overline{\rm E^{2}} = 0,\] (19) \[e_{m}+\log w+\lambda_{1}e_{m}+\lambda_{2}\left((I_{m}\otimes e_{ s})^{\top}(f(t,c)-y)^{2}\right) = 0,\] (20) \[w^{\top}e_{m}-1 = 0. \tag{21}\]
By exploiting the weights normalization condition (21), we can easily remove the unknown \(\lambda_{1}\). To this end, we first recast equation (20) as
\[w=\exp(-(1+\lambda_{1}))\cdot\exp\left(-\lambda_{2}\left((I_{m}\otimes e_{s})^{ \top}(f(t,c)-y)^{2}\right)\right).\]
Multiplying both sides by \(e_{m}^{\top}\) and taking into account (21) and (17) yields
\[1=\exp(-(1+\lambda_{1}))\cdot Q(c,\lambda_{2}),\qquad\mbox{with }Q(c,\lambda_{ 2})=\sum_{i=1}^{m}\exp\left(-\lambda_{2}||f(t_{i},c)-y_{i}||_{2}^{2}\right)\]
and hence
\[w=\frac{1}{Q(c,\lambda_{2})}\cdot\exp\left(-\lambda_{2}\left((I_{m}\otimes e_ {s})^{\top}(f(t,c)-y)^{2}\right)\right) \tag{22}\]
that will replace (20) and (21). Plugging (22) into (19) we arrive at the final shape of the system to be solved:
\[(A^{\top}WA\otimes I_{s})c-(A^{\top}W\otimes I_{s})y = 0, \tag{23}\] \[\sum_{i=1}^{m}||f(t_{i},c)-y_{i}||_{2}^{2}\cdot\exp\left(- \lambda_{2}||f(t_{i},c)-y_{i}||_{2}^{2}\right)-\sum_{i=1}^{m}\exp\left(- \lambda_{2}||f(t_{i},c)-y_{i}||_{2}^{2}\right)\overline{\mathrm{E}^{2}} = 0,\] (24) \[w-\frac{1}{Q(c,\lambda_{2})}\cdot\exp\left(-\lambda_{2}\left((I_{m}\otimes e _{s})^{\top}(f(t,c)-y)^{2}\right)\right) = 0. \tag{25}\]
Before facing the question of how to solve the system numerically, a few remarks are in order:
* (23) is nothing but the normal linear system one would get when handling the least squares problem with constant weights (see (7)). It can be therefore expressed as the overdetermined system (8) which has to be solved in the least squares sense;
* (24) is a scalar equation that, for a given vector \(c\), may be easily solved with respect to the Lagrange multiplier \(\lambda_{2}\) via a Newton or Newton-like iteration;
* equation (25) is explicit with respect to the unknown \(w\), for given \(\lambda_{2}\) and \(c\).
Therefore, a quite natural technique to solve the nonlinear system (23)-(25) is yielded by the hybrid iteration summarized in Algorithm 1 (_tol_ is an input tolerance for the stopping criterion).
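Although Algorithm 1 is not reproduced here, the three remarks above suggest the following NumPy sketch of the hybrid iteration; the tolerances, iteration caps, and stopping test on \(w\) are illustrative choices of ours.
```python
import numpy as np

def mewls(A, y, E2, w0=None, lam0=0.0, tol=1e-10, max_iter=200):
    """Hybrid iteration for the stationarity system (23)-(25) (sketch).

    A is the m-by-n collocation matrix, y the m-by-s data array and E2 the
    prescribed mean squared error. Each sweep performs: (i) the weighted
    least squares step (23) for c, (ii) a scalar Newton iteration on
    lambda_2 in (24), and (iii) the explicit weight update (25)."""
    m = A.shape[0]
    w = np.full(m, 1.0 / m) if w0 is None else np.asarray(w0, float).copy()
    lam = lam0
    for _ in range(max_iter):
        sw = np.sqrt(w)[:, None]
        c, *_ = np.linalg.lstsq(sw * A, sw * y, rcond=None)      # step (23)
        r = np.sum((A @ c - y) ** 2, axis=1)   # r_i = ||f(t_i, c) - y_i||^2
        for _ in range(50):                    # Newton iteration on (24)
            e = np.exp(-lam * r)
            g = r @ e - E2 * e.sum()
            dg = -(r ** 2) @ e + E2 * (r @ e)
            step = g / dg
            lam -= step
            if abs(step) < tol:
                break
        w_new = np.exp(-lam * r)
        w_new /= w_new.sum()                   # step (25): w_i = e^{-lam r_i}/Q
        if np.max(np.abs(w_new - w)) < tol:    # weights have settled
            return c, w_new, lam
        w = w_new
    return c, w, lam
```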
In order to improve the convergence properties of the nonlinear scheme, we employ a continuation technique on \(\overline{\mathrm{E}^{2}}\). In more detail, we define a sequence of increasing reduction factors
\[1=r_{0}<r_{1}<r_{2}<\cdots<r_{N}=\frac{\overline{\mathrm{E}^{2}}_{\mathrm{uw}} }{\overline{\mathrm{E}^{2}}}\]
and the corresponding sequence of mean squared errors
\[\overline{\mathrm{E}^{2}_{j}}=\frac{1}{r_{j}}\,\overline{\mathrm{E}^{2}}_{ \mathrm{uw}},\quad j=0,\ldots,N, \tag{26}\]
so that \(\overline{\mathrm{E}^{2}_{0}}=\overline{\mathrm{E}^{2}}_{\mathrm{uw}}\) and \(\overline{\mathrm{E}^{2}_{N}}=\overline{\mathrm{E}^{2}}\). Then, for \(j=0,\ldots,N\), we perform lines 2-5 of Algorithm 1 taking care that the output quantities \(c^{(k)}\), \(\lambda_{2}^{(k)}\), \(W^{(k)}\) obtained at step \(j\) are used as input parameters for the subsequent step \(j+1\).
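Reusing the `mewls` sketch above, the continuation sweep may be written as follows (the grid of reduction factors is illustrative):
```python
import numpy as np

def mewls_continuation(A, y, r_factors=(1, 2, 4, 8, 16, 32, 64, 128, 256, 500)):
    """Continuation sweep over increasing reduction factors (sketch).

    Each stage warm-starts from the previous weights and multiplier,
    tracing the discrete homotopic family described in the text."""
    m = A.shape[0]
    c0, *_ = np.linalg.lstsq(A, y, rcond=None)          # OLS fit, uniform weights
    E2_uw = np.mean(np.sum((A @ c0 - y) ** 2, axis=1))  # cf. (10)
    w, lam, family = np.full(m, 1.0 / m), 0.0, []
    for r in r_factors:                                 # E2_j = E2_uw / r_j, cf. (26)
        c, w, lam = mewls(A, y, E2_uw / r, w0=w, lam0=lam)
        family.append((r, c, w))
    return family
```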
A further relevant motivation for employing such a continuation technique is that it generates a discrete family of homotopic curves, parametrized by \(\overline{\mathrm{E}^{2}_{j}}\), admitting the OLS and the MEWLS solutions as initial and final configurations respectively. Each element in this family brings a specific weights distribution (and entropy value) and acts as a starting guess for the subsequent approximation curve. Therefore, the overall procedure can be interpreted as an improvement on the OLS approximation in that, by reducing the mean squared error progressively, it smoothly deforms the initial shape of the spline curve to get rid of outliers. An illustration is provided in the first example of the next section.
Finally, it is worth noticing that the resulting weights may be exploited for classification purposes. Indeed, the original data set \(D\) may be split into two disjoint subsets: \(D=D_{1}\cup D_{2}\), where \(D_{1}\) contains the inliers while \(D_{2}\) identifies the outliers. To this end, given a small enough tolerance \(tol\), one can set, for example,
\[D_{2}=\{(x_{i},y_{i})\in D\ |\ w_{i}<tol\cdot\max_{j}w_{j}\},\qquad D_{1}=D-D_{2}. \tag{27}\]
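In code, the split (27) reduces to thresholding the computed weights; a sketch (the function name is ours):

```python
import numpy as np

def split_inliers_outliers(w, tol=1e-4):
    """Classification rule (27): D_2 collects the points with negligible weight."""
    is_outlier = w < tol * np.max(w)
    D1 = np.where(~is_outlier)[0]      # indices of the inliers
    D2 = np.where(is_outlier)[0]       # indices of the outliers
    return D1, D2
```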
## 4 Numerical illustrations
To showcase the potential of the MEWLS spline approximation, we present three numerical experiments using synthetic data points. The first experiment focuses on a spline function fitting problem, aiming at elucidating the continuation technique (26) and the use of (27) for the automatic detection of outliers. The second and third examples involve approximating a set of data points with a spline curve in the plane and in 3D space, respectively. All the numerical tests have been implemented in Matlab (R2023a) on a 3.6 GHz Intel I9 core computer with 32 GB of memory. References to colors have been included for the online version of the manuscript.
### Example 1
We consider a dataset comprising 44 points in the square \([0,1]\times[0,1]\), out of which 32 closely follow a given profile, while the remaining 12 consistently deviate from it. To fit the data, we employ a spline of degree \(d=2\), defined on a regular and uniform knot sequence consisting of 20 nodes, covering the interval \([0,1]\).
In the top-left picture of Figure 1, we observe the data set along with the ordinary least squares approximation. We see that the OLS spline approximation fails to accurately reproduce the correct profile due to the strong influence of the \(12\) anomalous points. Therefore, we aim to improve the approximation by decreasing the weighted mean squared error while utilizing the maximal-entropy argument to make an optimal weights selection. To this end, we consider a sequence of reduction factors distributed over the interval \([1,500]\). For graphical clarity, we set \(N=50\) in (26) to mimic the behavior of formula (13), where the variable \(r\) continuously varies within the specified interval. Algorithm 1 generates a sequence of \(50\) homotopic functions with parameter \(r\in[1,500]\).
The top-right picture of Figure 1 displays two such functions, one corresponding to \(r=2\) (\(\overline{\mathrm{E}^{2}}=\overline{\mathrm{E}^{2}}_{\mathrm{uw}}\,/2\), solid line), and the other to \(r=4\) (\(\overline{\mathrm{E}^{2}}=\overline{\mathrm{E}^{2}}_{\mathrm{uw}}\,/4\), dashed line). As the reduction factor \(r\) increases, the maximum entropy principle deforms the shape of the original OLS solution by adjusting the weights to ensure that the maximum number of points contribute while still adhering to the mean squared error constraint.
In the bottom-left picture of Figure 1 we can see the final shape of the approximating spline, corresponding to \(r=500\) (\(\overline{\mathrm{E}^{2}}=\overline{\mathrm{E}^{2}}_{\mathrm{uw}}\,/500\)). We can see that it nicely conforms to the profile underlying the given data set. The use of formula (27) with \(tol=10^{-4}\) correctly detects \(12\) outliers which are surrounded by small circles in the picture.
Finally, the bottom-right picture of Figure 1 illustrates the behavior of the entropy (12) as a function of the scaling factor \(r\). As expected, reducing \(\overline{\mathrm{E}^{2}}\) results in a decrease of the entropy associated with the weights distribution. The appropriate choice of \(\overline{\mathrm{E}^{2}}\) depends on the context and, in particular, on the expected accuracy of the model in the absence of outliers. An automatic identification of a suitable value for \(\overline{\mathrm{E}^{2}}\) may be inferred by examining the rate of change in the spline approximations as the scaling factor \(r\) increases, which is closely related to the behavior of the entropy function \(H\) as a function of \(r\). This aspect will be the subject of future research.
### Example 2
We address the problem of approximating the arithmetic spiral defined by the equations
\[\left\{\begin{array}{rcl}x(t)&=&(a+bt)\cos(t),\\ y(t)&=&(a+bt)\sin(t),\end{array}\right.\]
with \(a=1\), \(b=4\), \(t\in[-a/b,4\pi]\), which ensures that the spiral originates at the origin. To this end, we create a data set consisting of \(N=200\) points sampled along the spiral and then introduce random noise to \(100\) of them, specifically targeting the odd-numbered ones. In more detail, after setting \(h=4\pi/(N-1)\), our data set is defined as follows:
\[\left\{\begin{array}{rcll}t_{i}&=&(i-1)h,&i=1,\ldots,N,\\ (x_{i},y_{i})&=&(x(t_{i}),y(t_{i})),&\text{if $i$ is even},\\ (x_{i},y_{i})&=&(x(t_{i})+\delta_{x}^{(i)},y(t_{i})+\delta_{y}^{(i)}),&\text{ if $i$ is odd},\end{array}\right.\]
where \(\delta_{x}^{(i)},\delta_{y}^{(i)}\in\mathcal{N}(0,\sigma^{2})\) are random variables distributed normally with mean \(0\) and variance \(\sigma^{2}=30\). Since, for the specified range of \(t\), the spiral is entirely enclosed in the square \(S=[-60,60]^{2}\), for visualization clarity, we iterate the generation of values \(\delta_{x}^{(i)},\delta_{y}^{(i)}\) until \((x_{i},y_{i})\) falls within \(S\), for each odd index \(i\).
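A possible NumPy transcription of this data-generation recipe is sketched below (the random seed is arbitrary; the helix data of the next example can be produced analogously):

```python
import numpy as np

rng = np.random.default_rng(0)                    # arbitrary seed
a, b, N = 1.0, 4.0, 200
sigma = np.sqrt(30.0)                             # standard deviation, sigma^2 = 30
h = 4 * np.pi / (N - 1)
t = np.arange(N) * h                              # t_i = (i - 1) h, i = 1, ..., N
x = (a + b * t) * np.cos(t)
y = (a + b * t) * np.sin(t)
for i in range(0, N, 2):                          # odd-numbered points (1-based)
    while True:                                   # resample until inside S = [-60, 60]^2
        dx, dy = rng.normal(0.0, sigma, size=2)
        if max(abs(x[i] + dx), abs(y[i] + dy)) <= 60:
            x[i] += dx
            y[i] += dy
            break
```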
The left picture of Figure 2 portrays the dataset \(\left(x_{i},y_{i}\right)_{i=1}^{N}\) along with the spline approximations using ordinary least squares (dashed line) and maximum entropy weighted least squares (solid line). Notably, while the OLS approximation struggles to capture the true spiral due to the presence of outliers, the MEWLS spline curve faithfully reproduces the unperturbed spiral \((x(t),y(t))\).
### Example 3
We replicate a procedure akin to the one executed in the prior spiral example but address our attention to a circular helix defined by the equations
\[\left\{\begin{array}{rcl}x(t)&=&r\cos(2\pi t),\\ y(t)&=&r\sin(2\pi t),\\ z(t)&=&ct,\end{array}\right.\]
with \(r=2,c=1\) and \(t\in[-4,4]\), so the helix is enclosed in the cube \([-4,4]^{3}\). We begin with a data set \(\left(x_{i},y_{i},z_{i}\right)_{i=1}^{N}\) consisting of \(N=400\) points sampled along the helix but, differently from what was done in Example 4.2, we now introduce random noise to a randomly chosen subset of these points. More precisely, we first compute a subset \(\Omega\) obtained by randomly extracting \(M\) points from the set of indices \(\{1,2,\ldots,N\}\). Then we define
\[\left\{\begin{array}{rcll}t_{i}&=&-4+(i-1)h,&i=1,\ldots,N,\\ (x_{i},y_{i},z_{i})&=&(x(t_{i}),y(t_{i}),z(t_{i})),&\text{if $i\not\in\Omega$},\\ (x_{i},y_{i},z_{i})&=&(x(t_{i})+\delta_{x}^{(i)},y(t_{i})+\delta_{y}^{(i)},z(t_{i})+\delta_{z}^{(i)}),&\text{if $i\in\Omega$},\end{array}\right.\]
where \(h=8/(N-1)\), so that the parameters \(t_{i}\) cover the interval \([-4,4]\).
Here, \(\delta_{x}^{(i)},\delta_{y}^{(i)},\delta_{z}^{(i)}\in\mathcal{N}(0,20)\) represent random variables drawn from a normal distribution with mean \(0\) and variance \(\sigma^{2}=20\). Again, for visualization clarity, for each index \(i\in\Omega\) we iterate the generation of the perturbation values \(\delta_{x}^{(i)},\delta_{y}^{(i)},\delta_{z}^{(i)}\) until \(\left(x_{i},y_{i},z_{i}\right)\) falls within the cube \(S=[-4,4]^{3}\). The right picture of Figure 2 displays the dataset \(\left(x_{i},y_{i},z_{i}\right)_{i=1}^{N}\) along with the spline approximations using ordinary least squares (irregular solid line) and maximum entropy weighted least squares (helix-shaped solid line). Again the MEWLS spline curve faithfully reproduces the shape of the original helix. The results obtained in both this example and the previous one underscore the effectiveness of MEWLS in successfully detecting and eliminating outliers from highly noisy datasets. Further instances based on real data are illustrated in the next section.
## 5 A few applications to real data
### Approximating the main sequence in a Hertzsprung-Russell diagram
The Hertzsprung-Russell (HR) diagram is a graphical representation of stars, mapping the correlation between their absolute magnitudes or luminosities versus their color indices or temperatures, allowing astronomers to discern distinct patterns in stellar evolution [17; 18].
The absolute magnitude of a star is a measure of its intrinsic brightness or luminosity, unaltered by its distance from Earth. It is the apparent magnitude (brightness as seen from Earth) that a star would have if it were located at a standard distance of 10 parsecs (about 32.6 light-years) away. Essentially, the absolute magnitude allows astronomers to compare the luminosities of stars irrespective of their varying distances from us.
The B-V color index is a parameter that characterizes a star's color and temperature. It is the difference between the star's apparent magnitudes in the blue (B) and visual (V) parts of the electromagnetic spectrum. Blue stars have negative B-V values, while redder stars have positive values. This index is crucial in categorizing stars by their spectral types, indicating whether a star is hotter (blue) or cooler (red).
Together, the absolute magnitude and B-V color index are vital tools in understanding stars' properties, evolutionary stages, and positions within the Hertzsprung-Russell diagram. As an example, the left picture of Figure 3 shows the HR diagram for the Yale Trigonometric Parallax Dataset
[19] comprising more than 6000 catalogued stars. This astronomical resource provides measurements of stellar distances using the trigonometric parallax method, a technique employed to determine the distance to a star by measuring its apparent shift in position against more distant background stars as the Earth orbits the Sun. Besides observed parallaxes (in arcsec), the Yale catalogue also includes the B-V color index and the apparent V magnitude. The absolute magnitude is then obtained by means of the formula
\[\mathrm{absolute\ magnitude}=\mathrm{apparent\ V\ magnitude}+5(\log_{10}( \mathrm{observed\ parallax})+1).\]
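In code, this conversion is a direct transcription of the formula above (the function name is ours):

```python
import numpy as np

def absolute_magnitude(apparent_V, parallax_arcsec):
    """M = m_V + 5 (log10(parallax) + 1), with the parallax in arcsec."""
    return apparent_V + 5.0 * (np.log10(parallax_arcsec) + 1.0)
```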
At its core, the diagram features a continuous and well-defined band known as the main sequence. This band comprises the vast majority of genuine stars in the cosmos, including our own Sun with an absolute magnitude of 4.8 and a B-V color index of 0.66.
Located in the lower-left portion of the diagram are the white dwarfs, while the upper part accommodates the subgiants, giants, and supergiants. This layout visually captures the diverse stages of stellar evolution with the white dwarfs representing stars in their final stages of evolution.
One of the diagram's remarkable applications is in determining the distance between Earth and distant celestial objects like star clusters or galaxies.
In this example, our aim is to accurately approximate the main sequence's shape using an appropriate spline curve and further categorize stars through color assignments. To achieve this, we employed a spline of degree \(d=3\) along with a regular knot sequence
\[t=[0,0,0,0,0.286,0.397,0.658,0.757,1,1,1,1].\]
The left picture in Figure 3 displays the outcome of the ordinary least squares approximation (indicated by the blue line in the color image). This method evidently fails to accurately replicate the main sequence's distinctive form due to the presence of giants and white dwarfs. Conversely, the maximal-entropy least squares approximation successfully captures the main sequence's true shape. By determining the distribution of weights based on the entropy-driven procedure, we assigned distinct color gradients to each star. This color differentiation effectively highlights the discrepancies between these stars and those belonging to the main sequence. As the corresponding weights decrease, the intensity of magenta and yellow pixels progressively intensifies. This approach not only improves the accuracy of the main sequence representation but also facilitates the identification of stars that deviate from its expected characteristics.
### Detecting train rails in a railway infrastructure and surrounding environment
In the present example, we delve into a segmentation task performed on a point cloud that portrays a railway environment, captured using a terrestrial laser scanning system. An instance of such a scenario is presented in Figure 4, which will serve as the subject of our examination. Here, we observe a curved railway emerging from a tunnel, enveloped by dense vegetation. Our aim revolves around identifying the train rails within this scenario and approximating their shape using a suitable spline curve. Conducting such an analysis can yield valuable insights into the transportation system and aid in identifying potential issues that could impact its operational effectiveness (see [20] and reference therein).
It is worth underscoring that the essence of this example lies in testing the entropy-based approach on a highly noisy dataset, where the set \(D_{1}\) of inliers is significantly dwarfed by the set \(D_{2}\) of outliers. As a result, the technique showcased in this example serves as a proof of concept
rather than a definitive solution for the intended problem (for a more effective identification of the rails, refer to works such as [21; 22; 23]).
A point cloud is a data set that realizes a digital representation of a physical environment or object in a three-dimensional space. It is arranged in a structured array housing fields that store various attributes for each point within the cloud. These attributes encompass 3D coordinates, distance ranges, color information, intensity measurements, and potentially other geometric or spectral data. We will utilize the intensity parameter, a measure of the reflectivity of the material of the object containing the sample point, to identify reflective elements like train rails.
Within the segmentation procedure, the intensity field is frequently used to condense the initial array of data points into a smaller subset pertinent to the analysis. In fact, noteworthy structures, including train rails and overhead wires, exhibit similar intensity attributes. This correspondence arises from the inherent connection between a surface's reflective characteristics and its constituent material. For instance, train rails are predominantly composed of steel, leading to nearly uniform intensity readings from the laser sensor along the rail's length. By relying on the intensity parameter as a filtering criterion, we can effectively isolate the majority of points situated on the rails.
Building upon the analysis conducted in [23] for a point cloud of similar nature, our approach to reduce the size of the original point cloud, while retaining the majority of rail points, involves extracting those with intensity values not exceeding 65. Additionally, due to the level nature of the terrain under consideration, we omit the vertical component of the points and instead focus on a two-dimensional projection of the filtered point cloud. This projection is illustrated in the leftmost image of Figure 5 and forms a data set comprising 304911 points. The lower-right section of the image corresponds to the segment of the rails situated within the tunnel. This region exhibits a much cleaner appearance compared to the area outside the tunnel. Indeed, in the external environment, a considerable number of points associated with vegetation are regrettably retained even after the filtering procedure. This introduces a notable degree of noise into the data.
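Assuming the scan has been loaded into a coordinate array `points` of shape \((N,3)\) and an intensity array `intensity` of shape \((N,)\) (names and layout are our assumptions), the filtering and projection step might be sketched as follows:

```python
import numpy as np

def filter_rail_candidates(points, intensity, threshold=65):
    """Keep points with intensity <= threshold and drop the vertical component."""
    mask = intensity <= threshold          # reflective (steel) points survive
    return points[mask][:, :2]             # 2D projection of the filtered cloud
```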
The right image in Figure 5 displays the ordinary least squares spline approximation curve (solid blue line). By referring to equations (1)-(2), this curve is obtained through a spline of degree \(d=2\) and \(n=15\), utilizing a uniform \((d+1)\)-regular knot distribution. Evidently, the OLS approximation does not deviate significantly from the shape traced by the rail tracks, making it a suitable initial estimate within Algorithm 1 for computing the maximal-entropy weighted least squares spline approximation curve.
This MEWLS curve is depicted in the same graph as a dashed red line. It is clear that the MEWLS spline closely captures the profile of the upper rail, demonstrating a very high accuracy. An inspection of the weights through formula (27) reveals that, in this specific example, the number of outliers exceeds the number of inliers by more than six times.
A comparable process can subsequently be applied to acquire the approximation for the lower rail. This involves eliminating the points related to the upper rail from the dataset and then running Algorithm 1 again (for visual clarity, we do not display this latter approximation).
In conclusion, the MEWLS approach effectively enhances the accuracy of the initial OLS approximation and leads to a precise parametric representation of the rails.
### Detecting and scoring outliers in an environmental data set
The final test case is drawn from a study in [6] and explores an environmental dataset accessible through the R-package _openair_[24]. This dataset encompasses hourly readings of wind speed, wind direction, and concentrations of pollutants such as NO\({}_{x}\), NO\({}_{2}\), O\({}_{3}\), PM\({}_{10}\), SO\({}_{2}\), CO, and
PM\({}_{2.5}\) recorded at Marylebone (London) spanning from January 1, 1998, to June 23, 2005. For comparison purposes, we conform to the choice in [6] and focus on a specific subset of this dataset, only comprising the O\({}_{3}\) concentrations during December 2002. This particular segment encompasses a total of 744 observations, while also featuring several instances of missing data points.
The dots depicted in Figure 6 provide a visual representation of the O\({}_{3}\) concentrations, measured in parts per billion (ppb), over the specified time frame. To approximate this univariate time series, we employ a spline function with degree \(d=3\), defined on a uniform \((d+1)\)-regular knots distribution. In order to capture the erratic nature of the data, we opt for a number of coefficients \(n\) equal to half the data points' count. Figure 6 only displays the MEWLS approximation (red solid line).
In contrast to the approach adopted in prior examples, our strategy for obtaining the approximating spline varies here. Rather than predefining the reduction factor, we pursue a distinct perspective. Specifically, we establish the number of outlier candidates, denoted as \(N\), and iteratively reduce the \(\overline{\mathrm{E}^{2}}\) value until \(N\) data points are encompassed within the outlier set \(D_{2}\). This methodology introduces a natural ranking within \(D_{2}\), assigning scores to each prospective outlier. This is readily accomplished using (27), where the \(i\)th point entering \(D_{2}\) receives a score of \(i\). In Figure 6, outliers are denoted by points enclosed in green circles, each indicating the corresponding score.
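One hedged sketch of this scoring loop, reusing the `mewls_iteration` routine sketched earlier (the multiplicative shrinking of \(\overline{\mathrm{E}^{2}}\) is our choice, and points entering \(D_{2}\) within the same step are ranked by index):

```python
def score_outliers(A, y, n_outliers, shrink=0.9, tol=1e-4):
    """Shrink E2 until n_outliers points have entered D_2, scoring them in order."""
    C = np.linalg.lstsq(A, y, rcond=None)[0]
    E2 = np.mean(np.sum((A @ C - y) ** 2, axis=1))   # start from the OLS error level
    scores, w, lam2 = {}, None, 0.0
    while len(scores) < n_outliers:
        E2 *= shrink                                  # reduce the target error
        C, lam2, w = mewls_iteration(A, y, E2, w0=w, lam2_0=lam2)
        for i in np.where(w < tol * np.max(w))[0]:
            scores.setdefault(int(i), len(scores) + 1)  # i-th entrant gets score i
    return scores
```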
The outcomes obtained align with those presented in [6], particularly those based upon the extreme value theory. This systematic scoring approach has the potential to streamline the decision-making process, aiding specialists in identifying the data points that merit closer investigation or intervention.
## 6 Conclusions
In real-world scenarios, data quality directly impacts the performance of subsequent analytical processes, so the importance of effective preprocessing techniques and robust fitting procedures has become increasingly evident.
In this context, we have introduced an entropy-based weighting methodology for determining spline approximations of multivariate time series. In contrast to the ordinary least squares approach, which displays sensitivity to corrupted data, the MEWLS spline approximation effectively mitigates the impact of outliers and noise even when handling large and highly noisy datasets. Its ability to accurately extract meaningful information from noisy backgrounds has been illustrated through various synthetic and real-world examples.
One limitation when compared to the OLS approach is that, even for linear models, the resulting algebraic system becomes nonlinear and its solution requires the implementation of an appropriate iterative scheme. In this regard, the OLS solution can serve as an initial estimate. The numerical illustrations underscore that the MEWLS solution significantly outperforms the classical OLS procedure. Nonetheless, the efficient resolution of this nonlinear system warrants dedicated investigation and will be a focus of future research.
## Acknowledgements
Felice Iavernaro acknowledges the contribution of the National Recovery and Resilience Plan, Mission 4 Component 2 - Investment 1.4 - NATIONAL CENTER FOR HPC, BIG DATA AND
QUANTUM COMPUTING - Spoke 5 - Environmental and Natural Disasters, under the NRRP MUR program funded by the European Union - NextGenerationEU - (CUP H93C22000450007).
Luigi Brugnano and Felice Iavernaro thank the GNCS for its valuable support under the INDAM-GNCS project CUP_E55F22000270001.
|
2309.06967 | On motives of parabolic Higgs bundles and parabolic connections | Let $X$ be a compact Riemann surface of genus $g \geq 2$ and let $D\subset X$
be a fixed finite subset. We considered the moduli spaces of parabolic Higgs
bundles and of parabolic connections over $X$ with the parabolic structure over
$D$. For generic weights, we showed that these two moduli spaces have equal
Grothendieck motivic classes and their $E$-polynomials are the same. We also
show that the Voevodsky and Chow motives of these two moduli spaces are also
equal. We showed that the Grothendieck motivic classes and the $E$-polynomials
of parabolic Higgs moduli and of parabolic Hodge moduli are closely related.
Finally, we considered the moduli spaces with fixed determinants and showed
that the above results also hold for the fixed determinant case. | Sumit Roy | 2023-09-13T14:03:08Z | http://arxiv.org/abs/2309.06967v2 | # On motives of parabolic Higgs bundles and parabolic connections
###### Abstract.
Let \(X\) be a compact Riemann surface of genus \(g\geq 2\) and let \(D\subset X\) be a fixed finite subset. We considered the moduli spaces of parabolic Higgs bundles and of parabolic connections over \(X\) with the parabolic structure over \(D\). For generic weights, we showed that these two moduli spaces have equal Grothendieck motivic classes and their \(E\)-polynomials are the same. We also show that the Voevodsky and Chow motives of these two moduli spaces are also equal. We showed that the Grothendieck motivic classes and the \(E\)-polynomials of parabolic Higgs moduli and of parabolic Hodge moduli are closely related. Finally, we considered the moduli spaces with fixed determinants and showed that the above results also hold for the fixed determinant case.
Key words and phrases: Motives, Grothendieck motives, Voevodsky motives, Chow motives, Higgs bundles, Parabolic connections, Hodge moduli, \(E\)-polynomial.

2020 Mathematics Subject Classification: 14C15, 14C30, 14D20, 14D23, 70G45.

E-mail: [email protected]

Address: Center for Geometry and Physics, Institute for Basic Science (IBS), Pohang 37673, Korea.
## 1. Introduction
In this paper we consider the moduli spaces of parabolic Higgs bundles ([4], [6], [7], [8], [9]) and parabolic connections ([9], [17]) over a compact Riemann surface \(X\) of genus \(g\geq 2\). These objects carry rich geometric structures that appear in several areas, such as algebraic geometry, differential geometry, mirror symmetry, mathematical physics, and Langlands duality. We prove equalities of some motivic classes (namely Grothendieck motives, Voevodsky motives and Chow motives) for these two moduli spaces. We also prove that their \(E\)-polynomials are equal.
A _parabolic bundle_\(E_{*}\) over \(X\) is a holomorphic vector bundle \(E\) over \(X\) together with a weighted flag over a fixed finite set \(D\subset X\), called the _parabolic structure_. These weights are real numbers between \(0\) and \(1\). A _parabolic Higgs bundle_ is a pair \((E_{*},\Phi)\), where \(E_{*}\) is a parabolic bundle and \(\Phi:E_{*}\to E_{*}\otimes K(D)\) is a parabolic Higgs field, where \(K\) is the canonical bundle over \(X\). On the other hand, a _parabolic connection_ is a pair \((E_{*},\mathcal{D})\) where \(\mathcal{D}:E\to E\otimes K(D)\) is a logarithmic connection on the underlying vector bundle satisfying some properties.
In [9], Simpson considered an algebraic family over \(\mathbb{C}\), which he called the Hodge moduli space, such that the fibres over \(0\) and \(1\) are exactly the moduli spaces of Higgs bundles and of connections, respectively. Moreover, there is a homeomorphism between these two moduli spaces, famously known as the non-abelian Hodge correspondence (see [8], [9], [10]). These two moduli spaces have singularities in general, but if the rank and degree are coprime, then they are smooth. For coprime rank and degree, Hausel and Thaddeus in [20, Theorem 6.2] proved that the \(E\)-polynomials of these two moduli spaces are equal and that they have pure Hodge structure. There is a natural \(\mathbb{C}^{*}\)-action on the Hodge moduli space, which makes it a semiprojective variety [27]. Using the smoothness and semiprojectivity of the Hodge moduli space, Hoskins and Lehalleur in [31] recently established a motivic version of the non-abelian Hodge correspondence; in that paper, they proved that the moduli of Higgs bundles and of connections have equal motivic classes in various setups. Later, Fedorov, A. Soibelman and Y. Soibelman in [30] computed motivic Donaldson-Thomas invariants of these two moduli spaces in the parabolic setting.
In this paper, we consider three types of motives, namely Grothendieck motives, Voevodsky motives and Chow motives. Let \(\mathcal{V}_{\mathbb{C}}\) denote the category of complex quasi-projective varieties. Let \(K(\mathcal{V}_{\mathbb{C}})\) denote the _Grothendieck ring of varieties_ and let \(\hat{K}(\mathcal{V}_{\mathbb{C}})\) be the dimensional completion. Let \(Z\) be a quasi-projective variety. Then \([Z]\in\hat{K}(\mathcal{V}_{\mathbb{C}})\) is called the _motive_ of \(Z\). If \(Z\) is \(n\)-dimensional with pure Hodge structure, then the corresponding \(E\)-polynomial is defined by
\[E(Z)=E(Z)(u,v)=\sum_{p,q=0}^{n}(-1)^{p+q}h_{c}^{p,q}(Z)u^{p}v^{q}.\]
Let \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\), \(\mathcal{M}_{\rm pc}(r,d,\alpha)\) and \(\mathcal{M}_{\rm Hod}(r,d,\alpha)\) be the moduli space of parabolic Higgs bundles, the moduli space of parabolic connections and the parabolic Hodge moduli space of rank \(r\), degree \(d\) and generic weights \(\alpha\) over \(X\), respectively. We prove the following theorem.
**Theorem 1.1**.: _In \(\hat{K}(\mathcal{V}_{\mathbb{C}})\) we have the following motivic equalities_
\[[\mathcal{M}_{\rm Higgs}(r,d,\alpha)]=[\mathcal{M}_{\rm pc}(r,d,\alpha)]\,\, \,{\rm and}\,\,\,[\mathcal{M}_{\rm Hod}(r,d,\alpha)]=\mathbb{L}[\mathcal{M}_ {\rm Higgs}(r,d,\alpha)].\]
_Therefore, we have the following equalities of the \(E\)-polynomials_
\[E(\mathcal{M}_{\rm Higgs}(r,d,\alpha))=E(\mathcal{M}_{\rm pc}(r,d,\alpha))\, \,\,{\rm and}\,\,\,E(\mathcal{M}_{\rm Hod}(r,d,\alpha))=uvE(\mathcal{M}_{\rm Higgs }(r,d,\alpha)).\]
Here \(\mathbb{L}\) is the Lefschetz motive. To prove this theorem, we first prove that the moduli spaces \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) and \(\mathcal{M}_{\rm Hod}(r,d,\alpha)\) are semiprojective. For details of the proof see Theorem 4.3.
We then consider Voevodsky's category of geometric motives \(DM_{\rm gm}(\mathbb{C};R)\) over \(\mathbb{C}\) with coefficients in a ring \(R\). We denote by \(M_{\rm gm}(X)_{R}\) the geometric motive of the smooth variety \(X\) with coefficients in \(R\). Also, we consider the category of Chow motives \(\mathbf{Chow}^{\rm eff}(\mathbb{C};R)\) which has an embedding into the Voevodsky's category of effective geometric motives \(DM_{\rm gm}^{\rm eff}(\mathbb{C};R)\subset DM_{\rm gm}(\mathbb{C};R)\). We denote by \(C(X)_{R}\) the Chow motive of a smooth variety \(X\) with coefficients in \(R\). Then using the semi-projectivity, we prove the following motivic equalities
**Theorem 1.2**.: _For any ring \(R\), we have the following isomorphisms of motives,_
\[M_{\rm gm}\big{(}\mathcal{M}_{\rm Higgs}(r,d,\alpha)\big{)}_{R}\cong M_{\rm gm }\big{(}\mathcal{M}_{\rm pc}(r,d,\alpha)\big{)}_{R}\in DM_{\rm gm}(\mathbb{C} ;R).\]
_and_
\[C\big{(}\mathcal{M}_{\rm Higgs}(r,d,\alpha)\big{)}_{R}\cong C\big{(}\mathcal{ M}_{\rm pc}(r,d,\alpha)\big{)}_{R}\in\textbf{Chow}^{\rm eff}(\mathbb{C};R).\]
For details of the proof, see Theorems 5.1 and 5.2.
Finally, in the last section we consider the moduli spaces with fixed determinants and prove the above two theorems in the fixed determinant setup; see Theorems 6.1, 6.2 and 6.3.
## 2. Preliminaries
### Parabolic bundles
Let \(X\) be a compact Riemann surface of genus \(g\geq 2\) and let \(D=\{p_{1},\dots,p_{n}\}\subset X\) be a fixed subset of \(n\geq 1\) distinct marked points of \(X\).
**Definition 2.1**.: A _parabolic bundle_\(E_{*}\) of rank \(r\) (assuming \(r\geq 2\)) over \(X\) is a holomorphic vector bundle \(E\) of rank \(r\) over \(X\) endowed with a parabolic structure along the divisor \(D\), i.e. for every point \(p\in D\), we have
1. a filtration of subspaces \[E_{p}\eqqcolon E_{p,1}\supsetneq E_{p,2}\supsetneq\cdots\supsetneq E_{p,r_{p} }\supsetneq E_{p,r_{p}+1}=\{0\},\]
2. a sequence of real number satisfying \[0\leq\alpha_{1}(p)<\alpha_{2}(p)<\cdots<\alpha_{r_{p}}(p)<1,\]
where \(r_{p}\) is a natural number between \(1\) and \(r\). For all \(i=1,\ldots,r_{p}\), the real number \(\alpha_{i}(p)\) is called the _parabolic weight_ associated to the subspace \(E_{p,i}\).
For a fixed parabolic structure we denote the collection of all parabolic weights by \(\alpha=\{(\alpha_{1}(p),\alpha_{2}(p),\ldots,\alpha_{r_{p}}(p))\}_{p\in D}\). The parabolic structure is said to have _full flags_ if
\[\dim(E_{p,i}/E_{p,i+1})=1\]
for all \(i=1,\ldots,r_{p}\) and for all \(p\in D\), or equivalently \(r_{p}=r\) for all \(p\in D\).
The _parabolic degree_ of a parabolic bundle \(E_{*}\) is defined as
\[\operatorname{pardeg}(E_{*})\coloneqq\deg(E)+\sum_{p\in D}\sum_{i=1}^{r_{p}} \alpha_{i}(p)\cdot\dim(E_{p,i}/E_{p,i+1})\]
and the _parabolic slope_ of \(E_{*}\) is defined as
\[\mu_{\operatorname{par}}(E_{*})\coloneqq\frac{\operatorname{pardeg}(E_{*})}{r}.\]
In [11], Maruyama and Yokogawa gave an alternative definition of parabolic bundles in terms of coherent sheaves, which is useful to define the notion of parabolic tensor products and parabolic dual.
**Definition 2.2**.: A _parabolic homomorphism_\(\phi:E_{*}\to E_{*}^{\prime}\) between two parabolic bundles is a homomorphism of underlying vector bundles that satisfies the following: at each \(p\in D\) we have
\[\alpha_{i}(p)>\alpha_{j}^{\prime}(p)\implies\phi(E_{p,i})\subseteq E_{p,j+1}^ {\prime}.\]
Furthermore, we call such a homomorphism _strongly parabolic_ if
\[\alpha_{i}(p)\geq\alpha_{j}^{\prime}(p)\implies\phi(E_{p,i})\subseteq E_{p,j+1} ^{\prime}\]
for every \(p\in D\).
A parabolic subbundle \(F_{*}\subset E_{*}\) is a holomorphic subbundle \(F\subset E\) of the underlying vector bundle together with the induced parabolic structure, i.e. by taking the appropriate intersections.
A parabolic bundle \(E_{*}\) is called _stable_ (resp. _semistable_) if every nonzero proper subbundle \(F_{*}\subset E_{*}\) satisfies
\[\mu_{\operatorname{par}}(F_{*})<\mu_{\operatorname{par}}(E_{*})\;\;(\text{resp. }\;\;\leq).\]
The moduli space \(\mathcal{M}(r,d,\alpha)\) of semistable parabolic bundles over \(X\) of fixed rank \(r\), degree \(d\) and parabolic structure \(\alpha\) was constructed by Mehta and Seshadri in [4]. It is a normal projective complex variety of dimension
\[\dim\mathcal{M}(r,d,\alpha)=r^{2}(g-1)+1+\frac{n(r^{2}-r)}{2},\]
where the last summand comes from the assumption that the parabolic structure has full flags at each point \(p\in D\). The stable locus of \(\mathcal{M}(r,d,\alpha)\) is exactly the smooth locus of the moduli space. If weights are generic then semistability of a parabolic bundle implies stability, therefore the moduli space \(\mathcal{M}(r,d,\alpha)\) is a smooth variety.
### Parabolic Higgs bundles
Let \(K\) be the canonical bundle on \(X\). We write \(K(D)\coloneqq K\otimes\mathcal{O}(D)\).
**Definition 2.3**.: A _(strongly) parabolic Higgs bundle_ on \(X\) is a parabolic bundle \(E_{*}\) on \(X\) together with a Higgs field \(\Phi:E_{*}\to E_{*}\otimes K(D)\) such that \(\Phi\) is strongly parabolic, i.e. for all \(p\in D\) we have \(\Phi(E_{p,i})\subset E_{p,i+1}\otimes\left.K(D)\right|_{p}\).
We also have a notion of (non-strongly) parabolic Higgs bundle where the Higgs field \(\Phi\) is a parabolic morphism. But in this paper, the Higgs field is always assumed to be strongly parabolic.
For a parabolic Higgs bundle \((E_{*},\Phi)\), a subbundle \(F_{*}\subset E_{*}\) is called \(\Phi\)_-invariant_ if \(\Phi\) preserves \(F_{*}\), i.e. \(\Phi(F_{*})\subseteq F_{*}\otimes K(D)\).
**Definition 2.4**.: A parabolic Higgs bundle \((E_{*},\Phi)\) is said to be _stable_ (resp. _semistable_) if every nonzero proper \(\Phi\)-invariant subbundle \(F_{*}\subset E_{*}\) satisfies
\[\mu_{\rm par}(F_{*})<\mu_{\rm par}(E_{*})\;\;({\rm resp.}\;\;\leq).\]
Let \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) denote the moduli space of semistable parabolic Higgs bundles over \(X\) of rank \(r\), degree \(d\) and full flag parabolic structure \(\alpha\). It is a normal quasi-projective complex variety (see [12], [14]). The stable locus is exactly the smooth locus of the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\). Therefore as before, if weights are generic, then \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) is a smooth variety.
Notice that the moduli space \(\mathcal{M}(r,d,\alpha)\) is embedded in the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) by considering the zero Higgs fields. By the parabolic version of Serre duality (see [12], [13]), the cotangent bundle of \(\mathcal{M}(r,d,\alpha)\)
\[T^{*}\mathcal{M}(r,d,\alpha)\subset\mathcal{M}_{\rm Higgs}(r,d,\alpha)\]
is an open dense subset of \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\). Thus the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) has dimension
\[\dim\mathcal{M}_{\rm Higgs}(r,d,\alpha)=2\dim\mathcal{M}(r,d,\alpha)=2r^{2}(g- 1)+2+n(r^{2}-r).\]
Let \(t\in\mathbb{C}^{*}\) be a nonzero complex number. It can be check that if \((E_{*},\Phi)\in\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) then so is \((E_{*},t\Phi)\), i.e. \((E_{*},t\Phi)\in\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) is also a semistable parabolic Higgs bundle. Therefore, we have a standard \(\mathbb{C}^{*}\)-action on the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\), which is given by
\[t\cdot(E_{*},\Phi)=(E_{*},t\Phi).\]
#### 2.2.1. Hitchin fibration
Let \((E_{*},\Phi)\) be a parabolic Higgs bundle of rank \(r\). Consider the characteristic polynomial of the Higgs field \(\Phi\),
\[\det(x\cdot I-\Phi)=x^{r}+s_{1}x^{r-1}+\cdots+s_{r},\]
where \(s_{i}={\rm tr}(\wedge^{i}\Phi)\in H^{0}(X,K(D)^{i})\) and \(K(D)^{i}\) denotes the tensor product of \(K^{\otimes i}\) and the \(i\)-th power of the line bundle corresponding to the divisor \(D\).
Since \(\Phi\) is strongly parabolic, the residue of the parabolic Higgs field \(\Phi\) at each marked point \(p\in D\) is nilpotent with respect to the filtration. So the eigenvalues of \(\Phi\) vanish along
the divisor \(D\), i.e. \(s_{i}\in H^{0}(X,K^{i}(D^{i-1}))\subset H^{0}(X,K(D)^{i})\). Hence, we have the _parabolic Hitchin fibration_
\[h:\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha)\longrightarrow\mathcal{H}\coloneqq \bigoplus_{i=1}^{r}H^{0}(X,K^{i}(D^{i-1})),\]
sending \((E_{*},\Phi)\) to the coefficients of its characteristic polynomial of the Higgs field \(\Phi\). Here the vector space \(\mathcal{H}\) is called the _Hitchin base_.
Notice that the Hitchin fibration \(h\) doesn't depend on the parabolic structure as it only depends on the Higgs field \(\Phi\) and the line bundle \(K(D)\). Also, \(h\) is a proper surjective morphism (see [15]).
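As a routine consistency check (a direct Riemann-Roch computation, not taken from the references): for \(i\geq 2\) we have \(\deg K^{i}(D^{i-1})=i(2g-2)+(i-1)n>2g-2\), so \(h^{0}(X,K^{i}(D^{i-1}))=i(2g-2)+(i-1)n-g+1\), while \(h^{0}(X,K)=g\). Summing over \(i\) gives

\[\dim\mathcal{H}=g+\sum_{i=2}^{r}\big(i(2g-2)+(i-1)n-g+1\big)=r^{2}(g-1)+1+\frac{n(r^{2}-r)}{2},\]

which is exactly half of \(\dim\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) computed above.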
Suppose \(t\in\mathbb{C}^{*}\). Then the Hitchin base \(\mathcal{H}\) also admits a natural \(\mathbb{C}^{*}\)-action, which is given by
\[\mathbb{C}^{*}\times\mathcal{H} \longrightarrow\mathcal{H}\] \[(t,(s_{1},s_{2},\dots,s_{r})) \mapsto(ts_{1},t^{2}s_{2},\dots,t^{r}s_{r}).\]
### Parabolic connections
A _logarithmic connection_ on a vector bundle \(E\) over \(X\), singular over the divisor \(D\) is a \(\mathbb{C}\)-linear morphism
\[\mathcal{D}:E\to E\otimes K(D)\]
satisfying the Leibniz identity
\[\mathcal{D}(fs)=f\mathcal{D}(s)+df\otimes s,\]
where \(f\) is a locally defined holomorphic function on \(X\) and \(s\) is a locally defined holomorphic section of \(E\). For more details about the logarithmic connection, see [2] and [5].
**Definition 2.5**.: A _parabolic connection_ on a parabolic bundle \(E_{*}\) over \(X\) is a logarithmic connection \(\mathcal{D}\) on the underlying vector bundle \(E\) satisfying the following conditions:
1. On every fibre \(E_{p}\) over each marked point \(p\in D\), the logarithmic connection \(\mathcal{D}\) satisfies \[\mathcal{D}(E_{p,i})\subseteq E_{p,i}\otimes K(D)|_{p}\] for all \(i=1,2,\dots,r\).
2. For all \(p\in D\) and for all \(i\in\{1,\dots,r\}\) the action of the residue \(Res(\mathcal{D},p)\in\mathrm{End}(E_{p})\) on the quotient \(E_{p,i}/E_{p,i+1}\) is the multiplication by \(\alpha_{p,i}\), where \(\alpha_{p,i}\)'s are the parabolic weights over the point \(p\). Since the residue \(Res(\mathcal{D},p)\) preserves the filtration over \(p\), it acts on every quotient.
A parabolic connection will be denoted by \((E_{*},\mathcal{D})\) and a parabolic subbundle \(F_{*}\subset E_{*}\) is called \(\mathcal{D}\)_-invariant_ if \(\mathcal{D}(F)\subseteq F\otimes K(D)\).
**Definition 2.6**.: A parabolic connection \((E_{*},\mathcal{D})\) is called _stable_ (resp. _semistable_) if every non-zero proper \(\mathcal{D}\)-invariant subbundle \(F_{*}\subset E_{*}\) satisfies
\[\mu_{\mathrm{par}}(F_{*})<\mu_{\mathrm{par}}(E_{*})\ \ (\text{resp. }\leq).\]
The moduli space \(\mathcal{M}_{\mathrm{pc}}(r,d,\alpha)\) of semistable parabolic connections of fixed rank \(r\), degree \(d\) and generic weight type \(\alpha\) (assuming full flag structure) is a smooth quasi-projective irreducible complex variety of dimension
\[\dim\mathcal{M}_{\mathrm{pc}}(r,d,\alpha)=2r^{2}(g-1)+2+n(r^{2}-r)\]
(see [26, Theorem 2.1]).
### Parabolic \(\lambda\)-connections
Let \(\lambda\in\mathbb{C}\) be a complex number.
**Definition 2.7**.: A _parabolic \(\lambda\)-connection_ over \(X\) is a triple \((E_{*},\lambda,\nabla)\) where \(E_{*}\) is a parabolic bundle over \(X\) and \(\nabla:E\longrightarrow E\otimes K(D)\) is a \(\mathbb{C}\)-linear morphism between the underlying vector bundles satisfying
1. \(\nabla(fs)=f\nabla(s)+\lambda\cdot df\otimes s\), where \(f\) is a locally defined holomorphic function on \(X\) and \(s\) is a locally defined holomorphic section of \(E\).
2. On every fibre \(E_{p}\), the connection \(\nabla\) satisfies \[\nabla(E_{p,i})\subseteq E_{p,i}\otimes\left.K(D)\right|_{p}\] for all \(i=1,2,\ldots,r\).
3. For all \(p\in D\) and for all \(i\in\{1,\ldots,r\}\) the action of the residue \(Res(\nabla,p)\) on the quotient \(E_{p,i}/E_{p,i+1}\) is the multiplication by \(\lambda\alpha_{p,i}\).
A parabolic subbundle \(F_{*}\subset E_{*}\) is called \(\nabla\)_-invariant_ if \(\nabla(F)\subseteq F\otimes K(D)\).
**Definition 2.8**.: A parabolic \(\lambda\)-connection \((E_{*},\lambda,\nabla)\) is _stable_ (resp. _semistable_) if every non-zero proper \(\nabla\)-invariant subbundle \(F_{*}\subset E_{*}\) satisfies
\[\mu_{\rm par}(F_{*})<\mu_{\rm par}(E_{*})\ \ (\text{resp. }\leq).\]
We denote by \(\mathcal{M}_{\text{Hod}}(r,d,\alpha)\) the moduli space of semistable parabolic \(\lambda\)-connections over \(X\) of fixed rank \(r\), degree \(d\) and weight type \(\alpha\). For generic weights, the moduli space \(\mathcal{M}_{\text{Hod}}(r,d,\alpha)\) is a smooth quasiprojective complex variety (see [28]). This moduli space is also called the parabolic Hodge moduli space.
There is a canonical surjective algebraic map
\[\text{pr}\coloneqq\text{pr}_{\lambda}:\mathcal{M}_{\text{Hod}}(r,d,\alpha) \longrightarrow\mathbb{C} \tag{2.1}\]
defined by \(\text{pr}(E_{*},\lambda,\nabla)=\lambda\).
Let us consider the case \(\lambda=0\), i.e. the moduli space of parabolic \(0\)-connections \((E_{*},0,\nabla)\). In this case the residue \(Res(\nabla,p)\) of the morphism \(\nabla:E\longrightarrow E\otimes K(D)\) at every \(p\in D\) acts as the zero map on the quotient \(E_{p,i}/E_{p,i+1}\). Therefore for every \(p\in D\), we have \(\nabla(E_{p,i})\subseteq E_{p,i+1}\otimes K(D)|_{p}\). Thus, a parabolic \(0\)-connection is equivalent to a strongly parabolic Higgs bundle. Hence,
\[\text{pr}^{-1}(0)=\mathcal{M}_{\text{Higgs}}(r,d,\alpha)\subset\mathcal{M}_{ \text{Hod}}(r,d,\alpha).\]
The natural \(\mathbb{C}^{*}\)-action on the moduli space \(\mathcal{M}_{\text{Higgs}}(r,d,\alpha)\) extends to a \(\mathbb{C}^{*}\)-action on \(\mathcal{M}_{\text{Hod}}(r,d,\alpha)\) defined by
\[t\cdot(E_{*},\lambda,\nabla)=(E_{*},t\lambda,t\nabla). \tag{2.2}\]
Similarly, if we consider the case \(\lambda=1\), then we get
\[\text{pr}^{-1}(1)=\mathcal{M}_{\text{pc}}(r,d,\alpha)\subset\mathcal{M}_{ \text{Hod}}(r,d,\alpha).\]
## 3. Semiprojectivity of the moduli spaces
In this section, we will prove the semiprojectivity of the moduli spaces \(\mathcal{M}_{\text{Higgs}}(r,d,\alpha)\) and \(\mathcal{M}_{\text{Hod}}(r,d,\alpha)\).
**Definition 3.1** (Semiprojective varieties).: Let \(V\) be a quasi-projective complex variety equipped with a \(\mathbb{C}^{*}\)-action \(z\mapsto t\cdot z\), \(z\in V,t\in\mathbb{C}^{*}\). We say that \(V\) is _semiprojective_ if it satisfies the following conditions:
1. for every \(x\in V\), the limit \(\lim_{t\to 0}(t\cdot x)_{t\in\mathbb{C}^{*}}\) exists in \(V\),
2. the fixed point locus \(V^{\mathbb{C}^{*}}\) under the \(\mathbb{C}^{*}\)-action is proper.
### Semiprojectivity of the moduli space of parabolic Higgs bundles
**Lemma 3.1**.: _The Hitchin map \(h:\mathcal{M}_{\rm Higgs}(r,d,\alpha)\to\mathcal{H}\) is \(\mathbb{C}^{*}\)-equivariant._
Proof.: Recall that the \(\mathbb{C}^{*}\)-action on the Hitchin base \(\mathcal{H}=\bigoplus_{i=1}^{r}H^{0}(X,K^{i}(D^{i-1}))\) is given by
\[t\cdot(s_{1},s_{2},\dots,s_{r})=(ts_{1},t^{2}s_{2},\dots,t^{r}s_{r}).\]
Let \(h\big{(}(E_{*},\Phi)\big{)}=(s_{1},s_{2},\dots,s_{r})\), i.e. \(s_{i}=\operatorname{tr}(\wedge^{i}\Phi)\) are the coefficients of the characteristic polynomial of \(\Phi\). Then the characteristic polynomial of \(t\Phi\) is given by
\[\det(x\cdot I-t\Phi)=x^{r}+ts_{1}x^{r-1}+t^{2}s_{2}x^{r-2}+\dots+t^{r}s_{r}.\]
Therefore,
\[h\big{(}(t\cdot(E_{*},\Phi))\big{)}=h\big{(}(E_{*},t\Phi)\big{)}=(ts_{1},t^{2 }s_{2},\dots,t^{r}s_{r})=t\cdot(s_{1},s_{2},\dots,s_{r})=t\cdot h\big{(}(E_{*}, \Phi)\big{)}.\]
Hence, \(h\) is \(\mathbb{C}^{*}\)-equivariant.
To show the semiprojectivity of the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) we have to show that the natural \(\mathbb{C}^{*}\)-action on \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) satisfies the two conditions given in Definition 3.1.
**Lemma 3.2**.: _Let \((E_{*},\Phi)\) be a semistable parabolic Higgs bundle. Then the limit \(\lim_{t\to 0}(E_{*},t\Phi)\) exists in \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\)._
Proof.: Consider the map
\[f:\mathbb{C}^{*}\longrightarrow\mathcal{M}_{\rm Higgs}(r,d,\alpha)\]
given by \(t\mapsto(E_{*},t\Phi)\). By the above Lemma 3.1, we know that \(h\) is \(\mathbb{C}^{*}\)-equivariant. Therefore, we have
\[\lim_{t\to 0}h\big{(}(E_{*},t\Phi)\big{)}=\lim_{t\to 0}t\cdot h\big{(}(E_{*}, \Phi)\big{)}=0.\]
Thus, the composition map \(F\coloneqq h\circ f:\mathbb{C}^{*}\longrightarrow\mathcal{H}\) extends to a map \(\hat{F}:\mathbb{C}\longrightarrow\mathcal{H}\). Since \(h\) is proper, by the valuative criterion of properness \(f\) also extends to a map
\[\hat{f}:\mathbb{C}\longrightarrow\mathcal{M}_{\rm Higgs}(r,d,\alpha).\]
Hence, the limit \(\lim_{t\to 0}(E_{*},t\Phi)\) exists in \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\).
**Lemma 3.3**.: _The fixed point locus of the \(\mathbb{C}^{*}\)-action on \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) is proper and contained in \(h^{-1}(0)\subset\mathcal{M}_{\rm Higgs}(r,d,\alpha)\)._
Proof.: Note that the only element of the Hitchin base \(\mathcal{H}\) fixed under the \(\mathbb{C}^{*}\)-action is the zero point. Therefore, the fixed point locus \(\mathcal{H}^{\mathbb{C}^{*}}\) is exactly the set \(\{0\}\). Since the Hitchin fibration \(h\) is \(\mathbb{C}^{*}\)-equivariant, the fixed point locus \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)^{\mathbb{C}^{*}}\) must be closed in \(h^{-1}(\mathcal{H}^{\mathbb{C}^{*}})=h^{-1}(0)\). Since \(h\) is proper, the fibre \(h^{-1}(0)\) is proper. Hence, \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)^{\mathbb{C}^{*}}\), being closed in \(h^{-1}(0)\), is also proper.
**Proposition 3.4**.: _The moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) is a smooth semiprojective complex variety._
Proof.: We know that for generic weights, the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) is a smooth quasiprojective complex variety. Therefore the semiprojectivity of \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) follows from Lemmas 3.2 and 3.3.
### Semiprojectivity of parabolic Hodge moduli space
Recall that the \(\mathbb{C}^{*}\)-action on the moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) is given by
\[t\cdot(E_{*},\lambda,\nabla)=(E_{*},t\lambda,t\nabla).\]
To prove the semiprojectivity of \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) we need to check that this \(\mathbb{C}^{*}\)-action satisfies the two properties given in the Definition 3.1.
**Lemma 3.5**.: _Let \((E_{*},\lambda,\nabla)\in\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) be a semistable parabolic \(\lambda\)-connection. Then the limit \(\lim_{t\to 0}(E_{*},t\lambda,t\nabla)\) exists in \(\mathrm{pr}^{-1}(0)\subset\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\)._
Proof.: The proof is similar to [18, Corollary 10.2]. Consider the following two projections
\[\pi_{1}:X\times\mathbb{C}^{*}\longrightarrow X\ \ \text{and}\ \ \pi_{2}:X\times\mathbb{C} \longrightarrow\mathbb{C}.\]
Now consider the \(\mathbb{C}^{*}\)-flat family over \(\pi_{2}|_{X\times\mathbb{C}^{*}}:X\times\mathbb{C}^{*}\longrightarrow\mathbb{C}^{*}\) given by
\[(\mathcal{E},t\lambda,\nabla_{\pi_{2}})\coloneqq(\pi_{1}^{*}E_{*},t\lambda,t \pi_{1}^{*}\nabla)\]
For any \(t\neq 0\), we know that a parabolic \(t\lambda\)-connection \((E_{*},t\lambda,t\nabla)\) is semistable if and only if the parabolic \(\lambda\)-connection \((E_{*},\lambda,\nabla)\) is semistable. Therefore, the fibers of the above family are semistable for \(t\neq 0\). Following [18, Theorem 10.1], there exists a \(\mathbb{C}\)-flat family \((\overline{\mathcal{E}},\overline{t\lambda},\overline{\nabla_{\pi_{2}}})\) over \(\pi_{2}:X\times\mathbb{C}\longrightarrow\mathbb{C}\) such that
\[(\overline{\mathcal{E}},\overline{t\lambda},\overline{\nabla_{\pi_{2}}}) \big{|}_{X\times\mathbb{C}^{*}}\cong(\pi_{1}^{*}E_{*},t\lambda,t\pi_{1}^{*}\nabla)\]
and \((\overline{\mathcal{E}},\overline{t\lambda},\overline{\nabla_{\pi_{2}}}) \big{|}_{X\times\{0\}}\) is semistable. Therefore,
\[(\overline{\mathcal{E}},\overline{t\lambda},\overline{\nabla_{\pi_{2}}}) \big{|}_{X\times\{0\}}\in\mathrm{pr}^{-1}(0)\]
is the limit of the \(\mathbb{C}^{*}\)-orbit of \((E_{*},\lambda,\nabla)\) at \(t=0\) in the moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\).
**Lemma 3.6**.: _The fixed point locus under the \(\mathbb{C}^{*}\)-action on \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) is proper in \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\)._
Proof.: The fixed point locus under the \(\mathbb{C}^{*}\)-action \(t\cdot(E_{*},\lambda,\nabla)=(E_{*},t\lambda,t\nabla)\) corresponds exactly to the fixed point locus under the \(\mathbb{C}^{*}\)-action on \(\mathrm{pr}^{-1}(0)=\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha)\). Therefore by Lemma 3.3, the fixed point locus \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)^{\mathbb{C}^{*}}\) is proper.
**Proposition 3.7**.: _The moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) is a smooth semiprojective complex variety._
_Moreover, the algebraic map \(\mathrm{pr}:\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\to\mathbb{C}\) given in (2.1) is a \(\mathbb{C}^{*}\)-equivariant surjective submersion covering the scaling action on \(\mathbb{C}\)._
Proof.: Since weights are generic, the moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) is a smooth quasiprojective complex variety. Therefore, Lemmas 3.5 and 3.6 imply that the moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) is semiprojective.
The second part follows immediately from the smoothness property of the moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\).
## 4. Grothendieck motives of semiprojective varieties
In this section we recall the Grothendieck ring of varieties and some basic properties. We also define what we mean by the Grothendieck motive.
### Grothendieck ring of varieties
Let \(\mathcal{V}_{\mathbb{C}}\) denote the category of quasiprojective complex varieties. We also denote by \([Z]\) the isomorphism class corresponding to an element \(Z\in\mathcal{V}_{\mathbb{C}}\). Let \(Z^{\prime}\subset Z\) be a Zariski-closed subset of \(Z\). Let \(G\) be the quotient group coming from the free abelian group generated by the isomorphism classes \([Z]\), modulo the relation
\[[Z]=[Z^{\prime}]+[Z\setminus Z^{\prime}].\]
In this group \(G\), the additive structure is given by
\[[Z_{1}]+[Z_{2}]\coloneqq[Z_{1}\sqcup Z_{2}],\]
where \(\sqcup\) denotes the disjoint union and the multiplicative structure is defined by
\[[Z_{1}]\cdot[Z_{2}]\coloneqq[Z_{1}\times Z_{2}].\]
Therefore we get a commutative ring \((G,+,\cdot)\), called the _Grothendieck ring of varieties_. We will denote this ring by \(K(\mathcal{V}_{\mathbb{C}})\). The additive and multiplicative units of \(K(\mathcal{V}_{\mathbb{C}})\) are \(0=[\emptyset]\) and \(1=[\operatorname{Spec}(\mathbb{C})]\) respectively.
Consider the affine line \(\mathbb{A}^{1}\). The class of \(\mathbb{A}^{1}\) is called the _Lefschetz object_, denoted by
\[\mathbb{L}\coloneqq[\mathbb{A}^{1}]=[\mathbb{C}].\]
Therefore,
\[\mathbb{L}^{n}=[\mathbb{A}^{n}]=[\mathbb{C}^{n}].\]
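For example, decomposing projective space as \(\mathbb{P}^{n}=\mathbb{A}^{n}\sqcup\mathbb{P}^{n-1}\) and applying the defining relation inductively gives

\[[\mathbb{P}^{n}]=[\mathbb{A}^{n}]+[\mathbb{P}^{n-1}]=\mathbb{L}^{n}+\mathbb{L}^{n-1}+\cdots+\mathbb{L}+1,\]

the motivic counterpart of the computation of \(E(\mathbb{P}^{n})\) in Examples 4.1 below.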
Let \(K(\mathcal{V}_{\mathbb{C}})[\mathbb{L}^{-1}]\) be the localization of \(K(\mathcal{V}_{\mathbb{C}})\) and let
\[\hat{K}(\mathcal{V}_{\mathbb{C}})=\Bigg{\{}\sum_{k\geq 0}[Z_{k}]\mathbb{L}^{-k} \ \Bigg{|}\ \left[Z_{k}\right]\in K(\mathcal{V}_{\mathbb{C}})\text{ with }\dim Z_{k}-k\longrightarrow-\infty\Bigg{\}}\]
be the dimensional completion of \(K(\mathcal{V}_{\mathbb{C}})\).
Throughout this paper, by the Grothendieck motive we mean
**Definition 4.1**.: Let \(Z\) be a quasiprojective complex variety. The class \([Z]\in K(\mathcal{V}_{\mathbb{C}})\) or in \(\hat{K}(\mathcal{V}_{\mathbb{C}})\) is called the _Grothendieck motive_, or just the _motive_ of \(Z\).
### Mixed Hodge structure and \(E\)-polynomial
Let \(d=\dim(Z)\) be the dimension of a quasiprojective complex variety \(Z\). In [3], Deligne proved that the compactly supported \(k\)-th cohomology \(H^{k}_{c}(Z)\coloneqq H^{k}_{c}(Z,\mathbb{C})\) is equipped with a mixed Hodge structure for all \(k\in\{0,\dots,2d\}\). Moreover, \(H^{k}_{c}(Z)\) is endowed with two filtrations \(W^{\bullet}\) and \(F_{\bullet}\), which allow us to define the corresponding Hodge numbers
\[h^{k,p,q}(Z)\coloneqq\dim H^{p,q}(H^{k}_{c}(Z,\mathbb{C}))=\dim\mathrm{Gr}^{ p}_{F}\mathrm{Gr}^{W}_{p+q}(H^{k}_{c}(Z,\mathbb{C})),\]
where \(p,q\in\{0,\dots,k\}\). If \(h^{k,p,q}(Z)\neq 0\), then we say that \((p,q)\) are \(k\)-weights of \(Z\). It can be easily verified that the mixed Hodge numbers satisfy \(h^{k,p,q}(Z)=h^{k,q,p}(Z)\) and \(\dim H^{k}_{c}(Z)=\sum_{p,q=0}^{d}h^{k,p,q}(Z)\). Define
\[\mathcal{X}^{p,q}(Z)\coloneqq\sum_{k}(-1)^{k}h^{k,p,q}(Z).\]
Then the _\(E\)-polynomial_ of \(Z\) is defined by
\[E(Z)=E(Z;u,v)=\sum_{p,q=0}^{d}\mathcal{X}^{p,q}(Z)u^{p}v^{q}\in\mathbb{Z}[u,v].\]
Notice that \(E(Z;1,1)=\chi(Z)\) is the Euler characteristic of \(Z\). So the \(E\)-polynomial is a generalization of the Euler characteristic.
The \(E\)-polynomial satisfies the following properties
1. _(scissor relation)_ \(E(Z)=E(V)+E(Z\setminus V)\) for a closed subvariety \(V\subset Z\),
2. _(multiplicativity)_ \(E(Y\times Z)=E(Y)\cdot E(Z)\) where \(Y\times Z\) is the cartesian product,
3. If \(Z\to Y\) is an algebraic fibre bundle with fibre \(B\), then \(E(Z)=E(Y)\cdot E(B)\).
**Examples 4.1**.:
* \(E(\mathbb{C})=E(\mathbb{A}^{1})=E(\mathbb{P}^{1})-E(\mathrm{pt})=uv=:x\),
* \(E(\mathbb{P}^{n})=E(\mathbb{A}^{n})+E(\mathbb{A}^{n-1})+\cdots+E(\mathbb{A}^{ 1})+E(\mathbb{A}^{0})=x^{n}+x^{n-1}+\cdots+x+1\).
Now assume that \(Z\) has pure Hodge structure, then its \(E\)-polynomial is given by
\[E(Z)=\sum_{p,q=0}^{d}(-1)^{p+q}h^{p,q}(Z)u^{p}v^{q} \tag{4.1}\]
where \(d=\dim Z\) and \(h^{p,q}(Z)=\dim H^{p,q}_{c}(Z)\).
**Remark**.: _The \(E\)-polynomial can be realized as a ring homomorphism_
\[E:K(\mathcal{V}_{\mathbb{C}})\longrightarrow\mathbb{Z}[u,v]\]
_from the Grothendieck ring of varieties to \(\mathbb{Z}[u,v]\). This map extends to the completion_
\[E:\hat{K}(\mathcal{V}_{\mathbb{C}})\longrightarrow\mathbb{Z}[u,v]\left[\left[\frac{1}{uv}\right]\right]\]
_(also denoted by \(E\)) taking values in the Laurent series in \(uv\). Hence if two quasiprojective varieties have the same motive then their \(E\)-polynomials are the same._
We will apply the following result for a smooth semiprojective complex variety in our setup.
**Proposition 4.2** ([29], Theorem 5.6).: _Let \(Z\) be a smooth semiprojective complex variety endowed with a \(\mathbb{C}^{*}\)-equivariant surjective submersion \(\pi:Z\to\mathbb{C}\) covering the standard scaling action on \(\mathbb{C}\). Then the following motivic equalities hold in the Grothendieck ring \(\hat{K}(\mathcal{V}_{\mathbb{C}})\),_
\[[\pi^{-1}(0)]=[\pi^{-1}(1)]\;\;\mathrm{and}\;\;[Z]=\mathbb{L}[\pi^{-1}(0)],\]
_where \(\mathbb{L}\) is the Lefschetz motive._
Proof.: See [29, Theorem 5.6] for details.
**Theorem 4.3**.: _In \(\hat{K}(\mathcal{V}_{\mathbb{C}})\) the following equalities hold,_
\[[\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha)]=[\mathcal{M}_{\mathrm{pc}}(r,d, \alpha)]\;\;\mathrm{and}\;\;[\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)]=\mathbb{L }[\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha)].\]
_Therefore, we have the following equalities of the \(E\)-polynomials_
\[E(\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha))=E(\mathcal{M}_{\mathrm{pc}}(r,d, \alpha))\;\;\mathrm{and}\;\;E(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha))=uvE( \mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha)).\]
Proof.: By Proposition 3.7, the parabolic Hodge moduli space \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\) is a smooth semiprojective complex variety with the \(\mathbb{C}^{*}\)-action given in (2.2). Also, from Proposition 3.7 it follows that the surjective map \(\mathrm{pr}:\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\to\mathbb{C}\) given in (2.1) is a \(\mathbb{C}^{*}\)-equivariant submersion covering the natural \(\mathbb{C}^{*}\)-action on \(\mathbb{C}\). Thus by Proposition 4.2, we have
\[[\mathcal{M}_{\mathrm{Higgs}}(r,d,\alpha)]=[\mathrm{pr}^{-1}(0)]=[\mathrm{pr}^ {-1}(1)]=[\mathcal{M}_{\mathrm{pc}}(r,d,\alpha)]\]
and
\[[\mathcal{M}_{\rm Hod}(r,d,\alpha)]=\mathbb{L}[{\rm pr}^{-1}(0)]=\mathbb{L}[ \mathcal{M}_{\rm Higgs}(r,d,\alpha)].\]
Therefore, by Remark (4.2), \(E\)-polynomials of the moduli spaces \(\mathcal{M}_{\rm Higgs}(r,d,\alpha))\) and \(\mathcal{M}_{\rm pc}(r,d,\alpha))\) are equal, i.e.
\[E(\mathcal{M}_{\rm Higgs}(r,d,\alpha))=E(\mathcal{M}_{\rm pc}(r,d,\alpha))\]
and by the multiplicative property of the \(E\)-polynomial, we have
\[E(\mathcal{M}_{\rm Hod}(r,d,\alpha))=E(\mathbb{C})E(\mathcal{M}_{\rm Higgs}(r,d,\alpha))=uvE(\mathcal{M}_{\rm Higgs}(r,d,\alpha)).\]
**Theorem 4.4**.: _The Hodge structures of the moduli spaces \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) and \(\mathcal{M}_{\rm pc}(r,d,\alpha)\) are isomorphic, i.e._
\[H^{\bullet}(\mathcal{M}_{\rm Higgs}(r,d,\alpha))=H^{\bullet}(\mathcal{M}_{ \rm pc}(r,d,\alpha)).\]
_Also, the moduli spaces \(\mathcal{M}_{\rm pc}(r,d,\alpha)\) and \(\mathcal{M}_{\rm Hod}(r,d,\alpha)\) have pure mixed Hodge structures._
Proof.: Following [27, Corollary 1.3.3], we have that the cohomologies of the fibres \({\rm pr}^{-1}(0)=\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) and \({\rm pr}^{-1}(1)=\mathcal{M}_{\rm pc}(r,d,\alpha)\) are isomorphic and have pure mixed Hodge structures. Again by [27, Corollary 1.3.2], since the moduli space \(\mathcal{M}_{\rm Hod}(r,d,\alpha)\) is smooth semiprojective for generic weights, it has pure cohomology.
## 5. Voevodsky motives and Chow motives
In this section, we will briefly describe the Voevodsky's category of geometric motives over a field \(k\) with coefficients in a commutative ring \(R\). This is a tensor triangulated category. For more details, see [19], [21], [23] and [24].
### The category of finite correspondences
**Definition 5.1**.: Let \(Y\) and \(Z\) be varieties over \(k\). Let \(c(Y,Z)\) denote the group generated by integral closed subvarieties \(W\subset Y\times_{k}Z\) such that
1. the first projection \(\pi_{1}:W\longrightarrow Y\) is finite and
2. the image \(\pi_{1}(W)\subset Y\) is an irreducible component of \(Y\).
Then the elements of the group \(c(Y,Z)\) are called the _finite correspondences_ between the varieties \(Y\) and \(Z\).
Let \(X,Y\) and \(Z\) be varieties over \(k\), and let \(W_{1}\in c(X,Y)\) and \(W_{2}\in c(Y,Z)\) be two finite correspondences. If \(X\) and \(Y\) are irreducible, then every irreducible component \(P\) of \((X\times|W_{2}|)\cap(|W_{1}|\times Z)\) is finite over \(X\) and satisfies \(\pi_{X}(P)=X\). Therefore, we have a bilinear composition rule
\[\circ:c(Y,Z)\times c(X,Y) \longrightarrow c(X,Z)\] \[(W_{2},W_{1}) \mapsto W_{2}\circ W_{1}\coloneqq{\pi_{X\times Z}}_{*}\bigg{(}\pi_{X\times Y}^{*}(W_{1})\cdot\pi_{Y\times Z}^{*}(W_{2})\bigg{)}.\]
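As a quick sanity check on this formula (a standard computation, not taken from the text above): if \(W_{1}=\Gamma_{f}\) and \(W_{2}=\Gamma_{g}\) are the graphs of morphisms \(f:X\to Y\) and \(g:Y\to Z\), then the intersection and pushforward above recover the graph of the composite,
\[\Gamma_{g}\circ\Gamma_{f}=\Gamma_{g\circ f}\in c(X,Z),\]
so the composition of finite correspondences extends the usual composition of morphisms.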
Consider the category of smooth \(k\)-varieties \({\bf Sm}/k\). Then the objects of _the category of finite correspondences_ \({\bf Corr}_{fin}/k\) are the same as those of \({\bf Sm}/k\), with
\[{\rm Hom}_{{\bf Corr}_{fin}/k}(Y,Z)\coloneqq c(Y,Z)\]
and the composition law is given as above.
**Remark**.: _The operation \(\times_{k}\) on \({\bf Sm}/k\) and on cycles gives the category \({\bf Corr}_{fin}/k\) the structure of a tensor category. Therefore, the corresponding bounded homotopy category \(K^{b}({\bf Corr}_{fin}/k)\) is a tensor triangulated category._
### The category of effective geometric motives
Consider the category \(\widehat{DM}_{\rm gm}^{\rm eff}(k)\), which is the localization of the tensor triangulated category \(K^{b}({\bf Corr}_{fin}/k)\) with respect to all complexes of the form \([X\times\mathbb{A}^{1}]\to[X]\) (homotopy invariance) and \([U\cap V]\to[U]\oplus[V]\to[X]\) for any open covering \(U\cup V=X\) (Mayer-Vietoris).
**Definition 5.2**.: The category \(DM_{\rm gm}^{\rm eff}(k)\) of _effective geometric motives_ over \(k\) is the pseudo-abelian envelope of the quotient category \(\widehat{DM}_{\rm gm}^{\rm eff}(k)\).
We now consider the functor
\[{\bf Sm}/k\longrightarrow{\bf Corr}_{fin}/k\]
sending a morphism \(f:X\to Y\) in \({\bf Sm}/k\) to its graph \(\Gamma_{f}\subset X\times_{k}Y\). We will denote the object in \({\bf Corr}_{fin}/k\) corresponding to \(X\in{\bf Sm}/k\) by \([X]\). This induces the following covariant functor
\[M_{\rm gm}^{\rm eff}:{\bf Sm}/k\longrightarrow DM_{\rm gm}^{\rm eff}(k)\]
where \(M_{\rm gm}^{\rm eff}(X)\) is the image of \([X]\) in \(DM_{\rm gm}^{\rm eff}(k)\), and it sends a morphism \(f:X\to Y\) to \(M_{\rm gm}^{\rm eff}(f)\coloneqq[\Gamma_{f}]\).
We note that the category \(DM_{\rm gm}^{\rm eff}(k)\) is in fact a closed monoidal triangulated category. Therefore, we can consider cones of morphisms and tensor products. The functor \(M_{\rm gm}^{\rm eff}\) satisfies the following properties
\[M_{\rm gm}^{\rm eff}(X\sqcup Y) =M_{\rm gm}^{\rm eff}(X)\oplus M_{\rm gm}^{\rm eff}(Y)\] \[M_{\rm gm}^{\rm eff}(X\times Y) =M_{\rm gm}^{\rm eff}(X)\otimes M_{\rm gm}^{\rm eff}(Y).\]
**Definition 5.3**.: \(M_{\rm gm}^{\rm eff}(X)\) is said to be the _effective geometric motive_ of a smooth \(k\)-variety \(X\).
#### 5.2.1. Tate motives
Let \(X\in{\bf Sm}/k\) be a smooth variety with a \(k\)-point \(0\in X(k)\). Then the corresponding motive in \(K^{b}({\bf Corr}_{fin}/k)\) is defined by
\[\widehat{[X]}\coloneqq{\rm Cone}\bigg{(}{i_{0}}_{*}:[{\rm Spec}(k)] \longrightarrow[X]\bigg{)}.\]
We denote the image of \(\widehat{[X]}\) in \(DM_{\rm gm}^{\rm eff}(k)\) by \(\widehat{M_{\rm gm}^{\rm eff}(X)}\). We set
\[\Lambda(1)\coloneqq\widehat{M_{\rm gm}^{\rm eff}(\mathbb{P}^{1})}[-2].\]
One can think of \(\Lambda(1)\) as the reduced homology of \(\mathbb{P}^{1}\). It is an invertible object with respect to the tensor product, and its inverse is exactly its dual \(\underline{\rm Hom}(\Lambda(1),\Lambda(0))\). We denote its inverse by \(\Lambda(-1)\).
For \(r\in\mathbb{Z}\), we set
\[\Lambda(r)=\begin{cases}\Lambda(1)^{\otimes r}&\text{if }r\geq 0\\ \Lambda(-1)^{\otimes-r}&\text{if }r<0.\end{cases}\]
These objects are called _pure Tate motives_. For an object \(M\in DM_{\rm gm}^{\rm eff}(k)\), the twists
\[M(r)\coloneqq M\otimes\Lambda(r)\]
are called the _Tate twists_.
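For example (a standard computation via the projective bundle formula, not specific to the text above), the motive of projective space decomposes into pure Tate motives:
\[M_{\rm gm}(\mathbb{P}^{m})\cong\bigoplus_{i=0}^{m}\Lambda(i)[2i],\]
and the case \(m=1\), namely \(M_{\rm gm}(\mathbb{P}^{1})\cong\Lambda(0)\oplus\Lambda(1)[2]\), recovers the definition of \(\Lambda(1)\) above.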
### The category of geometric motives
To define the category of geometric motives, we need to invert tensoring with the motive \(\Lambda(1)[2]\), which plays the role of the Lefschetz motive \(\mathbb{L}\).
**Definition 5.4**.: The category \(DM_{\rm gm}(k)\) of _geometric motives_ is defined by inverting the functor \(-\otimes\Lambda(1)\) on \(DM_{\rm gm}^{\rm eff}(k)\), i.e. for \(n,m\in\mathbb{Z}\) and \(A,B\in DM_{\rm gm}^{\rm eff}(k)\),
\[{\rm Hom}_{DM_{\rm gm}(k)}(A(n),B(m))\coloneqq\lim_{\longrightarrow_{r}}{\rm Hom}_{DM_{\rm gm}^{\rm eff}(k)}\big{(}A\otimes\Lambda(r+n),B\otimes\Lambda(r+m)\big{)}.\]
The category of geometric motives \(DM_{\rm gm}(k)\) is also a triangulated category. By Voevodsky's cancellation theorem, the embedding
\[i:DM_{\rm gm}^{\rm eff}(k)\longrightarrow DM_{\rm gm}(k)\]
is a fully faithful functor.
Consider the composition
\[M_{\rm gm}\coloneqq i\circ M_{\rm gm}^{\rm eff}:{\bf Sm}/k\longrightarrow DM_{ \rm gm}(k).\]
**Definition 5.5**.: \(M_{\rm gm}(X)\) is called the _geometric motive_ of the smooth \(k\)-variety \(X\).
Let \(R\) be a ring and let \(DM_{\rm gm}^{\rm eff}(k;R)\coloneqq DM_{\rm gm}^{\rm eff}(k)\otimes R\) denote the category of effective geometric motives with coefficients in \(R\). We denote by \(M_{\rm gm}^{\rm eff}(X)_{R}\) the effective geometric motive of \(X\) in the category \(DM_{\rm gm}^{\rm eff}(k;R)\). Similarly, we denote by \(M_{\rm gm}(X)_{R}\) the geometric motive of \(X\) in the category \(DM_{\rm gm}(k;R)=DM_{\rm gm}(k)\otimes R\).
### The category of effective Chow motives
Let \({\bf Chow}^{\rm eff}(k;R)\) denote the category of effective Chow motives over a field \(k\) with coefficients in \(R\). There exists a functor
\[{\bf Chow}^{\rm eff}(k;R)\longrightarrow DM_{\rm gm}^{\rm eff}(k;R)\]
which is a fully faithful embedding. This functor is compatible with the tensor structure, and the category \({\bf Chow}^{\rm eff}(k;R)\) contains the Lefschetz motive \(\mathbb{L}\). We can think of the category \(DM_{\rm gm}^{\rm eff}(k;R)\) as being a "triangulated envelope" of the category \({\bf Chow}^{\rm eff}(k;R)\). We can consider the motive of a smooth \(k\)-variety either in \({\bf Chow}^{\rm eff}(k;R)\) or in the category \(DM_{\rm gm}^{\rm eff}(k;R)\). See [1], [16] and [22] for more details.
Let \(C(X)_{R}\in{\bf Chow}^{\rm eff}(k;R)\) denote the _Chow motive_ of \(X\) with coefficients in \(R\).
**Theorem 5.1**.: _Let \(X\) be a compact Riemann surface of genus \(g\geq 2\). Then for any ring \(R\), we have the following isomorphism of Voevodsky motives,_
\[M_{\rm gm}\big{(}{\mathcal{M}}_{\rm Higgs}(r,d,\alpha)\big{)}_{R}\cong M_{\rm gm }\big{(}{\mathcal{M}}_{\rm pc}(r,d,\alpha)\big{)}_{R}\in DM_{\rm gm}(\mathbb{C };R).\]
Proof.: We know that the moduli space \({\mathcal{M}}_{\rm Hod}(r,d,\alpha)\) is a smooth semiprojective variety equipped with a \(\mathbb{C}^{*}\)-invariant surjective submersion \({\rm pr}:{\mathcal{M}}_{\rm Hod}(r,d,\alpha)\to\mathbb{C}\) such that \({\rm pr}^{-1}(0)={\mathcal{M}}_{\rm Higgs}(r,d,\alpha)\) and \({\rm pr}^{-1}(1)={\mathcal{M}}_{\rm pc}(r,d,\alpha)\) (see 2.1). Therefore by [31, Theorem B.1], we have the following isomorphism in the Voevodsky's category \(DM_{\rm gm}(\mathbb{C};R)\)
\[M_{\rm gm}\big{(}{\mathcal{M}}_{\rm Higgs}(r,d,\alpha)\big{)}_{R}=M_{\rm gm} \big{(}{\rm pr}^{-1}(0)\big{)}_{R}\cong M_{\rm gm}\big{(}{\rm pr}^{-1}(1)\big{)} _{R}=M_{\rm gm}\big{(}{\mathcal{M}}_{\rm pc}(r,d,\alpha)\big{)}_{R}.\]
This implies the following isomorphism of Chow motives.
**Theorem 5.2**.: _For any ring \(R\) we have the following isomorphism of Chow motives,_
\[C\big{(}{\mathcal{M}}_{\rm Higgs}(r,d,\alpha)\big{)}_{R}\cong C\big{(}{ \mathcal{M}}_{\rm pc}(r,d,\alpha)\big{)}_{R}\in{\textbf{Chow}^{\rm eff}( \mathbb{C};R)}.\]
Proof.: Since the moduli space \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\) is smooth semiprojective, its Voevodsky motive is pure by [31, Corollary A.5]. Similarly, the motive of the moduli space \(\mathcal{M}_{\rm pc}(r,d,\alpha)\) is pure. Since their Voevodsky motives are isomorphic by the above Theorem 5.1, their Chow motives are also isomorphic.
## 6. Motives of moduli spaces with fixed determinant
In earlier sections we were working on the moduli space of parabolic bundles of fixed rank \(r\) and degree \(d\) over a curve \(X\), which is the same as the moduli space of parabolic \(\operatorname{GL}(r,\mathbb{C})\)-bundles of degree \(d\) over \(X\). In this final section, we will consider the moduli space of parabolic \(\operatorname{SL}(r,\mathbb{C})\)-bundles over \(X\), i.e. the moduli space of parabolic bundles over \(X\) with fixed determinant.
By a _parabolic \(\operatorname{SL}(r,\mathbb{C})\)-Higgs bundle_\((E_{*},\Phi)\), we mean a parabolic bundle \(E_{*}\) of rank \(r\) with determinant \(\xi\) and traceless Higgs field \(\Phi\). Let \(\operatorname{Jac}^{d}(X)\) denote the space of degree \(d\) line bundles over \(X\). Consider the determinant map
\[\det:\mathcal{M}_{\rm Higgs}(r,d,\alpha) \longrightarrow\operatorname{Jac}^{d}(X)\times H^{0}(X,K)\] \[(E_{*},\Phi) \longmapsto(\wedge^{r}E,\operatorname{trace}(\Phi)).\]
Since \(\Phi\) is strongly parabolic, \(\operatorname{trace}(\Phi)\in H^{0}(X,K)\subset H^{0}(X,K(D))\). The moduli space \(\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)\) of semistable parabolic Higgs bundles with fixed determinant \(\xi\) is defined by the fiber \(\det^{-1}(\xi,0)\), i.e.
\[\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)\coloneqq\det^{-1}(\xi,0).\]
As before, if weights are generic then the moduli space \(\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)\) is a smooth quasi-projective complex variety of dimension
\[\dim\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)=2(g-1)(r^{2}-1)+n(r^{2}-r).\]
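For instance, for a genus \(2\) curve with a single parabolic point (\(g=2\), \(n=1\)) and rank \(r=2\), the formula gives \(\dim\mathcal{M}_{\rm Higgs}^{\xi}(2,d,\alpha)=2\cdot 1\cdot 3+1\cdot 2=8\).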
In this case, as \(\operatorname{trace}(\Phi)=0\), the Hitchin map is given by
\[h^{\xi}:\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)\longrightarrow\mathcal{H}^{ \xi}\coloneqq\bigoplus_{i=2}^{r}H^{0}(X,K^{i}(D^{i-1})).\]
Following [25, Theorem 1.2] and (3.1), we can similarly prove that the Hitchin map \(h^{\xi}\) is \(\mathbb{C}^{*}\)-equivariant and that the moduli space \(\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)\) is a smooth \(\mathbb{C}^{*}\)-invariant closed subvariety of \(\mathcal{M}_{\rm Higgs}(r,d,\alpha)\). Therefore, \(\mathcal{M}_{\rm Higgs}^{\xi}(r,d,\alpha)\) is smooth semiprojective.
By a _parabolic \(\lambda\)-connection with fixed determinant_ \((\xi,\delta)\) (i.e. for the group \(\operatorname{SL}(r,\mathbb{C})\)), we mean a parabolic \(\lambda\)-connection \((E_{*},\lambda,\nabla)\) such that \(\wedge^{r}E_{*}\cong\xi\) and \(\operatorname{trace}(\nabla)=\delta\) (see [28, Definition 8.1] for more details). It can be verified that \(\operatorname{trace}(\nabla)\) gives a \(\lambda\)-connection on the line bundle \(\xi\). Consider the determinant map
\[\det:\mathcal{M}_{\rm Hod}(r,d,\alpha) \longrightarrow\mathcal{M}_{\rm Hod}(1,d,\alpha)\] \[(E_{*},\lambda,\nabla) \longmapsto(\wedge^{r}E_{*},\lambda,\operatorname{trace}(\nabla)).\]
Then the moduli space \(\mathcal{M}_{\rm Hod}^{\xi}(r,d,\alpha)\) of semistable parabolic \(\lambda\)-connections with fixed determinant \((\xi,\delta)\) is defined by
\[\mathcal{M}_{\rm Hod}^{\xi}(r,d,\alpha)\coloneqq\det^{-1}(\xi,\lambda,\delta).\]
The moduli space \(\mathcal{M}^{\xi}_{\mathrm{Hod}}(r,d,\alpha)\) is clearly a smooth \(\mathbb{C}^{*}\)-invariant closed subvariety of \(\mathcal{M}_{\mathrm{Hod}}(r,d,\alpha)\). Therefore, following (3.2), we can similarly prove that \(\mathcal{M}^{\xi}_{\mathrm{Hod}}(r,d,\alpha)\) is in fact semiprojective. By considering the restriction of the morphism (2.1), we get a \(\mathbb{C}^{*}\)-invariant surjective submersion
\[\mathrm{pr}:\mathcal{M}^{\xi}_{\mathrm{Hod}}(r,d,\alpha)\longrightarrow\mathbb{ C}.\]
Let \(\mathcal{M}^{\xi}_{\mathrm{pc}}(r,d,\alpha)\) denote the moduli space of semistable parabolic connections with fixed determinant \((\xi,\delta)\).
Then we have the following isomorphisms
1. \(\mathrm{pr}^{-1}(0)\cong\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha)\)
2. \(\mathrm{pr}^{-1}(1)\cong\mathcal{M}^{\xi}_{\mathrm{pc}}(r,d,\alpha)\)
3. \(\mathrm{pr}^{-1}(\mathbb{C}^{*})\cong\mathcal{M}^{\xi}_{\mathrm{pc}}(r,d, \alpha)\times\mathbb{C}^{*}.\)
Then we have the following motivic invariance theorems.
**Theorem 6.1** (Grothendieck motive).: _In the Grothendieck ring of varieties \(\hat{K}(\mathcal{V}_{\mathbb{C}})\) the following equalities hold,_
\[[\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha)]=[\mathcal{M}^{\xi}_{\mathrm{ pc}}(r,d,\alpha)]\;\;\mathrm{and}\;\;[\mathcal{M}^{\xi}_{\mathrm{Hod}}(r,d, \alpha)]=\mathbb{L}[\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha)].\]
_Therefore, we have the following equalities of the \(E\)-polynomials_
\[E(\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha))=E(\mathcal{M}^{\xi}_{\mathrm{ pc}}(r,d,\alpha))\;\;\mathrm{and}\;\;E(\mathcal{M}^{\xi}_{\mathrm{Hod}}(r,d, \alpha))=uvE(\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha)).\]
Proof.: The proof is totally analogous to the proof of Theorem 4.3. We just need to carefully modify the objects to the fixed determinant version.
**Theorem 6.2** (Voevodsky motive).: _For any ring \(R\), we have the following isomorphism of Voevodsky motives,_
\[M_{\mathrm{gm}}\big{(}\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha)\big{)}_{R}\cong M_{\mathrm{gm}}\big{(}\mathcal{M}^{\xi}_{\mathrm{pc}}(r,d,\alpha)\big{)}_{R}\in DM_{\mathrm{gm}}(\mathbb{C};R).\]
Proof.: The proof is the same as in Theorem 5.1.
**Theorem 6.3** (Chow motive).: _For any ring \(R\) we have the following isomorphism of Chow motives,_
\[C\big{(}\mathcal{M}^{\xi}_{\mathrm{Higgs}}(r,d,\alpha)\big{)}_{R}\cong C \big{(}\mathcal{M}^{\xi}_{\mathrm{pc}}(r,d,\alpha)\big{)}_{R}\in\textbf{Chow}^ {\mathrm{eff}}(\mathbb{C};R).\]
Proof.: The proof is the same as in Theorem 5.2.
## Acknowledgement
This work was supported by the Institute for Basic Science (IBS-R003-D1).
|
2309.12912 | Symmetric Exponential Time Requires Near-Maximum Circuit Size | We show that there is a language in $\mathsf{S}_2\mathsf{E}/_1$ (symmetric
exponential time with one bit of advice) with circuit complexity at least
$2^n/n$. In particular, the above also implies the same near-maximum circuit
lower bounds for the classes $\Sigma_2\mathsf{E}$,
$(\Sigma_2\mathsf{E}\cap\Pi_2\mathsf{E})/_1$, and
$\mathsf{ZPE}^{\mathsf{NP}}/_1$. Previously, only "half-exponential" circuit
lower bounds for these complexity classes were known, and the smallest
complexity class known to require exponential circuit complexity was
$\Delta_3\mathsf{E} = \mathsf{E}^{\Sigma_2\mathsf{P}}$ (Miltersen,
Vinodchandran, and Watanabe COCOON'99).
Our circuit lower bounds are corollaries of an unconditional zero-error
pseudodeterministic algorithm with an $\mathsf{NP}$ oracle and one bit of
advice ($\mathsf{FZPP}^{\mathsf{NP}}/_1$) that solves the range avoidance
problem infinitely often. This algorithm also implies unconditional
infinitely-often pseudodeterministic $\mathsf{FZPP}^{\mathsf{NP}}/_1$
constructions for Ramsey graphs, rigid matrices, two-source extractors, linear
codes, and $\mathrm{K}^{\mathrm{poly}}$-random strings with nearly optimal
parameters.
Our proofs relativize. The two main technical ingredients are (1) Korten's
$\mathsf{P}^{\mathsf{NP}}$ reduction from the range avoidance problem to
constructing hard truth tables (FOCS'21), which was in turn inspired by a
result of Je\v{r}\'abek on provability in Bounded Arithmetic (Ann. Pure Appl.
Log. 2004); and (2) the recent iterative win-win paradigm of Chen, Lu,
Oliveira, Ren, and Santhanam (FOCS'23). | Lijie Chen, Shuichi Hirahara, Hanlin Ren | 2023-09-22T14:56:59Z | http://arxiv.org/abs/2309.12912v1 | # Symmetric Exponential Time Requires Near-Maximum Circuit Size
###### Abstract
We show that there is a language in \(\mathsf{S}_{2}\mathsf{E}/_{1}\) (symmetric exponential time with one bit of advice) with circuit complexity at least \(2^{n}/n\). In particular, the above also implies the same near-maximum circuit lower bounds for the classes \(\Sigma_{2}\mathsf{E}\), \((\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E})/_{1}\), and \(\mathsf{ZPE}^{\mathsf{NP}}/_{1}\). Previously, only "half-exponential" circuit lower bounds for these complexity classes were known, and the smallest complexity class known to require exponential circuit complexity was \(\Delta_{3}\mathsf{E}=\mathsf{E}^{\Sigma_{2}\mathsf{P}}\) (Miltersen, Vinodchandran, and Watanabe COCOON'99).
Our circuit lower bounds are corollaries of an unconditional zero-error pseudodeterministic algorithm with an \(\mathsf{NP}\) oracle and one bit of advice \((\mathsf{F2PP}^{\mathsf{NP}}/_{1})\) that solves the range avoidance problem infinitely often. This algorithm also implies unconditional infinitely-often pseudodeterministic \(\mathsf{F2PP}^{\mathsf{NP}}/_{1}\) constructions for Ramsey graphs, rigid matrices, two-source extractors, linear codes, and \(\mathsf{K}^{\mathrm{poly}}\)-random strings with nearly optimal parameters.
Our proofs relativize. The two main technical ingredients are (1) Korten's \(\mathsf{P}^{\mathsf{NP}}\) reduction from the range avoidance problem to constructing hard truth tables (FOCS'21), which was in turn inspired by a result of Jerabek on provability in Bounded Arithmetic (Ann. Pure Appl. Log. 2004); and (2) the recent iterative win-win paradigm of Chen, Lu, Oliveira, Ren, and Santhanam (FOCS'23).
###### Contents
* 1 Introduction
* 1.1 Our Results
* 1.2 Intuitions
* 1.3 Proof Overview
* 1.4 Discussions
* 2 Preliminaries
* 2.1 Complexity Classes
* 2.2 Single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) and \(\mathsf{F}\Sigma_{2}\mathsf{P}\) Algorithms
* 2.3 The Range Avoidance Problem
* 3 Korten's Reduction
* 3.1 GGM Tree and the Reduction
* 3.2 \(\Pi_{1}\) Verification of the History of \(\mathsf{Korten}(C,f)\)
* 4 Circuit Lower Bounds for \(\Sigma_{2}\mathsf{E}\)
* 5 Circuit Lower Bounds for \(\mathsf{S}_{2}\mathsf{E}\)
* 5.1 Reed-Muller Codes
* 5.2 Encoded History and \(\mathsf{S}_{2}\mathsf{BPP}\) Verification
* 5.3 Lower Bounds for \(\mathsf{S}_{2}\mathsf{E}\)
* 5.4 Infinitely Often Single-Valued \(\mathsf{F}\mathsf{S}_{2}\mathsf{P}\) Algorithm for Arbitrary Input Range Avoidance
## 1 Introduction
Proving lower bounds against non-uniform computation (i.e., circuit lower bounds) is one of the most important challenges in theoretical computer science. From Shannon's counting argument [1, 13], we know that almost all \(n\)-bit Boolean functions have _near-maximum_\((2^{n}/n)\) circuit complexity.1 Therefore, the task of proving circuit lower bounds is simply to _pinpoint_ one such hard function. More formally, one fundamental question is:
Footnote 1: All \(n\)-input Boolean functions can be computed by a circuit of size \((1+\frac{3\log n}{n}+O(\frac{1}{n}))2^{n}/n\)[14, 15], while most Boolean functions require circuits of size \((1+\frac{\log n}{n}-O(\frac{1}{n}))2^{n}/n\)[16, 17]. Hence, in this paper, we say an \(n\)-bit Boolean function has _near-maximum_ circuit complexity if its circuit complexity is at least \(2^{n}/n\).
What is the smallest complexity class that contains a language of exponential (\(2^{\Omega(n)}\)) circuit complexity?
Compared with super-polynomial lower bounds, exponential lower bounds are interesting in their own right for the following reasons. First, an exponential lower bound would make Shannon's argument _fully constructive_. Second, exponential lower bounds have more applications than super-polynomial lower bounds: For example, if one can show that \(\mathsf{E}\) has no \(2^{o(n)}\)-size circuits, then we would have \(\mathrm{pr}\mathsf{P}=\mathrm{pr}\mathsf{BPP}\)[18, 19], while super-polynomial lower bounds such as \(\mathsf{EXP}\not\subset\mathsf{P}/_{\mathrm{poly}}\) only imply sub-exponential time derandomization of \(\mathrm{pr}\mathsf{BPP}\).2
Footnote 2: \(\mathsf{E}=\mathsf{DTIME}[2^{O(n)}]\) denotes _single-exponential_ time and \(\mathsf{EXP}=\mathsf{DTIME}[2^{n^{O(1)}}]\) denotes _exponential_ time; classes such as \(\mathsf{E}^{\mathsf{NP}}\) and \(\mathsf{EXP}^{\mathsf{NP}}\) are defined analogously. Exponential time and single-exponential time are basically interchangeable in the context of super-polynomial lower bounds (by a padding argument); the exponential lower bounds proven in this paper will be stated for single-exponential time classes since this makes our results stronger. Below, \(\Sigma_{3}\mathsf{E}\) and \(\Pi_{3}\mathsf{E}\) denote the exponential-time versions of \(\Sigma_{3}\mathsf{P}=\mathsf{NP}^{\mathsf{NPNP}}\) and \(\Pi_{3}\mathsf{P}=\mathsf{coNP}^{\mathsf{NPNP}}\), respectively.
Unfortunately, despite its importance, our knowledge about exponential lower bounds is quite limited. Kannan [12] showed that there is a function in \(\Sigma_{3}\mathsf{E}\cap\Pi_{3}\mathsf{E}\) that requires maximum circuit complexity; the complexity of the hard function was later improved to \(\Delta_{3}\mathsf{E}=\mathsf{E}^{\Sigma_{2}\mathsf{P}}\) by Miltersen, Vinodchandran, and Watanabe [19], via a simple binary search argument. This is **essentially all we know** regarding exponential circuit lower bounds.3
Footnote 3: We also mention that Hirahara, Lu, and Ren [10] recently proved that for every constant \(\varepsilon>0\), \(\mathsf{BPE}^{\mathsf{MCSP}/2^{\varepsilon n}}\) requires near-maximum circuit complexity, where \(\mathsf{MCSP}\) is the Minimum Circuit Size Problem [13]. However, the hard function they constructed requires subexponentially (\(2^{\varepsilon n}\)) many advice bits to describe.
We remark that Kannan [12, Theorem 4] claimed that \(\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E}\) requires exponential circuit complexity, but [19] pointed out a gap in Kannan's proof, and suggested that exponential lower bounds for \(\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E}\) were "reopened and considered an open problem." Recently, Vyas and Williams [18] emphasized our lack of knowledge regarding the circuit complexity of \(\Sigma_{2}\mathsf{EXP}\), even with respect to _relativizing_ proof techniques. In particular, the following question has been open for at least 20 years (indeed, if we count from [12], it would be at least 40 years):
**Open Problem 1.1**.: _Can we prove that \(\Sigma_{2}\mathsf{EXP}\not\subset\mathsf{SIZE}[2^{\varepsilon n}]\) for some absolute constant \(\varepsilon>0\), or at least show a relativization barrier for proving such a lower bound?_
**The half-exponential barrier.** There is a richer literature regarding super-polynomial lower bounds than exponential lower bounds. Kannan [12] proved that the class \(\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E}\) does not have polynomial-size circuits. Subsequent works proved super-polynomial circuit lower bounds for exponential-time complexity classes such as \(\mathsf{ZPEXP}^{\mathsf{NP}}\)[18, 1], \(\mathsf{S}_{2}\mathsf{EXP}\)[19, 10], \(\mathsf{PEXP}\)[19, 10], and \(\mathsf{MAEXP}\)[15, 16].
Unfortunately, all these works fail to prove exponential lower bounds. All of their proofs go through certain _Karp-Lipton_ collapses [13]; such a proof strategy runs into a so-called "half-exponential barrier", preventing us from getting exponential lower bounds. See Section 1.4.1 for a detailed discussion.
### 1.1 Our Results
#### 1.1.1 New near-maximum circuit lower bounds
In this work, we _overcome_ the half-exponential barrier mentioned above and resolve Open Problem 1.1 by showing that both \(\Sigma_{2}\mathsf{E}\) and \((\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E})/_{1}\) require near-maximum \((2^{n}/n)\) circuit complexity. Moreover, our proof indeed _relativizes_:
**Theorem 1.2**.: \(\Sigma_{2}\mathsf{E}\not\subset\mathsf{SIZE}[2^{n}/n]\) _and \((\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E})/_{1}\not\subset\mathsf{SIZE}[2^{n}/n]\). Moreover, they hold in every relativized world._
Up to one bit of advice, we finally provide a proof of Kannan's original claim in [13, Theorem 4]. Moreover, with some more work, we extend our lower bounds to the smaller complexity class \(\mathsf{S}_{2}\mathsf{E}/_{1}\) (see Definition 2.1 for a formal definition), again with a relativizing proof:
**Theorem 1.3**.: \(\mathsf{S}_{2}\mathsf{E}/_{1}\not\subset\mathsf{SIZE}[2^{n}/n]\)_. Moreover, this holds in every relativized world._
**The symmetric time class \(\mathsf{S}_{2}\mathsf{E}\).** \(\mathsf{S}_{2}\mathsf{E}\) can be seen as a "randomized" version of \(\mathsf{E}^{\mathsf{NP}}\) since it is sandwiched between \(\mathsf{E}^{\mathsf{NP}}\) and \(\mathsf{ZPE}^{\mathsf{NP}}\): it is easy to show that \(\mathsf{E}^{\mathsf{NP}}\subseteq\mathsf{S}_{2}\mathsf{E}\) [12], and it is also known that \(\mathsf{S}_{2}\mathsf{E}\subseteq\mathsf{ZPE}^{\mathsf{NP}}\) [14]. We also note that under plausible derandomization assumptions (e.g., \(\mathsf{E}^{\mathsf{NP}}\) requires \(2^{\Omega(n)}\)-size \(\mathsf{SAT}\)-oracle circuits), all three classes simply collapse to \(\mathsf{E}^{\mathsf{NP}}\) [15].
Hence, our results also imply a near-maximum circuit lower bound for the class \(\mathsf{Z}\mathsf{P}^{\mathsf{NP}}/_{1}\subseteq(\Sigma_{2}\mathsf{E}\cap\Pi _{2}\mathsf{E})/_{1}\). This vastly improves the previous lower bound for \(\Delta_{3}\mathsf{E}=\mathsf{E}^{\Sigma_{2}\mathsf{P}}\).
**Corollary 1.4**.: \(\mathsf{Z}\mathsf{P}^{\mathsf{NP}}/_{1}\not\subset\mathsf{SIZE}[2^{n}/n]\)_. Moreover, this holds in every relativized world._
#### 1.1.2 New algorithms for the range avoidance problem
**Background on Avoid.** Actually, our circuit lower bounds are implied by our new algorithms for the range avoidance problem (Avoid) [13, 14, 15], which is defined as follows: given a circuit \(C:\{0,1\}^{n}\to\{0,1\}^{n+1}\) as input, find a string outside the range of \(C\) (we define \(\text{Range}(C)\coloneqq\{C(z):z\in\{0,1\}^{n}\}\)). That is, output any string \(y\in\{0,1\}^{n+1}\) such that for every \(x\in\{0,1\}^{n}\), \(C(x)\neq y\).
There is a trivial \(\mathsf{FZPP}^{\mathsf{NP}}\) algorithm solving Avoid: randomly generate strings \(y\in\{0,1\}^{n+1}\) and output the first \(y\) that is outside the range of \(C\) (note that we need an \(\mathsf{NP}\) oracle to verify if \(y\notin\text{Range}(C)\)). The class \(\mathsf{APEPP}\) (Abundant Polynomial Empty Pigeonhole Principle) [13] is the class of total search problems reducible to Avoid.
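To make the trivial algorithm concrete, here is a minimal Python sketch (our own illustration, not from [13]); strings are encoded as integers, and `np_oracle_in_range` is a brute-force stand-in for the \(\mathsf{NP}\) oracle:

```python
import random

def np_oracle_in_range(C, y, n):
    # Brute-force stand-in for the NP oracle deciding "y in Range(C)";
    # an actual NP oracle would decide this without enumerating {0,1}^n.
    return any(C(x) == y for x in range(2 ** n))

def trivial_avoid(C, n, trials=100):
    # Sample random (n+1)-bit strings; at most half of them can lie in
    # Range(C), so each trial succeeds with probability at least 1/2.
    for _ in range(trials):
        y = random.getrandbits(n + 1)
        if not np_oracle_in_range(C, y, n):
            return y  # certified to be outside Range(C)
    return None  # zero-error: report failure rather than a wrong answer
```

Note that this algorithm is _not_ single-valued: different executions typically return different strings \(y\), which is exactly the gap that the pseudodeterministic algorithms below aim to close.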
As demonstrated by Korten [14, Section 3], \(\mathsf{APE}\mathsf{P}\) captures the complexity of explicit construction problems whose solutions are guaranteed to exist by the probabilistic method (more precisely, the dual weak pigeonhole principle [12, 13]), in the sense that constructing such objects reduces to the range avoidance problem. This includes many important objects in mathematics and theoretical computer science, including Ramsey graphs [14], rigid matrices [15, 16, 17], two-source extractors [18, 19], linear codes [15], hard truth tables [14], and strings with maximum time-bounded Kolmogorov complexity (i.e., \(\mathrm{K}^{\mathrm{poly}}\)-random strings) [15]. Hence, derandomizing the trivial \(\mathsf{FZ}\mathsf{P}^{\mathsf{NP}}\) algorithm for Avoid would imply explicit constructions for all these important objects.
**Our results: new pseudodeterministic algorithms for Avoid.** We show that, _unconditionally_, the trivial \(\mathsf{FZPP^{NP}}\) algorithm for Avoid can be made _pseudodeterministic_ on infinitely many input lengths. A _pseudodeterministic_ algorithm [11] is a randomized algorithm that outputs the same _canonical_ answer on most computational paths. In particular, we have:
**Theorem 1.5**.: _For every constant \(d\geq 1\), there is a randomized algorithm \(\mathcal{A}\) with an \(\mathsf{NP}\) oracle such that the following holds for infinitely many integers \(n\). For every circuit \(C\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\) of size at most \(n^{d}\), there is a string \(y_{C}\in\{0,1\}^{n+1}\setminus\operatorname{Range}(C)\) such that \(\mathcal{A}(C)\) either outputs \(y_{C}\) or \(\bot\), and the probability (over the internal randomness of \(\mathcal{A}\)) that \(\mathcal{A}(C)\) outputs \(y_{C}\) is at least \(2/3\). Moreover, this theorem holds in every relativized world._
As a corollary, for every problem in \(\mathsf{APEPP}\), we obtain zero-error pseudodeterministic constructions with an \(\mathsf{NP}\) oracle and one bit of advice (\(\mathsf{FZPP^{NP}}/_{1}\)) that work infinitely often4:
Footnote 4: The one-bit advice encodes whether our algorithm succeeds on a given input length; it is needed since on bad input lengths, our algorithm might not be pseudodeterministic (i.e., there may not be a canonical answer that is outputted with high probability).
**Corollary 1.6** (Informal).: _There are infinitely-often zero-error pseudodeterministic constructions for the following objects with an \(\mathsf{NP}\) oracle and one bit of advice: Ramsey graphs, rigid matrices, two-source extractors, linear codes, hard truth tables, and \(\mathsf{K}^{\mathrm{poly}}\)-random strings._
Actually, we obtain single-valued \(\mathsf{FS_{2}P}/_{1}\) algorithms for the explicit construction problems above (see Definition 2.2), and the pseudodeterministic \(\mathsf{FZPP^{NP}}/_{1}\) algorithms follow from Cai's theorem that \(\mathsf{S_{2}P}\subseteq\mathsf{ZPP^{NP}}\) [10]. We stated them as pseudodeterministic \(\mathsf{FZPP^{NP}}/_{1}\) algorithms since this notion is better known than the notion of single-valued \(\mathsf{FS_{2}P}/_{1}\) algorithms.
Theorem 1.5 is tantalizingly close to an infinitely-often \(\mathsf{FP^{NP}}\) algorithm for Avoid (with the only caveat of being _zero-error_ instead of being completely _deterministic_). However, since an \(\mathsf{FP^{NP}}\) algorithm for range avoidance would imply near-maximum circuit lower bounds for \(\mathsf{E^{NP}}\), we expect that it would require fundamentally new ideas to completely derandomize our algorithm. Previously, Hirahara, Lu, and Ren [14, Theorem 36] presented an infinitely-often pseudodeterministic \(\mathsf{FZPP^{NP}}\) algorithm for the range avoidance problem using \(n^{\varepsilon}\) bits of advice, for any small constant \(\varepsilon>0\). Our result improves the above in two aspects: first, we reduce the number of advice bits to \(1\); second, our techniques relativize but their techniques do not.
**Lower bounds against non-uniform computation with maximum advice length.** Finally, our results also imply lower bounds against non-uniform computation with maximum advice length. We mention this corollary because it is a stronger statement than circuit lower bounds, and similar lower bounds appeared recently in the literature of super-fast derandomization [17].
**Corollary 1.7**.: _For every \(\alpha(n)\geq\omega(1)\) and any constant \(k\geq 1\), \(\mathsf{S_{2}E}/_{1}\not\subset\mathsf{TIME}[2^{kn}]/_{2^{n}-\alpha(n)}\). The same holds for \(\Sigma_{2}\mathsf{E}\), \((\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E})/_{1}\), and \(\mathsf{ZPE^{NP}}/_{1}\) in place of \(\mathsf{S_{2}E}/_{1}\). Moreover, this holds in every relativized world._
### 1.2 Intuitions
In the following, we present some high-level intuitions for our new circuit lower bounds.
#### 1.2.1 Perspective: single-valued constructions
A key perspective in this paper is to view circuit lower bounds (for exponential-time classes) as _single-valued_ constructions of hard truth tables. This perspective is folklore; it was also emphasized in recent papers on the range avoidance problem [13, 14].
Let \(\Pi\subseteq\{0,1\}^{\star}\) be an _\(\varepsilon\)-dense_ property, i.e., for every integer \(N\in\mathbb{N}\), \(|\Pi_{N}|\geq\varepsilon\cdot 2^{N}\). (In what follows, we use \(\Pi_{N}:=\Pi\cap\{0,1\}^{N}\) to denote the length-\(N\) slice of \(\Pi\).) As a concrete example, let \(\Pi_{\text{hard}}\) be the set of hard truth tables, i.e., a string \(tt\in\Pi_{\text{hard}}\) if and only if it is the truth table of a function \(f:\{0,1\}^{n}\to\{0,1\}\) whose circuit complexity is at least \(2^{n}/n\), where \(n:=\log N\). (We assume that \(n:=\log N\) is an integer.) Shannon's argument [15, 16] shows that \(\Pi_{\text{hard}}\) is a \(1/2\)-dense property. We are interested in the following question:
What is the complexity of _single-valued_ constructions for any string in \(\Pi_{\text{hard}}\)?
Here, informally speaking, a computation is _single-valued_ if each of its computational paths either fails or outputs the _same_ value. For example, an \(\mathsf{NP}\) machine \(M\) is a single-valued construction for \(\Pi\) if there is a "canonical" string \(y\in\Pi\) such that (1) \(M\) outputs \(y\) on every accepting computational path; (2) \(M\) has at least one accepting computational path. (That is, it is an \(\mathsf{NPSV}\) construction in the sense of [13, 12, 14, 15].) Similarly, a \(\mathsf{BPP}\) machine \(M\) is a single-valued construction for \(\Pi\) if there is a "canonical" string \(y\in\Pi\) such that \(M\) outputs \(y\) on most (say \(\geq 2/3\) fraction of) computational paths. (In other words, single-valued \(\mathsf{ZPP}\) and \(\mathsf{BPP}\) constructions are another name for _pseudodeterministic constructions_[11].)5
Footnote 5: Note that the trivial construction algorithms are not single-valued in general. For example, a trivial \(\Sigma_{2}\mathsf{P}=\mathsf{NP}^{\mathsf{NP}}\) construction algorithm for \(\Pi_{\text{hard}}\) is to guess a hard truth table \(tt\) and use the \(\mathsf{NP}\) oracle to verify that \(tt\) does not have size-\(N/\log N\) circuits; however, different accepting computational paths of this computation would output different hard truth tables. Similarly, a trivial \(\mathsf{BPP}\) construction algorithm for every dense property \(\Pi\) is to output a random string, but there is no _canonical_ answer that is outputted with high probability. In other words, these construction algorithms do not _define_ anything; instead, a single-valued construction algorithm should _define_ some particular string in \(\Pi\).
Hence, the task of proving circuit lower bounds is equivalent to the task of _defining_, i.e., single-value constructing, a hard function, in the smallest possible complexity class. For example, a single-valued \(\mathsf{BPP}\) construction (i.e., pseudodeterministic construction) for \(\Pi_{\text{hard}}\) is equivalent to the circuit lower bound \(\mathsf{BPE}\not\subset\text{i.o.-SIZE}[2^{n}/n]\).6 In this regard, the previous near-maximum circuit lower bound for \(\Delta_{3}\mathsf{E}:=\mathsf{E}^{\Sigma_{2}\mathsf{P}}\)[16] can be summarized in one sentence: The lexicographically first string in \(\Pi_{\text{hard}}\) can be constructed in \(\Delta_{3}\mathsf{P}:=\mathsf{P}^{\Sigma_{2}\mathsf{P}}\) (which is necessarily single-valued).
Footnote 6: To see this, note that (1) \(\mathsf{BPE}\not\subset\text{i.o.-SIZE}[2^{n}/n]\) implies a simple single-valued \(\mathsf{BPP}\) construction for \(\Pi_{\text{hard}}\): given \(N=2^{n}\), output the truth table of \(L_{n}\) (\(L\) restricted to \(n\)-bit inputs), where \(L\in\mathsf{BPE}\) is the hard language not in \(\mathsf{SIZE}[2^{n}/n]\); and (2) assuming a single-valued \(\mathsf{BPP}\) construction \(A\) for \(\Pi_{\text{hard}}\), one can define a hard language \(L\) such that the truth table of \(L_{n}\) is the output of \(A(1^{2^{n}})\), and observe that \(L\in\mathsf{BPE}\).
**Reduction to Avoid.** It was observed in [13, 14] that explicit construction of elements from \(\Pi_{\text{hard}}\) is a special case of range avoidance: Let \(\mathsf{TT}\colon\{0,1\}^{N-1}\to\{0,1\}^{N}\) (here \(N=2^{n}\)) be a circuit that maps the description of a \(2^{n}/n\)-size circuit into its \(2^{n}\)-length truth table (by [14], this circuit can be encoded by \(N-1\) bits). Hence, a single-valued algorithm solving Avoid for \(\mathsf{TT}\) is equivalent to a single-valued construction for \(\Pi_{\text{hard}}\). This explains how our new range avoidance algorithms imply our new circuit lower bounds (as mentioned in Section 1.1.2).
In the rest of Section 1.2, we will only consider the special case of Avoid where the input circuit for range avoidance is a \(\mathsf{P}\)-uniform circuit family. Specifically, let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits, where \(|C_{n}|\leq\text{poly}(n)\).7 Our goal is to find an algorithm \(A\) such that for infinitely many \(n\), \(A(1^{n})\in\{0,1\}^{2n}\setminus\text{Range}(C_{n})\); see Section 5.3 and Section 5.4 for how to turn this into an algorithm that works for an arbitrary input circuit with a single bit of stretch. Also, since from now on we will not talk about truth tables anymore, we will use \(n\) instead of \(N\) to denote the input length of Avoid instances.
#### 1.2.2 The iterative win-win paradigm of [12]

Chen, Lu, Oliveira, Ren, and Santhanam [12] introduced an _iterative win-win_ paradigm for explicit constructions, and used it to obtain a polynomial-time pseudodeterministic construction of primes that works infinitely often. Since our construction algorithm closely follows their paradigm, it is instructive to take a detour and give a high-level overview of how the construction from [12] works.8
Footnote 8: Indeed, for every \(1/\mathrm{poly}(n)\)-dense property \(\Pi\in\mathsf{P}\), they obtained a polynomial-time algorithm \(A\) such that for infinitely many \(n\in\mathbb{N}\), there exists \(y_{n}\in\Pi_{n}\) such that \(A(1^{n})\) outputs \(y_{n}\) with probability at least \(2/3\). By [1] and the prime number theorem, the set of \(n\)-bit primes is such a property.
In this paradigm, for a (starting) input length \(n_{0}\) and some \(t=O(\log n_{0})\), we will consider an increasing sequence of input lengths \(n_{0},n_{1},\ldots,n_{t}\) (jumping ahead, we will set \(n_{i+1}=n_{i}^{\beta}\) for a large constant \(\beta\)), and show that our construction algorithm succeeds on at least one of the input lengths. By varying \(n_{0}\), we can construct infinitely many such sequences of input lengths that are pairwise disjoint, and therefore our algorithm succeeds on infinitely many input lengths.
In more detail, fixing a sequence of input lengths \(n_{0},n_{1},\ldots,n_{t}\) and letting \(\Pi\) be an \(\varepsilon\)-dense property, for each \(i\in\{0,1,\ldots,t\}\), we specify a (deterministic) algorithm \(\mathsf{ALG}_{i}\) that takes \(1^{n_{i}}\) as input and aims to construct an explicit element from \(\Pi_{n_{i}}\). We let \(\mathsf{ALG}_{0}\) be the simple brute-force algorithm that enumerates all length-\(n_{0}\) strings and finds the lexicographically first string in \(\Pi_{n_{0}}\); it is easy to see that \(\mathsf{ALG}_{0}\) runs in \(T_{0}:=2^{O(n_{0})}\) time.
**The win-or-improve mechanism.** The core of [12] is a novel _win-or-improve mechanism_, which is described by a (randomized) algorithm \(R\). Roughly speaking, for input lengths \(n_{i}\) and \(n_{i+1}\), \(R(1^{n_{i}})\) attempts to _simulate \(\mathsf{ALG}_{i}\) faster_ by using the oracle \(\Pi_{n_{i+1}}\) (hence it runs in \(\mathrm{poly}(n_{i+1})\) time). The crucial property is the following win-win argument:
* Either \(R(1^{n_{i}})\) outputs \(\mathsf{ALG}_{i}(1^{n_{i}})\) with probability at least \(2/3\) over its internal randomness,
* or, from the failure of \(R(1^{n_{i}})\), we can construct an algorithm \(\mathsf{ALG}_{i+1}\) that outputs an explicit element from \(\Pi_{n_{i+1}}\) and runs in \(T_{i+1}=\mathrm{poly}(T_{i})\) time.
We call the above (Win-or-Improve), since either we have a pseudodeterministic algorithm \(R(1^{n_{i}})\) that constructs an explicit element from \(\Pi_{n_{i}}\) in \(\mathrm{poly}(n_{i+1})\leq\mathrm{poly}(n_{i})\) time (since it simulates \(\mathsf{ALG}_{i}\)), or we have an _improved_ algorithm \(\mathsf{ALG}_{i+1}\) at the input length \(n_{i+1}\) (for example, on input length \(n_{1}\), the running time of \(\mathsf{ALG}_{1}\) is \(2^{O\left(n_{1}^{1/\beta}\right)}\ll 2^{O(n_{1})}\)). The (Win-or-Improve) part in [12] is implemented via the Chen-Tell targeted hitting set generator [13] (we omit the details here). Jumping ahead, in this paper, we will implement a similar mechanism using Korten's \(\mathsf{P}^{\mathsf{NP}}\) reduction from the range avoidance problem to constructing hard truth tables [14].
**Getting polynomial time.** Now we briefly explain why (Win-or-Improve) implies a _polynomial-time_ construction algorithm. Let \(\alpha\) be an absolute constant such that we always have \(T_{i+1}\leq T_{i}^{\alpha}\); we now set \(\beta:=2\alpha\). Recall that \(n_{i}=n_{i-1}^{\beta}\) for every \(i\).
Although \(T_{0}\) is much larger than \(n_{0}\), the sequence \(\{T_{i}\}\) grows slower than \(\{n_{i}\}\).
Indeed, a simple calculation shows that when \(t=O(\log n_{0})\), we will have \(T_{t}\leq\operatorname{poly}(n_{t})\); see [13, Section 1.3.1].
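To see the numbers concretely, here is a back-of-the-envelope version of that calculation (ours; it assumes, for simplicity, that \(T_{0}=2^{O(n_{0})}\) and \(T_{i+1}\leq T_{i}^{\alpha}\)):
\[T_{i}\leq T_{0}^{\alpha^{i}}=2^{O(n_{0}\cdot\alpha^{i})},\qquad\log n_{i}=\beta^{i}\log n_{0}=2^{i}\alpha^{i}\log n_{0},\]
so \(T_{i}\leq\operatorname{poly}(n_{i})\) as soon as \(n_{0}\cdot\alpha^{i}=O(2^{i}\alpha^{i}\log n_{0})\), i.e., once \(2^{i}\geq n_{0}/\log n_{0}\); taking \(t=O(\log n_{0})\) therefore suffices.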
For each \(0\leq i<t\), if \(R(1^{n_{i}})\) successfully simulates \(\mathsf{ALG}_{i}\), then we obtain an algorithm for input length \(n_{i}\) running in \(\operatorname{poly}(n_{i+1})\leq\operatorname{poly}(n_{i})\) time. Otherwise, we have an algorithm \(\mathsf{ALG}_{i+1}\) running in \(T_{i+1}\) time on input length \(n_{i+1}\). Eventually, we will hit \(t\) such that \(T_{t}\leq\operatorname{poly}(n_{t})\), in which case \(\mathsf{ALG}_{t}\) itself gives a polynomial-time construction on input length \(n_{t}\). Therefore, we obtain a polynomial-time algorithm on at least one of the input lengths \(n_{0},n_{1},\ldots,n_{t}\).
#### 1.2.3 Algorithms for range-avoidance via Korten's reduction
Now we are ready to describe our new algorithms for Avoid. Roughly speaking, our new algorithm makes use of the iterative win-win argument introduced above, together with an easy-witness style argument [14] and Korten's reduction [15].9 In the following, we introduce the latter two ingredients and show how to chain them together via the iterative win-win argument.
Footnote 9: Korten’s result was inspired by [13], which proved that the dual weak pigeonhole principle is equivalent to the statement asserting the existence of Boolean functions with exponential circuit complexity in a certain fragment of Bounded Arithmetic.
**An easy-witness style argument.** Let \(\mathsf{BF}\) be the \(2^{O(n)}\)-time brute-force algorithm outputting the lexicographically first non-output of \(C_{n}\). Our first idea is to consider its _computational history_, a unique \(2^{O(n)}\)-length string \(h_{\mathsf{BF}}\) (that can be computed in \(2^{O(n)}\) time), and _branch on whether \(h_{\mathsf{BF}}\) has a small circuit or not_. Suppose \(h_{\mathsf{BF}}\) admits a, say, \(n^{\alpha}\)-size circuit for some large \(\alpha\); then we apply an _easy-witness-style_ argument [14] to simulate \(\mathsf{BF}\) by a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm running in \(\operatorname{poly}(n^{\alpha})=\operatorname{poly}(n)\) time (see Section 1.3.2). Hence, we obtained the desired algorithm when \(h_{\mathsf{BF}}\) is easy.
However, it is less clear how to deal with the other case (when \(h_{\mathsf{BF}}\) is hard) directly. The crucial observation is that we have gained the following ability: we can generate a string \(h_{\mathsf{BF}}\in\{0,1\}^{2^{O(n)}}\) that has circuit complexity at least \(n^{\alpha}\), in only \(2^{O(n)}\) time.
**Korten's reduction.** We will apply Korten's recent work [14] to make use of the "gain" above. So it is worth taking a detour to review the main result of [14]. Roughly speaking, [14] gives **an algorithm that uses a hard truth table \(f\) to solve a derandomization task: finding a non-output of the given circuit (that has more output bits than input bits).**10
Footnote 10: This is very similar to the classical hardness-vs-randomness connection [13, 14], which can be understood as an algorithm that uses a hard truth table \(f\) (i.e., a truth table without small circuits) to solve another derandomization task: estimating the acceptance probability of the given circuit. This explains why one may want to use Korten’s algorithm to replace the Chen–Tell targeted generator construction [13] from [13], as they are both hardness-vs-randomness connections.
Formally, [14] gives a \(\mathsf{P}^{\mathsf{NP}}\)-computable algorithm \(\mathsf{Korten}(C,f)\) that takes as inputs a circuit \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) and a string \(f\in\{0,1\}^{T}\) (think of \(n\ll T\)), and outputs a string \(y\in\{0,1\}^{2n}\). The guarantee is that if the circuit complexity of \(f\) is sufficiently larger than the size of \(C\), then the output \(y\) is not in the range of \(C\).
This fits perfectly with our "gain" above: for \(\beta\ll\alpha\) and \(m=n^{\beta}\), \(\mathsf{Korten}(C_{m},h_{\mathsf{BF}})\) solves Avoid for \(C_{m}\) since the circuit complexity of \(h_{\mathsf{BF}}\), \(n^{\alpha}\), is sufficiently larger than the size of \(C_{m}\). Moreover, \(\mathsf{Korten}(C_{m},h_{\mathsf{BF}})\) runs in only \(2^{O(n)}\) time, which is much less than the brute-force running time \(2^{O(m)}\). Therefore, we obtain an improved algorithm for Avoid on input length \(m\).
**The iterative win-win argument.** What we described above is essentially the first stage of a _win-or-improve mechanism_ similar to that from Section 1.2.2. Therefore, we only need to iterate the argument above to obtain a polynomial-time algorithm.
For this purpose, we need to consider the computational history of not only \(\mathsf{BF}\), but also algorithms of the form \(\mathsf{Korten}(C,f)\).11 For any circuit \(C\) and "hard" truth table \(f\), there is a _unique_ "computational history" \(h\) of \(\mathsf{Korten}(C,f)\), and the length of \(h\) is upper bounded by \(\operatorname{poly}(|f|)\). We are able to prove the following statement akin to the _easy witness lemma_ [13]: if \(h\) admits a size-\(s\) circuit (think of \(s\ll T\)), then \(\mathsf{Korten}(C,f)\) can be simulated by a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm in time \(\operatorname{poly}(s)\); see Section 1.3.2 for details on this argument.12
Footnote 11: Actually, we need to consider all algorithms \(\mathsf{ALG}_{i}\) defined below and prove the properties of computational history for these algorithms. It turns out that all of \(\mathsf{ALG}_{i}\) are of the form \(\mathsf{Korten}(C,f)\) (including \(\mathsf{ALG}_{0}\)), so in what follows we only consider the computational history of \(\mathsf{Korten}(C,f)\).
Footnote 12: With an "encoded" version of history and more effort, we are able to simulate \(\mathsf{Korten}(C,f)\) by a single-valued \(\mathsf{F}\mathsf{S}_{2}\mathsf{P}\) algorithm in time \(\operatorname{poly}(s)\), and that is how our \(\mathsf{S}_{2}\mathsf{E}\) lower bound is proved; see Section 1.3.3 for details.
Now, following the iterative win-win paradigm of [14], for a (starting) input length \(n_{0}\) and some \(t=O(\log n_{0})\), we consider an increasing sequence of input lengths \(n_{0},n_{1},\ldots,n_{t}\), and show that our algorithm \(A\) succeeds on at least one of the input lengths (i.e., \(A(1^{n_{i}})\in\{0,1\}^{2n_{i}}\setminus\operatorname{Range}(C_{n_{i}})\) for some \(i\in\{0,1,\ldots,t\}\)). For each \(i\in\{0,1,\ldots,t\}\), we specify an algorithm \(\mathsf{ALG}_{i}\) of the form \(\mathsf{Korten}(C_{n_{i}},-)\) that aims to solve Avoid for \(C_{n_{i}}\); in other words, we specify a string \(f_{i}\in\{0,1\}^{T_{i}}\) for some \(T_{i}\) and let \(\mathsf{ALG}_{i}:=\mathsf{Korten}(C_{n_{i}},f_{i})\).
The algorithm \(\mathsf{ALG}_{0}\) is simply the brute force algorithm \(\mathsf{BF}\) at input length \(n_{0}\). (A convenient observation is that we can specify an exponentially long string \(f_{0}\in\{0,1\}^{2^{O(n_{0})}}\) so that \(\mathsf{Korten}(C_{n_{0}},f_{0})\) is equivalent to \(\mathsf{BF}=\mathsf{ALG}_{0}\); see Fact 3.4.) For each \(0\leq i<t\), to specify \(\mathsf{ALG}_{i+1}\), let \(f_{i+1}\) denote the history of the algorithm \(\mathsf{ALG}_{i}\), and consider the following win-or-improve mechanism.
* If \(f_{i+1}\) admits an \(n_{i}^{\alpha}\)-size circuit (for some large constant \(\alpha\)), by our easy-witness argument, we can simulate \(\mathsf{ALG}_{i}\) by a \(\operatorname{poly}(n_{i})\)-time single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm.
* Otherwise, \(f_{i+1}\) has circuit complexity at least \(n_{i}^{\alpha}\), and we plug it into Korten's reduction to solve Avoid for \(C_{n_{i+1}}\). That is, we take \(\mathsf{ALG}_{i+1}\coloneqq\mathsf{Korten}(C_{n_{i+1}},f_{i+1})\) as our new algorithm on input length \(n_{i+1}\).
Let \(T_{i}=|f_{i}|\), then \(T_{i+1}\leq\operatorname{poly}(T_{i})\). By setting \(n_{i+1}=n_{i}^{\beta}\) for a sufficiently large \(\beta\), a similar analysis as [14] shows that for some \(t=O(\log n_{0})\) we would have \(T_{t}\leq\operatorname{poly}(n_{t})\), meaning that \(\mathsf{ALG}_{t}\) would be a \(\operatorname{poly}(n_{t})\)-time \(\mathsf{FP}^{\mathsf{NP}}\) algorithm (thus also a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm) solving Avoid for \(C_{n_{t}}\). Putting everything together, we obtain a polynomial-time single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm that solves Avoid for at least one of the \(C_{n_{i}}\).
**The hardness condenser perspective.** Below we present another perspective on the construction above which may help the reader understand it better. In the following, we fix \(C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) to be the truth table generator \(\mathsf{TT}_{n,2n}\) that maps an \(n\)-bit description of a \(\log(2n)\)-input circuit into its length-\(2n\) truth table. Hence, instead of solving Avoid in general, our goal here is simply _constructing hard truth tables_ (or equivalently, proving circuit lower bounds).
We note that \(\mathsf{Korten}(\mathsf{TT}_{n,2n},f)\) can then be interpreted as a _hardness condenser_[1]:13 Given a truth table \(f\in\{0,1\}^{T}\) whose circuit complexity is sufficiently larger than \(n\), it outputs a length-\(2n\) truth table that is maximally hard (i.e., without \(n/\log n\)-size circuits). The win-or-improve mechanism can be interpreted as an iterative application of this hardness condenser.
At stage \(i\), we consider the algorithm \(\mathsf{ALG}_{i}\coloneqq\mathsf{Korten}(\mathsf{TT}_{n_{i},2n_{i}},f_{i})\), which runs in \(T_{i}\approx|f_{i}|\) time and creates (roughly) \(n_{i}\) bits of hardness. (That is, the circuit complexity of the output of \(\mathsf{ALG}_{i}\) is roughly \(n_{i}\).) In the (**Win**) case above, \(\mathsf{ALG}_{i}\) admits an \(n_{i}^{\alpha}\)-size history \(f_{i+1}\) (with length approximately \(|f_{i}|\)) and can therefore be simulated in \(\mathsf{F}\Sigma_{2}\mathsf{P}\). The magic is that in the (**Improve**) case, we actually have access to _much more hardness than \(n_{i}\)_: the history string \(f_{i+1}\) has \(n_{i}^{\alpha}\gg n_{i}\) bits of hardness. So we can _distill_ this hardness by applying the condenser to \(f_{i+1}\) to obtain a maximally hard truth table of length \(2n_{i+1}=2n_{i}^{\beta}\), establish the next algorithm \(\mathsf{ALG}_{i+1}\coloneqq\mathsf{Korten}(\mathsf{TT}_{n_{i+1},2n_{i+1}},f_{i+1})\), and keep iterating.
Observe that the string \(f_{i+1}\) above has \(n_{i}^{\alpha}>n_{i}^{\beta}=n_{i+1}\) bits of hardness. Since \(|f_{i+1}|\approx|f_{i}|\) and \(n_{i+1}=n_{i}^{\beta}\), the process above creates _harder and harder_ strings, until \(|f_{i+1}|\leq n_{i+1}\leq n_{i}^{\alpha}\), so the (**Win**) case must happen at some point.
### 1.3 Proof Overview
In this section, we elaborate on the computational history of \(\mathsf{Korten}\) and how the easy-witness-style argument gives us \(\mathsf{F}\Sigma_{2}\mathsf{P}\) and \(\mathsf{FS}_{2}\mathsf{P}\) algorithms.
#### 1.3.1 Korten's reduction
We first review the key concepts and results from [10] that are needed for us. Given a circuit \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) and a parameter \(T\geq 2n\), Korten builds another circuit \(\mathsf{GGM}_{T}[C]\) stretching \(n\) bits to \(T\) bits as follows:14
Footnote 14: We use the name \(\mathsf{GGM}\) because the construction is similar to the pseudorandom function generator of Goldreich, Goldwasser, and Micali [11].
* On input \(x\in\{0,1\}^{n}\), we set \(v_{0,0}=x\). For simplicity, we assume that \(T/n=2^{k}\) for some \(k\in\mathbb{N}\). We build a full binary tree with \(k+1\) layers; see Figure 1 for an example with \(k=3\).
* For every \(i\in\{0,1,\ldots,k-1\}\) and \(j\in\{0,1,\ldots,2^{i}-1\}\), we set \(v_{i+1,2j}\) and \(v_{i+1,2j+1}\) to be the first \(n\) bits and the last \(n\) bits of \(C(v_{i,j})\), respectively.
* The output of \(\mathsf{GGM}_{T}[C](x)\) is defined to be the concatenation of \(v_{k,0},v_{k,1},\ldots,v_{k,2^{k}-1}\).
The following two properties of \(\mathsf{GGM}_{T}[C]\) are established in [10], which will be useful for us:
Figure 1: An illustration of the GGM Tree, in which, for instance, it holds that \((v_{3,4},v_{3,5})=C(v_{2,2})\).
1. Given \(i\in[T],C\) and \(x\in\{0,1\}^{n}\), by traversing the tree from the root towards the leaf with the \(i\)-th bit, one can compute the \(i\)-th bit of \(\mathsf{GGM}_{T}[C](x)\) in \(\operatorname{poly}(\mathsf{SIZE}(C),\log T)\) time. Consequently, for every \(x\), \(\mathsf{GGM}_{T}[C](x)\) has circuit complexity at most \(\operatorname{poly}(\mathsf{SIZE}(C),\log T)\).
2. There is a \(\mathsf{P}^{\mathsf{NP}}\) algorithm \(\mathsf{Korten}(C,f)\) that takes an input \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\) and outputs a string \(u\in\{0,1\}^{2n}\setminus\operatorname{Range}(C)\). Note that this is a reduction from solving Avoid for \(C\) to solving Avoid for \(\mathsf{GGM}_{T}[C]\).
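The first property can be made concrete with a short sketch. The following Python snippet (our own illustration; labels are encoded as integers whose high-order bits are the "first \(n\) bits") computes one output bit of \(\mathsf{GGM}_{T}[C]\) by walking a single root-to-leaf path:

```python
def ggm_bit(C, x, n, k, i):
    # Compute the i-th bit (0-indexed) of GGM_T[C](x), where T = n * 2**k.
    # C maps an n-bit integer to a 2n-bit integer; only the k nodes on the
    # root-to-leaf path are evaluated, giving poly(SIZE(C), log T) time.
    leaf, offset = divmod(i, n)            # which leaf block, which bit in it
    v = x                                  # v_{0,0}
    for level in range(k - 1, -1, -1):     # follow leaf's index bits, MSB first
        out = C(v)
        left, right = out >> n, out & ((1 << n) - 1)
        v = right if (leaf >> level) & 1 else left
    return (v >> (n - 1 - offset)) & 1     # bit `offset` of the leaf label
```

Concatenating these bits over all \(i<T\) reproduces \(\mathsf{GGM}_{T}[C](x)\), and hard-wiring \(x\) into this procedure yields the \(\operatorname{poly}(\mathsf{SIZE}(C),\log T)\)-size circuit mentioned in the first property.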
In particular, letting \(f\) be a truth table whose circuit complexity is sufficiently larger than \(\mathsf{SIZE}(C)\), by the first property above, it is not in \(\operatorname{Range}(\mathsf{GGM}_{T}[C])\), and therefore \(\mathsf{Korten}(C,f)\) solves Avoid for \(C\). This confirms our description of Korten in Section 1.1.2.
#### 1.3.2 Computational history of \(\mathsf{Korten}\) and an easy-witness argument for \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithms
The algorithm \(\mathsf{Korten}(C,f)\) works as follows: we first view \(f\) as the labels of the last layer of the binary tree, and try to reconstruct the whole binary tree, layer by layer (from the bottom layer to the top layer and, within each layer, from the rightmost node to the leftmost one), by filling in the labels of the intermediate nodes. To fill \(v_{i,j}\), we use an \(\mathsf{NP}\) oracle to find the lexicographically first string \(u\in\{0,1\}^{n}\) such that \(C(u)=v_{i+1,2j}\circ v_{i+1,2j+1}\), and set \(v_{i,j}=u\). If no such \(u\) exists, the algorithm stops and reports \(v_{i+1,2j}\circ v_{i+1,2j+1}\) as the solution to Avoid for \(C\). Observe that this reconstruction procedure must stop somewhere, since if it successfully reproduces all the labels in the binary tree, we would have \(f=\mathsf{GGM}_{T}[C](v_{0,0})\in\operatorname{Range}(\mathsf{GGM}_{T}[C])\), contradicting the assumption. See Section 3.3 for details.
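A compact sketch of this reconstruction (again ours, with the same integer encoding as before and a brute-force stand-in for the \(\mathsf{NP}\) oracle) may be helpful:

```python
def korten(C, f, n, k):
    # Rebuild the GGM tree bottom-up from the leaf layer f (a T-bit integer,
    # T = n * 2**k); the first parent pair with no preimage under C is a
    # 2n-bit string outside Range(C).
    mask = (1 << n) - 1

    def first_preimage(target):
        # Stand-in for the NP oracle: lexicographically first u with C(u) == target.
        return next((u for u in range(2 ** n) if C(u) == target), None)

    # Leaf labels v_{k,0}, ..., v_{k,2^k-1}, leftmost block of f first.
    layer = [(f >> (n * (2 ** k - 1 - j))) & mask for j in range(2 ** k)]
    for i in range(k - 1, -1, -1):           # layer k-1 down to layer 0
        parents = [None] * (2 ** i)
        for j in reversed(range(2 ** i)):    # rightmost node first
            target = (layer[2 * j] << n) | layer[2 * j + 1]
            u = first_preimage(target)
            if u is None:                    # the algorithm stops at (i*, j*) = (i, j)
                return target                # v_{i+1,2j} o v_{i+1,2j+1} avoids Range(C)
            parents[j] = u                   # v_{i,j} = lex-first preimage
        layer = parents
    raise AssertionError("f = GGM_T[C](v_{0,0}) lies in Range(GGM_T[C])")
```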
**The computational history of Korten.** The algorithm described above induces a natural description of the computational history of Korten, denoted as \(\mathsf{History}(C,f)\), as follows: the index \((i_{\star},j_{\star})\) at which the algorithm stops (i.e., the algorithm fails to fill in \(v_{i_{\star},j_{\star}}\)), concatenated with the labels of all the nodes generated by \(\mathsf{Korten}(C,f)\) (for the intermediate nodes with no label assigned, we set their labels to a special symbol \(\bot\)); see Figure 2 for an illustration. This history has length at most \(5T\), and for convenience, we pad additional zeros at the end of it so that its length is exactly \(5T\).
**A local characterization of \(\mathsf{History}(C,f)\).** The crucial observation we make on \(\mathsf{History}(C,f)\) is that it admits a local characterization in the following sense: there is a family of local constraints \(\{\psi_{x}\}_{x\in\{0,1\}^{\mathrm{poly}(n)}}\), where each \(\psi_{x}\colon\{0,1\}^{5T}\times\{0,1\}^{T}\to\{0,1\}\) reads only \(\mathrm{poly}(n)\) many bits of its input (we think of it as a local constraint since usually \(n\ll T\)), such that for fixed \(f\), \(\mathsf{History}(C,f)\circ f\) is the unique string making all the \(\psi_{x}\) output \(1\).
The constraints are as follows: (1) for every leaf node \(v_{k,i}\), its content is consistent with the corresponding block in \(f\); (2) all labels at or before node \((i_{\star},j_{\star})\) are \(\bot\);15 (3) for every \(z\in\{0,1\}^{n}\), \(C(z)\neq v_{i_{\star}+1,2j_{\star}}\circ v_{i_{\star}+1,2j_{\star}+1}\) (meaning the algorithm fails at \(v_{i_{\star},j_{\star}}\)); (4) for every \((i,j)\) after \((i_{\star},j_{\star})\), \(C(v_{i,j})=v_{i+1,2j}\circ v_{i+1,2j+1}\) (\(v_{i,j}\) is the correct label); (5) for every \((i,j)\) after \((i_{\star},j_{\star})\) and for every \(v^{\prime}<v_{i,j}\), \(C(v^{\prime})\neq v_{i+1,2j}\circ v_{i+1,2j+1}\) (\(v_{i,j}\) is the lexicographically first correct label). It is clear that each of these constraints only reads \(\mathrm{poly}(n)\) many bits from the input, and a careful examination shows that they precisely **define** the string \(\mathsf{History}(C,f)\).
Footnote 15: We say that \((i,j)\) is before (after) \((i_{\star},j_{\star})\) if the pair \((i,j)\) is lexicographically smaller (greater) than \((i_{\star},j_{\star})\).
A more intuitive way to look at these local constraints is to treat them as a \(\mathrm{poly}(n)\)-time oracle algorithm \(V_{\mathsf{History}}\) that takes a string \(x\in\{0,1\}^{\mathrm{poly}(n)}\) as input and two strings \(h\in\{0,1\}^{5T}\) and \(f\in\{0,1\}^{T}\) as oracles, and we simply let \(V_{\mathsf{History}}^{h,f}(x)=\psi_{x}(h\circ f)\). Since the constraints above are all very simple and only read \(\mathrm{poly}(n)\) bits of \(h\circ f\), \(V_{\mathsf{History}}\) runs in \(\mathrm{poly}(n)\) time. In some sense, \(V_{\mathsf{History}}\) is a local \(\Pi_{1}\) verifier: it is local in the sense that it only queries \(\mathrm{poly}(n)\) bits from its oracles, and it is \(\Pi_{1}\) since it needs a universal quantifier over \(x\in\{0,1\}^{\mathrm{poly}(n)}\) to perform all the checks.
**\(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithms.** Before we proceed, we give a formal definition of a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm \(A\). Here \(A\) is implemented by an algorithm \(V_{A}\) taking an input \(x\) and two \(\mathrm{poly}(|x|)\)-length witnesses \(\pi_{1}\) and \(\pi_{2}\). We say \(A(x)\) outputs a string \(z\in\{0,1\}^{\ell}\) (we assume \(\ell=\ell(x)\) can be computed in polynomial time from \(x\)) if \(z\) is the _unique_ length-\(\ell\) string such that the following holds:
* there exists \(\pi_{1}\) such that for every \(\pi_{2}\), \(V_{A}(x,\pi_{1},\pi_{2},z)=1\).16
Footnote 16: Note that our definition here is different from the formal definition we used in Definition 2.2. But from this definition, it is easier to see why \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithms for constructing hard truth tables imply circuit lower bounds for \(\Sigma_{2}\mathsf{E}\).
We can view \(V_{A}\) as a verifier that checks whether \(z\) is the desired output using another universal quantifier: given a proof \(\pi_{1}\) and a string \(z\in\{0,1\}^{\ell}\), \(A\) accepts \(z\) if and only if _for every_ \(\pi_{2}\), \(V_{A}(x,\pi_{1},\pi_{2},z)=1\). That is, \(A\) can perform exponentially many checks on \(\pi_{1}\) and \(z\), each taking \(\mathrm{poly}(|x|)\) time.
**The easy-witness argument.** Now we are ready to elaborate on the easy-witness argument mentioned in Section 1.1.2. Recall that at stage \(i\), we have \(\mathsf{ALG}_{i}=\mathsf{Korten}(C_{n_{i}},f_{i})\) and \(f_{i+1}=\mathsf{History}(C_{n_{i}},f_{i})\) (the history of \(\mathsf{ALG}_{i}\)). Assuming that \(f_{i+1}\) admits a \(\mathrm{poly}(n_{i})\)-size circuit, we want to show that \(\mathsf{Korten}(C_{n_{i}},f_{i})\) can be simulated by a \(\mathrm{poly}(n_{i})\)-time single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm.
Observe that for every \(t\in[i+1]\), \(f_{t-1}\) is simply a substring of \(f_{t}\) since \(f_{t}=\mathsf{History}(C_{n_{t-1}},f_{t-1})\). Therefore, \(f_{i+1}\) admitting a \(\mathrm{poly}(n_{i})\)-size circuit implies that all \(f_{t}\) admit \(\mathrm{poly}(n_{i})\)-size circuits for \(t\in[i]\). We can then implement \(A\) as follows: the proof \(\pi_{1}\) is a \(\mathrm{poly}(n_{i})\)-size circuit \(C_{i+1}\) supposed to compute \(f_{i+1}\), from which one can obtain in polynomial time a sequence of circuits \(C_{1},\ldots,C_{i}\) that are supposed to compute \(f_{1},\ldots,f_{i}\), respectively. (Also, from Fact 3.4, one can easily construct a \(\mathrm{poly}(n_{0})\)-size circuit \(C_{0}\) computing \(f_{0}\).) Next, for every \(t\in\{0,1,\ldots,i\}\), \(A\) checks whether \(\mathsf{tt}(C_{t+1})\circ\mathsf{tt}(C_{t})\) satisfies all the local constraints \(\psi_{x}\)'s from the characterization of \(\mathsf{History}(C_{n_{t}},f_{t})\). In other words, \(A\) checks whether \(V_{\mathsf{History}}^{C_{t+1},C_{t}}(x)=1\) for all \(x\in\{0,1\}^{\mathrm{poly}(n_{t})}\).
The crucial observation is that since all the \(C_{t}\) have size \(\operatorname{poly}(n_{i})\), each check above can be implemented in \(\operatorname{poly}(n_{i})\) time, as it only reads at most \(\operatorname{poly}(n_{i})\) bits from its input, even though \(\mathtt{tt}(C_{t+1})\circ\mathtt{tt}(C_{t})\) itself can be much longer than \(\operatorname{poly}(n_{i})\). Assuming that all the checks of \(A\) above pass, by induction we know that \(f_{t+1}=\mathsf{History}(C_{n_{t}},f_{t})\) for every \(t\in\{0,1,\ldots,i\}\). Finally, \(A\) checks whether \(z\) corresponds to the answer described in \(\mathtt{tt}(C_{i+1})=f_{i+1}\).
#### 1.3.3 Selectors and an easy-witness argument for \(\mathsf{FS}_{2}\mathsf{P}\) algorithms
Finally, we discuss how to implement the easy-witness argument above with a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm. It is known that any single-valued \(\mathsf{FS}_{2}\mathsf{BPP}\) algorithm can be converted into an equivalent single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm outputting the same string [13, 14] (see also the proof of Theorem 5.7 for a self-contained argument). Therefore, in the following we aim to give a single-valued \(\mathsf{FS}_{2}\mathsf{BPP}\) algorithm for solving range avoidance, which is easier to achieve.
**\(\mathsf{FS}_{2}\mathsf{BPP}\) algorithms and randomized selectors.** Before we proceed, we give a formal definition of a single-valued \(\mathsf{FS}_{2}\mathsf{BPP}\) algorithm \(A\). We implement \(A\) by a randomized algorithm \(V_{A}\) that takes an input \(x\) and two \(\operatorname{poly}(|x|)\)-length witnesses \(\pi_{1}\) and \(\pi_{2}\).17 We say that \(A(x)\) outputs a string \(z\in\{0,1\}^{\ell}\) (we assume \(\ell=\ell(x)\) can be computed in polynomial time from \(x\)) if the following holds:
Footnote 17: \(\mathsf{FS}_{2}\mathsf{P}\) algorithms are the special case of \(\mathsf{FS}_{2}\mathsf{BPP}\) algorithms where the algorithm \(V_{A}\) is _deterministic_.
* there exists a string \(h\) such that for every \(\pi\), both \(V_{A}(x,h,\pi)\) and \(V_{A}(x,\pi,h)\) output \(z\) with probability at least \(2/3\). (Note that such \(z\) must be unique if it exists.)
Actually, our algorithm \(A\) will be implemented as a randomized _selector_: given two potential proofs \(\pi_{1}\) and \(\pi_{2}\), it first selects the correct one and then outputs the string \(z\) induced by the correct proof.18
Footnote 18: If both proofs are correct or neither proof is correct, it can select an arbitrary one. The condition only applies when exactly one of the proofs is correct.
**Recap.** Revisiting the algorithm in Section 1.2.3, our goal now is to give an \(\mathsf{FS}_{2}\mathsf{BPP}\) simulation of \(\mathsf{Korten}(C_{n_{i}},f_{i})\), assuming that \(\mathsf{History}(C_{n_{i}},f_{i})\) admits a small circuit. Similar to the local \(\Pi_{1}\) verifier used in the case of \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithms, we now consider a local randomized selector \(V_{\mathsf{select}}\) which takes oracles \(\pi_{1},\pi_{2}\in\{0,1\}^{5T}\) and \(f\in\{0,1\}^{T}\), such that if exactly one of \(\pi_{1}\) and \(\pi_{2}\) is \(\mathsf{History}(C,f)\), \(V_{\mathsf{select}}\) outputs its index with high probability.
Assuming that \(f_{i+1}=\mathsf{History}(C_{n_{i}},f_{i})\) admits a small circuit, one can similarly turn \(V_{\mathsf{select}}\) into a single-valued \(\mathsf{FS_{2}BPP}\) algorithm \(A\) computing \(\mathsf{Korten}(C_{n_{i}},f_{i})\): treat the two proofs \(\pi_{1}\) and \(\pi_{2}\) as two small circuits \(C\) and \(D\), both supposed to compute \(f_{i+1}\); from \(C\) and \(D\) we can obtain a sequence of circuits \(\{C_{t}\}\) and \(\{D_{t}\}\) supposed to compute \(f_{t}\) for \(t\in[i]\). Then we can use the selector \(V_{\mathsf{select}}\) to decide, for each \(t\in[i+1]\), which of \(C_{t}\) and \(D_{t}\) is the correct circuit for \(f_{t}\). Finally, we output the answer encoded in the selected circuit for \(f_{i+1}\); see the proof of Theorem 5.7 for details.19
Footnote 19: However, for the reasons to be explained below, we will actually work with the encoded history instead of the history, which entails a lot of technical challenges in the actual proof.
**Observation: it suffices to find the first differing node label.** Ignore the \((i_{\star},j_{\star})\) part of the history for now. Let \(\{v^{1}_{i,j}\}\) and \(\{v^{2}_{i,j}\}\) be the node labels encoded in \(\pi_{1}\) and \(\pi_{2}\), respectively. We also assume that exactly one of them corresponds to the correct node labels in \(\mathsf{History}(C,f)\). The crucial observation here is that, since the correct node labels are generated by a deterministic procedure _node by node_ (from bottom to top and from rightmost to leftmost), it is possible to tell which of \(\{v^{1}_{i,j}\}\) and \(\{v^{2}_{i,j}\}\) is correct given the largest \((i^{\prime},j^{\prime})\) such that \(v^{1}_{i^{\prime},j^{\prime}}\neq v^{2}_{i^{\prime},j^{\prime}}\). (Note that
since all \((i,j)\) are processed by \(\mathsf{Korten}(C,f)\) in reverse lexicographic order, this \((i^{\prime},j^{\prime})\) corresponds to the first node label that the wrong process differs from the correct process, so we call this the first differing point.)
In more detail, assuming we know this \((i^{\prime},j^{\prime})\), we proceed by discussing several cases. First of all, if \((i^{\prime},j^{\prime})\) corresponds to a leaf, then one can query \(f\) to figure out which of \(v^{1}_{i^{\prime},j^{\prime}}\) and \(v^{2}_{i^{\prime},j^{\prime}}\) is consistent with the corresponding block in \(f\). Now we can assume \((i^{\prime},j^{\prime})\) corresponds to an intermediate node. Since \((i^{\prime},j^{\prime})\) is the first differing point, we know that \(v^{1}_{i^{\prime}+1,2j^{\prime}}\circ v^{1}_{i^{\prime}+1,2j^{\prime}+1}=v^{2 }_{i^{\prime}+1,2j^{\prime}}\circ v^{2}_{i^{\prime}+1,2j^{\prime}+1}\) (we let this string be \(\alpha\) for convenience). By the definition of \(\mathsf{History}(C,f)\), it follows that the correct \(v_{i^{\prime},j^{\prime}}\) should be uniquely determined by \(\alpha\), which means the selector only needs to read \(\alpha\), \(v^{1}_{i^{\prime},j^{\prime}}\), and \(v^{2}_{i^{\prime},j^{\prime}}\), and can then be implemented by a somewhat tedious case analysis (so it is local). We refer readers to the proof of Lemma 5.5 for the details and only highlight the most illuminating case here: if both \(v^{1}_{i^{\prime},j^{\prime}}\) and \(v^{2}_{i^{\prime},j^{\prime}}\) are good (we say a string \(\gamma\) is good if \(\gamma\neq\bot\) and \(C(\gamma)=\alpha\)), we select the lexicographically smaller one. To handle the \((i_{\star},j_{\star})\) part, one needs some additional case analysis. We omit the details here and refer the reader to the proof of Lemma 5.5.
The takeaway here is that if we can find the first differing label \((i^{\prime},j^{\prime})\), then we can construct the selector \(V_{\mathsf{select}}\) and hence the desired single-valued \(\mathsf{FS_{2}BPP}\) algorithm.
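To make the highlighted case concrete, here is a minimal sketch of the selector's decision at the first differing intermediate node. It assumes \(C\) is given as a callable on bit strings and represents \(\bot\) by `None`; the function name `select_label` and this representation are ours, and the \((i_{\star},j_{\star})\) bookkeeping that the paper defers to Lemma 5.5 is omitted.

```python
def select_label(C, alpha, v1, v2):
    """Decide which of the two candidate labels v1, v2 is the correct one,
    given alpha = the (agreed-upon) concatenation of the two children.
    Returns 1 or 2, the index of the selected proof."""
    def good(v):
        # "good" as in the text: v is not bot and C maps it to alpha
        return v is not None and C(v) == alpha
    if good(v1) and good(v2):
        return 1 if v1 < v2 else 2   # both good: pick the lexicographically smaller
    if good(v1):
        return 1                     # exactly one good: pick the good one
    if good(v2):
        return 2
    return 1                         # neither good: the choice is arbitrary here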
**Encoded history.** However, the above assumes the knowledge of \((i^{\prime},j^{\prime})\). In general, if one is only given oracle access to \(\{v^{1}_{i,j}\}\) and \(\{v^{2}_{i,j}\}\), there is no \(\operatorname{poly}(n)\)-time oracle algorithm computing \((i^{\prime},j^{\prime})\) because there might be exponentially many nodes. To resolve this issue, we will encode \(\{v^{1}_{i,j}\}\) and \(\{v^{2}_{i,j}\}\) via Reed-Muller codes.
Formally, recall that \(\mathsf{History}(C,f)\) is the concatenation of \((i_{\star},j_{\star})\) and the string \(S\), where \(S\) is the concatenation of all the labels on the binary tree. We now define the encoded history, denoted as \(\widetilde{\mathsf{History}}(C,f)\), as the concatenation of \((i_{\star},j_{\star})\) and _a Reed-Muller encoding_ of \(S\). The new selector is given oracle access to two candidate encoded histories together with \(f\). By applying low-degree tests and self-correction of polynomials, we can assume that the Reed-Muller parts of the two candidates are indeed low-degree polynomials. Then we can use a reduction to polynomial identity testing to compute the first differing point between \(\{v^{1}_{i,j}\}\) and \(\{v^{2}_{i,j}\}\) in randomized polynomial time. See the proof of Lemma 5.3 for the details. This part is similar to the selector construction from [10].
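The polynomial identity testing primitive invoked here is just random evaluation over \(\mathbb{F}_{p}\). Below is a minimal sketch, assuming the two polynomials are given as evaluation oracles and have total degree below \(p/2\); the name `poly_equal` and the trial count are illustrative.

```python
import random

def poly_equal(f, g, p, m, trials=40):
    """Randomized identity test for two polynomials over F_p^m, given as
    evaluation oracles. Equal polynomials always pass; by Schwartz-Zippel,
    distinct polynomials of total degree < p/2 agree on a uniformly random
    point with probability at most 1/2, so each trial catches a difference
    with probability at least 1/2."""
    for _ in range(trials):
        x = [random.randrange(p) for _ in range(m)]
        if f(x) != g(x):
            return False
    return True
```

The proof of Lemma 5.3 applies this test stage by stage, on restrictions of the two codewords, to zoom in on the first differing point.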
### Discussions
We conclude the introduction by discussing some related works.
#### 1.4.1 Previous approach: Karp-Lipton collapses and the half-exponential barrier
In the following, we elaborate on the half-exponential barrier mentioned earlier in the introduction.20 Let \(\mathcal{C}\) be a "typical" uniform complexity class containing \(\mathsf{P}\); a _Karp-Lipton collapse_ to \(\mathcal{C}\) states that if a large class (say \(\mathsf{EXP}\)) has polynomial-size circuits, then this class collapses to \(\mathcal{C}\). For example, there is a Karp-Lipton collapse to \(\mathcal{C}=\Sigma_{2}\mathsf{P}\):
Footnote 20: A function \(f\colon\mathbb{N}\to\mathbb{N}\) is _sub-half-exponential_ if \(f(f(n)^{c})=2^{o(n)}\) for every constant \(c\geq 1\), i.e., composing \(f\) twice yields a sub-exponential function. For example, for constants \(c\geq 1\) and \(\varepsilon>0\), the functions \(f(n)=n^{c}\) and \(f(n)=2^{\log^{c}n}\) are sub-half-exponential, but the functions \(f(n)=2^{n^{\varepsilon}}\) and \(f(n)=2^{\varepsilon n}\) are not.
Suppose \(\mathsf{EXP}\subseteq\mathsf{P}/_{\operatorname{poly}}\), then \(\mathsf{EXP}=\Sigma_{2}\mathsf{P}\). ([11], attributed to Albert Meyer)
Now, assuming that \(\mathsf{EXP}\subseteq\mathsf{P}/_{\operatorname{poly}}\implies\mathsf{EXP}= \mathcal{C}\), the following win-win analysis implies that \(\mathcal{C}\)-\(\mathsf{EXP}\), the exponential-time version of \(\mathcal{C}\), is not in \(\mathsf{P}/_{\operatorname{poly}}\): (1) if \(\mathsf{EXP}\not\subset\mathsf{P}/_{\operatorname{poly}}\), then of course \(\mathcal{C}\text{-}\mathsf{EXP}\supseteq\mathsf{EXP}\) does not have polynomial-size circuits; (2) otherwise \(\mathsf{EXP}\subseteq\mathsf{P}/_{\mathrm{poly}}\). We have \(\mathsf{EXP}=\mathcal{C}\) and by padding \(\mathsf{EEXP}=\mathcal{C}\text{-}\mathsf{EXP}\). Since \(\mathsf{EEXP}\) contains a function of maximum circuit complexity by direct diagonalization, it follows that \(\mathcal{C}\text{-}\mathsf{EXP}\) does not have polynomial-size circuits.
Karp-Lipton collapses are known for the classes \(\Sigma_{2}\mathsf{P}\)[11], \(\mathsf{ZPP}^{\mathsf{NP}}\)[12], \(\mathsf{S}_{2}\mathsf{P}\)[13] (attributed to Samik Sengupta), \(\mathsf{PP}\), \(\mathsf{MA}\)[14, 15], and \(\mathsf{ZPP}^{\mathsf{MCSP}}\)[16]. All the aforementioned super-polynomial circuit lower bounds for \(\Sigma_{2}\mathsf{EXP}\), \(\mathsf{ZPEXP}^{\mathsf{NP}}\), \(\mathsf{S}_{2}\mathsf{EXP}\), \(\mathsf{PEXP}\), \(\mathsf{MAEXP}\), and \(\mathsf{ZPEXP}^{\mathsf{MCSP}}\) are proven in this way.21
Footnote 21: There is some evidence that Karp–Lipton collapses are essential for proving circuit lower bounds [10].
**The half-exponential barrier.** The above argument is very successful at proving various super-polynomial lower bounds. However, a closer look shows that it is only capable of proving _sub-half-exponential_ circuit lower bounds. Indeed, suppose we want to show that \(\mathcal{C}\text{-}\mathsf{EXP}\) does not have circuits of size \(f(n)\). We will have to perform the following win-win analysis:
* if \(\mathsf{EXP}\not\subset\mathsf{SIZE}[f(n)]\), then of course \(\mathcal{C}\text{-}\mathsf{EXP}\supseteq\mathsf{EXP}\) does not have circuits of size \(f(n)\);
* if \(\mathsf{EXP}\subseteq\mathsf{SIZE}[f(n)]\), then (a scaled-up version of) the Karp-Lipton collapse implies that \(\mathsf{EXP}\) can be computed by a \(\mathcal{C}\) machine of \(\mathrm{poly}(f(n))\) time. Note that \(\mathsf{TIME}[2^{\mathrm{poly}(f(n))}]\) does not have circuits of size \(f(n)\) by direct diagonalization. By padding, \(\mathsf{TIME}[2^{\mathrm{poly}(f(n))}]\) can be computed by a \(\mathcal{C}\) machine of \(\mathrm{poly}(f(\mathrm{poly}(f(n))))\) time. Therefore, if \(f\) is sub-half-exponential (meaning \(f(\mathrm{poly}(f(n)))=2^{o(n)}\)), then \(\mathcal{C}\text{-}\mathsf{EXP}\) does not have circuits of size \(f(n)\).
Intuitively speaking, the two cases above are _competing with each other_: we cannot get exponential lower bounds in both cases.
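As a sanity check on the examples in Footnote 20 (a routine computation, not from the original text): for \(f(n)=2^{\log^{2}n}\),

\[f(f(n)^{c})=2^{\log^{2}\left(2^{c\log^{2}n}\right)}=2^{c^{2}\log^{4}n}=2^{o(n)},\]

so composing \(f\) twice stays sub-exponential and the win-win argument goes through; whereas for \(f(n)=2^{n^{\varepsilon}}\) we get \(f(f(n)^{c})=2^{(2^{cn^{\varepsilon}})^{\varepsilon}}=2^{2^{\varepsilon cn^{\varepsilon}}}\), which is doubly exponential, so the two cases indeed compete.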
#### 1.4.2 Implications for the Missing-String problem?
In the Missing-String problem, we are given a list of \(m\) strings \(x_{1},x_{2},\ldots,x_{m}\in\{0,1\}^{n}\) where \(m<2^{n}\), and the goal is to output any length-\(n\) string \(y\) that does not appear in \(\{x_{1},x_{2},\ldots,x_{m}\}\). Vyas and Williams [21] connected the circuit complexity of Missing-String with the (relativized) circuit complexity of \(\Sigma_{2}\mathsf{E}\):
**Theorem 1.8** ([21, Theorem 32], Informal).: _The following are equivalent:_
* \(\Sigma_{2}\mathsf{E}^{A}\not\subset\mathrm{i.o.-SIZE}^{A}[2^{\Omega(n)}]\) _for every oracle_ \(A\)_;_
* _for_ \(m=2^{\Omega(n)}\)_, the Missing-String problem can be solved by a uniform family of size-_\(2^{O(n)}\) _depth-_\(3\)__\(\mathsf{AC}^{0}\) _circuits._
The intuition behind Theorem 1.8 is roughly as follows. For every oracle \(A\), the set of truth tables with low \(A\)-oracle circuit complexity induces an instance for Missing-String, and solving this instance gives us a hard truth table relative to \(A\). If the algorithm for Missing-String is a uniform \(\mathsf{AC}^{0}\) circuit of depth \(3\), then the hard function is inside \(\Sigma_{2}\mathsf{E}^{A}\).
However, despite our Theorem 1.2 being completely relativizing, it does not seem to imply any non-trivial depth-\(3\) \(\mathsf{AC}^{0}\) circuit for Missing-String. The reason is the heavy win-win analysis _across multiple input lengths_: for each \(0\leq i<t\), we have a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) construction algorithm for hard truth tables relative to oracle \(A\) on input length \(n_{i}\), but this algorithm needs access to \(A_{n_{i+1}}\), _a higher input length of \(A\)_. Translating this into the language of Missing-String, we obtain a weird-looking depth-\(3\) \(\mathsf{AC}^{0}\) circuit that takes as input a _sequence_ of Missing-String instances \(\mathcal{I}_{n_{0}},\mathcal{I}_{n_{1}},\ldots,\mathcal{I}_{n_{t}}\) (where _each_ \(\mathcal{I}_{n_{i}}\subseteq\{0,1\}^{n_{i}}\) is a set of strings), looks at all of the instances
(or, at least \(\mathcal{I}_{n_{i}}\) and \(\mathcal{I}_{n_{i+1}}\)), and outputs a purportedly missing string of \(\mathcal{I}_{n_{i}}\). It is guaranteed that for at least one input length \(i\), the output string is indeed a missing string of \(\mathcal{I}_{n_{i}}\). However, if our algorithm is only given one instance \(\mathcal{I}\subseteq\{0,1\}^{n}\), without assistance from a larger input length, it does not know how to find any missing string of \(\mathcal{I}\).
It remains an intriguing open problem whether the bullets in Theorem 1.8 are true or not. In other words, is there an oracle \(A\) relative to which \(\Sigma_{2}\mathsf{E}\) has small circuits _on infinitely many input lengths_?
### Organization
In Section 2, we introduce the necessary technical preliminaries for this paper. In Section 3, we review Korten's reduction from solving range avoidance to generating hard truth tables [13], together with some new properties required by our new results. In Section 4, we prove the near-maximum circuit lower bound for \(\Sigma_{2}\mathsf{E}\); although this lower bound is superseded by the later \(\mathsf{S}_{2}\mathsf{E}/_{1}\) lower bound, we nonetheless include it in the paper since its proof is much more elementary. In Section 5, we extend the near-maximum circuit lower bound to \(\mathsf{S}_{2}\mathsf{E}/_{1}\), and also present our new algorithms for solving the range avoidance problem.
## 2 Preliminaries
**Notation.** We use \([n]\) to denote \(\{1,2,\ldots,n\}\). A search problem \(\Pi\) maps every input \(x\in\{0,1\}^{*}\) into a solution set \(\Pi_{x}\subseteq\{0,1\}^{*}\). We say an algorithm \(A\) solves the search problem \(\Pi\) on input \(x\) if \(A(x)\in\Pi_{x}\).
### Complexity Classes
We assume basic familiarity with computational complexity theory (see, e.g., [1, 1] for references). Below we recall the definition of \(\mathsf{S}_{2}\mathsf{TIME}[T(n)]\) [14, 15].
**Definition 2.1**.: Let \(T\colon\mathbb{N}\to\mathbb{N}\). We say a language \(L\in\mathsf{S}_{2}\mathsf{TIME}[T(n)]\), if there exists an \(O(T(n))\)-time verifier \(V(x,\pi_{1},\pi_{2})\) that takes \(x\in\{0,1\}^{n}\) and \(\pi_{1},\pi_{2}\in\{0,1\}^{T(n)}\) as input, satisfying that
* if \(x\in L\), then there exists \(\pi_{1}\) such that for every \(\pi_{2}\), \(V(x,\pi_{1},\pi_{2})=1\), and
* if \(x\not\in L\), then there exists \(\pi_{2}\) such that for every \(\pi_{1}\), \(V(x,\pi_{1},\pi_{2})=0\).
Moreover, we say \(L\in\mathsf{S}_{2}\mathsf{E}\) if \(L\in\mathsf{S}_{2}\mathsf{TIME}[T(n)]\) for some \(T(n)\leq 2^{O(n)}\), and \(L\in\mathsf{S}_{2}\mathsf{P}\) if \(L\in\mathsf{S}_{2}\mathsf{TIME}[p(n)]\) for some polynomial \(p\).
It is known that \(\mathsf{S}_{2}\mathsf{P}\) contains \(\mathsf{MA}\) and \(\mathsf{P}^{\mathsf{NP}}\)[14], and \(\mathsf{S}_{2}\mathsf{P}\) is contained in \(\mathsf{ZPP}^{\mathsf{NP}}\)[1]. From its definition, it is also clear that \(\mathsf{S}_{2}\mathsf{P}\subseteq\Sigma_{2}\mathsf{P}\cap\Pi_{2}\mathsf{P}\).
### Single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) and \(\mathsf{FS}_{2}\mathsf{P}\) Algorithms
We consider the following definitions of single-valued algorithms which correspond to circuit lower bounds for \(\Sigma_{2}\mathsf{E}\) and \(\mathsf{S}_{2}\mathsf{E}\).
**Definition 2.2** (Single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) and \(\mathsf{FS}_{2}\mathsf{P}\) algorithms).: A single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm \(A\) is specified by a polynomial \(\ell(\cdot)\) together with a polynomial-time algorithm \(V_{A}(x,\pi_{1},\pi_{2})\). On an input \(x\in\{0,1\}^{*}\), we say that \(A\) outputs \(y_{x}\in\{0,1\}^{*}\), if the following hold:
1. There is a \(\pi_{1}\in\{0,1\}^{\ell(|x|)}\) such that for every \(\pi_{2}\in\{0,1\}^{\ell(|x|)}\), \(V_{A}(x,\pi_{1},\pi_{2})\) outputs \(y_{x}\).
2. For every \(\pi_{1}\in\{0,1\}^{\ell(|x|)}\), there is a \(\pi_{2}\in\{0,1\}^{\ell(|x|)}\) such that the output of \(V_{A}(x,\pi_{1},\pi_{2})\) is either \(y_{x}\) or \(\bot\) (where \(\bot\) indicates "I don't know").
A single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\) is specified similarly, except that we replace the second condition above with the following:
1. There is a \(\pi_{2}\in\{0,1\}^{\ell(|x|)}\) such that for every \(\pi_{1}\in\{0,1\}^{\ell(|x|)}\), \(V_{A}(x,\pi_{1},\pi_{2})\) outputs \(y_{x}\).
Now, we say that a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) (resp. \(\mathsf{FS}_{2}\mathsf{P}\)) algorithm \(A\) solves a search problem \(\Pi\) on input \(x\) if it outputs a string \(y_{x}\) and \(y_{x}\in\Pi_{x}\). Note that from Definition 2.2, if \(A\) outputs a string \(y_{x}\), then \(y_{x}\) is unique.
For convenience, we mostly only consider single-valued algorithms \(A(x)\) with fixed output lengths, meaning that the output length \(|A(x)|\) only depends on \(|x|\) and can be computed in polynomial time given \(1^{|x|}\).22
Footnote 22: If \(A\) takes multiple inputs like \(x,y,z\), then the output length \(A(x,y,z)\) only depends on \(|x|,|y|,|z|\) and can be computed in polynomial time given \(1^{|x|}\), \(1^{|y|}\), and \(1^{|z|}\).
#### 2.2.1 Single-Valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) and \(\mathsf{FS}_{2}\mathsf{P}\) algorithms with \(\mathsf{FP}^{\mathsf{NP}}\) post-processing
We also need the fact that single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) or \(\mathsf{FS}_{2}\mathsf{P}\) algorithms with \(\mathsf{FP}^{\mathsf{NP}}\) post-processing can still be implemented by single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) or \(\mathsf{FS}_{2}\mathsf{P}\) algorithms, respectively. More specifically, we have:
**Theorem 2.3**.: _Let \(A(x)\) be a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) (resp. \(\mathsf{FS}_{2}\mathsf{P}\)) algorithm and \(B(x,y)\) be an \(\mathsf{FP}^{\mathsf{NP}}\) algorithm, both with fixed output length. The function \(f(x)\coloneqq B(x,A(x))\) also admits a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) (resp. \(\mathsf{FS}_{2}\mathsf{P}\)) algorithm._
Proof.: We only provide a proof for the case of single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithms. Recall that the Lexicographically Maximum Satisfying Assignment problem (\(\mathsf{LMSAP}\)) is defined as follows: given an \(n\)-variable formula \(\phi\) together with an integer \(k\in[n]\), one needs to decide whether \(a_{k}=1\), where \((a_{1},\ldots,a_{n})\in\{0,1\}^{n}\) is the lexicographically largest assignment satisfying \(\phi\). By [10], \(\mathsf{LMSAP}\) is \(\mathsf{P}^{\mathsf{NP}}\)-complete.
Let \(V_{A}(x,\pi_{1},\pi_{2})\) be the corresponding verifier for the single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\). Let \(L(x,y,i)\) be the \(\mathsf{P}^{\mathsf{NP}}\) language such that \(L(x,y,i)=1\) if and only if \(B(x,y)_{i}=1\). Let \(\ell=|B(x,y)|\) be the output length of \(B\). We now define a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(\widetilde{A}\) by defining the following verifier \(V_{\widetilde{A}}\), and argue that \(\widetilde{A}\) computes \(f\).
The verifier \(V_{\widetilde{A}}\) takes an input \(x\) and two proofs \(\vec{\pi}_{1}\) and \(\vec{\pi}_{2}\), where \(\vec{\pi}_{1}\) consists of \(\omega_{1}\), acting as the second argument to \(V_{A}\), and \(\ell\) assignments \(z_{1}^{1},z_{2}^{1},\ldots,z_{\ell}^{1}\in\{0,1\}^{m}\). Similarly, \(\vec{\pi}_{2}\) consists of \(\omega_{2}\) and \(z_{1}^{2},z_{2}^{2},\ldots,z_{\ell}^{2}\in\{0,1\}^{m}\).
First, \(V_{\widetilde{A}}\) runs \(V_{A}(x,\omega_{1},\omega_{2})\) to get \(y\in\{0,1\}^{|A(x)|}\). Then it runs the reduction from \(L(x,y,i)\) to \(\mathsf{LMSAP}\) for every \(i\in[\ell]\) to obtain \(\ell\) instances \(\{(\phi_{i},k_{i})\}_{i\in[\ell]}\), where \(\phi_{i}\) is an \(m\)-variable formula and \(k_{i}\in[m]\). (Without loss of generality by padding dummy variables, we may assume that the number of variables in \(\phi_{i}\) is the same for each \(i\), i.e., \(m\); and that \(m\) only depends on \(|x|\) and \(|y|\).) Now, for every \(\mu\in[2]\), we can define an answer \(w_{\mu}\in\{0,1\}^{\ell}\) by \((w_{\mu})_{i}=(z_{i}^{\mu})_{k_{i}}\) (i.e., the value of \(B(x,y)\), assuming that \(\vec{\pi}_{\mu}\) consists of the lexicographically largest assignments for all the \(\mathsf{LMSAP}\) instances).
In what follows, when we say that \(V_{\widetilde{A}}\)_selects_ the proof \(\mu\in[2]\), we mean that \(V_{\widetilde{A}}\) outputs \(w_{\mu}\) and terminates. Then, \(V_{\widetilde{A}}\) works as follows:
1. For each \(\mu\in[2]\), it first checks whether for every \(i\in[\ell]\), \(z_{i}^{\mu}\) satisfies \(\phi_{i}\). If only one of the \(\mu\) passes all the checks, \(V_{\widetilde{A}}\) selects that \(\mu\). If none of them passes all the checks, \(V_{\widetilde{A}}\) selects \(1\). Otherwise, it continues to the next step.
2. Now, let \(Z^{\mu}=z_{1}^{\mu}\circ z_{2}^{\mu}\circ\ldots\circ z_{\ell}^{\mu}\) for each \(\mu\in[2]\). \(V_{\widetilde{A}}\) selects the \(\mu\) with the lexicographically larger \(Z^{\mu}\). If \(Z^{1}=Z^{2}\), then \(V_{\widetilde{A}}\) selects \(1\).
Now we claim that \(\widetilde{A}\) computes \(f(x)\), which can be established by setting \(\vec{\pi}_{1}\) or \(\vec{\pi}_{2}\) to be the corresponding proof for \(V_{A}\) concatenated with all lexicographically largest assignments for the \(\{\phi_{i}\}_{i\in[\ell]}\).
### The Range Avoidance Problem
The _range avoidance_ problem [11, 12, 13] is the following problem: Given as input a circuit \(C\colon\{0,1\}^{n}\to\{0,1\}^{\ell}\) where \(\ell>n\), find any string \(y\in\{0,1\}^{\ell}\setminus\mathrm{Range}(C)\). Proving circuit lower bounds (for exponential-time classes) is equivalent to solving the range avoidance problem on the _truth table generator_\(\mathsf{TT}_{n,s}\), defined as follows. It was shown in [13] that for \(n,s\in\mathbb{N}\), any \(s\)-size \(n\)-input circuit \(C\) can be encoded as a _stack program_ with description size \(L_{n,s}:=(s+1)(7+\log(n+s))\). The precise definition of stack programs does not matter (see [13] for a formal definition); the only property we need is that given \(s\) and \(n\) such that \(n\leq s\leq 2^{n}\), in \(\mathrm{poly}(2^{n})\) time one can construct a circuit \(\mathsf{TT}_{n,s}\colon\{0,1\}^{L_{n,s}}\to\{0,1\}^{2^{n}}\) mapping the description of a stack program into its truth table. By the equivalence between stack programs and circuits, it follows that any \(f\in\{0,1\}^{2^{n}}\setminus\mathrm{Range}(\mathsf{TT}_{n,s})\) satisfies \(\mathsf{SIZE}(f)>s\). Also, we note that for large enough \(n\in\mathbb{N}\) and \(s=2^{n}/n\), we have \(L_{n,s}<2^{n}\).
**Fact 2.4**.: _Let \(s(n)\colon\mathbb{N}\to\mathbb{N}\). Suppose that there is a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\) such that for infinitely many \(n\in\mathbb{N}\), \(A(1^{2^{n}})\) takes \(\alpha(n)\) bits of advice and outputs a string \(f_{n}\in\{0,1\}^{2^{n}}\setminus\mathrm{Range}(\mathsf{TT}_{n,s(n)})\). Then \(\mathsf{S}_{2}\mathsf{E}/_{\alpha(n)}\not\subset\mathsf{SIZE}[s(n)]\)._
Proof sketch.: We define a language \(L\) such that the truth table of the characteristic function of \(L\cap\{0,1\}^{n}\) is \(A(1^{2^{n}})\). It is easy to see that \(L\notin\mathsf{SIZE}[s(n)]\) and \(L\in\mathsf{S}_{2}\mathsf{E}/_{\alpha(n)}\).
## 3 Korten's Reduction
Our results crucially rely on a reduction in [12] showing that proving circuit lower bounds is "the hardest explicit construction" under \(\mathsf{P}^{\mathsf{NP}}\) reductions.
**Notation.** Let \(s\) be a string of length \(n\). We will always use \(0\)-indexing (i.e., the first bit of \(s\) is \(s_{0}\) and the last bit of \(s\) is \(s_{n-1}\)). For \(i<j\), we use \(s_{[i,j]}\) to denote the substring of \(s\) from the \(i\)-th bit to the \(j\)-th bit, and \(s_{[i,j)}\) to denote the substring of \(s\) from the \(i\)-th bit to the \((j-1)\)-th bit. (Actually, we will use the notation \(s_{[i,j)}\) more often than \(s_{[i,j]}\), as it is convenient when we describe the GGM tree.) We also use \(s_{1}\circ s_{2}\circ\cdots\circ s_{k}\) to denote the concatenation of \(k\) strings.
### GGM Tree and the Reduction
We first recall the GGM tree construction from [1], which is used in a crucial way by [12].
**Definition 3.1** (The GGM tree construction [14]).: Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit. Let \(n,T\in\mathbb{N}\) be such that \(T\geq 4n\) and let \(k\) be the smallest integer such that \(2^{k}n\geq T\). The function \(\mathsf{GGM}_{T}[C]\colon\{0,1\}^{n}\to\{0,1\}^{T}\) is defined as follows.
Consider a perfect binary tree with \(2^{k}\) leaves, where the root is on level \(0\) and the leaves are on level \(k\). Each node is assigned a binary string of length \(n\); for \(0\leq j<2^{i}\), let \(v_{i,j}\in\{0,1\}^{n}\) denote the value assigned to the \(j\)-th node on level \(i\). Let \(x\in\{0,1\}^{n}\). We perform the following computation to obtain \(\mathsf{GGM}_{T}[C](x)\): we set \(v_{0,0}:=x\), and for each \(0\leq i<k\), \(0\leq j<2^{i}\), we set \(v_{i+1,2j}:=C(v_{i,j})_{[0,n)}\) (i.e., the first half of \(C(v_{i,j})\)) and \(v_{i+1,2j+1}:=C(v_{i,j})_{[n,2n)}\) (i.e., the second half of \(C(v_{i,j})\)). (We say the nodes \((i+1,2j)\) and \((i+1,2j+1)\) are "children" of \((i,j)\).)
Finally, we concatenate all values of the leaves and take the first \(T\) bits as the output:
\[\mathsf{GGM}_{T}[C](x):=(v_{k,0}\circ v_{k,1}\circ\dots\circ v_{k,2^{k}-1})_{[0,T)}.\]
**Lemma 3.2** (The output of GGM tree has a small circuit).: _Let \(\mathsf{GGMEval}(C,T,x,i)\) denote the \(i\)-th bit of \(\mathsf{GGM}_{T}[C](x)\). There is an algorithm running in \(\widetilde{O}(|C|\cdot\log T)\) time that, given \(C,T,x,i\), outputs \(\mathsf{GGMEval}(C,T,x,i)\)._
Proof Sketch.: We first note that to compute the \(i\)-th bit of \(\mathsf{GGM}_{T}[C](x):=(v_{k,0}\circ v_{k,1}\circ\dots\circ v_{k,2^{k}-1})_{[0,T)}\), it suffices to compute \(v_{k,\lfloor i/n\rfloor}\). Computing \(v_{k,\lfloor i/n\rfloor}\) can be done by descending from the root of the GGM tree to the leaf \((k,\lfloor i/n\rfloor)\), which takes \(\widetilde{O}(|C|\cdot\log T)\) time.
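A minimal sketch of this descent, assuming \(C\) is given as a callable mapping length-\(n\) bit strings to length-\(2n\) bit strings; the function name `ggm_eval_bit` is illustrative, not from the paper's artifacts.

```python
def ggm_eval_bit(C, n, T, x, i):
    """Return the i-th bit of GGM_T[C](x) by walking one root-to-leaf path.

    Only k = O(log T) evaluations of C are performed, matching Lemma 3.2."""
    k = 0
    while (1 << k) * n < T:                   # smallest k with 2^k * n >= T
        k += 1
    leaf = i // n                             # bit i lives in leaf (k, i // n)
    v = x                                     # label of the current node (the root)
    for level in range(k):
        y = C(v)                              # |y| == 2n
        bit = (leaf >> (k - 1 - level)) & 1   # 0 = go to left child, 1 = right
        v = y[:n] if bit == 0 else y[n:]
    return v[i % n]
```

For example, with the duplicating circuit `C = lambda u: u + u`, every leaf equals the root, so `ggm_eval_bit(C, 2, 8, "01", 5)` returns `"1"`.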
It is shown in [15] that the range avoidance problem for \(C\) reduces to the range avoidance problem for \(\mathsf{GGM}_{T}[C]\). In what follows, we review this proof, during which we also define the _computational history_ of "solving range avoidance of \(C\) from \(\mathsf{GGM}_{T}[C]\)", which will be crucial in our main proof.
```
Input: \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\), the input circuit, and \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\), the input "hard" truth table
Output: a non-output of \(C\)
Data: the computational history of \(\mathsf{Korten}(C,f)\): a pair \((i_{\star},j_{\star})\) and an array \(\{v_{i,j}\}_{i,j}\) where \(i\in\{0,1,\ldots,k\}\) and \(j\in\{0,1,\ldots,2^{i}-1\}\)

Let \(k\leftarrow\lceil\log_{2}(T/n)\rceil\);
Append \(2^{k}n-|f|\) zeros to the end of \(f\);
for \(j\gets 0\) to \(2^{k}-1\) do
    \(v_{k,j}\gets f_{[jn,(j+1)n)}\);   /* the \(j\)-th "block" of \(f\) */
for \(i\gets k-1\) downto \(0\) do
    for \(j\gets 2^{i}-1\) downto \(0\) do
        Let \(v_{i,j}\) be the lexicographically smallest string in \(C^{-1}(v_{i+1,2j}\circ v_{i+1,2j+1})\);   /* this step invokes the NP oracle */
        if \(v_{i,j}\) does not exist then
            For every \((i^{\prime},j^{\prime})\) such that \(v_{i^{\prime},j^{\prime}}\) is not set yet, set \(v_{i^{\prime},j^{\prime}}\leftarrow\bot\);
            Set \(i_{\star}:=i\) and \(j_{\star}:=j\);
            return \(v_{i+1,2j}\circ v_{i+1,2j+1}\);
return \(\bot\)
```
**Algorithm 3.1** \(\mathsf{Korten}(C,f)\): Korten's reduction
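For intuition, the following toy sketch mirrors Algorithm 3.1 with the \(\mathsf{NP}\)-oracle step replaced by brute-force search over \(\{0,1\}^{n}\) (so it is only meant for tiny \(n\)); the representation of labels as bit strings and the name `korten` are ours.

```python
import math
from itertools import product

def korten(C, n, f):
    """Toy version of Algorithm 3.1: returns a non-output of C, or None if
    f turns out to be in Range(GGM_T[C])."""
    k = math.ceil(math.log2(len(f) / n))
    f = f + "0" * ((1 << k) * n - len(f))                       # pad f to 2^k * n bits
    v = {(k, j): f[j * n:(j + 1) * n] for j in range(1 << k)}   # leaf labels
    for i in range(k - 1, -1, -1):                # bottom layer to top layer
        for j in range((1 << i) - 1, -1, -1):     # rightmost node to leftmost
            target = v[(i + 1, 2 * j)] + v[(i + 1, 2 * j + 1)]
            # "NP oracle": lexicographically smallest preimage of target under C
            pre = next((u for u in ("".join(b) for b in product("01", repeat=n))
                        if C(u) == target), None)
            if pre is None:
                return target                     # target is outside Range(C)
            v[(i, j)] = pre
    return None
```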
**Lemma 3.3** (Reduction from solving range avoidance of \(C\) to solving range avoidance of \(\mathsf{GGM}_{T}[C]\)).: _Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit. Let \(f\) be a non-output of \(\mathsf{GGM}_{T}[C]\), i.e., \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\). Then, \(\mathsf{Korten}(C,f)\) (as defined in Algorithm 3.1) outputs a non-output of \(C\) in deterministic \(\operatorname{poly}(T,n)\) time with an \(\mathsf{NP}\) oracle._
Proof Sketch.: The running time of \(\mathsf{Korten}(C,f)\) follows directly from its description. Also, note that whenever \(\mathsf{Korten}(C,f)\) outputs a string \(v_{i+1,2j}\circ v_{i+1,2j+1}\in\{0,1\}^{2n}\), it holds that this string is not in the range of \(C\). Therefore, it suffices to show that when \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\), \(\mathsf{Korten}(C,f)\) does not return \(\bot\).
Assume, towards a contradiction, that \(\mathsf{Korten}(C,f)\) returns \(\bot\). This means that all the \(\{v_{i,j}\}_{i,j}\) values are set. It follows from the algorithm description that \(f=\mathsf{GGM}_{T}[C](v_{0,0})\), which contradicts the assumption that \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\).
In addition, we observe the following trivial fact:
**Fact 3.4**.: _Let \(C:\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit, \(T:=2^{2n}\cdot 2n\), and \(f\) be the concatenation of all length-\(2n\) strings (which has length \(T\)). Then \(f\not\in\operatorname{Range}(\mathsf{GGM}_{T}[C])\)._
One can combine Fact 3.4 with Lemma 3.3 to obtain a brute force algorithm that solves the range avoidance problem in \(2^{O(n)}\) time with an \(\mathsf{NP}\) oracle. Essentially, this brute force algorithm tests every possible length-\(2n\) string against the range of the circuit. It will be the basis of our win-win analysis in Section 4.
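Continuing the toy sketch above (and under the same assumptions, including the `korten` function defined there), this brute-force algorithm is a two-liner:

```python
from itertools import product

def avoid_by_brute_force(C, n):
    """Solve Avoid for C in 2^{O(n)} time: feed Korten the truth table f0
    listing every length-2n string, which Fact 3.4 puts outside the range."""
    f0 = "".join("".join(b) for b in product("01", repeat=2 * n))
    return korten(C, n, f0)

# For the duplicating circuit, reconstruction first fails at the block "1110",
# which is indeed not of the form u + u for any u:
print(avoid_by_brute_force(lambda u: u + u, 2))   # prints "1110"
```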
Finally, we give the following remark, showing that Korten's reduction relativizes.
_Remark 3.5_.: Algorithm 3.1 and Lemma 3.3_relativizes_, in the sense that if the input is actually an oracle circuit \(C^{O}\) for some arbitrary oracle, the algorithm still works except now it needs to call an \(\mathsf{NP}^{O}\) oracle to find the lexicographically smallest string in \(C^{-1}(v_{i+1,2j}\circ v_{i+1,2j+1})\).
### \(\Pi_{1}\) Verification of the History of \(\mathsf{Korten}(C,f)\)
In what follows, we say that \((i,j)<(i^{\prime},j^{\prime})\) if either \(i<i^{\prime}\) or (\(i=i^{\prime}\) and \(j<j^{\prime}\)) (that is, we consider the lexicographical order of pairs). Observe that Algorithm 3.1 processes all the pairs \((i,j)\) in the reverse lexicographic order.
**Definition 3.6** (The computational history of \(\mathsf{Korten}(C,f)\)).: Let \(n,T\in\mathbb{N}\) be such that \(\log T\leq n\leq T\). Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit, and \(f\in\{0,1\}^{T}\) be a "hard truth table" in the sense that \(f\not\in\operatorname{Range}(\mathsf{GGM}_{T}[C])\). The _computational history_ of \(\mathsf{Korten}(C,f)\), denoted as
\[\mathsf{History}(C,f),\]
consists of \((i_{\star},j_{\star})\), as well as the concatenation of \(v_{i,j}\) for every \(0\leq i\leq k\) and \(0\leq j<2^{i}\), in the lexicographical order of \((i,j)\) (\((i_{\star},j_{\star})\) and the \(v_{i,j}\) are defined in Algorithm 3.1). Each \(v_{i,j}\) is encoded by \(n+1\) bits \(\mathsf{enc}(v_{i,j})\), where if \(v_{i,j}\in\{0,1\}^{n}\) then \(\mathsf{enc}(v_{i,j})=0\circ v_{i,j}\), and if \(v_{i,j}=\bot\) then \(\mathsf{enc}(v_{i,j})=1^{n+1}\). The length of this history is at most \((2^{k+1}-1)(n+1)+2\log T\leq 5T\), and for convenience we always pad zeros at the end so that its length becomes exactly \(5T\).
The following lemma summarizes the properties of the computational history construction above required for the \(\Sigma_{2}\mathsf{E}\) lower bound in the next section.
**Lemma 3.7**.: _Let \(n,T\in\mathbb{N}\) be such that \(\log T\leq n\leq T\). Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit and \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\). Let \(h\coloneqq\mathsf{History}(C,f)\) and \(z\coloneqq\mathsf{Korten}(C,f)\)._
1. **(history contains input/output)** _There is a \(\operatorname{poly}(\log T)\)-time one-query oracle algorithm_ Input _and an \(O(n)\)-time oracle algorithm_ Output_, both having input parameters \(T,n\) and taking a string \(\tilde{h}\in\{0,1\}^{5T}\) as oracle, such that the following hold:_
   1. _When given \(h\) as the oracle, \(\mathsf{Input}_{T,n}\) takes an additional input \(i\in\{0,1,\ldots,T-1\}\) and outputs \(f_{i}\)._
   2. _When given \(h\) as the oracle, \(\mathsf{Output}_{T,n}\) outputs \(z=\mathsf{Korten}(C,f)\)._
2. **(\(\Pi_{1}\) verification of the history)** _There is an oracle algorithm \(V\) with input parameters \(T,n\) such that the following hold:_
   1. _\(V\) takes \(\tilde{f}\in\{0,1\}^{T}\) and \(\tilde{h}\in\{0,1\}^{5T}\) as oracles, and \(C\) and \(w\in\{0,1\}^{5\cdot(\log T+n)}\) as inputs. It runs in \(\operatorname{poly}(n)\) time._
   2. _\(h=\mathsf{History}(C,f)\) is the unique string \(\tilde{h}\in\{0,1\}^{5T}\) satisfying the following:_ \[V^{f,\tilde{h}}(C,w)=1\qquad\text{for every $w\in\{0,1\}^{5\cdot(\log T+n)}$.}\]
Proof.: From the definition of \(\mathsf{History}(C,f)\), the constructions of \(\mathsf{Input}_{T,n}\) and \(\mathsf{Output}_{T,n}\) are straightforward. Now we describe the verifier \(V^{f,\tilde{h}}\), where \(f\in\{0,1\}^{T}\) and \(\tilde{h}\in\{0,1\}^{5T}\). Note that here we fix the first oracle of \(V\) to be the input truth table \(f\), while the second oracle \(\tilde{h}\) can be any string from \(\{0,1\}^{5T}\).
First, \(V\) reads \((i_{\star},j_{\star})\) from \(\tilde{h}\). Note that the rest of \(\tilde{h}\) can be parsed as an array \(\{v_{i,j}\}_{i,j}\) where \(i\in\{0,1,\ldots,k\}\) and \(j\in\{0,1,\ldots,2^{i}-1\}\). We will think of \(V\) as performing at most \(2^{|w|}\) checks, each of which _passes_ or _fails_. To show the second item of the lemma, we need to show that (1) if a string \(\tilde{h}\) passes all the checks, then it must be the case that \(\tilde{h}=h\); and (2) \(h\) passes all the checks.
Specifically, \(V\) checks \(\tilde{h}\) as follows:
* The values written on the leaves of \(\{v_{i,j}\}\) are indeed \(f\). That is, for every \(j\in\{0,1,\ldots,2^{k}-1\}\), check that \(v_{k,j}\) is consistent with the corresponding block in \(f\).
* For every \((i,j)>(i_{\star},j_{\star})\) such that \(i<k\), \(C(v_{i,j})=v_{i+1,2j}\circ v_{i+1,2j+1}\). (That is, the value \(v_{i,j}\) is consistent with its two children.)
* For every \((i,j)>(i_{\star},j_{\star})\) such that \(i<k\), for every \(x\in\{0,1\}^{n}\) that is lexicographically smaller than \(v_{i,j}\), \(C(x)\neq v_{i+1,2j}\circ v_{i+1,2j+1}\). (That is, the value \(v_{i,j}\) is the lexicographically first preimage of its two children.)
* For every \(x\in\{0,1\}^{n}\), \(C(x)\neq v_{i_{\star}+1,2j_{\star}}\circ v_{i_{\star}+1,2j_{\star}+1}\). (That is, the two children of \((i_{\star},j_{\star})\) form a non-output of \(C\); by the previous checks, \((i_{\star},j_{\star})\) is the lexicographically largest such pair.)
* For every \((i,j)\leq(i_{\star},j_{\star})\), \(v_{i,j}=\bot\).
Note that the above can be implemented with a universal (\(\forall\)) quantification over at most \(5\cdot(\log T+n)\) bits. First, one can see that by the definition of the correct history \(h\) (Definition3.6), \(h\) passes all the checks above. Second, one can indeed see that all the conditions above _uniquely determine_\(h\), and therefore any \(\tilde{h}\) passing all the checks must equal \(h\).
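To make the five checks concrete, the following toy sketch unrolls them (and both quantifiers) explicitly, assuming labels are stored in a dictionary `v` keyed by `(i, j)` with `None` standing for \(\bot\); it enumerates \(\{0,1\}^{n}\) instead of querying an oracle, so it is exponential time by design.

```python
from itertools import product

def check_history(C, n, k, f, i_star, j_star, v):
    """Return True iff ((i_star, j_star), v) passes the five checks above."""
    strings = ["".join(b) for b in product("01", repeat=n)]   # all of {0,1}^n
    if any(v[(k, j)] != f[j * n:(j + 1) * n] for j in range(1 << k)):
        return False                                  # leaves must spell out f
    for i in range(k):
        for j in range(1 << i):
            tgt = (v[(i + 1, 2 * j)] or "") + (v[(i + 1, 2 * j + 1)] or "")
            if (i, j) > (i_star, j_star):             # filled node: correct and
                if v[(i, j)] is None or C(v[(i, j)]) != tgt:
                    return False                      # lexicographically first
                if any(C(u) == tgt for u in strings if u < v[(i, j)]):
                    return False
            elif (i, j) == (i_star, j_star):          # failure point: no preimage
                if v[(i, j)] is not None or any(C(u) == tgt for u in strings):
                    return False
            else:                                     # before the failure point
                if v[(i, j)] is not None:             # labels must be bot
                    return False
    return True
```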
Again, it is easy to observe that Definition 3.6 and Lemma 3.7 relativize.
_Remark 3.8_.: Definition 3.6 and Lemma 3.7 _relativize_, in the sense that if \(C\) is an oracle circuit \(C^{O}\) for some arbitrary oracle, Definition 3.6 needs no modification since Algorithm 3.1 relativizes, and Lemma 3.7 holds with the only modification that \(V\) now also needs to take \(O\) as an oracle (since it needs to evaluate \(C\)).
## 4 Circuit Lower Bounds for \(\Sigma_{2}\mathsf{E}\)
In this section, we prove our near-maximum circuit lower bounds for \(\Sigma_{2}\mathsf{E}\) by providing a new single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm for Avoid.
Let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits. We show that there is a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm \(A\) that, on input \(1^{n}\), outputs a canonical string that is outside the range of \(C_{n}\) for infinitely many \(n\in\mathbb{N}\).
**Theorem 4.1**.: _Let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits. There is a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm \(A\) with one bit of advice such that for infinitely many \(n\in\mathbb{N}\), \(A(1^{n})\) outputs \(y_{n}\in\{0,1\}^{2n}\setminus\operatorname{Range}(C_{n})\)._
Proof.: We begin with some notation.
**Notation.** Let \(n^{(1)}\) be a large enough power of \(2\), and \(n^{(\ell)}=2^{2^{n^{(\ell-1)}}}\) for each integer \(\ell>1\). Let \(n_{0}^{(\ell)}=n^{(\ell)}\) and \(t^{(\ell)}=O\Big{(}\log n_{0}^{(\ell)}\Big{)}\) be parameters that we set later. For each \(1\leq i\leq t^{(\ell)}\), let \(n_{i}^{(\ell)}:=\Big{(}n_{i-1}^{(\ell)}\Big{)}^{10}\). To show that our algorithm \(A\) works on infinitely many input lengths, we will show that for every \(\ell\in\mathbb{N}\), there is an input length \(n_{i}^{(\ell)}\) for some \(i\in\{0,1,\ldots,t^{(\ell)}\}\) on which \(A\) works.
Fix \(\ell\in\mathbb{N}\). From now on, for convenience, we will use \(n_{i}\) and \(t\) to denote \(n_{i}^{(\ell)}\) and \(t^{(\ell)}\), respectively.
**Specifying \(T_{i}\) and \(f_{i}\).** For each input length \(n_{i}\), we will specify a parameter \(T_{i}\in\mathbb{N}\) and a string \(f_{i}\in\{0,1\}^{T_{i}}\). Our win-win analysis is based on whether \(f_{i}\in\operatorname{Range}(\mathsf{GGM}_{T_{i}}[C_{n_{i}}])\) for each \(i\in\{0,1,\ldots,t\}\).
Let \(T_{0}:=2^{2n_{0}}\cdot 2n_{0}\) and \(f_{0}\) be the concatenation of all length-\(2n_{0}\) strings (which has length \(T_{0}\)). From Fact 3.4, we have that \(f_{0}\not\in\operatorname{Range}(\mathsf{GGM}_{T_{0}}[C_{n_{0}}])\). For every \(i\in[t]\), we define
\[f_{i}:=\mathsf{History}(C_{n_{i-1}},f_{i-1}).\]
From Definition 3.6, this also means that we have set \(T_{i}=5\cdot T_{i-1}\) for every \(i\in[t]\).
Let \(t\) be the first integer such that \(T_{t+1}\leq 4n_{t+1}\). Note that we have \(T_{i}=5^{i}\cdot T_{0}\leq 2^{3n_{0}+i\cdot\log 5}\) and \(n_{i}=(n_{0})^{10^{i}}=2^{\log n_{0}\cdot 10^{i}}\). Hence, we have that \(t\leq O(\log n_{0})\). (Also note that \(n_{t}^{(\ell)}<n_{0}^{(\ell+1)}\).)
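For concreteness, the bound on \(t\) can be verified as follows (a routine calculation under the parameter choices above): since \(T_{t+1}\le 2^{3n_{0}+(t+1)\log 5}\) and \(4n_{t+1}=2^{2+10^{t+1}\log n_{0}}\), the stopping condition \(T_{t+1}\le 4n_{t+1}\) holds once

\[3n_{0}+(t+1)\log 5\;\le\;2+10^{t+1}\log n_{0},\]

which is satisfied for large \(n_{0}\) whenever \(10^{t+1}\ge 4n_{0}/\log n_{0}\); hence the first such \(t\) is at most \(\log_{10}(4n_{0}/\log n_{0})=O(\log n_{0})\).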
**Description of our \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm \(A\).** Now, let \(k\in\{0,1,\ldots,t\}\) be the largest integer such that \(f_{k}\not\in\operatorname{Range}(\mathsf{GGM}_{T_{k}}[C_{n_{k}}])\). Since \(f_{0}\not\in\operatorname{Range}(\mathsf{GGM}_{T_{0}}[C_{n_{0}}])\), such a \(k\) must exist. Let \(z:=\mathsf{Korten}(C_{n_{k}},f_{k})\). It follows from Lemma 3.3 that \(z\) is not in the range of \(C_{n_{k}}\). Our single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm \(A\) computes \(z\) on input \(1^{n_{k}}\) (see Definition 2.2). That is, for some \(\ell_{1},\ell_{2}\leq\operatorname{poly}(n_{k})\):
* There exists \(\pi_{1}\in\{0,1\}^{\ell_{1}}\) such that for every \(\pi_{2}\in\{0,1\}^{\ell_{2}}\), \(V_{A}(1^{n_{k}},\pi_{1},\pi_{2})\) prints \(z\), and
* For every \(\pi_{1}\in\{0,1\}^{\ell_{1}}\), there exists some \(\pi_{2}\in\{0,1\}^{\ell_{2}}\) such that \(V_{A}(1^{n_{k}},\pi_{1},\pi_{2})\) prints either \(z\) or \(\bot\).
In more detail, if \(k<t\), then \(V_{A}\) treats \(\pi_{1}\) as an input to the circuit \(\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}]\), and let
\[\hat{f}_{k+1}:=\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}](\pi_{1}).\]
Here, the length of \(\pi_{1}\) is \(\ell_{1}:=n_{k+1}\leq\operatorname{poly}(n_{k})\). If \(k=t\), then \(V_{A}\) defines \(\hat{f}_{k+1}:=\pi_{1}\) and \(\ell_{1}:=T_{t+1}\leq\operatorname{poly}(n_{k})\). It is intended that \(\hat{f}_{k+1}=f_{k+1}=\mathsf{History}(C_{n_{k}},f_{k})\) (which \(V_{A}\) needs to
verify). Note that in the case where \(k<t\), since \(f_{k+1}\in\operatorname{Range}(\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}])\), there indeed exists some \(\pi_{1}\) such that \(\hat{f}_{k+1}=f_{k+1}\).
We note that Lemma 3.2 provides us "random access" to the (potentially very long) string \(\hat{f}_{k+1}\): given \(\pi_{1}\) and \(j\in[T_{k+1}]\), one can compute the \(j\)-th bit of \(\hat{f}_{k+1}\) in \(\operatorname{poly}(n_{k})\) time. Also recall from Lemma 3.7 that for each \(i\), \(f_{i+1}=\mathsf{History}(C_{n_{i}},f_{i})\) contains the string \(f_{i}\), which can be retrieved by the oracle algorithm \(\mathsf{Input}\) described in Item 1 of Lemma 3.7. Therefore, for each \(i\) from \(k\) down to \(1\), we can recursively define \(\hat{f}_{i}\) such that \((\hat{f}_{i})_{j}=\mathsf{Input}_{T_{i},n_{i}}^{\hat{f}_{i+1}}(j)\). We define \(\hat{f}_{0}\) to be the concatenation of all length-\((2n_{0})\) strings in the lexicographical order, so \(\hat{f}_{0}=f_{0}\). Applying the algorithm \(\mathsf{Input}\) recursively, we obtain an algorithm that, given \(i\in\{0,1,\ldots,k\}\) and \(j\in\{0,1,\ldots,T_{i}-1\}\), outputs the \(j\)-th bit of \(\hat{f}_{i}\). Since \(\mathsf{Input}\) only makes one oracle query, this algorithm runs in \(\operatorname{poly}(n_{k})\) time.23
Footnote 23: Note that the definition of \(f_{0}\) is so simple that one can directly compute the \(j\)-th bit of \(f_{0}\) in \(\operatorname{poly}(n_{0})\) time.
Then, \(V_{A}\) parses the second proof \(\pi_{2}\) into \(\pi_{2}=(i,w)\) where \(i\in\{0,1,\ldots,k\}\) and \(w\in\{0,1\}^{5(\log T_{i}+n_{i})}\). Clearly, the length of \(\pi_{2}\) is at most \(\ell_{2}:=\log(k+1)+5(\log T_{k}+n_{k})\leq\operatorname{poly}(n_{k})\). Now, letting \(V_{\mathsf{History}}\) be the oracle algorithm in Item 2 of Lemma 3.7, we let \(V_{A}(1^{n_{k}},\pi_{1},\pi_{2})\) check whether the following holds:
\[V_{\mathsf{History}}^{\hat{f}_{i},\hat{f}_{i+1}}(C_{n_{i}},w)=1. \tag{1}\]

Footnote 24: Here \(V_{\mathsf{History}}\) also takes input parameters \(T_{i}\) and \(n_{i}\); we omit them in the subscript for notational convenience.
If this is true, then \(V_{A}\) outputs the string \(z:=\mathsf{Output}_{T_{k},n_{k}}^{\hat{f}_{k+1}}\), where \(\mathsf{Output}\) is the output oracle algorithm defined in Item 1 of Lemma 3.7. Otherwise, \(V_{A}\) outputs \(\bot\).
**The correctness of \(A\).** Before establishing the correctness of \(A\), we need the following claim:
**Claim 4.2**.: \(f_{k+1}=\hat{f}_{k+1}\) _if and only if the following holds:_
* \(V_{\mathsf{History}}^{\hat{f}_{i},\hat{f}_{i+1}}(C_{n_{i}},w)=1\) _for every_ \(i\in\{0,1,\ldots,k\}\) _and for every_ \(w\in\{0,1\}^{5(\log T_{i}+n_{i})}\)_._
Proof.: First, assume that \(f_{k+1}=\hat{f}_{k+1}\). By Item 1 of Lemma 3.7, we have that \(\hat{f}_{i}=f_{i}\) for every \(i\in\{0,1,\ldots,k+1\}\). Recall that by definition, \(f_{i+1}=\mathsf{History}(C_{n_{i}},f_{i})\) for every \(i\in\{0,1,\ldots,k\}\). Hence, by Item 2 of Lemma 3.7, we have that for every \(i\in\{0,1,\ldots,k\}\) and for every \(w\in\{0,1\}^{5(\log T_{i}+n_{i})}\), \(V_{\mathsf{History}}^{\hat{f}_{i},\hat{f}_{i+1}}(C_{n_{i}},w)=1\) holds.
For the other direction, suppose that for every \(i\in\{0,1,\ldots,k\}\) and \(w\in\{0,1\}^{5(\log T_{i}+n_{i})}\), \(V_{\mathsf{History}}^{\hat{f}_{i},\hat{f}_{i+1}}(C_{n_{i}},w)=1\) holds. First recall that \(f_{0}=\hat{f}_{0}\) by definition. By an induction on \(i\in[k+1]\) and (the uniqueness part of) Item 2 of Lemma 3.7, it follows that \(f_{i}=\hat{f}_{i}\) for every \(i\in\{0,1,\ldots,k+1\}\). In particular, \(f_{k+1}=\hat{f}_{k+1}\). \(\diamond\)
Now we are ready to establish that \(A\) is a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm computing \(z\) on input \(1^{n_{k}}\). We first prove the completeness of \(A\); i.e., there is a proof \(\pi_{1}\) such that for every \(\pi_{2}\), \(V_{A}(1^{n_{k}},\pi_{1},\pi_{2})\) outputs \(z=\mathsf{Korten}(C_{n_{k}},f_{k})\). We set \(\pi_{1}\) to be the following proof: If \(k<t\), then \(f_{k+1}\in\operatorname{Range}(\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}])\), and we can set \(\pi_{1}\in\{0,1\}^{n_{k+1}}\) to be the input such that \(f_{k+1}=\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}](\pi_{1})\); if \(k=t\), then we simply set \(\pi_{1}=f_{k+1}\). Then, we have \(f_{k+1}=\hat{f}_{k+1}\), and by Claim 4.2, we know that \(V_{A}\) will output \(z=\mathsf{Korten}(C_{n_{k}},f_{k})\) on every proof \(\pi_{2}\).
Next, we show that for every \(\pi_{1}\), there is some \(\pi_{2}\) that makes \(V_{A}\) output either \(z\) or \(\bot\). It suffices to consider \(\pi_{1}\) such that for every \(\pi_{2}\), \(V_{A}(1^{n_{k}},\pi_{1},\pi_{2})\neq\bot\). In this case, every invocation of Equation (1) holds, and thus by Claim 4.2 we know that \(f_{k+1}=\hat{f}_{k+1}\). It follows that \(\mathsf{Korten}(C_{n_{k}},f_{k})=z\) and \(V_{A}\) will output \(z\) regardless of \(\pi_{2}\).
Finally, we generalize \(A\) and \(V_{A}\) to work on all inputs \(1^{n}\). On input \(1^{n}\), \(V_{A}\) calculates the largest \(\ell\) such that \(n^{(\ell)}\leq n\), and also calculates the largest \(k^{\prime}\) such that \(n^{(\ell)}_{k^{\prime}}\leq n\). If \(n^{(\ell)}_{k^{\prime}}\neq n\), then \(V_{A}\) immediately outputs \(\bot\) and halts. Otherwise, \(V_{A}\) receives an advice bit indicating whether \(k^{\prime}=k^{(\ell)}\), where \(k^{(\ell)}\) is the largest integer such that \(f^{(\ell)}_{k^{(\ell)}}\not\in\operatorname{Range}(\mathsf{GGM}_{T^{(\ell)}_{k^{(\ell)}}}[C_{n^{(\ell)}_{k^{(\ell)}}}])\). If this is the case, then \(V_{A}\) runs the verification procedure above; otherwise, it immediately outputs \(\bot\) and halts. It is easy to see that \(V_{A}\) runs in \(\operatorname{poly}(n)\) time, and is an infinitely-often single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}\) algorithm solving the range avoidance problem of \(\{C_{n}\}_{n\in\mathbb{N}}\).
From Remark 3.5 and Remark 3.8, one can observe that the proof above also relativizes. Hence we have the following as well.
**Theorem 4.3** (Relativized version of Theorem4.1).: _Let \(\mathcal{O}\colon\{0,1\}^{*}\to\{0,1\}\) be any oracle. Let \(\{C^{\mathcal{O}}_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of \(\mathcal{O}\)-oracle circuits. There is a single-valued \(\mathsf{F}\Sigma_{2}\mathsf{P}^{\mathcal{O}}\) algorithm \(A^{\mathcal{O}}\) with one bit of advice such that for infinitely many \(n\in\mathbb{N}\), \(A^{\mathcal{O}}(1^{n})\) outputs \(y_{n}\in\{0,1\}^{2n}\setminus\operatorname{Range}(C^{\mathcal{O}}_{n})\)._
We omit the proof of the following corollary since it is superseded by the results in the next section.
**Corollary 4.4**.: \(\Sigma_{2}\mathsf{E}\not\subseteq\mathsf{SIZE}[2^{n}/n]\) _and \((\Sigma_{2}\mathsf{E}\cap\Pi_{2}\mathsf{E})/_{1}\not\subseteq\mathsf{SIZE}[2^ {n}/n]\). Moreover, these results relativize: for every oracle \(\mathcal{O}\), \(\Sigma_{2}\mathsf{E}^{\mathcal{O}}\not\subseteq\mathsf{SIZE}^{\mathcal{O}}[2 ^{n}/n]\) and \((\Sigma_{2}\mathsf{E}^{\mathcal{O}}\cap\Pi_{2}\mathsf{E}^{\mathcal{O}})/_{1} \not\subseteq\mathsf{SIZE}^{\mathcal{O}}[2^{n}/n]\)._
## 5 Circuit Lower Bounds for \(\mathsf{S}_{2}\mathsf{E}\)
In this section, we prove our near-maximum circuit lower bounds for \(\mathsf{S}_{2}\mathsf{E}/_{1}\) by giving a new single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm for Avoid.
### Reed-Muller Codes
To prove near-maximum circuit lower bounds for \(\mathsf{S}_{2}\mathsf{E}/_{1}\), we will need several standard tools for manipulating Reed-Muller (RM) codes (i.e., low-degree multi-variate polynomials).
For a polynomial \(P\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\), where \(\mathbb{F}_{p}\) is the finite field of \(p\) elements, we use \(\deg_{\max}(P)\) to denote the maximum individual degree of variables in \(P\). Let \(p\) be a prime, \(\Delta,m\in\mathbb{N}\). For a string \(S\in\{0,1\}^{\Delta^{m}}\), we use \(\mathsf{RM}_{\mathbb{F}_{p},\Delta,m}(S)\) to denote its Reed-Muller encoding by extension: letting \(H=\{0,1,\ldots,\Delta-1\}\) and \(w_{1},\ldots,w_{\Delta^{m}}\in H^{m}\) be the enumeration of all elements in \(H^{m}\) in the lexicographical order, \(\mathsf{RM}_{\mathbb{F}_{p},\Delta,m}(S)\) is the unique polynomial \(P\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) such that (1) \(P(w_{i})=S_{i}\) for every \(i\in[\Delta^{m}]\) and (2) \(\deg_{\max}(P)\leq\Delta-1\).25
Footnote 25: To see the uniqueness of \(P\), note that for every \(P\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) with \(\deg_{\max}(P)\leq\Delta-1\), the restriction of \(P\) to \(H^{m}\) uniquely determines the polynomial \(P\). Also, such \(P\) can be constructed by standard interpolation.
We also fix a Boolean encoding of \(\mathbb{F}_{p}\) denoted as \(\mathsf{Enc}_{\mathbb{F}_{p}}\colon\mathbb{F}_{p}\to\{0,1\}^{\lceil\log p\rceil}\). For simplicity, we can just map \(z\in\{0,1,\ldots,p-1\}\) to its binary encoding. In particular, \(\mathsf{Enc}_{\mathbb{F}_{p}}(0)=0^{\lceil\log p\rceil}\) and \(\mathsf{Enc}_{\mathbb{F}_{p}}(1)=0^{\lceil\log p\rceil-1}\circ 1\).26 Now we further define \(\mathsf{BRM}_{\mathbb{F}_{p},\Delta,m}(S)\) by concatenating \(\mathsf{RM}_{\mathbb{F}_{p},\Delta,m}(S)\) with \(\mathsf{Enc}_{\mathbb{F}_{p}}\), thus obtaining a Boolean encoding again. Formally, letting \(P=\mathsf{RM}_{\mathbb{F}_{p},\Delta,m}(S)\) and \(w_{1},\ldots,w_{p^{m}}\in\mathbb{F}_{p}^{m}\) be the enumeration of all elements from \(\mathbb{F}_{p}^{m}\) in the lexicographic order, we define \(\mathsf{BRM}_{\mathbb{F}_{p},\Delta,m}(S)=\mathsf{Enc}_{\mathbb{F}_{p}}(P(w_{1 }))\circ\mathsf{Enc}_{\mathbb{F}_{p}}(P(w_{2}))\circ\ldots\circ\mathsf{Enc}_{ \mathbb{F}_{p}}(P(w_{p^{m}}))\). We remark that for every \(i\in[\Delta^{m}]\), in \(\operatorname{poly}(m,\log p)\) time one can compute an index \(i^{\prime}\in[p^{m}\cdot\lceil\log p\rceil]\) such that \(\mathsf{BRM}_{\mathbb{F}_{p},\Delta,m}(S)_{i^{\prime}}=S_{i}\).
Footnote 26: This fact is useful because if we know a string \(m\in\{0,1\}^{\lceil\log p\rceil}\) encodes either \(0\) or \(1\), then we can decode it by only querying the last bit of \(m\).
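A minimal sketch of evaluating the extension \(\mathsf{RM}_{\mathbb{F}_{p},\Delta,m}(S)\) at an arbitrary point, via the tensor-product Lagrange basis over \(H=\{0,\ldots,\Delta-1\}\); the name `rm_eval` and the brute-force \(O(\Delta^{m})\) running time are ours (this illustrates the construction, not an efficient encoder).

```python
def rm_eval(S, p, Delta, m, x):
    """Evaluate P = RM_{F_p, Delta, m}(S) at x in F_p^m, where S (length
    Delta**m) lists the values of P on H^m in lexicographic order."""
    def lagrange(h, xi):
        # L_h(xi) = prod_{a in H, a != h} (xi - a) / (h - a) over F_p
        num, den = 1, 1
        for a in range(Delta):
            if a != h:
                num = num * (xi - a) % p
                den = den * (h - a) % p
        return num * pow(den, p - 2, p) % p        # Fermat inverse (p prime)
    total = 0
    for idx in range(Delta ** m):
        w, t = [], idx                  # idx -> (w_1, ..., w_m) in base Delta,
        for _ in range(m):              # first coordinate most significant
            w.append(t % Delta); t //= Delta
        w.reverse()
        term = int(S[idx]) % p
        for wi, xi in zip(w, x):
            term = term * lagrange(wi, xi) % p
        total = (total + term) % p
    return total
```

By construction, each Lagrange factor has individual degree \(\Delta-1\) and the values on \(H^{m}\) are reproduced exactly, matching the two defining properties above.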
We need three properties of Reed-Muller codes, which we explain below.
**Self-correction for polynomials.** We first need the following self-corrector for polynomials, which, given oracle access to a function that is close to a low-degree polynomial \(P\), efficiently computes the value of \(P\) on any input. (In other words, it is a _local decoder_ for the Reed-Muller code.)
**Lemma 5.1** (A self-corrector for polynomials, cf. [12, 13]).: _There is a probabilistic oracle algorithm \(\mathsf{PCorr}\) such that the following holds. Let \(p\) be a prime, \(m,\Delta\in\mathbb{N}\) such that \(\Delta<p/3\). Let \(g\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) be a function such that for some polynomial \(P\) of total degree at most \(\Delta\),_
\[\Pr_{\vec{x}\leftarrow\mathbb{F}_{p}^{m}}[g(\vec{x})\neq P(\vec{x})]\leq 1/4.\]
_Then for all \(\vec{x}\in\mathbb{F}_{p}^{m}\), \(\mathsf{PCorr}^{g}(p,m,\Delta,\vec{x})\) runs in time \(\operatorname{poly}(\Delta,\log p,m)\) and outputs \(P(\vec{x})\) with probability at least \(2/3\)._
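A minimal sketch of the standard random-line corrector behind such statements, assuming \(g\) is given as a Python callable on tuples; this is our illustration, not the exact \(\mathsf{PCorr}\) of the citation.

```python
# Sketch of self-correction: restrict g to a random line through x.  The
# restriction of the hidden degree-<=Delta polynomial P to a line is a
# univariate polynomial of degree <= Delta, so its values at t = 1..Delta+1
# determine its value at t = 0, which is P(x).  Majority-vote over lines.
import random
from collections import Counter

def pcorr(g, p, m, Delta, x, reps=31):
    votes = Counter()
    for _ in range(reps):
        d = [random.randrange(p) for _ in range(m)]   # random direction
        ts = list(range(1, Delta + 2))
        ys = [g(tuple((xi + t * di) % p for xi, di in zip(x, d))) for t in ts]
        val = 0                          # Lagrange-extrapolate the line to t = 0
        for i, ti in enumerate(ts):
            li = 1
            for j, tj in enumerate(ts):
                if j != i:
                    li = li * (-tj) % p * pow(ti - tj, -1, p) % p
            val = (val + ys[i] * li) % p
        votes[val] += 1
    return votes.most_common(1)[0][0]
```

Each sampled point \(x+t\cdot d\) with \(t\neq 0\) is marginally uniform in \(\mathbb{F}_{p}^{m}\); the actual \(\mathsf{PCorr}\) needs a more careful analysis (e.g., decoding the univariate restriction rather than plain interpolation) to tolerate a \(1/4\) fraction of errors for all \(\Delta\).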
Low-max-degree test.We also need the following efficient tester, which checks whether a given polynomial has maximum individual degree at most \(\Delta\) or is far from such polynomials.27
Footnote 27: To obtain the theorem below, we set the parameter \(\delta\) and \(\varepsilon\) from [1, Remark 5.15] to be \(\min\Bigl{(}\frac{1}{200n^{2}(\Delta+1)},1/2p\Bigr{)}\) and \(\min\Bigl{(}\frac{1}{400n^{3}(\Delta+1)},1/2p\Bigr{)}\), respectively.
**Lemma 5.2** (Low-max-degree tester [1, Remark 5.15]).: _Let \(n,\Delta,p\in\mathbb{N}\) be such that \(p\geq 20\cdot(\Delta+1)^{2}\cdot n^{2}\) and \(p\) is a prime. There is a probabilistic non-adaptive oracle machine \(\mathsf{LDT}\) such that the following holds. Let \(g\colon\mathbb{F}_{p}^{n}\to\mathbb{F}_{p}\). Then for \(\delta=3n^{2}\cdot(\Delta+1)/p\), it holds that_
1. _if_ \(\deg_{\max}(g)\leq\Delta\)_, then_ \(\mathsf{LDT}^{g}(p,n,\Delta)\) _accepts with probability_ \(1\)_,_
2. _if_ \(g\) _is at least_ \(\delta\)_-far from every polynomial with maximum individual degree at most_ \(\Delta\)_, then_ \(\mathsf{LDT}^{g}(p,n,\Delta)\) _rejects with probability at least_ \(2/3\)_, and_
3. \(\mathsf{LDT}\) _runs in_ \(\operatorname{poly}(p)\) _time._
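The tester of the citation is parameterized rather delicately (see the footnote above), but its core idea is easy to illustrate. The following sketch, ours and deliberately simplified rather than \([1]\)'s tester, checks consistency of \(g\) along random axis-parallel lines, on which a polynomial of individual degree at most \(\Delta\) must restrict to a univariate polynomial of degree at most \(\Delta\); it assumes \(p\geq\Delta+2\), which holds under the lemma's hypothesis.

```python
# Simplified axis-parallel consistency check: sample a random point and axis,
# read g at Delta+2 points on that line, interpolate a degree-<=Delta
# polynomial through the first Delta+1 values, and verify it predicts the
# (Delta+2)-th value.
import random

def axis_parallel_check(g, p, n, Delta, reps=60):
    for _ in range(reps):
        x = [random.randrange(p) for _ in range(n)]
        axis = random.randrange(n)
        ts = random.sample(range(p), Delta + 2)   # distinct points on the line
        ys = []
        for t in ts:
            x[axis] = t
            ys.append(g(tuple(x)))
        t_star, y_star = ts[-1], ys[-1]
        val = 0                                   # interpolate at t_star
        for i in range(Delta + 1):
            li = 1
            for j in range(Delta + 1):
                if j != i:
                    li = li * (t_star - ts[j]) % p * pow(ts[i] - ts[j], -1, p) % p
            val = (val + ys[i] * li) % p
        if val != y_star:
            return False                          # caught an inconsistent line
    return True
```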
Comparing two RM codewords.Lastly, we show an efficient algorithm that, given oracle access to two codewords of \(\mathsf{RM}_{\mathbb{F}_{p},\Delta,m}\), computes the lexicographically first differing point between the respective messages of the two codewords.
**Lemma 5.3** (Comparing two RM codewords).: _Let \(p\) be a prime. Let \(m,\Delta\in\mathbb{N}\) be such that \(m\cdot\Delta<p/2\). There is a probabilistic oracle algorithm \(\mathsf{Comp}\) that takes two polynomials \(f,g\colon\mathbb{F}_{p}^{m}\to\mathbb{F}_{p}\) as oracles, such that if both \(\deg_{\max}(f)\) and \(\deg_{\max}(g)\) are at most \(\Delta\), then the following holds with probability at least \(9/10\):_
* _If_ \(f\neq g\)_, then_ \(\mathsf{Comp}^{f,g}(p,m,\Delta)\) _outputs the lexicographically smallest element_ \(w\) _in_ \(H^{m}\) _such that_ \(f(w)\neq g(w)\)_, where_ \(H=\{0,1,\ldots,\Delta-1\}\)_._28__ Footnote 28: Since both \(f\) and \(g\) have max degree at most \(\Delta\), their values are completely determined by their restrictions on \(H^{m}\). Hence, if \(f\neq g\), then such \(w\) must exist.
* _If_ \(f=g\)_, then_ \(\mathsf{Comp}^{f,g}(p,m,\Delta)\) _outputs_ \(\bot\)_._
* \(\mathsf{Comp}\) _makes at most_ \(\operatorname{poly}(m\cdot\Delta)\) _queries to both_ \(f\) _and_ \(g\)_, and runs in_ \(\operatorname{poly}(m\cdot\Delta\cdot\log p)\) _time._
Proof.: Our proof is similar to the proof from [11], which only considers multi-linear polynomials. Our algorithm \(\mathsf{Comp}^{f,g}(p,m,\Delta)\) works as follows:
1. The algorithm has \(m\) stages, where the \(i\)-th stage aims to find the \(i\)-th entry of \(w\). At the end of the \(i\)-th stage, the algorithm obtains a length-\(i\) prefix of \(w\).
2. For every \(i\in[m]\): 1. Let \(w_{<i}\in H^{i-1}\) be the current prefix. For every \(h\in\{0,1,\ldots,\Delta-1\}\), we run a randomized polynomial identity test to check whether the restricted polynomial \(f(w_{<i},h,\cdot)\) and \(g(w_{<i},h,\cdot)\) are the same, with error at most \(\frac{1}{10m|H|}\).29 Footnote 29: Note that these two polynomials have total degree at most \(m\cdot\Delta<p/2\). Hence if they are different, their values on a random element from \(\mathbb{F}_{p}^{m-i}\) are different with probability at least \(1/2\). Hence the desired error level can be achieved by sampling \(O(\log m+\log\Delta)\) random points from \(\mathbb{F}^{m-i}\) and checking whether \(f(w_{<i},h,\cdot)\) and \(g(w_{<i},h,\cdot)\) have the same values.
2. We set \(w_{i}\) to be the smallest \(h\) such that our test above reports that \(f(w_{<i},h,\cdot)\) and \(g(w_{<i},h,\cdot)\) are distinct. If there is no such \(h\), we immediately return \(\bot\).
By a union bound, all \(m\cdot|H|\) polynomial identity tests are correct with probability at least \(9/10\). In this case, if \(f=g\), then the algorithm outputs \(\bot\) in the first stage. If \(f\neq g\), by induction on \(i\), we can show that for every \(i\in[m]\), \(w_{\leq i}\) is the lexicographically smallest element from \(H^{i}\) such that \(f(w_{\leq i},\cdot)\) and \(g(w_{\leq i},\cdot)\) are distinct, which implies that the output \(w\) is also the lexicographically smallest element \(w\) in \(H^{m}\) such that \(f(w)\neq g(w)\).
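The prefix search above can be sketched in a few lines; this is our illustrative rendering (with \(f,g\) as callables and our own helper names), not the paper's pseudocode.

```python
# Sketch of Comp: stage i fixes the i-th coordinate of the lexicographically
# smallest differing point w by testing, for h = 0, 1, ..., whether the
# restrictions f(prefix, h, .) and g(prefix, h, .) differ (Schwartz-Zippel).
import random

def restrictions_differ(f, g, p, m_rest, trials=20):
    for _ in range(trials):
        y = tuple(random.randrange(p) for _ in range(m_rest))
        if f(y) != g(y):
            return True
    return False

def comp(f, g, p, m, Delta, trials=20):
    prefix = []
    for i in range(m):
        hit = None
        for h in range(Delta):          # h ranges over H = {0, ..., Delta-1}
            fi = lambda y, h=h: f(tuple(prefix) + (h,) + y)
            gi = lambda y, h=h: g(tuple(prefix) + (h,) + y)
            if restrictions_differ(fi, gi, p, m - i - 1, trials):
                hit = h
                break
        if hit is None:
            return None                 # f = g with high probability
        prefix.append(hit)
    return tuple(prefix)
```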
### Encoded History and \(\mathsf{S}_{2}\mathsf{BPP}\) Verification
Next, we define the following encoded history.
**Definition 5.4**.: Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit, and \(f\in\{0,1\}^{T}\) be a "hard truth table" in the sense that \(f\not\in\operatorname{Range}(\mathsf{GGM}_{T}[C])\). Let \(k\), \((i_{\star},j_{\star})\), and \(\{v_{i,j}\}_{i,j}\) be defined as in Algorithm 3.1. Let \(S\) be the concatenation of \(\mathsf{enc}(v_{i,j})\) for every \(i\in\{0,1,\ldots,k\}\), \(j\in\{0,1,\ldots,2^{i}-1\}\), in the reverse lexicographical order of \((i,j)\), padded with zeros at the end to length exactly \(5T\). (Recall that \(\mathsf{enc}(v_{i,j})\) was defined in Definition 3.6.) Let \(p\) be the smallest prime that is at least \(20\cdot\log^{5}T\), and \(m\) be the smallest integer such that \((\log T)^{m}\geq 5\cdot T\).
The _encoded computational history_ of \(\mathsf{Korten}(C,f)\), denoted as
\[\widetilde{\mathsf{History}}(C,f),\]
consists of \((i_{\star},j_{\star})\), concatenated with \(\mathsf{BRM}_{\mathbb{F}_{p},\log T,m}(S)\).
The length of the encoded history is at most
\[\left\lceil\log(40\cdot\log^{5}T)\right\rceil\cdot(40\cdot\log^{5}T)^{\log(5T )/\log\log T+1}+2\log T\leq T^{6}\]
for all sufficiently large \(T\), and for convenience we always pad zeros at the end so that its length becomes exactly \(T^{6}\).30
Footnote 30: For simplicity even for \(T\) such that the length of the encoded history is longer than \(T^{6}\), we will pretend its length is exactly \(T^{6}\) throughout this section. This does not affect the analysis in our main theorem Theorem 5.7 since there we only care about sufficiently large \(T\).
Recall that the original computational history \(\mathsf{History}(C,f)\) is simply the concatenation of \((i_{\star},j_{\star})\) and \(S\). In the encoded version, we encode its \(S\) part by the Reed-Muller code instead. In the rest of this section, when we say history, we always mean the encoded history \(\widetilde{\mathsf{History}}(C,f)\) instead of the vanilla history \(\mathsf{History}(C,f)\).
We need the following lemma.
**Lemma 5.5**.: _Let \(n,T\in\mathbb{N}\) be such that \(\log T\leq n\leq T\). Let \(C\colon\{0,1\}^{n}\to\{0,1\}^{2n}\) be a circuit and \(f\in\{0,1\}^{T}\setminus\operatorname{Range}(\mathsf{GGM}_{T}[C])\). Let \(h\coloneqq\mathsf{History}(C,f)\) and \(z\coloneqq\mathsf{Korten}(C,f)\)._
1. **(history contains input/output)** _There is a_ \(\operatorname{poly}(\log T)\)_-time oracle algorithm_ Input _and an_ \(O(n)\)_-time oracle algorithm_ Output_, both of which have input parameters_ \(T,n\) _and take a string_ \(\tilde{h}\in\{0,1\}^{T^{6}}\) _as oracle, such that the following hold:_ 1. \(\mathsf{Input}_{T,n}\) _makes a single query to its oracle; when given_ \(h\) _as the oracle,_ \(\mathsf{Input}_{T,n}\) _takes an additional input_ \(i\in\{0,1,\ldots,T-1\}\) _and outputs_ \(f_{i}\)_._ 2. \(\mathsf{Output}_{T,n}\) _makes at most_ \(4n\) _queries to its oracle; when given_ \(h\) _as the oracle,_ Output\({}_{T,n}\) _outputs_ \(z=\mathsf{Korten}(C,f)\)_._
2. **(\(\mathsf{S}_{2}\mathsf{BPP}\) verification of the history)** _There is a randomized oracle algorithm_ \(V\) _with input parameters_ \(T,n\) _such that the following hold:_ 1. \(V\) _takes strings_ \(\tilde{f}\in\{0,1\}^{T},\pi_{1},\pi_{2}\in\{0,1\}^{T^{6}}\) _as oracles, the circuit_ \(C\)_, an integer_ \(i\in\big{[}T^{6}\big{]}\)_, and_ \(\varepsilon\in(0,1)\) _as input, and runs in_ \(\operatorname{poly}(n,\log\varepsilon^{-1})\) _time._ 2. _For every_ \(\pi\in\{0,1\}^{T^{6}}\) _and every_ \(i\in\{0,1,\ldots,T^{6}-1\}\)_, we have that_ \[\Pr\Bigl{[}V_{T,n}^{f,\pi,h}(C,i,\varepsilon)=h_{i}\Bigr{]}\geq 1-\varepsilon\quad \text{and}\quad\Pr\Bigl{[}V_{T,n}^{f,h,\pi}(C,i,\varepsilon)=h_{i}\Bigr{]} \geq 1-\varepsilon.\]
Proof.: Again, the algorithms \(\mathsf{Input}_{T,n}\) and \(\mathsf{Output}_{T,n}\) can be constructed in a straightforward way.31 So we focus on the construction of \(V\). Let \(p,m,k\in\mathbb{N}\) be as in Definition 5.4. We also set \(\mathbb{F}=\mathbb{F}_{p}\) and \(\Delta=\log T\) in the rest of the proof.
Footnote 31: To see that \(\mathsf{Output}_{T,n}\) makes at most \(4n\) queries: Note that \(\mathsf{Output}\) first reads the pair \((i_{\star},j_{\star})\) from \(h\), and then reads two corresponding blocks from \(v_{i,j}\) encoded in \(h\). In total, it reads at most \(2\log T+2n\leq 4n\) bits from \(h\).
Our \(V\) always first _selects_ one of the oracles \(\pi_{1}\) and \(\pi_{2}\) (say \(\pi_{\mu}\) for \(\mu\in\{1,2\}\)), and then outputs \(\pi_{\mu}(i)\). Hence, in the following, we say that \(V\) selects \(\pi_{\mu}\) to mean that \(V\) outputs \(\pi_{\mu}(i)\) and terminates. Given \(\pi_{1}\) and \(\pi_{2}\), let \(g_{1},g_{2}\colon\mathbb{F}^{m}\to\mathbb{F}\) be the (potential) RM codewords encoded in \(\pi_{1}\) and \(\pi_{2}\), respectively.32 From now on, we will assume that \(i\) points to an entry in the encoded history \(g_{1}\) or \(g_{2}\) instead of the encoded pair of integers \((i_{\star},j_{\star})\). We will discuss the other case at the end of the proof.
Footnote 32: Technically \(\pi_{1}\) and \(\pi_{2}\) are supposed to contain the RM codewords concatenated with \(\mathsf{Enc}_{\mathbb{F}_{p}}\colon\mathbb{F}_{p}\to\{0,1\}^{\lceil\log p\rceil}\).
Low-max-degree test and self-correction. We first run \(\mathsf{LDT}^{g_{1}}(p,m,\Delta-1)\) and \(\mathsf{LDT}^{g_{2}}(p,m,\Delta-1)\) \(c_{1}\) times, where \(c_{1}\) is a sufficiently large constant. Recall that \(p\geq 20\cdot\log^{5}T\), \(m=\lceil\log(5T)/\log\log T\rceil\), and \(\Delta=\log T\). It follows that \(p\geq 20\cdot((\Delta-1)+1)^{2}\cdot m^{2}\), which satisfies the condition of Lemma 5.2. We also note that \(3m^{2}\cdot((\Delta-1)+1)/p<1/4\). Hence, by Lemma 5.2, if \(g_{1}\) is \(1/4\)-far from all polynomials with maximum individual degree at most \(\Delta-1\), then \(\mathsf{LDT}^{g_{1}}(p,m,\Delta-1)\) rejects with probability at least \(2/3\), and similarly for \(g_{2}\).
Now, if any of the runs on \(\mathsf{LDT}^{g_{1}}(p,m,\Delta-1)\) rejects, \(V\) selects \(\pi_{2}\), and if any of the runs on \(\mathsf{LDT}^{g_{2}}(p,m,\Delta-1)\) rejects, \(V\) selects \(\pi_{1}\).33 In other words, \(V\) first _disqualifies_ the oracles that do not pass the low-max-degree test. We set \(c_{1}\) to be large enough so that conditioning on the event that \(V\) does not terminate yet, with probability at least \(0.99\), both \(g_{1}\) and \(g_{2}\) are \(1/4\)-close to polynomials \(\widetilde{g}_{1}\colon\mathbb{F}_{p}^{m}\to\mathbb{F}\) and \(\widetilde{g}_{2}\colon\mathbb{F}_{p}^{m}\to\mathbb{F}\), respectively, where \(\deg_{\max}(\widetilde{g}_{1})\) and \(\deg_{\max}(\widetilde{g}_{2})\) are at most \(\Delta-1\).
Footnote 33: As a minor detail, if both \(g_{1}\) and \(g_{2}\) are rejected by some runs, \(V\) selects \(\pi_{2}\).
We can then use \(\mathsf{PCorr}^{g_{1}}(p,m,m\cdot(\Delta-1),\cdot)\) and \(\mathsf{PCorr}^{g_{2}}(p,m,m\cdot(\Delta-1),\cdot)\) to access the polynomials \(\widetilde{g}_{1}\) and \(\widetilde{g}_{2}\). (Note that \(m\cdot(\Delta-1)<p/3\), which satisfies the condition of Lemma 5.1.) We repeat them each \(O(\log T+\log m)\) times to make sure that on a single invocation, they return the correct values of \(\widetilde{g}_{1}\) and \(\widetilde{g}_{2}\) respectively with probability at least \(1-1/(mT)^{c_{2}}\) for a sufficiently large constant \(c_{2}\). By Lemma 5.1, each call to \(\mathsf{PCorr}^{g_{1}}(p,m,m\cdot(\Delta-1),\cdot)\) or \(\mathsf{PCorr}^{g_{2}}(p,m,m\cdot(\Delta-1),\cdot)\) takes \(\operatorname{polylog}(T)\) time.
Selecting the better polynomial.From now on, we **refine** what it means when \(V\) selects \(\pi_{\mu}\): now it means that \(V\) outputs the bit corresponding to \(i\) in \(\widetilde{g}_{\mu}\) (recall that we are assuming that \(i\) points to an entry in the encoded history \(g_{1}\) or \(g_{2}\)).
Let \(\{v^{1}_{i,j}\}\) and \(\{v^{2}_{i,j}\}\) be the encoded histories in \(\widetilde{g}_{1}\) and \(\widetilde{g}_{2}\). Then \(V\) uses \(\mathsf{Comp}^{\widetilde{g}_{1},\widetilde{g}_{2}}(p,m,\Delta-1)\) to find the lexicographically largest \((i^{\prime},j^{\prime})\) such that \(v^{1}_{i^{\prime},j^{\prime}}\neq v^{2}_{i^{\prime},j^{\prime}}\).34 Note that \(\mathsf{Comp}^{\widetilde{g}_{1},\widetilde{g}_{2}}(p,m,\Delta-1)\) makes at most \(\mathsf{poly}(m\cdot\Delta)\) queries to both \(\widetilde{g}_{1}\) and \(\widetilde{g}_{2}\). By making \(c_{2}\) large enough, we know that \(\mathsf{Comp}\) operates correctly with probability at least \(0.8\). By operating correctly, we mean that (1) if \(\widetilde{g}_{1}\neq\widetilde{g}_{2}\), \(\mathsf{Comp}\) finds the correct \((i^{\prime},j^{\prime})\) and (2) If \(\widetilde{g}_{1}=\widetilde{g}_{2}\), \(\mathsf{Comp}\) returns \(\bot\).35
Footnote 34: Recall that the \(\{v_{i,j}\}\) is encoded in the reverse lexicographic order (Definition 5.4).
Footnote 35: From Lemma 5.3, \(\mathsf{Comp}^{\widetilde{g}_{1},\widetilde{g}_{2}}(p,m,\Delta-1)\) itself operates correctly with probability at least \(0.9\). But the access to \(\widetilde{g}_{1}\) (similarly to \(\widetilde{g}_{2}\)) is provided by \(\mathsf{PCorr}^{g_{1}}(p,m,m\cdot(\Delta-1),\cdot)\), which may err with probability at most \(1/(mT)^{c_{2}}\). So we also need to take a union bound over all the bad events that a query from \(\mathsf{Comp}\) to \(\widetilde{g}_{1}\) or \(\widetilde{g}_{2}\) is incorrectly answered.
In what follows, we assume that \(\mathsf{Comp}\) operates correctly. If \(\mathsf{Comp}\) returns \(\bot\), then \(V\) simply selects \(\pi_{1}\). Otherwise, there are several cases:
1. \(i^{\prime}=k\). In this case, \(\widetilde{g}_{1}\) and \(\widetilde{g}_{2}\) disagree on their leaf values, which are intended to encode \(f\). \(V\) queries \(f\) to figure out which one has the correct value, and selects the corresponding oracle. (Note that at most one of them can be consistent with \(f\). If none of them are consistent, then \(V\) selects \(\pi_{1}\).) From now on, assume \(i^{\prime}<k\) and set \(\alpha=v^{1}_{i^{\prime}+1,2j^{\prime}}\circ v^{1}_{i^{\prime}+1,2j^{\prime}+1}\). Note that by the definition of \((i^{\prime},j^{\prime})\), it holds that \(\alpha=v^{2}_{i^{\prime}+1,2j^{\prime}}\circ v^{2}_{i^{\prime}+1,2j^{\prime}+1}\) as well.
2. \(i^{\prime}<k\) and both \(v^{1}_{i^{\prime},j^{\prime}}\) and \(v^{2}_{i^{\prime},j^{\prime}}\) are not \(\bot\). In this case, \(V\) first checks whether both of them are in \(C^{-1}(\alpha)\) (it can be checked by testing whether \(C(v^{1}_{i^{\prime},j^{\prime}})=\alpha\) and \(C(v^{2}_{i^{\prime},j^{\prime}})=\alpha\)). If only one of them is contained in \(C^{-1}(\alpha)\), \(V\) selects the corresponding oracle. If none of them are contained, \(V\) selects \(\pi_{1}\). Finally, if both are contained in \(C^{-1}(\alpha)\), \(V\) checks which one is lexicographically smaller, and selects the corresponding oracle.
3. \(i^{\prime}<k\), and one of the \(v^{1}_{i^{\prime},j^{\prime}}\) and \(v^{2}_{i^{\prime},j^{\prime}}\) is \(\bot\). Say that \(v^{b}_{i^{\prime},j^{\prime}}=\bot\) for some \(b\in\{1,2\}\), and denote \(\bar{b}:=3-b\) as the index of the other proof. In this case, let \((i_{\diamond},j_{\diamond})\) denote the predecessor of \((i^{\prime},j^{\prime})\) in terms of the reverse lexicographical order (that is, the smallest pair that is lexicographically greater than \((i^{\prime},j^{\prime})\)). Since \(\mathsf{Comp}\) operates correctly, we have that \(v^{1}_{i_{\diamond},j_{\diamond}}=v^{2}_{i_{\diamond},j_{\diamond}}\). If \(v^{1}_{i_{\diamond},j_{\diamond}}=\bot\), then \(\pi_{\bar{b}}\) has to be incorrect (since by Definition 3.6, \(\bot\)'s form a contiguous suffix of the history), and \(V\) selects \(\pi_{b}\). Otherwise, if \(v^{\bar{b}}_{i^{\prime},j^{\prime}}\in C^{-1}(\alpha)\), then \(\pi_{b}\) is incorrect (as it claims that \(C^{-1}(\alpha)=\varnothing\)), and \(V\) selects \(\pi_{\bar{b}}\). Otherwise, \(V\) selects \(\pi_{b}\).
Analysis.Now we show that \(\Pr\Bigl{[}V^{f,h,\pi}_{T,n}(i)=h(i)\Bigr{]}\geq 2/3\). (The proof for \(\Pr\Bigl{[}V^{f,\pi,h}_{T,n}(i)=h(i)\Bigr{]}\geq 2/3\) is symmetric.) To get the desired \(\varepsilon\) error probability, one can simply repeat the above procedure \(O(\log 1/\varepsilon)\) times and output the majority.
First, by Lemma 5.2, \(\mathsf{LDT}^{g_{1}}(p,m,\Delta-1)\) passes with probability \(1\). If any of the runs of \(\mathsf{LDT}^{g_{2}}(p,m,\Delta-1)\) rejects, then \(V\) selects \(h\). Otherwise, we know that with probability at least \(0.99\), \(\mathsf{PCorr}^{g_{1}}(p,m,m\cdot(\Delta-1),\cdot)\) and \(\mathsf{PCorr}^{g_{2}}(p,m,m\cdot(\Delta-1),\cdot)\) provide access to polynomials \(\widetilde{g}_{1}\) and \(\widetilde{g}_{2}\) with maximum individual degree at most \(\Delta-1\), where \(\widetilde{g}_{1}\) encodes the correct history values \(\{v_{i,j}\}_{i,j}\) of \(\mathsf{Korten}(C,f)\).
Then, assuming \(\mathsf{Comp}\) operates correctly (which happens with probability at least \(0.8\)), if \(\widetilde{g}_{1}=\widetilde{g}_{2}\), then the selection of \(V\) does not matter. Now we assume \(\widetilde{g}_{1}\neq\widetilde{g}_{2}\).
We will verify that in all three cases above, \(h\) (as the first oracle) is selected by \(V\). In the first case, by definition, all leaf values in \(h\) are consistent with \(f\), and hence \(h\) is selected. In the second case, since \(h\) contains the correct history values, we know that \(v^{1}_{i^{\prime},j^{\prime}}\) must be the smallest element from \(C^{-1}(\alpha)\), so again \(h\) is selected. In the last case: (1) if \(v^{1}_{i_{\diamond},j_{\diamond}}=\bot\), then \(v^{1}_{i^{\prime},j^{\prime}}\) has to be \(\bot\) as well, thus \(h\) is selected; (2) if \(v^{1}_{i_{\diamond},j_{\diamond}}\neq\bot\) and \(v^{1}_{i^{\prime},j^{\prime}}=\bot\), then \(C^{-1}(\alpha)=\varnothing\), and since the other proof \(\pi\) claims some element \(v^{2}_{i^{\prime},j^{\prime}}\in C^{-1}(\alpha)\), \(h\) is selected; and (3) if \(v^{1}_{i_{\diamond},j_{\diamond}}\neq\bot\) and \(v^{1}_{i^{\prime},j^{\prime}}\neq\bot\), then \(\pi\) claims that \(C^{-1}(\alpha)=\varnothing\) and we can check that \(v^{1}_{i^{\prime},j^{\prime}}\in C^{-1}(\alpha)\), therefore \(h\) is selected as well.
The remaining case: \(i\) points to the location of \((i_{\star},j_{\star})\). In this case, \(V\) still runs the algorithm described above to make a selection. Indeed, if \(\mathsf{Comp}\) does not return \(\bot\), \(V\) operates exactly the same. But when \(\mathsf{Comp}\) returns \(\bot\), \(V\) cannot simply select \(\pi_{1}\) since we need to make sure that \(V\) selects the oracle corresponding to \(h\) (it can be either \(\pi_{1}\) or \(\pi_{2}\)). Hence, in this case, \(V\) first reads \((i^{1}_{\star},j^{1}_{\star})\) and \((i^{2}_{\star},j^{2}_{\star})\) from \(\pi_{1}\) and \(\pi_{2}\). If they are the same, \(V\) simply selects \(\pi_{1}\). Otherwise, for \(b\in[2]\), \(V\) checks whether \(v^{b}_{i^{b}_{\star},j^{b}_{\star}}=\bot\), and selects the one that satisfies this condition. (If none of the \(v^{b}_{i^{b}_{\star},j^{b}_{\star}}\) are, then \(V\) selects \(\pi_{1}\).) If both of \(v^{b}_{i^{b}_{\star},j^{b}_{\star}}\) are \(\bot\), \(V\) selects the \(\mu\in[2]\) such that \((i^{\mu}_{\star},j^{\mu}_{\star})\) is larger.
Now, we can verify that \(V^{f,h,\pi}_{T,n}\) selects \(h\) with high probability as well. (To see this, note that in the correct history, \((i_{\star},j_{\star})\) points to the lexicographically largest all-zero block.)
Finally, the running time bound follows directly from the description of \(V\).
#### 5.2.1 A remark on relativization
Perhaps surprisingly, although Lemma 5.5 heavily relies on arithmetization tools such as Reed-Muller encoding and low-degree tests, it in fact also relativizes. To see this, the crucial observation is that, similarly to Lemma 3.7, the verifier \(V\) from Lemma 5.5 only needs _black-box access_ to the input circuit \(C\), meaning that it only needs to evaluate \(C\) on certain chosen inputs. Hence, when \(C\) is actually an oracle circuit \(C^{\mathcal{O}}\) for some arbitrary oracle \(\mathcal{O}\), the only modification we need is that \(V\) now also takes \(\mathcal{O}\) as an oracle.
_Remark 5.6_.: Definition 5.4 and Lemma 5.5 _relativize_, in the sense that if \(C\) is an oracle circuit \(C^{\mathcal{O}}\) for some arbitrary oracle, Definition 5.4 needs no modification since Definition 3.6 relativizes, and Lemma 5.5 holds with the only modification that \(V\) now also needs to take \(\mathcal{O}\) as an oracle (since it needs to evaluate \(C\)).
Indeed, the remark above might sound strange at first glance: arguments that involve PCPs often do not _relativize_, and the encoded history \(\widetilde{\mathsf{History}}(C,f)\) looks similar to a PCP since it enables \(V\) to perform a probabilistic local verification. However, a closer inspection reveals a key difference: the circuit \(C\) is always treated as a black box--both in the construction of the history (Definition 3.6) and in the construction of the encoded history (Definition 5.4). That is, the arithmetization in the encoded history _does not arithmetize_ the circuit \(C\) itself.
### Lower Bounds for \(\mathsf{S}_{2}\mathsf{E}\)
Let \(\{C_{n}:\{0,1\}^{n}\to\{0,1\}^{2n}\}\) be a \(\mathsf{P}\)-uniform family of circuits. We show that there is a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(\mathcal{A}\) with one bit of advice such that for infinitely many \(n\in\mathbb{N}\), on input \(1^{n}\), \(\mathcal{A}(1^{n})\) outputs a canonical string that is outside the range of \(C_{n}\).
**Theorem 5.7**.: _Let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits. There is a sequence of valid outputs \(\{y_{n}\in\{0,1\}^{2n}\setminus\operatorname{Range}(C_{n})\}_{n\in\mathbb{N}}\) and a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\) with one bit of advice, such that for infinitely many \(n\in\mathbb{N}\), \(A(1^{n})\) outputs \(y_{n}\)._
Proof.: Our proof proceeds similarly to the proof of the previous Theorem 4.1. We will follow the same notation.
Notation.Let \(n^{(1)}\) be a large enough power of \(2\), \(n^{(\ell)}=2^{2^{n^{(\ell-1)}}}\) for each integer \(\ell>1\). Let \(n_{0}^{(\ell)}=n^{(\ell)}\) and \(t^{(\ell)}=O\Big{(}\log n_{0}^{(\ell)}\Big{)}\) be parameters that we set later. For each \(1\leq i\leq t^{(\ell)}\), let \(n_{i}^{(\ell)}:=\Big{(}n_{i-1}^{(\ell)}\Big{)}^{10}\). To show our algorithm \(A\) works on infinitely many input lengths, we will show that for every \(\ell\in\mathbb{N}\), there is an input length \(n_{i}^{(\ell)}\) for some \(i\in\big{[}t^{(\ell)}\big{]}\) such that \(A\) works.
Fix \(\ell\in\mathbb{N}\). From now on, for convenience, we will use \(n_{i}\) and \(t\) to denote \(n_{i}^{(\ell)}\) and \(t^{(\ell)}\), respectively.
Specifying \(T_{i}\) and \(f_{i}\).For each input length \(n_{i}\), we will specify a parameter \(T_{i}\in\mathbb{N}\) and a string \(f_{i}\in\{0,1\}^{T_{i}}\). Our win-win analysis is based on whether \(f_{i}\in\operatorname{Range}(\mathsf{GGM}_{T_{i}}[C_{n_{i}}])\) for each \(i\in\{0,1,\ldots,t\}\).
Let \(T_{0}:=2^{2n_{0}}\cdot 2n_{0}\) and \(f_{0}\) be the concatenation of all length-\(2n_{0}\) strings (which has length \(T_{0}\)). From Fact 3.4, we have that \(f_{0}\not\in\operatorname{Range}(\mathsf{GGM}_{T_{0}}[C_{n_{0}}])\). For every \(i\in[t]\), we define
\[f_{i}=\widetilde{\mathsf{History}}(C_{n_{i-1}},f_{i-1}).\]
From Definition 5.4, this also means that we have set \(T_{i}=T_{i-1}^{6}\) for every \(i\in[t]\).
Let \(t\) be the first integer such that \(T_{t+1}\leq n_{t+1}\). Note that we have \(T_{i}=(T_{0})^{6^{i}}\leq 2^{3n_{0}\cdot 6^{i}}\) and \(n_{i}=(n_{0})^{10^{i}}=2^{\log n_{0}\cdot 10^{i}}\). Hence, we have that \(t\leq O(\log n_{0})\). (Also note that \(n_{t}^{(\ell)}<n_{0}^{(\ell+1)}\).)
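As a quick numeric illustration of this schedule (ours, not part of the proof), one can track \(\log_{2}T_{i}\) and \(\log_{2}n_{i}\) directly:

```python
# Illustration: T_{i+1} = T_i^6 and n_{i+1} = n_i^10 with T_0 = 2^{2 n_0} * 2 n_0.
# Since log2 n_i grows by factors of 10 while log2 T_i grows by factors of 6,
# the first t with T_{t+1} <= n_{t+1} is O(log n_0).
import math

def first_t(n0):
    logT = 2 * n0 + math.log2(2 * n0)   # log2 of T_0
    logn = math.log2(n0)                # log2 of n_0
    t = 0
    while 6 * logT > 10 * logn:         # i.e., T_{t+1} > n_{t+1} still holds
        logT, logn, t = 6 * logT, 10 * logn, t + 1
    return t

for n0 in (16, 64, 256, 1024):
    print(n0, first_t(n0))              # grows roughly linearly in log2(n0)
```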
Description of our \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\). Now, let \(k\in\{0,1,\ldots,t\}\) be the largest integer such that \(f_{k}\not\in\operatorname{Range}(\mathsf{GGM}_{T_{k}}[C_{n_{k}}])\). Since \(f_{0}\not\in\operatorname{Range}(\mathsf{GGM}_{T_{0}}[C_{n_{0}}])\), such a \(k\) must exist. Let \(z:=\mathsf{Korten}(C_{n_{k}},f_{k})\); it follows from Theorem 3.3 that \(z\) is not in the range of \(C_{n_{k}}\) (i.e., \(z\in\{0,1\}^{2n_{k}}\setminus\operatorname{Range}(C_{n_{k}})\)). Our single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\) computes \(z\) on input \(1^{n_{k}}\) (see Definition 2.2).
We will first construct an \(\mathsf{S}_{2}\mathsf{BPP}\) verifier \(V\) that computes \(z\) in polynomial time on input \(1^{n_{k}}\), and then use the fact that all \(\mathsf{S}_{2}\mathsf{BPP}\) verifiers can be turned into equivalent \(\mathsf{S}_{2}\mathsf{P}\) verifiers with a polynomial-time blow-up [1, 10], from which we can obtain the desired verifier \(V_{A}\) for \(A\).
Description of an \(\mathsf{S}_{2}\mathsf{BPP}\) verifier \(V\) computing \(z\).Formally, \(V\) is a randomized polynomial-time algorithm that takes \(1^{n_{k}}\) and two witnesses \(\pi_{1},\pi_{2}\in\{0,1\}^{n_{k+1}}\) as input, and we aim to establish the following:
There exists \(\omega\in\{0,1\}^{n_{k+1}}\) such that for every \(\pi\in\{0,1\}^{n_{k+1}}\), we have
\[\Pr[V(1^{n_{k}},\omega,\pi)=z]\geq 2/3\qquad\text{and}\qquad\Pr[V(1^{n_{k}}, \pi,\omega)=z]\geq 2/3,\]
where the probabilities are over the internal randomness of \(V\).
In more detail, if \(k<t\), then \(V\) treats \(\pi_{1}\) and \(\pi_{2}\) as inputs to the circuit \(\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}]\), and let
\[\hat{f}_{k+1}:=\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}](\pi_{1})\quad\text{and} \quad\hat{g}_{k+1}:=\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}](\pi_{2}).\]
Here, the lengths of \(\pi_{1}\) and \(\pi_{2}\) are \(\ell:=n_{k+1}\leq\operatorname{poly}(n_{k})\). If \(k=t\), then \(V\) defines \(\hat{f}_{k+1}:=\pi_{1}\), \(\hat{g}_{k+1}:=\pi_{2}\), and their lengths are \(\ell:=T_{t+1}\leq n_{k+1}\leq\operatorname{poly}(n_{k})\). It is intended that one of the \(\hat{f}_{k+1}\) and \(\hat{g}_{k+1}\) is \(f_{k+1}=\widetilde{\mathsf{History}}(C_{n_{k}},f_{k})\) (\(V\) needs to figure out which one).
We now specify the intended proof \(\omega\in\{0,1\}^{n_{k+1}}\). When \(k<t\), since \(f_{k+1}\in\operatorname{Range}(\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}])\), we can set \(\omega\) so that \(\mathsf{GGM}_{T_{k+1}}[C_{n_{k+1}}](\omega)=f_{k+1}\). When \(k=t\), we simply set \(\omega=f_{k+1}\).
Note that Lemma 3.2 provides us "random access" to the (potentially very long) strings \(\hat{f}_{k+1}\) and \(\hat{g}_{k+1}\): (take \(\hat{f}_{k+1}\) as an example) given \(\pi_{1}\) and \(j\in\{0,1,\ldots,T_{k+1}-1\}\), one can compute the \(j\)-th bit of \(\hat{f}_{k+1}\) in \(\operatorname{poly}(n_{k})\) time. Also recall from Lemma 5.5 that for each \(i\), \(f_{i+1}=\widetilde{\mathsf{History}}(C_{n_{i}},f_{i})\) contains the string \(f_{i}\), which can be retrieved by the oracle algorithm \(\mathsf{Input}\) described in Item 1 of Lemma 5.5. Therefore, for each \(i\) from \(k\) downto \(1\), we can recursively define \(\hat{f}_{i}\) such that \((\hat{f}_{i})_{j}=\mathsf{Input}_{T_{i},n_{i}}^{\hat{f}_{i+1}}(j)\) (similarly for \(\hat{g}_{i}\)). We also define \(\hat{f}_{0}\) and \(\hat{g}_{0}\) to be the concatenation of all length-\((2n_{0})\) strings in the lexicographical order, so \(\hat{f}_{0}=\hat{g}_{0}=f_{0}\).
Applying the algorithm \(\mathsf{Input}\) recursively, we obtain two algorithms \(F\) and \(G\) (depending on \(\pi_{1}\) and \(\pi_{2}\), respectively) that given \(i\in\{0,1,\ldots,k+1\}\) and \(j\in\{0,1,\ldots,T_{i}-1\}\), output the \(j\)-th bit of \(\hat{f}_{i}\) or \(\hat{g}_{i}\), respectively. Since \(\mathsf{Input}\) only makes one oracle query, these algorithms run in \(\operatorname{poly}(n_{k})\) time.
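Schematically, \(F\) (and likewise \(G\)) just composes the index maps of \(\mathsf{Input}\) upward until it reaches the top-level oracle; a sketch, with `index_map` a hypothetical helper implementing the \(j\mapsto j^{\prime}\) translation of Item 1 of Lemma 5.5:

```python
def F(i, j, top_oracle, index_map, k):
    """Return (f_i)_j using a single query to the top-level string f_{k+1}.
    index_map(level, j) is assumed to return the position of (f_level)_j
    inside f_{level+1}, as provided by Input (Lemma 5.5, Item 1a)."""
    for level in range(i, k + 1):
        j = index_map(level, j)
    return top_oracle(j)
```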
We are now ready to formally construct \(V\). We first recursively define a series of procedures \(V_{0},\ldots,V_{k+1}\), where each \(V_{i}\) takes an input \(j\) and outputs (with high probability) the \(j\)-th bit of \(f_{i}\). Let \(V_{0}\) be the simple algorithm that, on input \(j\), computes the \(j\)-th bit of \(f_{0}\). For every \(i\in[k+1]\), we define
\[V_{i}(\alpha)\coloneqq\mathsf{Select}_{T_{i-1},n_{i-1}}^{V_{i-1},\hat{f}_{i},\hat{g}_{i}}(C_{n_{i-1}},\alpha,\varepsilon_{i})\]
for some \(\varepsilon_{i}\in(0,1)\) to be specified later, where \(\mathsf{Select}\) is the verification algorithm \(V\) from Item 2 of Lemma 5.5 (renamed here to avoid clashing with the verifier \(V\) we are constructing). We note that since \(V_{i-1}\) is a randomized algorithm, when \(V_{i}\) calls \(V_{i-1}\), it also draws _independent_ random coins used by the execution of \(V_{i-1}\). Moreover, all calls to \(\hat{f}_{i}\) and \(\hat{g}_{i}\) in \(V_{i}\) can be simulated by calling our algorithms \(F\) and \(G\). Jumping ahead, we remark that \(V_{i}\) is supposed to compute \(f_{i}\) when at least one of \(\hat{f}_{i}\) or \(\hat{g}_{i}\) is \(f_{i}\). We then set
\[V(1^{n_{k}},\pi_{1},\pi_{2})\coloneqq\mathsf{Output}_{T_{k},n_{k}}^{V_{k+1}}\]
(note that \(V_{k+1}\) is defined from \(\hat{f}_{k+1}\) and \(\hat{g}_{k+1}\), which are in turn constructed from \(\pi_{1}\) and \(\pi_{2}\)), where \(\mathsf{Output}_{T_{k},n_{k}}\) is the algorithm from Item 1 of Lemma 5.5.
Correctness of \(V\).Let \(\tau\in\mathbb{N}\) be a large constant such that \(\mathsf{Select}_{T,n}\) runs in \((n\cdot\log 1/\varepsilon)^{\tau}\) time. In particular, on any input \(\alpha\), \(\mathsf{Select}_{T_{i-1},n_{i-1}}^{V_{i-1},\hat{f}_{i},\hat{g}_{i}}(C_{n_{i-1} },\alpha,\varepsilon_{i})\) makes at most \((n_{i-1}\cdot\log 1/\varepsilon_{i})^{\tau}\) many queries to \(V_{i-1}\).
We say \(\mathsf{Select}_{T,n}^{f,\pi_{1},\pi_{2}}(C,\alpha,\varepsilon)\) makes an error if the following statements hold (\(h=\widetilde{\mathsf{History}}(C,f)\) from Lemma 5.5):36
Footnote 36: The condition below only applies when at least one of \(\pi_{1}\) and \(\pi_{2}\) is \(h\). If neither of them are \(h\), then \(\mathsf{Select}\) by definition never errs.
\[[\pi_{1}=h\quad\mathsf{OR}\quad\pi_{2}=h]\quad\mathsf{AND}\quad\Big{[}\mathsf{Select}_{T,n}^{f,\pi_{1},\pi_{2}}(C,\alpha,\varepsilon)\neq h_{\alpha}\Big{]}.\]
Similarly, we say that \(\mathsf{Select}_{T_{i-1},n_{i-1}}^{V_{i-1},\hat{f}_{i},\hat{g}_{i}}(C_{n_{i-1}},\alpha,\varepsilon_{i})\) makes an error if either (1) one of the queries to \(V_{i-1}\) is incorrectly answered (i.e., the answer is not consistent with \(f_{i-1}\)) or (2) all queries are correctly answered but \(\mathsf{Select}_{T_{i-1},n_{i-1}}^{f_{i-1},\hat{f}_{i},\hat{g}_{i}}(C_{n_{i-1}},\alpha,\varepsilon_{i})\) makes an error. Note that (2) happens with probability at most \(\varepsilon_{i}\) from Item 2 of Lemma 5.5.
Now we are ready to specify the parameter \(\varepsilon_{i}\). We set \(\varepsilon_{k+1}=1/(100\cdot n_{k+1})\), and for every \(i\in\{0,1,\ldots,k\}\), we set
\[\varepsilon_{i}=\frac{\varepsilon_{i+1}}{4\cdot(n_{i}\cdot\log 1/\varepsilon_{i+1})^{ \tau}}.\]
To show the correctness of \(V\), we prove the following claim by induction.
**Claim 5.8**.: _Assume either \(\hat{f}_{k+1}=f_{k+1}\) or \(\hat{g}_{k+1}=f_{k+1}\). For every \(i\in\{0,1,\ldots,k+1\}\) and \(\alpha\in[|f_{i}|]\), \(V_{i}(\alpha)\) outputs \(f_{i}(\alpha)\) with probability at least \(1-2\varepsilon_{i}\)._
Proof.: The claim certainly holds for \(V_{0}\). Now, for \(i\in[k+1]\), assuming it holds for \(V_{i-1}\), it follows that \(\mathsf{Select}_{T_{i-1},n_{i-1}}^{V_{i-1},\hat{f}_{i}\hat{g}_{i}}(C_{n_{i-1}},\alpha,\varepsilon_{i})\) makes an error with probability at most
\[\varepsilon_{i}+(n_{i-1}\cdot\log 1/\varepsilon_{i})^{\tau}\cdot 2\varepsilon_{i- 1}\leq 2\varepsilon_{i}.\]
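Indeed, unpacking \(\varepsilon_{i-1}=\varepsilon_{i}/\big{(}4\cdot(n_{i-1}\cdot\log 1/\varepsilon_{i})^{\tau}\big{)}\) from the recurrence makes the bound explicit:

\[\varepsilon_{i}+(n_{i-1}\cdot\log 1/\varepsilon_{i})^{\tau}\cdot\frac{2\,\varepsilon_{i}}{4\cdot(n_{i-1}\cdot\log 1/\varepsilon_{i})^{\tau}}=\varepsilon_{i}+\frac{\varepsilon_{i}}{2}\leq 2\varepsilon_{i}.\]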
By the definition of making an error and our assumption that either \(\hat{f}_{k+1}=f_{k+1}\) or \(\hat{g}_{k+1}=f_{k+1}\) (from which we know either \(\hat{f}_{i}=f_{i}\) or \(\hat{g}_{i}=f_{i}\)), it follows that \(V_{i}(\alpha)\) outputs \(f_{i}(\alpha)\) with probability at least \(1-2\varepsilon_{i}\). \(\diamond\)
Note that \(\mathsf{Output}_{T_{k},n_{k}}^{V_{k+1}}\) makes at most \(4n_{k}\) queries to \(V_{k+1}\). It follows from Claim 5.8 that when either \(\hat{f}_{k+1}=f_{k+1}\) or \(\hat{g}_{k+1}=f_{k+1}\), we have that \(V(1^{n_{k}},\pi_{1},\pi_{2})\) outputs \(z\) with probability at least \(1-(4n_{k})\cdot 2\varepsilon_{k+1}=1-8n_{k}/(100n_{k+1})\geq 2/3\). The correctness of \(V\) then follows from our choice of \(\omega\).
Running time of \(V\).Finally, we analyze the running time of \(V\), for which we first need to bound \(\log\varepsilon_{i}^{-1}\). First, we have
\[\log\varepsilon_{k+1}^{-1}=\log n_{k+1}+\log 100.\]
By our definition of \(\varepsilon_{i}\) and the fact that \(\tau\) is a constant, we have
\[\log\varepsilon_{i}^{-1} =\log\varepsilon_{i+1}^{-1}+\log 4+\tau\cdot\big{(}\log n_{i}+ \log\log\varepsilon_{i+1}^{-1}\big{)}\] \[\leq 2\log\varepsilon_{i+1}^{-1}+O(\log n_{i}).\]
Expanding the above and noting that \(k\leq t\leq O(\log n_{0})\), for every \(i\in[k+1]\) we have that
\[\log\varepsilon_{i}^{-1}\leq 2^{k}\cdot O\Bigg{(}\sum_{\ell=0}^{k}\log n_{ \ell}\Bigg{)}\leq\operatorname{poly}(n_{0})\cdot\log n_{k}.\]
Now we are ready to bound the running time of the \(V_{i}\); we write \(R_{i}\) for the running time of \(V_{i}\), avoiding the letter \(T\), which already denotes the string lengths \(T_{i}\). First, \(V_{0}\) runs in \(R_{0}=\operatorname{poly}(n_{0})\) time. For every \(i\in[k+1]\), by the definition of \(V_{i}\), we know that \(V_{i}\) runs in time
\[R_{i}=O\Big{(}(n_{i-1}\cdot\log 1/\varepsilon_{i})^{\tau}\Big{)}\cdot(R_{i-1}+n_{k}^{\beta}+1),\]
where \(\beta\) is a sufficiently large constant and \(n_{k}^{\beta}\) bounds the running time of answering each query \(\mathsf{Select}_{T_{i-1},n_{i-1}}^{V_{i-1},\hat{f}_{i}\hat{g}_{i}}(C_{n_{i-1}},\alpha,\varepsilon_{i})\) makes to \(\hat{f}_{i}\) or \(\hat{g}_{i}\), by running \(F\) or \(G\), respectively.
Expanding out the bound for \(R_{k+1}\), we know that \(V_{k+1}\) runs in time
\[2^{O(k)}\cdot(\operatorname{poly}(n_{0})\cdot\log n_{k})^{O(k\cdot\tau)}\cdot n _{k}^{\beta}\cdot\prod_{i=1}^{k+1}n_{i-1}^{\tau}.\]
Since \(n_{k}=n_{0}^{10^{k}}\) and \(k\leq O(\log n_{0})\), the above can be bounded by \(\operatorname{poly}(n_{k})\). This also implies that \(V\) runs in \(\operatorname{poly}(n_{k})\) time as well, which completes the analysis of the \(\mathsf{S}_{2}\mathsf{BPP}\) verifier \(V\).
Derandomization of the \(\mathsf{S}_{2}\mathsf{BPP}\) verifier \(V\) into the desired \(\mathsf{S}_{2}\mathsf{P}\) verifier \(V_{A}\).Finally, we use the underlying proof technique of \(\mathsf{S}_{2}\mathsf{BPP}=\mathsf{S}_{2}\mathsf{P}\)[10, 11] to derandomize \(V\) into a deterministic \(\mathsf{S}_{2}\mathsf{P}\) verifier \(V_{A}\) that outputs \(z\).
By repeating \(V\) \(\operatorname{poly}(n_{k})\) times and outputting the majority among all the outputs, we can obtain a new \(\mathsf{S}_{2}\mathsf{BPP}\) verifier \(\widetilde{V}\) such that
* There exists \(\omega\in\{0,1\}^{n_{k+1}}\) such that for every \(\pi\in\{0,1\}^{n_{k+1}}\), we have \[\Pr[\widetilde{V}(1^{n_{k}},\omega,\pi)=z]\geq 1-2^{-n_{k}}\qquad\text{and} \qquad\Pr[\widetilde{V}(1^{n_{k}},\pi,\omega)=z]\geq 1-2^{-n_{k}}.\] (2)
Let \(\ell=\operatorname{poly}(n_{k})\) be an upper bound on the number of random coins used by \(\widetilde{V}\). We also let \(m:=\operatorname{poly}(\ell,n_{k+1})\leq\operatorname{poly}(n_{k})\) and use \(\widetilde{V}(1^{n_{k}},\pi_{1},\pi_{2};r)\) to denote the output of \(\widetilde{V}\) given randomness \(r\). Now, we define \(V_{A}\) as follows: It takes two vectors \(\vec{\pi}_{1},\vec{\pi}_{2}\in\{0,1\}^{n_{k+1}}\times\big{(}\{0,1\}^{\ell} \big{)}^{m}\) as proofs. For \(\vec{\pi}_{1}=(\alpha,u_{1},u_{2},\ldots,u_{m})\) and \(\vec{\pi}_{2}=(\beta,v_{1},v_{2},\ldots,v_{m})\), \(V_{A}\) outputs the majority of the multi-set
\[\{\widetilde{V}(1^{n_{k}},\alpha,\beta;u_{i}\oplus v_{j})\}_{(i,j)\in[m]^{2}},\]
where \(u_{i}\oplus v_{j}\) denotes the bit-wise XOR of \(u_{i}\) and \(v_{j}\) (if no string occurs more than \(m^{2}/2\) times in the multi-set above, then \(V_{A}\) simply outputs \(\bot\)).
We will show there exists \(\vec{\omega}=(\gamma,r_{1},\ldots,r_{m})\) such that for every \(\vec{\pi}\in\{0,1\}^{n_{k+1}}\times\big{(}\{0,1\}^{\ell}\big{)}^{m}\),
\[V_{A}(1^{n_{k}},\vec{\omega},\vec{\pi})=z\quad\text{and}\quad V_{A}(1^{n_{k}},\vec{\pi},\vec{\omega})=z.\]
We first claim that there exist \(r_{1},\ldots,r_{m}\in\{0,1\}^{\ell}\) such that for every \(u\in\{0,1\}^{\ell}\) and for every \(\pi\in\{0,1\}^{n_{k+1}}\), it holds that (1) for at least a \(2/3\) fraction of \(i\in[m]\), we have \(\widetilde{V}(1^{n_{k}},\omega,\pi;r_{i}\oplus u)=z\) and (2) for at least a \(2/3\) fraction of \(i\in[m]\), we have \(\widetilde{V}(1^{n_{k}},\pi,\omega;r_{i}\oplus u)=z\).
To see this, for every fixed \(u\in\{0,1\}^{\ell}\) and \(\pi\in\{0,1\}^{n_{k+1}}\), by a simple Chernoff bound, the probability, over \(m\) independently uniformly drawn \(r_{1},\ldots,r_{m}\), that more than a \(1/3\) fraction of \(i\in[m]\) satisfies \(\widetilde{V}(1^{n_{k}},\omega,\pi;r_{i}\oplus u)\neq z\) is at most \(2^{-\Omega(m)}\), and the same probability upper bound holds for the corresponding case of \(\widetilde{V}(1^{n_{k}},\pi,\omega;r_{i}\oplus u)\neq z\) as well. Our claim then just follows from a simple union bound over all \(u\in\{0,1\}^{\ell}\) and \(\pi\in\{0,1\}^{n_{k+1}}\).
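Quantitatively, writing the Chernoff failure probability as \(2^{-cm}\) for some constant \(c>0\), the union bound requires

\[2\cdot 2^{\ell}\cdot 2^{n_{k+1}}\cdot 2^{-cm}<1,\quad\text{i.e.,}\quad m>\frac{\ell+n_{k+1}+1}{c},\]

which is met by the choice \(m=\operatorname{poly}(\ell,n_{k+1})\) above.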
Now, let \(\gamma\) be the proof \(\omega\) such that the condition (2) holds. We simply set \(\vec{\omega}=(\gamma,r_{1},\ldots,r_{m})\). From our choice of \(\gamma\) and \(r_{1},\ldots,r_{m}\), it then follows that for every \(v_{1},\ldots,v_{m}\in\{0,1\}^{\ell}\) and \(\pi\in\{0,1\}^{n_{k+1}}\), at least a \(2/3\) fraction of \(\widetilde{V}(1^{n_{k}},\gamma,\pi;r_{i}\oplus v_{j})\) equals \(z\), and similarly for \(\widetilde{V}(1^{n_{k}},\pi,\gamma;r_{i}\oplus v_{j})\). This completes the proof.
Wrapping up. Finally, we generalize \(A\) and \(V_{A}\) to work on all inputs \(1^{n}\). On input \(1^{n}\), \(V_{A}\) calculates the largest \(\ell\) such that \(n^{(\ell)}\leq n\), and also calculates the largest \(k^{\prime}\) such that \(n^{(\ell)}_{k^{\prime}}\leq n\). If \(n^{(\ell)}_{k^{\prime}}\neq n\), then \(V_{A}\) immediately outputs \(\bot\) and halts. Otherwise, \(V_{A}\) receives an advice bit indicating whether \(k^{\prime}=k^{(\ell)}\), where \(k^{(\ell)}\) is the largest integer such that \(f^{(\ell)}_{k^{(\ell)}}\not\in\operatorname{Range}(\mathsf{GGM}_{T^{(\ell)}_{k^{(\ell)}}}[C_{n^{(\ell)}_{k^{(\ell)}}}])\). If this is the case, then \(V_{A}\) runs the verification procedure above; otherwise, it immediately outputs \(\bot\) and halts. It is easy to see that \(V_{A}\) runs in \(\operatorname{poly}(n)\) time, and that \(A\) is an infinitely-often single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm solving the range avoidance problem of \(\{C_{n}\}\).
Moreover, observe that in the proof of Theorem 5.7, all considered input lengths (the \(n^{(\ell)}_{i}\)) are indeed powers of \(2\). So we indeed have the following slightly stronger result.
**Corollary 5.9**.: _Let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{2n}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits. There is a single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm \(A\) with one bit of advice such that for infinitely many \(r\in\mathbb{N}\), letting \(n=2^{r}\), \(A(1^{n})\) outputs \(y_{n}\in\{0,1\}^{2n}\setminus\operatorname{Range}(C_{n})\)._
We need the following reduction from Korten which reduces solving range avoidance with one-bit stretch to solving range avoidance with doubling stretch.
**Lemma 5.10** ([12, Lemma 3]).: _Let \(n\in\mathbb{N}\). There is a polynomial time algorithm \(A\) and an \(\mathsf{FP^{NP}}\) algorithm \(B\) such that the following hold:_
1. _Given a circuit_ \(C\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\)_,_ \(A(C)\) _outputs a circuit_ \(D\colon\{0,1\}^{n}\to\{0,1\}^{2n}\)_._
2. _Given any_ \(y\in\{0,1\}^{2n}\setminus\mathrm{Range}(D)\)_,_ \(B(C,y)\) _outputs a string_ \(z\in\{0,1\}^{n+1}\setminus\mathrm{Range}(C)\)_._
The following corollary then follows by combining Lemma 5.10 and Corollary 5.9 with Theorem 2.3.
**Corollary 5.11**.: _Let \(\{C_{n}\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\}_{n\in\mathbb{N}}\) be a \(\mathsf{P}\)-uniform family of circuits. There is a single-valued \(\mathsf{FS_{2}P}\) algorithm \(A\) with one bit of advice such that for infinitely many \(r\in\mathbb{N}\), letting \(n=2^{r}\), \(A(1^{n})\) outputs \(y_{n}\in\{0,1\}^{n+1}\setminus\mathrm{Range}(C_{n})\)._
The following corollary follows from Lemma 2.4 and Corollary 5.11.
**Corollary 5.12**.: \(\mathsf{S_{2}E}/_{1}\not\subset\mathsf{SIZE}[2^{n}/n]\)_._
Finally, we also note that by letting \(C_{n}\) be a universal Turing machine mapping \(n\) bits to \(n+1\) bits in \(\mathrm{poly}(n)\) time, we have the following strong lower bounds for \(\mathsf{S_{2}E}/_{1}\) against non-uniform time complexity classes with maximum advice.
**Corollary 5.13**.: _For every \(\alpha(n)\geq\omega(1)\) and any constant \(k\geq 1\), \(\mathsf{S_{2}E}/_{1}\not\subset\mathsf{TIME}[2^{kn}]/_{2^{n}-\alpha(n)}\)._
From Remark 5.6 and noting that the derandomization of the \(\mathsf{S_{2}BPP}\) verifier \(V\) into the \(\mathsf{S_{2}P}\) verifier \(V_{A}\) also relativizes, we can see that all the results above relativize as well.
### Infinitely Often Single-Valued \(\mathsf{FS_{2}P}\) Algorithm for Arbitrary Input Range Avoidance
Theorem 5.7 and Corollary 5.11 only give single-valued \(\mathsf{FS_{2}P}\) algorithms for solving range avoidance for \(\mathsf{P}\)-uniform families of circuits. Applying Korten's reduction [12], we show that this can be strengthened into a single-valued infinitely-often \(\mathsf{FS_{2}P}\) algorithm solving range avoidance given an arbitrary input circuit.
We need the following reduction from [12].
**Lemma 5.14** ([12, Theorem 7]).: _There is an \(\mathsf{FP^{NP}}\) algorithm \(A_{\mathsf{Korten}}\) satisfying the following:_
1. \(A_{\mathsf{Korten}}\) _takes an_ \(s\)_-size circuit_ \(C\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\) _and a truth table_ \(f\in\{0,1\}^{2^{m}}\) _such that_ \(2^{m}\geq s^{3}\) _and_ \(n\leq s\) _as input._
2. _If the circuit complexity of_ \(f\) _is at least_ \(c_{1}\cdot m\cdot s\) _for a sufficiently large universal constant_ \(c_{1}\in\mathbb{N}\)_, then_ \(A_{\mathsf{Korten}}(C,f)\) _outputs a string_ \(y\in\{0,1\}^{n+1}\setminus\mathrm{Range}(C)\)_._
**Theorem 5.15**.: _There is a single-valued \(\mathsf{FS_{2}P}\) algorithm \(A\) with one bit of advice such that for infinitely many \(s\in\mathbb{N}\), for all \(s\)-size circuits \(C\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\) where \(n\leq s\), \(A(C)\) outputs \(y_{C}\in\{0,1\}^{n+1}\setminus\mathrm{Range}(C)\)._
Proof Sketch.: By Corollary 5.11, there is a single-valued \(\mathsf{FS_{2}P}\) algorithm \(W\) with one bit of advice such that for infinitely many \(n\in\mathbb{N}\), \(W(1^{2^{n}})\) outputs a string \(f_{n}\in\{0,1\}^{2^{n}}\) with \(\mathsf{SIZE}(f_{n})\geq 2^{n}/n\).
Now we construct our single-valued \(\mathsf{FS_{2}P}\) algorithm \(A\) with one bit of advice as follows: given an \(s\)-size circuit \(C\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\) with \(n\leq s\) as input; let \(m=\lceil\log s^{3}\rceil\) and \(f_{m}=W(1^{2^{m}})\); output \(A_{\mathsf{Korten}}(C,f_{m})\). It follows from Theorem 2.3 that \(A\) is a single-valued \(\mathsf{FS_{2}P}\) algorithm with one bit of advice (the advice of \(A\) is given to \(W\)).
Finally, \(\mathsf{S}_{2}\mathsf{P}\subseteq\mathsf{ZPP}^{\mathsf{NP}}\)[11] implies that every single-valued \(\mathsf{FS}_{2}\mathsf{P}\) algorithm can also be implemented as a single-valued \(\mathsf{FZPP}^{\mathsf{NP}}\) algorithm with polynomial overhead. Therefore, the above theorem also implies an infinitely often \(\mathsf{FZPP}^{\mathsf{NP}}\) algorithm for range avoidance.
**Reminder of Theorem 1.5**.: _There is a single-valued \(\mathsf{FZPP}^{\mathsf{NP}}\) algorithm \(A\) with one bit of advice such that for infinitely many \(s\in\mathbb{N}\), for all \(s\)-size circuits \(C\colon\{0,1\}^{n}\to\{0,1\}^{n+1}\) where \(n\leq s\), \(A(C)\) outputs \(y_{C}\in\{0,1\}^{n+1}\setminus\operatorname{Range}(C)\). That is, for all those \(s\), there is a string \(y_{C}\in\{0,1\}^{n+1}\setminus\operatorname{Range}(C)\) such that \(A(C)\) either outputs \(y_{C}\) or \(\bot\), and the probability (over the inner randomness of \(A\)) that \(A(C)\) outputs \(y_{C}\) is at least \(2/3\)._
## Acknowledgments
Part of the work was done when all authors were participating in the Meta-Complexity program at the Simons Institute. Lijie Chen is supported by a Miller Research Fellowship. Shuichi Hirahara is supported by JST, PRESTO Grant Number JPMJPR2024, Japan. Hanlin Ren received support from DIMACS through grant number CCF-1836666 from the National Science Foundation. We thank Oliver Korten, Zhenjian Lu, Igor C. Oliveira, Rahul Santhanam, Roei Tell, and Ryan Williams for helpful discussions. We also want to thank Jiatu Li, Igor C. Oliveira, and Roei Tell for comments on an early draft of the paper.
2309.04816 | Non-LTE Monte Carlo Radiative Transfer. III. The thermal properties of Tilted and Warped Be Star Discs | M. W. Suffak, C. E. Jones, A. C. Carciofi, T. H. de Amorim | 2023-09-09T14:55:19Z | http://arxiv.org/abs/2309.04816v1

Non-LTE Monte Carlo Radiative Transfer. III. The thermal properties of Tilted and Warped Be Star Discs
###### Abstract
We use the three-dimensional Monte Carlo radiative transfer code _hdust_ to model Be stars where the disc is tilted from the equatorial plane of the star. We compute 128 models across 4 spectral types, B0, B2, B5 and B8, tilting the disc by 0\({}^{\circ}\), 10\({}^{\circ}\), 20\({}^{\circ}\), and 40\({}^{\circ}\), while varying disc density according to spectral type. We also compute every model for an average and high stellar rotation rate. We first discuss non-tilted disc temperatures and show its non-linear dependence on stellar and disc parameters. We find that tilting the disc minimally affects the density-weighted average disc temperature, but tilting does create a temperature asymmetry in disc cross sections, which is more pronounced for a faster rotation rate. We also investigate the effect tilting has on \(V\)-band magnitude, polarization, and the H\(\alpha\) line. Tilting the disc does affect these observables, but the changes are entirely dependent on the position of the observer relative to the direction of tilt. We find the observables that distinguish tilting from a change in density or geometry are the H\(\alpha\) line shape, where it can transition between single-peaked and double-peaked, and the polarization position angle, whose value is dependent on the projected major elongation axis of the disc on the sky. We also present one early and one late-type model with warped discs. We find their temperature structure varies a small amount from the uniformly tilted models, and the different observables correspond to different tilt angles, consistent with their expected volume of origin within the disc.
keywords: binaries: general - circumstellar matter - radiative transfer - stars: emission-line, Be
## 1 Introduction
Classical Be stars are defined as non-supergiant B-type stars that have, or have had, Balmer lines in emission (Collins, 1987). These emission lines are known to form in a gaseous circumstellar disc that has developed around the equator of the star. The exact process which leads to the formation of these discs is uncertain, but coupling rapid rotation with non-radial pulsations (Baade et al., 2016) is thought to be the most-likely mechanism for stellar mass-loss. In addition to Balmer line emission, Be star discs are also characterised by excess continuum emission, particularly at infrared (IR) and radio wavelengths, and by linear polarization (for recent examples, see Ghoreyshi et al., 2021; Marr et al., 2022). The most recent comprehensive review of classical Be stars is given by Rivinius et al. (2013).
Observables seen from Be star discs are not only highly dependent on the density structure of the disc, but also on the disc temperature, as the temperature ultimately is what sets the state of the gas through its level populations and ionization state (Carciofi & Bjorkman, 2008). Until the late twentieth century, the temperature of Be star discs was assumed to be constant, or to simply fall off with a radial power law (Waters, 1986). The first attempt to self-consistently determine the disc temperature was performed by Millar & Marlborough (1998), who determined the temperature by equating the rates of energy gain and loss at each point in the disc. They applied this technique to various case studies of both early and late-type Be stars in subsequent publications (Millar & Marlborough, 1999, 2000; Millar et al., 2000) and found temperature differences of thousands of Kelvin between the midplane and upper edge of the disc. Jones et al. (2004) added to this method by accounting for metals with the inclusion of iron. They found that for the early-type star \(\gamma\) Cas, the inclusion of metals led to an overall cooling of the disc, and a slight heating at the innermost disc, within 3 stellar radii. However, for the late-type star 1 Del, the most heating occurred on the outer edges of the disc, which was illuminated by light from the poles, and the greatest cooling happened in the middle portion of the disc, not near the dense equatorial plane.
Carciofi & Bjorkman (2006) investigated the temperature structure of early-type Be star discs with their 3-dimensional (3D) non-local thermodynamic equilibrium (non-LTE) Monte Carlo code _hdust_. In their models, they found the temperature at the dense midplane of the disc initially drops within 3-5 stellar radii before rising back to the optically thin equilibrium temperature, while the thin upper layers of the disc were approximately isothermal, consistent with Millar & Marlborough (1998). They also found the disc to be almost completely ionized, except for a small portion in the midplane near the minimum temperature. Carciofi & Bjorkman (2008) further investigated this non-isothermal structure by presenting a self-consistent solution for the viscous decretion disc (VDD) scenario.
They determined that the varying temperature affects the density structure in two ways: 1) the radial temperature gradient changes the radial fall-off of the density, and 2) the reduction in temperature within the midplane results in the collapse of the disc onto itself, thereby causing a decrease in its scale height. They conclude that a non-isothermal disc density model must be used for detailed modelling of Be star disc observables. However, many successful modelling efforts of Be stars using _hdust_ have utilized the simpler isothermal density formula while solving for non-isothermal disc temperatures (Silaj et al., 2016; Ghoreyshi et al., 2018; Suffak et al., 2020; Marr et al., 2021).
Over the past decade, the possibility of Be star discs warping, tilting, and precessing, has gained a lot of attention (see Martin et al., 2011; Brown et al., 2019; Suffak et al., 2022, for example). There have been a number of studies using hydrodynamical simulations to predict the nature of warping, tilting, and oscillations of Be star discs in situations where a binary companion's orbit is misaligned to the initial plane of the disc (Martin et al., 2014; Cyr et al., 2017; Suffak et al., 2022), many of which focus on Be/X-ray binary system parameters (Martin et al., 2014; Brown et al., 2019, for example). The simulations of Suffak et al. (2022) showed that, under the influence of a misaligned binary companion, a Be star disc can undergo episodes of disc tearing, as well as develop eccentric gaps near the primary star during disc dissipation, in addition to tilting, warping, and precessing. The phenomena of disc precession and disc tearing are the best current explanation for the behaviour of the observables in the Be star Pleione (Marr et al., 2022; Martin and Lepp, 2022).
So far, none of the studies that investigated dynamically simulated disc tilting, warping, precession, etc., have investigated the effects this would have on the disc temperature structure, or its observables in a systematic way. As well, late-type Be stars have been dramatically understudied compared to their early-type counterparts. In this paper, we first provide results of static 3D radiative transfer models, showing the temperature structure of non-tilted Be star discs, ranging in spectral type from B0 to B8 (Section 2). We then show the same discs, uniformly tilted from the equatorial plane, and discuss their temperature structure (Section 3), before we present two scenarios where the continuum, Balmer line, and polarization signatures could allow a tilted disc to be detected (Section 4). We also briefly discuss how a warped disc may differ from a flat-tilted disc in Section 5. Our discussion and conclusions are presented in Section 6.
## 2 Non-tilted disc temperatures
We chose a computational grid of four spectral types from B0 to B8, to capture both early and late-type Be star behaviour. The stellar parameters for each spectral type were taken from Cox (2000) and Silaj et al. (2010), who interpolated their parameters from Cox (2000). We model our disc density based on the widely-used equation
\[\rho(r,z)=\rho_{0}\left(\frac{R_{*}}{r}\right)^{n}\exp\left(-\frac{z^{2}}{2H^{2}}\right), \tag{1}\]
where \(\rho_{0}\) is the base density, \(R_{*}\) is the equatorial radius of the star, \(H\) is the disc scale height, and \(r\) and \(z\) are respectively the radial and vertical coordinates in the disc. Equation 1 is physically motivated by the viscous decretion disc (VDD) model of Lee et al. (1991), that in its simplest form predicts \(n=3.5\) for an isothermal and geometrically thin disc, and has been used in many studies, such as Silaj et al. (2010); Jones et al. (2008); Suffak et al. (2022); Marr et al. (2021). The scale height is calculated by
\[H(r)=\frac{a}{\Omega}\left(\frac{r}{R_{*}}\right)^{1.5}, \tag{2}\]
where \(a\) is the sound speed, calculated assuming a disc temperature \(60\%\) of the star's effective temperature (Carciofi and Bjorkman, 2006), and \(\Omega\) is the Keplerian orbital frequency at the equator of the star. We selected two base density (\(\rho_{0}\)) values for each spectral type, based on the limits of base density versus stellar effective temperature shown in figure 8a of Vieira et al. (2017). Figure 8b of Vieira et al. (2017) also shows there is no bounds on \(n\) with effective temperature, so we choose to use values of \(n\) of 2 and 3.5 for every spectral type, as these are approximately the lower and upper limits of \(n\) for the majority of stars studied in Vieira et al. (2017). Finally, we compute each model for two different stellar rotation rates, setting the critical fraction, W (defined in equation 6 of Rivinius et al., 2013, as the ratio of the rotational velocity at the equator to the Keplerian circular orbital velocity at the equator), to 0.7 or 0.95. Figure 9 of Rivinius et al. (2013) shows 0.7 to be about the average \(W\) for Be stars, while 0.95 is on the extreme upper end, nearing the critical rotation rate where the outward centrifugal force at the equator would be equal to the inward pull of gravity.
The disc size is held constant for each spectral type at 50 equatorial radii (\(R_{eq}\)). The equatorial radius was scaled to be consistent with the chosen value of \(W\), satisfying the formula (Rimulo et al., 2018)
\[W=\sqrt{2\left(\frac{R_{\rm eq}}{R_{\rm p}}-1\right)}, \tag{3}\]
where \(R_{\rm p}\) is the stellar polar radius. Table 1 presents stellar and disc parameters in our models.
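As a quick check, Equation 3 can be inverted for the equatorial radius; the following minimal snippet (our own illustration, not part of the modelling code) evaluates it for the two adopted rotation rates.

```python
import numpy as np

def r_eq_over_r_p(W):
    """Invert Eq. 3, W = sqrt(2 (R_eq / R_p - 1)), for R_eq / R_p."""
    return 1.0 + 0.5 * W**2

for W in (0.7, 0.95):
    print(f"W = {W:.2f}: R_eq = {r_eq_over_r_p(W):.3f} R_p")
# W = 0.70: R_eq = 1.245 R_p
# W = 0.95: R_eq = 1.451 R_p
```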
### 2.1 Azimuthally-Averaged Temperature Slice
Across all of our models, regardless of spectral type, rotation rate, or disc density parameters, we find the following common traits: (1) the tenuous upper layers of the disc (i.e., far from the midplane) are fully ionized and approximately isothermal; (2) the very dense disc midplane contains the diversity between the models: it can be cooler or hotter than the upper layers depending on model parameters, and is only partially ionized; (3) between these two regions lies a transition layer between the fully ionized outer disc and the partly ionized inner disc, where relatively thin, hot sheaths arise. However, when examining the temperature structure in more detail, the behaviour seen from one model to another is very non-linear and is coupled to the disc density structure, spectral type, and stellar rotation rate. We summarize these particularities in appropriate detail below.
#### 2.1.1 Early Spectral Types
Figure 1 shows the azimuthally-averaged cross sections of the non-tilted disc models for the B0 and B2 models of our grid. We see that the models where \(n=2\), whose discs have a slow density fall-off and thus are much more dense than those with \(n=3.5\), have a very large cool, partially ionized region surrounding the midplane of the disc, while the outer regions are much hotter and fully ionized. The inner cool regions are due to the disc being optically thick, while in the outer regions, the density drops and the temperature can reach much higher values. In the models where \(n=3.5\), we see that the midplane has a much smaller cool region and then transitions to hotter temperatures with increasing radius as the disc becomes optically thin. The upper hot layers are also larger than in the \(n=2\) case due to the densities falling off faster and more of the disc being optically thin. These inner and outer regions are separated by a hot thin sheath, which has also been seen in other publications (Carciofi & Bjorkman, 2006; Sigut & Jones, 2007).
In Figure 2, we have plotted the disc temperature and ionization fraction (the fraction of hydrogen in the disc that is ionized) for a column at a radial position of 30 \(R_{*}\) for model 11, which prominently displays these hot sheaths. We can see that the spike in temperature (i.e., the hot sheath) occurs right at the boundary between the cooler inner portion, where the disc is partially ionized, and the hotter outer layers, where the disc is fully ionized. This can be explained by the inner cool region being optically thick and locally trapping the UV radiation: as the vertical direction offers the largest escape probability due to lower opacity, the UV radiation travels vertically and further heats the gas directly above and below the inner cool region. When we compute the bound-free and bound-bound optical depths, as well as the hydrogen level populations, they show profiles inverse to the ionization fraction in Figure 2, being highest around the midplane of the disc and trending towards zero as height in the disc increases. These same trends occur for all models that have these hot sheaths in their cross sections, including both early and late spectral types.
The position of these hot sheaths also noticeably changes between models, as they move closer together as the \(n\) value rises, or as the disc base density decreases. Both of these changes to the density structure make the upper disc layers more tenuous, which allows the disc to be fully ionized to a greater vertical depth, making the inner partially ionized disc portion thinner, and thus the transition regions closer to the midplane. This can easily be seen by comparing the cross sections of model 1 to models 2 and 3. There are also cases where the disc is so tenuous that it is nearly entirely ionized, even in the midplane, seen for example in model 4. Here there is no cooler section in the midplane. Instead, the midplane is hotter than the upper disc layers, due to the denser midplane being able to reprocess UV radiation and increase the role of diffuse radiation in that area.
In models 5 to 8 and 13 to 16, we see that increasing the stellar rotation rate to \(W=0.95\) does not change the qualitative temperature patterns in the disc. However, the temperatures in the upper disc are notably higher than in the slower-rotating case, and the hot sheaths and disc midplane can be slightly warmer than with slower rotation. This indicates that the hotter stellar poles caused by increased gravity darkening at this high rotation are able to "carve" into the disc deeper, penetrating the disc midplane with more UV radiation and raising its temperature.
#### 2.1.2 Late Spectral Types
The cross sections for our B5 and B8 non-tilted models are shown in Figure 3. In these later spectral types, we see some different behaviour in the temperature structure than in the early B0 and B2 type stars. The highest density, lowest \(n\) model for the B5 spectral type is similar to its analogues for the B0 and B2 stars. However, at lower densities, for \(n=2\) in both B5 and B8 type stars, we see the midplane is hotter than the outer disc, which is the opposite of the early-type stars. This is the same as Millar & Marlborough (1999b) found in their work for the late-type star 1 Del. They explain that this temperature inversion is due to collisions populating the upper levels, and thus photoionization from these upper levels is able to heat the gas, while the disc remains optically thick to Lyman continuum photons. However, as these hot midplane sections are radially extended in our discs where the density has already fallen off substantially, collisions are not going to be a major factor, and thus this hotter midplane would be due to the disc's ability to reprocess the UV radiation (as mentioned for model 4 in Section 2.1.1) and locally heat the denser midplane through diffuse radiation. Conversely, we see in models where \(n=3.5\) that the midplane temperature drops off in the outermost disc. Due to the much faster drop-off of density compared to the \(n=2\) models, there is not enough diffuse radiation contributed from the disc itself to make up for the lack of UV radiation reaching the outer midplane of the disc from the late-type star.
With a higher rotation rate, we see the B5 models largely retain the same structure as their slower-rotating counterparts; however, the rapidly rotating B8 models, particularly models 30, 31, and 32, display dramatically hotter disc midplanes, as well as hot sheaths which did not appear in the slower rotating case. We interpret this again as the hotter poles being able to "carve" farther into the disc and cause greater heating in the midplane. Here the high stellar rotation gives qualities of both an early-type star from the hot poles, and a late-type star from the very cool equator, causing this large change in the temperature cross section.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Sp. Type & M (M\({}_{\odot}\)) & R\({}_{p}\) (R\({}_{\odot}\)) & W & T\({}_{\rm eff}\) (K) & \(L\) (L\({}_{\odot}\)) & \(\rho_{0}\) (g cm\({}^{-3}\)) & n & Model \# \\ \hline B0 & 17.5 & 7.4 & 0.7 & 30000 & 39740 & \(1\times 10^{-10}\) & 2/3.5 & 1/2 \\ & & & 0.7 & & & \(1\times 10^{-11}\) & 2/3.5 & 3/4 \\ & & & 0.95 & & & \(1\times 10^{-10}\) & 2/3.5 & 5/6 \\ & & & 0.95 & & & \(1\times 10^{-11}\) & 2/3.5 & 7/8 \\ B2 & 9.11 & 5.33 & 0.7 & 21000 & 4950 & \(5\times 10^{-11}\) & 2/3.5 & 9/10 \\ & & & 0.7 & & & \(5\times 10^{-12}\) & 2/3.5 & 11/12 \\ & & & 0.95 & & & \(5\times 10^{-11}\) & 2/3.5 & 13/14 \\ & & & 0.95 & & & \(5\times 10^{-12}\) & 2/3.5 & 15/16 \\ B5 & 5.9 & 3.9 & 0.7 & 15000 & 690 & \(5\times 10^{-12}\) & 2/3.5 & 17/18 \\ & & & 0.7 & & & \(5\times 10^{-13}\) & 2/3.5 & 19/20 \\ & & & 0.95 & & & \(5\times 10^{-12}\) & 2/3.5 & 21/22 \\ & & & 0.95 & & & \(5\times 10^{-13}\) & 2/3.5 & 23/24 \\ B8 & 3.8 & 3.0 & 0.7 & 12000 & 167 & \(1\times 10^{-12}\) & 2/3.5 & 25/26 \\ & & & 0.7 & & & \(1\times 10^{-13}\) & 2/3.5 & 27/28 \\ & & & 0.95 & & & \(1\times 10^{-12}\) & 2/3.5 & 29/30 \\ & & & 0.95 & & & \(1\times 10^{-13}\) & 2/3.5 & 31/32 \\ \hline \end{tabular}
\end{table}
Table 1: Stellar and disc parameters used in our grid of models. Left to right is the spectral type, stellar mass, polar radius, fraction of critical rotation velocity, effective temperature, luminosity, disc base density, disc density slope, and model number.
## 3 Tilted-Disc Temperatures
To expand our work on our non-tilted grid, we tilted all of our models listed in Table 1 by \(\alpha=10^{\circ}\), \(20^{\circ}\), and \(40^{\circ}\) away from the equatorial plane. Figure 4 shows a schematic of the orientation of our tilted disc models in Cartesian coordinates, where the star lies at the origin. The disc is tilted about the \(x\)-axis, so the disc tilt angle is measured from the \(y\)-axis. When referring to azimuthal angles (\(\phi\)), we denote the positive \(x\)-axis as having \(\phi=0^{\circ}\), and the positive \(y\)-axis having \(\phi=90^{\circ}\). Thus \(\phi\) values of \(180^{\circ}\) and \(270^{\circ}\) correspond to the negative \(x\) and \(y\)-axes, respectively. The height of the disc midplane can thus be given as
\[Z(r_{i},\phi_{j})=-r_{i}\sin\left\{\,\arctan\left[\,\sin(\phi_{j})\tan\left( \alpha\right)\,\right]\right\}, \tag{4}\]
where \(r_{i}\) and \(\phi_{j}\) are the radial and azimuthal coordinates on the midplane, and \(\alpha\) is the disc tilt angle about the \(x\)-axis (either \(10^{\circ}\), \(20^{\circ}\), or \(40^{\circ}\)).
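The geometry of Equation 4 is easy to verify numerically; the short sketch below (our own illustration) confirms that the midplane displacement vanishes along the line of nodes and is maximal a quarter turn away.

```python
import numpy as np

def midplane_height(r, phi, alpha):
    """Height of the tilted-disc midplane, Z(r, phi) (Eq. 4).

    r     : radial coordinate on the midplane
    phi   : azimuthal coordinate [rad], phi = 0 along the +x (tilt) axis
    alpha : disc tilt angle about the x-axis [rad]
    """
    return -r * np.sin(np.arctan(np.sin(phi) * np.tan(alpha)))

# Sanity checks: no displacement along the line of nodes (phi = 0),
# maximum displacement at phi = 90 deg.
alpha = np.radians(40.0)
print(midplane_height(10.0, 0.0, alpha))         # 0.0
print(midplane_height(10.0, np.pi / 2, alpha))   # -10 sin(40 deg) ~ -6.43
```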
To assess any global changes in disc temperature due to disc tilting, we calculate the mass-averaged disc temperature, \(\tilde{T}_{M}\), using the formula
\[\tilde{T}_{M}=\frac{1}{M_{\rm disc}}\sum_{i=1}^{N}T_{i}\rho_{i}V_{i}, \tag{5}\]
where \(M_{\rm disc}\) is the total mass of the disc, and \(T_{i}\), \(\rho_{i}\) and \(V_{i}\) are the temperature, density, and volume of the \(i\)-th cell. The sum is performed over all \(N\) cells in the disc. This formula was adapted from the equation for density-weighted average temperature of McGill et al. (2013). The results of these calculations are presented in Figure 5. We
Figure 1: Disc cross sections of the azimuthally-averaged temperatures of models 1-16, as noted in the subplot titles. The colour of each cell in the cross section corresponds to temperature as shown in the colour bar. The axes are in stellar equatorial radii, which is different for each spectral type, consistent with the polar radius listed in Table 1.
Figure 2: Ionization fraction (blue, left \(y\)-axis), and temperature (orange, right \(y\)-axis) versus \(z\) height in the non-tilted disc of model 11. The radial distance of the measurements from the central star is about 30 \(R_{*}\).
find that there is a clear relationship between \(\tilde{T}_{M}\) and disc tilt angle: the greater the tilt angle, the greater \(\tilde{T}_{M}\). While this trend is clear, the changes are not large, with most discs varying in temperature by less than 1000 K. The change is especially small in our densest models, where the change in average temperature cannot be distinguished between being caused by the tilt or simply by the statistical noise inherent to Monte Carlo simulations. We also note that most of these density-weighted average temperatures are below 60% of \(T_{\rm eff}\), which Carciofi & Bjorkman (2006) found to be a good isothermal approximation of Be star discs. Since the density-weighted average weights the most dense, and hence most optically thick, regions the highest, it is not surprising that our least dense models come closest to, or sometimes end up greater than, the 60% \(T_{\rm eff}\) mark. This is consistent with the results of our non-tilted discs seen in Figures 1 and 3, where the lower density discs have a higher temperature. It is worth mentioning that, by definition, our tilted models are not anchored at the equator of the star; thus the innermost disc may have slightly inflated temperatures due to directly seeing a hotter part of the star than if it were aligned at the equator.
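A direct implementation of Equation 5 is a one-line weighted average over the grid cells; the sketch below (with a toy two-cell grid of our own construction, not actual model output) illustrates how the densest cells dominate \(\tilde{T}_{M}\).

```python
import numpy as np

def mass_averaged_temperature(T, rho, V):
    """Density-weighted (mass-averaged) disc temperature, Eq. 5.

    T, rho, V : arrays of cell temperature, density, and volume
                over all N cells of the disc grid.
    """
    T, rho, V = map(np.asarray, (T, rho, V))
    M_disc = np.sum(rho * V)
    return np.sum(T * rho * V) / M_disc

# Toy example: two cells, the denser one dominates the average.
print(mass_averaged_temperature(T=[8000., 14000.],
                                rho=[1e-11, 1e-13],
                                V=[1.0, 1.0]))   # ~8059 K
```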
### 3.1 Detailed Temperature Structure
Although the density-weighted average disc temperature does not change appreciably with tilt angle, we find that a tilted disc can have significant changes in the temperature structure of certain areas of the disc. An example is shown in Figure 6, where we show cross sections of discs with the parameters of model 1, tilted by \(10^{\circ}\), \(20^{\circ}\), and \(40^{\circ}\). In the line of nodes cross section of the disc (\(\phi=0^{\circ}\), left column), changes in the temperature structure from the non-tilted model (top left panel of Figure 1) are hard to detect. However, in the cross section farthest from the equator (\(\phi=90^{\circ}\), right column of Figure 6), we see a noticeable temperature difference between the top and bottom of the disc, with the top of the disc becoming cooler and the bottom of the disc becoming hotter. This is due to the effect of gravity darkening arising from rapid rotation, resulting in the stellar equator being cooler than the pole. Thus, the part of the disc that moves closer to the equator when tilted ends up being cooler than in the non-tilted solution, and the part that moves closer to the stellar pole ends up hotter than in the non-tilted case. We note that the disc midplane does not significantly change temperature when tilted because its high density compared to the rest of the disc makes it insensitive to the change in radiation input caused by the tilting of the disc.
In Figure 7, we show the same plots as Figure 6, but for one of our late-type models. We see that the temperature trends of the \(\phi=90^{\circ}\) cross section of the disc are the same, with the top of the disc cooling and the bottom heating as it is oriented closer to the pole of the star. In the line of nodes of the disc, however, we see a change in the temperature structure: the hot bands that were on either side of the midplane in the non-tilted models, greatly lessen in temperature as the overall disc tilt angle increases. Since the central star does not change, this change in temperature structure is due to the diffuse
Figure 3: Same as Figure 1, but for models 17-32. Note the change in the maximum temperature of the colour bar scale.
radiation field of the disc affecting the disc temperature significantly. Plots similar to Figures 6 and 7 for all other models are presented in Appendix A.
Overall we see that the average cell temperature of the \(\phi=90^{\circ}\) cross sections of the disc can differ by as much as 30% when compared to the non-tilted models. The large majority of this difference, however, is in the optically thin outer disc. As shown in Figure 8, while the midplane temperature of the \(\phi=90^{\circ}\) cross sections does differ from the non-tilted model, the difference is quite small. Appendix B contains similar plots of the midplane for all models. As the tilt angle increases, we see the temperature of the innermost disc increase as well. This is due to the midplane being oriented closer to the pole, and therefore directly seeing a hotter area of the star. We also see that the midplane temperature profile of the tilted disc still has the same structure as in past publications (Millar & Marlborough, 1998; Carciofi & Bjorkman, 2006), with the temperature reaching a minimum within the first few stellar radii before increasing to an approximately isothermal temperature in the outer disc. It is important to note that for some of our densest models, the midplane temperature does not increase from its minimum due to the high density of the disc. It is also noteworthy that this structure becomes less pronounced as one moves towards the top and bottom of the disc, away from the midplane.
## 4 Tilted Disc Observables
Due to the 3-dimensional nature of our simulations, we specify our observer position by two spherical coordinates, the polar angle \(\theta\) and the azimuthal angle \(\phi\). \(\theta\) ranges from \(0^{\circ}\) when looking pole-on with the star, to \(90^{\circ}\) when looking at the equatorial plane, while \(\phi\) is defined in the range [\(0^{\circ}\),\(360^{\circ}\)) in the same manner as shown in Figure 4.
Observables of Be stars are highly dependent on the orientation of the disc with respect to the observer. It is well known that a non-tilted disc seen edge-on will have dimmer photometric magnitudes, higher polarization levels, and shell emission lines, compared to a disc viewed pole-on, which will have brighter photometric magnitudes, close to zero polarization, and single-peaked emission lines. However, in a tilted disc scenario, looking edge-on with the disc may not imply one is looking equator-on with the star, and a face-on view of the disc would not be pole-on with the star. Thus, the changes in observables that occur due to disc tilting would be expected to be highly dependent on the orientation of the tilt with respect to the observer.
In their hydrodynamic simulations, Suffak et al. (2022) showed that a Be star disc can tilt and then precess under the influence of a misaligned binary companion. We now examine these two scenarios where a tilted disc may be able to be detected through a change in its observables.
### 4.1 Case I: Viewing a Disc with Varying Tilt Angle
In this first scenario, a misaligned binary companion torques the disc above and below the equatorial plane of the primary star, meaning the axis of the disc's tilt is aligned with the companion's line of nodes. Here, the orientation between the disc tilt axis and the stationary observer is constant, thus the observer merely sees the disc tilt in a constant direction over time.
Since our observing coordinates are defined with respect to the central star and tilt axis of the disc, we can simulate a disc tilting over time by plotting any observable viewed from the same (\(\theta\), \(\phi\)) observer position over four simulations where a model disc is tilted by \(0^{\circ}\), \(10^{\circ}\), \(20^{\circ}\), and \(40^{\circ}\). An example of this is shown in Figure 9, which shows, for a constant \(\theta\) of \(80^{\circ}\) and \(\phi\) of \(90^{\circ}\), \(180^{\circ}\), and \(315^{\circ}\), \(V\)-band magnitude, H\(\alpha\) equivalent width (EW), H\(\alpha\) violet-to-red (V/R) ratio, polarization percentage and position angle (PA) in the \(V\)-band, versus disc tilt angle for systems with parameters of model 1. Here we see the change in an observable as the disc tilts is very dependent on the observer's position.
For example, at \(\phi=0^{\circ}\) or \(180^{\circ}\), the observer will be aligned with the tilt axis of the disc; thus the \(V\) magnitude, EW, and percent polarization do not vary as much as at other \(\phi\) angles, but the polarization position angle will vary greatly with increasing tilt angle. This is in contrast to \(\phi=90^{\circ}\), where the disc is tilted to be more face-on with the observer. As the tilt angle increases, we see the \(V\) magnitude increase, while the EW and polarization decrease and the position angle stays constant. Not plotted is \(\phi=270^{\circ}\), where the disc tilts to be more edge-on with the observer, and the trends would be generally reversed from the \(\phi=90^{\circ}\) case. Finally, for an intermediate azimuthal angle of \(315^{\circ}\), all of the observables vary greatly, with the only clear trend being in the polarization position angle. Note that these results are degenerate with \(\phi=225^{\circ}\), while the degenerate pair of \(45^{\circ}\) and \(135^{\circ}\) will give similar results. The polarization position angle is thus the one observable which shows constant change as the disc tilts (except when the observer is exactly perpendicular to the tilt axis). We also see, for this simulation, large
Figure 4: Two schematics showing the orientation of the tilted disc with respect to the \(x\), \(y\), and \(z\) axes. The azimuthal angle (\(\phi\)) is defined with \(\phi=0^{\circ}\) along the positive \(x\)-axis as shown. The disc is tilted about the \(x\)-axis, and thus the tilt angle \(\alpha\) is defined from the \(y\)-axis. The central star is represented by the blue circle at the origin of both diagrams. The disc and star sizes are not to scale.
V/R ratios of about 10% at certain azimuthal observing angles when the disc is tilted by 40\({}^{\circ}\). This is an extreme example however, since most of our models don't reach a V/R ratio of more than 2 to 5%.
Figure 10 shows the same plot as Figure 9, but for a late-type B8 star, model 29. We see another example of the variability of the change in observables depending on observer position, and that the magnitude of these changes for late-type stars is much smaller than for the early-type example shown previously, particularly in the \(V\) magnitude and polarization, which is expected due to the lower disc density.
To briefly investigate disc obscuration effects, we took two models, 10 and 25, at a tilt angle of 40\({}^{\circ}\) and computed them with a disc radius of 20 \(R_{*}\) instead of 50 \(R_{*}\). These models showed no difference in observables aside from a slightly lower H\(\alpha\) EW. This is expected, as the H\(\alpha\) emission comes from a large portion of the disc, while visible continuum emission and polarization come from the inner few stellar radii. This shows that the material in the outer disc is largely optically thin, and thus the size of a tilted disc would not affect observables other than H\(\alpha\) until the disc radius is reduced to a few stellar radii.
#### 4.1.1 H\(\alpha\) Line Shapes
The shape of the H\(\alpha\) line in Be stars is largely due to Doppler shift caused by the relative velocity between the disc material and the observer. In a non-tilted disc, the H\(\alpha\) line is seen as single-peaked when viewed pole-on with the star and face-on with the disc (\(\theta=0^{\circ}\)), and is double-peaked when observed at other inclination angles due to disc material moving both toward and away from the observer. As the H\(\alpha\) emission line is a defining characteristic of Be stars, we also looked at the shapes of the emission lines in our simulations to see if the changing shape might show indications of disc tilting.
Overall we find there are three different patterns in which the tilt of the disc can affect the H\(\alpha\) line. The first is shown in Figure 11, where \(\phi\ =\ 180^{\circ}\) and the observer is aligned with the tilt axis of the disc, so the disc is seen as rotating either clockwise or counter
Figure 5: Density weighted average disc temperatures for our early-type (a) and late-type (b) models. The model number, listed in Table 1 is on the x-axis. The tilt angle of each disc model is indicated by the legend. The grey lines indicate 60% of the star’s \(T_{\rm eff}\) for each model.
Figure 6: Temperature structure of the line of nodes cross section (left column) and the cross section farthest from the equator (right column) of the tilted discs with parameters of model 1 (see Table 1). The top row is for a 10\({}^{\circ}\) tilt, middle row for 20\({}^{\circ}\) tilt, and bottom row for a 40\({}^{\circ}\) tilt, as indicated by the \(\sigma\) value in the leftmost plot of each row. The colour corresponds to the temperature of a given cell, as indicated by the colour bar on the right.
Figure 7: Same as Figure 6, but for the parameters of model 30. Note the change in the maximum temperature of the colour bar scale.
clockwise from the observer's perspective. In this case, we see the line strength stay approximately the same, but the V/R ratio increases with increasing tilt angle. This effect is particularly noticeable in the early-type stars at high (\(\theta\geq 60^{\circ}\)) inclinations, where the projected area of the disc does not change very much with increasing tilt angle. The lines plotted here show some of the most extreme V/R ratios that we have obtained from our models.
The second pattern is shown in Figure 12, where \(\phi=90^{\circ}\) and the disc tilts to be more face-on or edge-on with the observer. Here we see that the line starts out as double-peaked for \(0^{\circ}\) tilt, and transitions to a single-peaked line when the tilt is \(40^{\circ}\). This behaviour occurs for all spectral types and densities. The reverse of this process is also seen in our models, with the lines shifting from single-peaked to double-peaked with increasing tilt angle for pole-on observing angles. It is worth noting that the equivalent width of the line is approximately constant in these scenarios as well.
Finally, when \(\phi=315^{\circ}\), the motion of the disc relative to the observer is a combination of a rotation clockwise or counterclockwise in the plane of the sky, and a tilt to be more edge-on or face-on. Figure 13 shows this case, where the line shape and peak separation stay the same, but the line decreases in strength as the tilt angle increases for the B0 and B2 type stars, and slightly increases in strength for the late-type stars. The reverse is also seen for some observing angles. These changes in the normalized lines are largely due to a change in continuum level rather than a change in the emission itself.
### 4.2 Case II: Watching a Tilted Disc Precess
The second scenario where a tilted disc may be able to be detected is where an already tilted disc precesses under the influence of a misaligned binary companion. This occurs in many simulations of Suffak et al. (2022): particularly after mass-loss from the disc is turned off, the disc is no longer anchored to the equator of the star,
Figure 8: Plots of temperature vs. radius of model 4 at the midplane of the disc in two different azimuthal directions; \(\phi=0^{\circ}\) (a), and \(\phi=90^{\circ}\) (b). The four different lines are for different disc tilt angles as indicated by the legend.
Figure 10: Same format as Figure 9, but for parameters of model 29.
Figure 9: Plots showing (top to bottom) \(V\)-band magnitude, H\(\alpha\) equivalent width, H\(\alpha\) V/R ratio, polarization percentage, and polarization position angle (PA) in the \(V\)-band, versus disc tilt angle, for systems with parameters of model 1 (Table 1). All points are for a \(\theta\) observing angle of \(80^{\circ}\), and the different coloured lines indicate different \(\phi\) observing angles as indicated by the legend. Some \(\phi\) directions may be degenerate and thus not every line will show on every plot.
and the line of nodes of the disc is free to rotate about the primary star.
By holding the polar viewing angle (\(\theta\)) constant, and moving around the star and disc in \(\phi\), we can see what observational signature a disc may present if it were precessing about the pole of the star. This is shown in Figure 14, where we have plotted the same quantities as in Figure 9 versus \(\phi\), for model 1 with a disc tilt of 40\({}^{\circ}\). From this figure we can see that, as the observational viewpoint moves around the star/disc system at a constant \(\theta\), the observables oscillate quite significantly as the disc moves from being edge-on with the observer at 0\({}^{\circ}\) and 180\({}^{\circ}\), to being more face-on at 90\({}^{\circ}\) and 270\({}^{\circ}\). Moving the observer like this is exactly the same as if the disc were rigidly precessing about the pole of the star and the observer were stationary, but this is accomplished here without the need for computationally expensive hydrodynamical simulations.
Unlike the case of tilting a disc, precession shows signatures in all observables. The percent polarization and \(V\) magnitude will oscillate at half the precession period as more or less of the inner disc becomes visible as the disc precesses about the star. The H\(\alpha\) EW also oscillates at half the precession period and can increase with \(V\) magnitude as more of the disc is visible, or decrease as \(V\) magnitude increases due to the changing continuum level. The position angle will oscillate about zero at the same period as the precession. These period/half-period trends can be seen easily with the dashed line in Figure 14 shifted 180\({}^{\circ}\). We note here that the half-period trends of \(V\) magnitude, H\(\alpha\) EW, and percent polarization are not perfectly symmetric because the \(\theta\) of 80\({}^{\circ}\) means the observer will see slightly more or less of the central star from opposite sides of the disc. If \(\theta\) were 90\({}^{\circ}\), the half-period trends would ideally have perfect symmetry. The V/R trend is more complex, as the trend from \(\phi\,=\,90^{\circ}\) to 270\({}^{\circ}\) is the reverse of the trend from 270\({}^{\circ}\) to 90\({}^{\circ}\). This is due to the relative velocities "reversing" as the observer moves to the opposite side of the disc, thus the trend in this case is antisymmetric, with 90\({}^{\circ}\) and 270\({}^{\circ}\) being the nodes of the oscillation.
## 5 Warped vs. Tilted discs
We recognize our flat tilted models are limited, particularly at the star-disc boundary, where we have the inner disc tilting the same amount as the outer disc. In reality, it is more likely that the Be star disc would be anchored to the equator of the star, and the rest of the disc would be warped away from the equator by some degree.
To test the difference this may cause, we chose to apply a warp to models 10 and 25, instead of a flat tilt. To warp the computational grid, we fix the first radial bin at the equator, and then linearly increment the degree of tilt with each subsequent radial bin, up to a maximum tilt of 40\({}^{\circ}\) at the furthest radius. The height of the disc midplane is then given by
\[Z(r_{i},\phi_{j})=-r_{i}\,\sin\left\{\arctan\left[\,\sin(\phi_{j})\tan\left( \alpha\frac{i}{49}\right)\right]\right\}, \tag{6}\]
where the only difference from Equation 4 is that we have altered the disc tilt angle \(\alpha\) such that it grows with increasing radius (\(i\) denotes the radial cell index, which runs from 0 to 49 in our models), so the disc becomes warped instead of flat-tilted. These models also have a disc radius of 50 \(R_{*}\), and thus the most highly inclined outer parts
Figure 11: Simulated H\(\alpha\) lines of models 1 (top left), 9 (top right), 17 (bottom left) and 25 (bottom right), for four different disc tilt angles as indicated by the legend. The model spectra are seen from an observer at position \(\phi\,=\,180^{\circ}\), \(\theta\,=\,80^{\circ}\).
Figure 12: Simulated H\(\alpha\) lines of models 4 (top left), 12 (top right), 20 (bottom left) and 28 (bottom right), for four different disc tilt angles as indicated by the legend. The model spectra are seen from an observer at position \(\phi\,=\,90^{\circ}\), \(\theta\,=\,40^{\circ}\).
Figure 13: Simulated H\(\alpha\) lines of models 6 (top left), 14 (top right), 22 (bottom left) and 30 (bottom right), for four different disc tilt angles as indicated by the legend. The model spectra are seen from an observer at position \(\phi\,=\,315^{\circ}\), \(\theta\,=\,90^{\circ}\).
of the disc do not contribute much to the observables in question here as they are known to originate in the inner disc, which is only moderately tilted.
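In code, Equation 6 differs from Equation 4 only by the linear ramp in the tilt angle; a minimal sketch (our own illustration, assuming the 50 radial bins quoted above):

```python
import numpy as np

def warped_midplane_height(r, phi, alpha_max, i, n_bins=50):
    """Height of a linearly warped midplane, Eq. 6.

    The local tilt ramps linearly with radial cell index i (0 ... n_bins-1),
    from 0 deg at the stellar equator to alpha_max at the outer edge.
    """
    alpha_i = alpha_max * i / (n_bins - 1)
    return -r * np.sin(np.arctan(np.sin(phi) * np.tan(alpha_i)))

# Innermost cell stays anchored at the equator; the outermost reaches
# the full 40 deg tilt.
print(warped_midplane_height(1.0, np.pi / 2, np.radians(40.), i=0))    # 0.0
print(warped_midplane_height(50.0, np.pi / 2, np.radians(40.), i=49))  # -50 sin(40 deg)
```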
Figure 15 shows temperature cross sections of the warped disc compared to its flat tilted counterpart. We see that in the inner warped disc, the upper and lower edges of the disc are slightly cooler than the flat tilted model. This is due to the inner disc still being anchored at the equator in the warped case, and thus the inner disc is seeing less radiation than when it is tilted at 40\({}^{\circ}\) with the rest of the disc. This effect is seen even in the non-warped slice of the disc at \(\phi=0^{\circ}\), due to less diffuse radiation within the disc being able to heat this cross section. In the outer warped disc, the upper and lower edges are respectively cooler and warmer than the flat-tilted case due to the upper edge being shielded by the inner disc at the equator, and the lower edge being freely exposed to radiation from higher stellar latitudes. Figure 15 also shows how, not surprisingly, the temperature structure warps with the warped density of the disc, and that the interior structure is the same as the tilted model aside from this warp.
With respect to observables, comparison of the warped disc to the different tilted simulations highlights how different areas of the disc are responsible for different observables. To compare these observables between the tilted models and warped models, we held the \(\theta\) observing angle constant, and calculated a chi-squared value over all \(\phi\) observing angles between each tilted model and the warped model. For the early-type warped model, we find that the \(V\) magnitude, percent polarization, and polarization PA of the warped model are best matched by the non-tilted and 10\({}^{\circ}\) tilted models, and occasionally the 20\({}^{\circ}\) tilted model. This is expected, as these observables originate in the inner disc, which is the least tilted part of the warped disc. On the other hand, H\(\alpha\) EW is best matched by the 40\({}^{\circ}\) tilt for near pole-on \(\theta\) angles, and by 0\({}^{\circ}\) and 10\({}^{\circ}\) tilt angles for \(\theta\) values greater than 50\({}^{\circ}\). This is due to a large increase in the continuum emission at certain \(\phi\) angles for the 40\({}^{\circ}\) flat tilted model, which causes the H\(\alpha\) EW to drop significantly. The V/R ratio of the H\(\alpha\) line also follows the same trends as the EW. The same trends are seen for the late-type (model 25) warped disc, except that the H\(\alpha\) line best matches the flat 40\({}^{\circ}\) tilted disc for all viewing inclinations due to the optically thin disc.
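The model comparison described above amounts to a simple chi-squared over the azimuthal viewing angles; the toy sketch below (synthetic numbers of our own invention, not the actual model outputs) shows the selection of the best-matching flat-tilted model.

```python
import numpy as np

def chi_squared(obs_a, obs_b, sigma=1.0):
    """Chi-squared between two sets of model observables sampled
    over the same grid of phi viewing angles (fixed theta)."""
    obs_a, obs_b = np.asarray(obs_a), np.asarray(obs_b)
    return np.sum((obs_a - obs_b)**2 / sigma**2)

# Toy example: V magnitudes of a warped model vs. flat tilts of 0/10/20 deg.
phi = np.linspace(0., 2 * np.pi, 36, endpoint=False)
v_warped = 0.05 * np.sin(phi)
v_tilted = {0: np.zeros_like(phi),
            10: 0.04 * np.sin(phi),
            20: 0.09 * np.sin(phi)}
chi2 = {t: chi_squared(v_warped, v, sigma=0.01) for t, v in v_tilted.items()}
print(min(chi2, key=chi2.get))   # 10 -- the closest flat-tilted model
```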
For comparison purposes, in Figure 16 we show the observables of a precessing warped disc, similar to what was shown in Figure 14 for the flat tilted model. The stellar and disc parameters of both Figures are different, so they cannot be directly compared, however we do see that a warped disc, if it were precessing, produces the same period and half-period trends as the flat tilted model, with some asymmetry in the photometry, polarization, and H\(\alpha\) line.
## 6 Discussion and Conclusions
In this paper, we have shown how tilting a Be star disc out of the equatorial plane of the primary star can affect the disc's temperature structure, as well as its observables. We modelled B0, B2, B5, and B8 type stars, with different densities, two different rotation rates, and disc tilt angles of 0\({}^{\circ}\), 10\({}^{\circ}\), 20\({}^{\circ}\) and 40\({}^{\circ}\).
We find that the temperature structure between non-tilted early and late type stars can differ greatly, and the behaviour from model to model is highly non-linear. The exact temperature structure is dependent on the disc density configuration, the spectral type, and the stellar rotation rate, which means depending on the model parameters, we see particular trends in the temperature behaviour.
In our non-tilted models we see that all discs have an inner cool region, the extent of which dramatically depends on the density exponent \(n\). This can be of significance, as low excitation lines, particularly Fe ii, are known to originate in these cool inner disc volumes (Carciofi & Bjorkman, 2006). Fe ii emission lines have been well documented in Be stars (Hanuschik et al., 1996), and their line-cooling effects have also been explored (Jones et al., 2004). Since Fe ii emission lines originate in these cool areas, their shape could be used as a tracer of the radial extent of these regions, assuming the width of the line is largely due to Doppler broadening. In this sense, if the Fe ii line had large peak separation and a sharp drop-off in the wings, the central cool region would be relatively small; a large cool region, however, would mean a large formation locus for Fe ii, and its line shape could be similar to Balmer emission lines, albeit with lower peak intensity. Thus, Fe ii lines may be a valuable constraint on the value of \(n\) in Be star disc models, which shows the great importance of having a non-isothermal disc model (something that was attempted but not conclusively shown by Klement et al., 2015).
We find the presence of hot bands above and below the midplane in nearly all disc density configurations, consistent with findings in other studies (Millar & Marlborough, 1998; Carciofi & Bjorkman, 2006; Sigut & Jones, 2007; McGill et al., 2011). We offer the first concrete explanation of these sheaths, showing in Figure 2 that the sheaths occur right at the boundary between where the disc is partially and fully ionized. This strongly indicates that these sheaths are the result of UV radiation that has been trapped in the optically thick, cold inner disc escaping vertically through the disc and adding excess heat to the optically thin outer disc right at the boundary of this partially ionized region. We also predict that if the disc is dense enough, diffuse radiation near the midplane of the disc can play a large role in heating the disc midplane, sometimes causing the midplane to be warmer than the upper disc layers despite not being fully ionized, as particularly seen in our models with late-type stars.
We also investigated the difference between the star having an average rotation rate of 70% the critical velocity versus a high rotation rate, 95% of the critical velocity. In our B0, B2, and B5 models, this increase in rotation had marginal effects, only heating the outer disc
Figure 14: Top to bottom, the \(V\)-band magnitude, H\(\alpha\) equivalent width, H\(\alpha\) V/R ratio, polarization percentage in the \(V\)-band, and position angle, versus azimuthal viewing angle \(\phi\), for model 1 with the disc tilted 40\({}^{\circ}\). The system is viewed at a \(\theta\) of 80\({}^{\circ}\). The dashed line is shifted by 180\({}^{\circ}\) to facilitate comparison between periods.
and midplane slightly, but keeping the overall temperature structure the same. This is different when looking at our B8 models, where an increase in rotation caused a large increase in midplane temperature, as well as the appearance of prominent hot sheaths. In this case, the combination of the higher rotation giving hotter poles, along with the lower densities used for our B8 models, allow the radiation from hot poles to "carve" farther into the disc, causing substantial heating in and around the midplane.
Overall the temperature structure of our non-tilted models is remarkably similar to the works of Millar & Marlborough (1998, 1999a,b,c) and Millar et al. (2000), who did detailed work on both the early-type star \(\gamma\) Cas and the late-type star 1 Del, using an escape probability method and balancing the energy contributions to calculate the disc temperature structures of these stars. This similarity comes despite their code using a different density prescription and only including five hydrogen energy levels. hdust, used here, uses a Monte Carlo technique to solve the radiative transfer, has 12 non-LTE and 25 LTE hydrogen energy levels, and also accounts for line radiation and bound-bound transitions. We argue that the strong agreement of the temperature distributions between previous work, including Carciofi & Bjorkman (2006) for B3 spectral types also using hdust, and the new work presented here implies that the temperature and ionization levels are primarily controlled by photoionization--recombination equilibrium. This agreement also provides strong evidence of the broad applicability of our work to gaseous discs.
In tilting the disc, we see modest large-scale changes to the disc's average temperature, with it increasing slightly as the disc tilt angle increases. On a smaller scale, we find that with increasing tilt angle, the part of the disc that moves towards the equator becomes cooler, while the portion of the disc moving towards the stellar pole becomes hotter. These changes are explained by gravity darkening of the rapidly rotating star, making the stellar poles hotter than the stellar equator. This anti-symmetric change of disc temperatures is why the overall average disc temperature does not change appreciably when the disc is tilted. The temperature in the midplane of the disc is largely unchanged by the disc tilting due to its higher density than the rest of the disc, although for already optically thin discs,
Figure 15: Each panel is a similar to Figure 6; (a) and (b) are for model 10, while (c) and (d) are for model 25. The top row in each subplot is for the model with a flat 40\({}^{\circ}\) tilt, while the bottom row is for the model warped to a maximum of 40\({}^{\circ}\).
the midplane can vary in temperature as well. This behaviour is seen across all spectral types.
Examining the observables of our tilted disc simulations, we offer two scenarios where a disc tilt may be detected.
The first case is where the disc is actively observed to be tilting. In this scenario, the change in observables is entirely dependent on the direction of tilt relative to the observer. A disc may appear completely different with a \(90^{\circ}\) or \(180^{\circ}\) change of relative orientation, as the disc either moves to be more face-on or more edge-on with the observer as it tilts. This variability would make it difficult to interpret whether changes in the observables of a Be star are due to a disc tilting or to simple changes in disc density or size. The strongest evidence of disc tilting would appear in the polarization PA: if one were looking along the axis of the disc's tilt, the position angle should exactly match the tilt angle of the disc, and more generally, the position angle will change by some amount as long as the observer's line of sight is not exactly perpendicular to the tilt axis. No other change in geometry would cause a change of tens of degrees in the position angle, making it a key measurement to look at for proof of disc tilting. The shape of the H\(\alpha\) line would also be a clear indication of disc tilting, as the line changes from single-peaked to double-peaked and vice versa. This change could not be brought about by a simple change of density structure or a larger/smaller disc, and could only occur with a major change of disc geometry such as tilting of the H\(\alpha\) emitting region. This would be seen in other emission lines as well, not just H\(\alpha\). The advantage of these two observables being the leading indicators of tilting is that one of them should appear no matter the orientation of the disc, given a large enough disc tilt. If the disc is tilting to be more face-on or edge-on with the observer, the H\(\alpha\) line would change in shape while the position angle would not. On the other hand, if the observer were looking more aligned with the tilt axis of the disc, the polarization position angle would change, while the H\(\alpha\) line shape would be approximately constant. Thus, both the emission line shape and polarization position angle are key signatures of disc tilting. Another difference that could set a tilted disc apart from a non-tilted disc is the V/R ratio in the H\(\alpha\) emission lines; however, this ratio is not particularly strong in our models apart from a few cases of early-type stars where the disc density and tilt angle are highest. There is no clear pattern as to why those few models show stronger V/R ratios than others, so it would be difficult to discern, without further constraints from other observables, whether V/R ratios in actual observations of Be stars are due to a tilted disc or to a density enhancement in the disc, like those produced by spiral enhancements in \(\zeta\) Tau (Stefl et al., 2009; Escolano et al., 2015) and 48 Lib (Silaj et al., 2016).
The second case is where the disc is already tilted and undergoing precession due to the influence of a misaligned binary companion. We are able to simulate the disc precessing about the stellar pole by holding the \(\theta\) observing angle constant and changing \(\phi\) only. Here we find that the percent polarization, \(V\) magnitude, and H\(\alpha\) EW oscillate at half the precession period, although the oscillation will not be perfectly symmetric unless the observer is directly aligned with the stellar equator. The position angle, on the other hand, will oscillate in sync with the disc precession. The V/R ratio undergoes an antisymmetric half-period oscillation, with nodes at \(\phi=90^{\circ}\) and \(270^{\circ}\) due to the violet and red sides of the disc reversing when the observer moves to the other side of the disc.
We then investigated two other scenarios. First, we computed two truncated disc models out to a radius of 20 \(R_{*}\), to look for possible obscuration effects compared to a disc 50 \(R_{*}\) in size. These simulations revealed that the outer disc from 20 to 50 stellar radii only marginally increased the H\(\alpha\) emission, while not changing the other examined observables. The temperature structure was also unchanged out to 20 stellar radii, as expected.
Second, and more importantly, we computed two models that were linearly warped up to a maximum angle of \(40^{\circ}\). These models revealed a cooler outer disc temperature versus their flat-tilted counterparts, and an inner temperature structure that followed the warp of the disc. The observables of these models are essentially a mix of all the tilted models together, with some observables better matching the non-tilted or \(10^{\circ}\) models, while others matched the higher tilt models. This shows how important it is to recognize that Be star discs emit some wavelengths from dense inner volumes while other wavelengths come from larger radial positions in the disc. These warped models are merely an initial test of what effects a warped disc may introduce. A proper warped disc study is beyond the scope of this paper, though it certainly merits its own study.
These simulations are a vital step towards simulations of more complex disc configurations, such as ones containing warped discs, or those presented by Suffak et al. (2022), which contain holes and tearing of the disc. The flat-tilted models here will be a good benchmark for analysis of these discs that are tilted, warped, and have asymmetric density distributions. With the fundamentals presented here, we will be able to tackle more complicated Be star systems such as Pleione, which is suspected to have a periodic tearing disc (Marr et al., 2022; Martin & Lepp, 2022).
## Acknowledgements
We thank the anonymous referee for their thorough comments which improved the paper. We gratefully acknowledge the work of Marr (2022), whose preliminary work on the temperature of tilted discs inspired and aided this work. C.E.J. acknowledges support through the Natural Sciences and Engineering Research Council of Canada. A.C.C. acknowledges support from CNPq (grant 311446/2019-1) and FAPESP (grants 2018/04055-8 and 2019/13354-1). T.H.A. acknowledges support from FAPESP (grant 2021/01891-2). This work was made possible through the use of the Shared Hierarchical Academic Research Computing Network (SHARCNET).
Figure 16: Same as Figure 14, but for the warped disc of model 10. The system is viewed at \(\theta=80^{\circ}\).
## Data Availability
Although there are no observational data associated with this work, the hdust models computed for this work can be made available upon request.
## References
* Baade et al. (2016) Baade D., et al., 2016, A&A, 588, A56
* Brown et al. (2019) Brown R. O., Coe M. J., Ho W. C. G., Okazaki A. T., 2019, MNRAS, 488, 387
* Carciofi & Bjorkman (2006) Carciofi A. C., Bjorkman J. E., 2006, ApJ, 639, 1081
* Carciofi & Bjorkman (2008) Carciofi A. C., Bjorkman J. E., 2008, ApJ, 684, 1374
* Collins George (1987) Collins George W. I., 1987, in Slettebak A., Snow T. P., eds, IAU Colloq. 92: Physics of Be Stars, p. 3
* Cox (2000) Cox A. N., 2000, Allen's astrophysical quantities
* Cyr et al. (2017) Cyr I. J., Jones C. E., Panoglou D., Carciofi A. C., Okazaki A. T., 2017, MNRAS, 471, 596
* Escolano et al. (2015) Escolano C., Carciofi A. C., Okazaki A. T., Rivinius T., Baade D., Stefl S., 2015, A&A, 576, A112
* Ghoreyshi et al. (2018) Ghoreyshi M. R., et al., 2018, MNRAS, 479, 2214
* Ghoreyshi et al. (2021) Ghoreyshi M. R., Carciofi A. C., Jones C. E., Faes D. M., Baade D., Rivinius T., 2021, ApJ, 909, 149
* Hanuschik et al. (1996) Hanuschik R. W., Hummel W., Sutorius E., Dietle O., Thimm G., 1996, A&AS, 116, 309
* Jones et al. (2004) Jones C. E., Sigut T. A. A., Marlborough J. M., 2004, MNRAS, 352, 841
* Jones et al. (2008) Jones C. E., Tycner C., Sigut T. A. A., Benson J. A., Hutter D. J., 2008, ApJ, 687, 598
* Klement et al. (2015) Klement R., et al., 2015, A&A, 584, A85
* Lee et al. (1991) Lee U., Osaki Y., Saio H., 1991, MNRAS, 250, 432
* Marr (2022) Marr K., 2022, PhD thesis, University of Western Ontario, London, ON, CA, [https://ir.lib.uwo.ca/etd/8376](https://ir.lib.uwo.ca/etd/8376)
* Marr et al. (2021) Marr K. C., Jones C. E., Carciofi A. C., Rubio A. C., Mota B. C., Ghoreyshi M. R., Hatfield D. W., Rimulo L. R., 2021, ApJ, 912, 76
* Marr et al. (2022) Marr K. C., Jones C. E., Tycner C., Carciofi A. C., Silva A. C. F., 2022, ApJ, 928, 145
* Martin & Lepp (2022) Martin R. G., Lepp S., 2022, MNRAS, 516, L86
* Martin et al. (2011) Martin R. G., Pringle J. E., Tout C. A., Lubow S. H., 2011, MNRAS, 416, 2827
* Martin et al. (2014a) Martin R. G., Nixon C., Armitage P. J., Lubow S. H., Price D. J., 2014a, ApJ, 790, L34
* Martin et al. (2014b) Martin R. G., Nixon C., Lubow S. H., Armitage P. J., Price D. J., Dogan S., King A., 2014b, ApJ, 792, L33
* McGill et al. (2011) McGill M. A., Sigut T. A., Jones C. E., 2011, ApJ, 743, 111
* McGill et al. (2013) McGill M. A., Sigut T. A. A., Jones C. E., 2013, ApJS, 204, 2
* Millar & Marlborough (1998) Millar C. E., Marlborough J. M., 1998, ApJ, 494, 715
* Millar & Marlborough (1999a) Millar C. E., Marlborough J. M., 1999a, ApJ, 516, 276
* Millar & Marlborough (1999b) Millar C. E., Marlborough J. M., 1999b, ApJ, 516, 280
* Millar & Marlborough (1999c) Millar C. E., Marlborough J. M., 1999c, ApJ, 526, 400
* Millar et al. (2000) Millar C. E., Sigut T. A. A., Marlborough J. M., 2000, MNRAS, 312, 465
* Rimulo et al. (2018) Rimulo L. R., et al., 2018, MNRAS, 476, 3555
* Rivinius et al. (2013) Rivinius T., Carciofi A. C., Martayan C., 2013, A&ARv, 21, 69
* Sigut & Jones (2007) Sigut T. A. A., Jones C. E., 2007, ApJ, 668, 481
* Silaj et al. (2010) Silaj J., Jones C. E., Tycner C., Sigut T. A. A., Smith A. D., 2010, ApJS, 187, 228
* Silaj et al. (2016) Silaj J., et al., 2016, ApJ, 826, 81
* Suffak et al. (2020) Suffak M. W., Jones C. E., Tycner C., Henry G. W., Carciofi A. C., Mota B. C., Rubio A. C., 2020, ApJ, 890, 86
* Suffak et al. (2022) Suffak M. W., Jones C. E., Carciofi A. C., 2022, MNRAS, 509, 931
* Vieira et al. (2017) Vieira R. G., Carciofi A. C., Bjorkman J. E., Rivinius T., Baade D., Rimulo L. R., 2017, MNRAS, 464, 3071
* Waters (1986) Waters L. B. F. M., 1986, A&A, 162, 121
* Stefl et al. (2009) Stefl S., et al., 2009, A&A, 504, 929
## Appendix A Tilted Temperature Cross Sections
## Appendix B Midplane Temperatures of Tilted vs Non-Tilted Discs
This paper has been typeset from a TeX/LaTeX file prepared by the author.
Figure 17: Same as Figure 6, but for models 2-7.
Figure 18: Same as Figure 6, but for models 8-13.
Figure 19: Same as Figure 6, but for models 14-19.
Figure 20: Same as Figure 6, but for models 20-25.
Figure 21: Same as Figure 6, but for models 26-29, 31 and 32
Figure 22: Temperature vs. radius at the disc midplane for all models with a B0 or B2 central star, in the direction \(\phi\,=\,90^{\circ}\). The four lines are for four different tilt angles as indicated by the legend.
# Confirming Resonance in Three Transiting Systems
###### Abstract
Although resonant planets have orbital periods near commensurability, resonance is also dictated by other factors, such as the planets' eccentricities and masses, and therefore must be confirmed through a study of the system's dynamics. Here, we perform such a study for five multi-planet systems: Kepler-226, Kepler-254, Kepler-363, Kepler-1542, and K2-32. For each system, we run a suite of _N_-body simulations that span the full parameter-space that is consistent with the constrained orbital and planetary properties. We study the stability of each system and look for resonances based on the libration of the critical resonant angles. We find strong evidence for a two-body resonance in each system; we confirm a 3:2 resonance between Kepler-226c and Kepler-226d, confirm a 3:2 resonance between Kepler-254c and Kepler-254d, and confirm a three-body 1:2:3 resonant chain between the three planets of Kepler-363. We explore the dynamical history of two of these systems and find that these resonances most likely formed without migration. Migration leads to the libration of the three-body resonant angle, but these angles circulate in both Kepler-254 and Kepler-363. Applying our methods to additional near-resonant systems could help us identify which systems are truly resonant or non-resonant and which systems require additional follow-up analysis.
Tyler Quinn, Mariah G. MacDonald

_Keywords:_ Exoplanet dynamics (490), Exoplanet migration (2205), Exoplanet structure (495)
## 1 Introduction
While in operation, the Kepler space telescope discovered over 4,500 planet candidates during both the Kepler and K2 missions. Today, many of these candidates have been confirmed, and Kepler-era exoplanets have contributed to the growth of the confirmed exoplanet catalog to over 5,000 and the catalog of candidate planets to over 8,500. This large sample size has led to many investigations into planetary composition, formation, dynamics, and evolution, as well as astrobiological studies.
One intrigue raised by these studies is mean-motion resonance (MMR). MMR occurs when two or more orbiting bodies periodically exert gravitational perturbations on each other, leading to a repeated exchange of energy and angular momentum. We can predict MMR by observing the orbital frequencies of neighboring planets. If in resonance, the ratio of neighboring planets' periods will reduce to a ratio of small integers, such as 2:1 or 12:5. However, determining resonance requires a deeper study of the system's dynamics, since a period ratio of small integers does not necessarily mean the system is in resonance. Such in-depth studies have been conducted and have confirmed resonance in a handful of Kepler systems, such as Kepler-80 (MacDonald et al., 2016), Kepler-223 (Mills et al., 2016), and K2-138 (MacDonald et al., 2022).
Mean-motion resonance can form in systems with two or more orbiting bodies. The simplest form of MMR is the two-body resonance. Mathematically, this is defined as the oscillation or libration of the two-body critical angle:
\[\Theta_{b,c}=j_{1}\lambda_{b}+j_{2}\lambda_{c}+j_{3}\omega_{b}+j_{4}\omega_{c} +j_{5}\Omega_{b}+j_{6}\Omega_{c} \tag{1}\]
where \(\lambda_{p}\) is the mean longitude of planet \(p\), \(\omega_{p}\) is the argument of periapsis, \(\Omega_{p}\) is the longitude of the ascending node, \(j_{i}\) are coefficients which sum to zero, and planet \(b\) orbits closer to the host star than planet \(c\).
In systems with three or more orbiting bodies, numerous bodies may be in resonance, either in a chain of two-body resonances or in a three or more body resonance. A zeroth-order three-body MMR is defined by
the difference of the two-body resonant angles:
\[\phi_{b,c,d}=\Theta_{c,d}-\Theta_{b,c}=m\lambda_{d}-(m+n)\lambda_{c}+n\lambda_{b} \tag{2}\]
where \(\lambda_{p}\) is the mean longitude of planet \(p\), and \(m\) and \(n\) are integers. This angle is independent of all longitudes of periapsis (\(\bar{\omega}=\Omega+\omega\)), making it ideal for resonant study in systems with poorly constrained orbital angles and eccentricity.
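For concreteness, both angles are straightforward to evaluate from the osculating elements output by an \(N\)-body integration. The following is a minimal sketch, where the function names and the wrapping convention are our own rather than the paper's:

```python
import numpy as np

def two_body_angle(p, q, lam_in, lam_out, omega):
    """Two-body angle for a (p+q):p commensurability (Equation 1 with a single
    periapsis term), e.g. 3*lam_out - 2*lam_in - omega for a 3:2 pair.
    All inputs are in radians; the result is wrapped to [0, 2*pi)."""
    return np.mod((p + q) * lam_out - p * lam_in - q * omega, 2 * np.pi)

def three_body_angle(m, n, lam_b, lam_c, lam_d):
    """Zeroth-order three-body angle of Equation 2:
    phi = m*lam_d - (m + n)*lam_c + n*lam_b, wrapped to [0, 2*pi)."""
    return np.mod(m * lam_d - (m + n) * lam_c + n * lam_b, 2 * np.pi)
```

For the 3:2 pair Kepler-254c and Kepler-254d, for example, \(\Theta_{c,d}\) corresponds to `two_body_angle(2, 1, lam_c, lam_d, omega_c)`.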
Traditionally, such resonances are confirmed if all solutions to the system's RV or TTV forward modeling lead to librating angles. Unfortunately, few systems produce large enough perturbations that could be detected with a typical survey cadence (30 minute cadence from photometry and \(\sim\)few day cadence from radial velocities). Due to a lack of high-precision measurements of these systems, we must model all solutions to a system--across all potential parameters that are consistent with the data--to confirm resonance. In the case that all solutions result in the planets locked in MMRs, we are able to confirm resonance in the system.
MacDonald et al. (2022) were the first to confirm a resonance without forward modeling either transit times or the radial velocity signal of the planets. They found that three of the planets of K2-138 are locked in a resonant chain in 99% of \(N\)-body simulations that spanned the entirety of parameter space that was previously constrained by both photometry and radial velocity measurements, providing a method of MMR confirmation in the absence of high-cadence, high-precision data.
Such a method, if applied on a larger scale to more systems, would enable us to confirm more resonances. Since resonances allow for the constraint of planetary properties, the system's formation history, and the planets' long term stability, a significant number of confirmed resonant systems would allow us to start leveraging these dynamics to better understand planet formation and evolution.
Here, we perform such an analysis on five systems: Kepler-226, Kepler-254, Kepler-363, Kepler-1542, and K2-32. Each of these systems was suggested to be a "broken," full-system 3:2 resonant chain, where the discovery of an additional planet would complete the chain (Christiansen et al., 2018). However, the period ratios of adjacent known planets suggest the presence of resonant chains. Very few known systems with similar architecture exist (Livingston et al., 2018), and confirmation of such a chain can provide valuable insight into the dynamics, history, and composition of systems of this architecture.
In Section 2, we briefly describe the five systems we study and discuss the initial conditions and parameters of our \(N\)-body simulations. We then present our results and analyze the resonant configurations of each system in Section 3. For two of the systems in which we confirm resonance, we use the resonances to constrain the planetary masses and orbital periods and discuss forming the chain in Section 4 before summarizing and concluding our work in Section 5.
## 2 Methods
Kepler-226 is a G-type star hosting a super-Earth and two Earth-sized planets with orbital periods between 4 and 8 days. These three planets could be locked in a 2:3:4 resonant chain. Since their initial confirmation (Rowe et al., 2014), the anti-correlated TTVs of planets b and c constrained their masses to \(M_{b}=24.0^{+11.8}_{-10.1}\ M_{\oplus}\) and \(M_{c}=45.2^{+22.5}_{-19.1}\ M_{\oplus}\), although the radii of these two planets (\(R_{b}=1.64\ R_{\oplus}\) and \(R_{c}=2.47\ R_{\oplus}\), Berger et al., 2018) suggest that these values are overestimates. Although the TTVs and period ratios of the system suggest this chain of resonances, the specific dynamics of the system have yet to be explored.
Kepler-254 is a relatively dim (\(V=16.012\)) G-type star, hosting three confirmed exoplanets with orbital periods ranging from 5.8 days to 18.7 days. The period ratios of adjacent planets suggest the system could be locked in a 1:2:3 resonant chain. Jontof-Hutter et al. (2021) suggest that Kepler-254d and Kepler-254c could be locked in a 3:2 resonance. However, the orbital dynamics of Kepler-254 have yet to be included in an in-depth study to confirm MMRs.
Kepler-363 is a relatively bright (\(V=13.472\)) G-type star, hosting three confirmed exoplanets. These planets orbit their star fairly rapidly, with orbital periods ranging from 3.6 days to 11.9 days. The period ratios of adjacent planets suggest the system could be locked into a 1:2:3 resonant chain. The orbital dynamics of Kepler-363 have yet to be included in any in-depth study to confirm resonance in the system.
Kepler-1542 is a G-type star that hosts four transiting planets and one planetary candidate, all smaller than Earth and orbiting within 8 days. The orbital periods of the planets suggest a chain of resonances of 4:3, 5:4, 7:6, and 6:5 if we include the candidate. Validated by Morton et al. (2016), the four planets have never been included in an in-depth study of the system.
K2-32 is a G-type star in a binary system, hosting four transiting planets. The innermost planet K2-32e was most recently discovered and validated by Heller et al. (2019), suggesting that these four planets are in a 1:2:5:7 chain of mean motion resonances. Although the orbital periods suggest this resonance, as do many follow-up studies (e.g., Lillo-Box et al., 2020), the dynamics of this system have yet to be explored.
Following the methods of MacDonald et al. (2022), we seek to understand the dynamics of these systems by running _N_-body simulations using the python module REBOUND(Rein and Liu, 2012). We run a suite of 1000 simulations, drawing initial values for planetary masses, inclinations, and orbital periods from independent, normal distributions that are centered on values constrained by current photometry. For Kepler-226, Kepler-254, and Kepler-363, we use the results from Thompson et al. (2018) for all parameters except planetary radii, for which we use the updated stellar, and therefore planetary, radii from Berger et al. (2018). For Kepler-1542, we use parameters from Morton et al. (2016), and for K2-32 we use the values from Heller et al. (2019). For planets without mass constraints, we draw masses from the mass-radius relationship described in Weiss and Marcy (2014)1. Each simulation therefore initializes with a set of parameters that is unique from other simulations but consistent with current data. Using the WHFast integrator (Rein and Tamayo, 2015), we integrate the modeled systems for 10 Myr with a timestep of 5% the innermost planet's period. We summarize the simulation initial conditions for our simulations in Table 1.
Footnote 1: We explore a large range of masses for each planet and use the resulting resonances to constrain the planet masses. We therefore are not sensitive to any specific mass-radius relationship.
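As a concrete reference, the setup described above can be reproduced with a few lines of REBOUND; the following is a minimal sketch under our own simplifying choices (the dictionary keys, the single-planet close-encounter threshold, the use of the sky plane as the reference plane, and the truncation of mass draws at zero are ours, not the paper's):

```python
import numpy as np
import rebound

def build_simulation(star_mass, planets, seed=None):
    """One realization of a system. `planets` holds Table 1 values and
    1-sigma widths, e.g. {'P': 5.82666, 'dP': 1e-5, 'inc': 89.88,
    'dinc': 0.2, 'm': 8.84, 'dm': 2.02}, with P in days and m in Earth masses."""
    rng = np.random.default_rng(seed)
    sim = rebound.Simulation()
    sim.units = ('yr', 'AU', 'Msun')
    sim.add(m=star_mass)  # host star
    for p in planets:
        sim.add(m=max(rng.normal(p['m'], p['dm']), 0.0) * 3.0e-6,  # M_earth -> M_sun
                P=rng.normal(p['P'], p['dP']) / 365.25,            # days -> years
                inc=np.radians(rng.normal(p['inc'], p['dinc'])),   # sky plane as reference
                e=0.0)                                             # circular initial orbits
    sim.move_to_com()
    sim.integrator = 'whfast'
    sim.dt = 0.05 * sim.particles[1].P  # 5% of the innermost planet's period
    # Stop on close encounters: three Hill radii of the innermost planet,
    # a scalar stand-in for the per-pair criterion described in the text.
    p1 = sim.particles[1]
    sim.exit_min_distance = 3 * p1.a * (p1.m / (3 * star_mass)) ** (1 / 3)
    return sim

sim = build_simulation(0.943, kepler254_planets)  # kepler254_planets: user-supplied Table 1 draws
try:
    sim.integrate(1e7)  # 10 Myr
except rebound.Encounter:
    pass  # flag this realization as unstable
```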
## 3 Results
For each of our five systems of interest, we run a suite of 1000 _N_-body simulations for 10 Myr and analyze the results of each suite for two-body and three-body resonances. We stop integrations when any planet experiences a close encounter, defined by a distance of less than three Hill radii. To confirm a chain of resonances, we search for simulations where the three-body angle is librating or where both of the two-body angles are librating.
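Operationally, "librating" versus "circulating" can be decided from the recorded angle time series; one simple test (the recentering and the threshold below are illustrative choices, not the paper's) is:

```python
import numpy as np

def libration_stats(theta):
    """Classify a resonant-angle time series (radians) as librating or circulating.
    Recenter on the circular mean, then call the angle librating if its half
    peak-to-peak amplitude stays safely below 180 degrees."""
    center = np.angle(np.exp(1j * theta).mean())      # circular mean
    dev = np.angle(np.exp(1j * (theta - center)))     # deviations wrapped to (-pi, pi]
    amplitude = 0.5 * (dev.max() - dev.min())
    librating = amplitude < 0.95 * np.pi              # margin below full circulation
    return librating, np.degrees(center) % 360.0, np.degrees(amplitude)
```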
We find it unlikely that Kepler-1542 and K2-32 contain any resonant chains; for each of these systems, no three-body angle librated in our simulations, regardless of planetary mass. In Kepler-1542, the resonant angle \(\Theta_{e,d}=7\lambda_{d}-6\lambda_{e}-\omega_{e}\) librated in 82% of simulations, and in K2-32 the resonant angles \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) and \(\Theta_{e,b}=2\lambda_{b}-1\lambda_{e}-\omega_{e}\) librated in 70% and 68% of simulations, respectively. Because not all solutions to our current data lead to these angles librating, we cannot claim the planets are in resonance.
In Kepler-226, we find that the two-body angle \(\Theta^{\prime}_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{d}\) librates about \(180^{\circ}\) in 99.8% of our simulations, but with large libration amplitudes of \(90.5^{+23.19}_{-15.22}\). The two-body angle \(4\lambda_{c}-3\lambda_{b}-\omega_{c}\) librates in 42% of our simulations, and the three-body angle circulates in all simulations. While we are therefore able to confirm the 3:2 resonance between Kepler-226c and Kepler-226d, we are not able to confirm a resonant chain.
We focus the rest of this work on the two remaining systems, Kepler-254 and Kepler-363. We summarize the results of the resonance analysis for all systems in Table 2.
### Kepler-254
Through our analysis, we find that nearly all (99.6%) simulations of Kepler-254 remained stable during the 10 Myr integrations, i.e., no planets experienced a close encounter or were ejected, regardless of initial parameter values. Of these simulations, 42.4% result in a 1:2:3 three-body resonant chain. The two-body angle \(\Theta_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{c}\) librates in 42.4% of the simulations, and the two-body angle \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) librates in 99.2% of the simulations. The three-body angle \(\phi_{1}=3\lambda_{d}-4\lambda_{c}+\lambda_{b}\) circulated in all of the simulations. We show the evolution of one of the _N_-body simulations in Figure 1. Given these results, we are therefore able to confirm a two-body resonance between Kepler-254c and Kepler-254d where the angle \(\Theta_{c,d}\) librates around \(0^{\circ}\) with an amplitude of \(65.1^{+4.6}_{-5.0}\). A three-body resonant chain is probable but requires further analysis and more precise orbits to confirm. The system could therefore benefit from follow-up observation and analysis.
### Kepler-363
Regardless of the initial parameters, nearly all 1000 simulations of Kepler-363 remained stable for the 10 Myr integration. We find the 2:1 resonant angle \(\Theta_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{c}\) librates in 99.2% of simulations, and the 3:2 resonant angle \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) librates in 92.6% of simulations. Of all 1000 simulations, 92.4% result in a three-body 1:2:3 resonant chain.
The two-body angles \(\Theta_{b,c}\) and \(\Theta_{c,d}\) librate about \(0^{\circ}\) with moderate amplitudes of \(35.1^{+30.0}_{-17.8}\) and \(55.1^{+13.9}_{-13.7}\), respectively, and the two-body angle \(\Theta^{\prime}_{c,d}\) librates around \(180^{\circ}\) with a large amplitude of \(96.98^{+34.82}_{-35.54}\). Curiously enough, the three-body angle \(\phi=3\lambda_{d}-4\lambda_{c}+\lambda_{b}\) does not librate in any of our simulations. We discuss the implications of this circulating angle in more detail in Section 4. We show the evolution of one of the _N_-body simulations in Figure 2.
## 4 Discussion
With the confirmation of resonance, we are able to study additional information about a system and its planets. In particular, resonances allow us to constrain planetary masses and orbits and to explore the formation and subsequent dynamical history of the planets.
### Using Resonance to Constrain Masses and Orbits
We explore the differences in planetary parameters between simulations that resulted in resonance and those that did not. We perform a two-sample Kolmogorov-Smirnov test, exploring the null hypotheses that the masses, eccentricities, and orbital periods of the planets in resonance and the planets not in resonance are drawn from the same distribution. As an example, we take the distribution of masses of Kepler-363b from simulations where \(\Theta_{b,c}\) librates as one sample for the K-S test, and the distribution of that planet's mass from simulations where the same angle circulates as the second sample.
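With scipy this test reduces to a few lines; the array names below are placeholders for the two samples just described:

```python
from scipy.stats import ks_2samp

# masses_librating / masses_circulating: Kepler-363b mass draws, split by
# whether Theta_{b,c} librated in that simulation.
stat, p_value = ks_2samp(masses_librating, masses_circulating)
if p_value < 0.05:
    print(f"distributions differ (p = {p_value:.3f})")   # reject the null
else:
    print(f"fail to reject the null (p = {p_value:.3f})")
```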
For all parameters except the eccentricity of Kepler-363c, we recover large _p_-values (\(p>0.05\)) and fail to reject the null hypothesis. For Kepler-363c's eccentricity, we recover a _p_-value of 0.018, suggesting that the two distributions are statistically different. We find that the resulting eccentricity for simulations with a librating \(\Theta_{c,d}\) is smaller than for those with a circulating \(\Theta_{c,d}\) (\(2.3^{+1.8}_{-1.4}\times 10^{-4}\) and \(3.0^{+1.5}_{-1.7}\times 10^{-4}\), respectively). Although we are not able to use the system's resonances to constrain the planets' masses, we do find that this system's resonant state is not very dependent on the planetary masses, confirming that more precise mass measurements are not necessary to confirm these resonances.
### Constraining dynamical history
With confirmed resonances, we are now able to study each system's formation and evolution. Although resonant chains are typically seen as the hallmark of disk-driven migration, two additional pathways exist to form resonant chains that are each consistent with in situ formation (MacDonald and Dawson, 2018). Following the prescription of MacDonald and Dawson (2018), the three chain formation pathways are long-scale migration (**LM**; hypothesizes the planets were formed both further from
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{ Kepler-226} & b & c & d \\ \hline \(P\) [d] & \(3.940997\pm 0.000020\) & \(5.34955\pm 0.000014\) & \(8.109044\pm 0.000094\) \\ t\({}_{0}\) [d] & \(69.09337\) & \(104.80599\) & \(65.80333\) \\ \(i\) [\({}^{\circ}\)] & \(88.88\pm 0.2\) & \(89.62\pm 0.2\) & \(89.92\pm 0.2\) \\ \(M_{p}\) [\(M_{\oplus}\)] & \(4.271^{+1.933*}_{-1.828}\) & \(6.237^{+2.071*}_{-1.852}\) & \(2.440^{+1.984*}_{-1.243}\) \\ \hline \hline Kepler-254 & b & c & d \\ \hline \(P\) [d] & \(5.82666\pm 0.00001\) & \(12.41218\pm 0.00008\) & \(18.7464\pm 0.0001\) \\ t\({}_{0}\) [d] & \(106.01\) & \(75.54\) & \(80.13\) \\ \(i\) [\({}^{\circ}\)] & \(89.88\pm 0.2\) & \(89.95\pm 0.2\) & \(89.11\pm 0.2\) \\ \(M_{p}\) [\(M_{\oplus}\)] & \(8.84^{+2.02*}_{-1.94}\) & \(5.75^{+1.99*}_{-2.00}\) & \(6.72^{+0.03*}_{-1.98}\) \\ \hline \hline Kepler-363 & b & c & d \\ \hline \(P\) [d] & \(3.61460279\pm 0.00003\) & \(7.54235832\pm 0.00004\) & \(11.93205399\pm 0.00005\) \\ t\({}_{0}\) [d] & \(67.695\) & \(245965.961\) & \(245975.106\) \\ \(i\) [\({}^{\circ}\)] & \(86.02\pm 0.2\) & \(88.44\pm 0.2\) & \(89.52\pm 0.2\) \\ \(M_{p}\) [\(M_{\oplus}\)] & \(3.05^{+1.83}_{-1.65}\)* & \(4.67^{+2.12*}_{-1.90}\) & \(5.34^{+2.06*}_{-1.94}\) \\ \hline \hline Kepler-1542 & c & b & e & d \\ \hline \(P\) [d] & \(2.8922302\pm 1.472e-05\) & \(3.95116882\pm 1.633e-05\) & \(5.10115756\pm 2.409e-05\) & \(5.99273738\pm 2.26e-05\) \\ t\({}_{0}\) [d] & \(65.86465\) & \(67.22178\) & \(65.42378\) & \(64.74864\) \\ \(i\) [\({}^{\circ}\)] & \(89.89\pm 0.2\) & \(88.05\pm 0.2\) & \(89.68\pm 0.2\) & \(88.08\pm 0.2\) \\ \(M_{p}\) [\(M_{\oplus}\)] & \(0.429^{+0.386*}_{-0.228}\) & \(0.803^{+0.823*}_{-0.420}\) & \(0.805^{+0.801*}_{-0.445}\) & \(1.083^{+0.979*}_{-0.570}\) \\ \hline \hline K2-32 & e & b & c & d \\ \hline \(P\) [d] & \(4.34882^{+0.000096}_{-0.00075}\) & \(8.99182^{+0.000088}_{-0.000084}\) & \(20.66186^{+0.00102}_{-0.00008}\) & \(31.7142^{+0.0011}_{-0.0010}\) \\ t\({}_{0}\) [d] & \(1998.886\) & \(2000.92713\) & \(1999.42271\) & \(2003.7913\) \\ \(i\) [\({}^{\circ}\)] & \(90.0^{**}_{-0.8}\) & \(89.1\pm 0.7\) & \(89.3\pm 0.9\) & \(89.3\pm 0.9\) \\ \(M_{p}\) [\(M_{\oplus}\)] & \(1.095^{+2.248*}_{-0.625}\) & \(16.5^{+2.7}_{-2.7}\) & \(<12.1\) & \(10.3^{+4.8}_{-4.3}\) \\ \hline \end{tabular} Note. – Initial conditions used for the simulations, including orbital period \(P\), mid-transit time \(t_{0}\), sky-plane inclination \(i\), and planetary mass \(M_{p}\). We initialize all planets on circular orbits. We use the values published by Rowe et al. (2014) for all parameters of Kepler-226, Kepler-254, and Kepler-363, except for planetary radii, where we use the updated stellar and therefore planetary radii from Berger et al. (2018). For Kepler-1542, we use parameters from Morton et al. (2016), and for K2-32 we use the values from Heller et al. (2019). We assume stellar masses of \(0.831~{}M_{\odot}\)(Thompson et al., 2018), \(0.943~{}M_{\odot}\)(Berger et al., 2018), \(1.173~{}M_{\odot}\)(Thompson et al., 2018), \(0.933~{}M_{\odot}\)(Thompson et al., 2018), and \(0.856~{}M_{\odot}\)(Heller et al., 2019) for the stars as ordered in the table. All parameters were drawn from independent, normal distributions, centered on the nominal values with widths equal to the value’s uncertainty; for parameters with unequal upper and lower uncertainties, we take the larger uncertainty as the width.
\({}^{*}\) planetary masses were drawn from the mass-radius relation Weiss and Marcy (2014).
\({}^{**}\) At the time of this work, no estimate existed for this value, so we fix the parameter and do not draw it from a normal distribution.
\end{table}
Table 1: Planetary Properties for Determining Resonance
their star and each other when compared to current observations), short-scale migration (**SM**; planets formed near current observations, just outside of resonance, where small shifts in the planets' semi-major axes will lead to resonance), and eccentricity damping (**ECC**; planets formed near current observations, just outside of resonance, where damping of the planets' eccentricities will lead to resonance).
To study the formation of the resonances in Kepler-254 and Kepler-363, we follow the methods of MacDonald and Dawson (2018), which we briefly describe here. For each formation pathway, we run a suite of 500 \(N\)-body simulations with the same initial conditions shown in Table 1 except with inflated orbital periods. We use the modify_orbits_forces routine in the REBOUNDx library (Tamayo et al., 2020) and the WHFast integrator (Rein and Tamayo, 2015). For the **LM** simulations, we initialize the inner planet at 1 au from its host star and start the other planets just wide of the observed resonances2. For the **SM** and **ECC** simulations, we initialize the planets a small percentage wide of their observed orbits, where we draw this percentage for each planet and each simulation from a normal distribution of \(N[5,3]\%\). All simulations start with the planets out of resonance. We then form the resonant chains by damping the semi-major axes and/or eccentricities of the planets, following the prescription in Papaloizou and Larwood (2000). For the **LM** and **SM** simulations, we damp only the outer planet's eccentricity and semi-major axis3,
\begin{table}
\begin{tabular}{l c c c} \hline \hline \multicolumn{1}{c}{ Angle} & \% librating & Center [\({}^{\circ}\)] & Amplitude [\({}^{\circ}\)] \\ \hline K2-32 & stable = 984 & resonant = 664 & \\ \(\Theta_{e,b}=2\lambda_{b}-\lambda_{e}-\omega_{e}\) & 67.58\% & -0.005 \({}^{+0.349}_{-0.315}\) & 48.4 \({}^{+23.8}_{-20.2}\) \\ \(\Theta_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{b}\) & 14.43\% & 0.036 \({}^{+0.501}_{-0.513}\) & 58.3 \({}^{+14.0}_{-31.8}\) \\ \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) & 69.92\% & 0.015 \({}^{+2.073}_{-2.065}\) & 64.5 \({}^{+9.7}_{-18.9}\) \\ \(\Theta^{\prime}_{e,b}=2\lambda_{b}-\lambda_{e}-\omega_{b}\) & 0.90\% & -5.38\({}^{+12.70}_{-14.09}\) & 134.64\({}^{+6.23}_{-4.81}\) \\ \(\Theta^{\prime}_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{c}\) & 0.00\% & -1.14 \({}^{+0.81}_{-1.09}\) & \\ \(\Theta^{\prime}_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{d}\) & 6.80\% & -0.06\({}^{+15.61}_{-1.71}\) & \\ \hline Kepler-226 & stable = 998 & resonant = 457 & \\ \(\Theta_{b,c}=4\lambda_{c}-3\lambda_{b}-\omega_{c}\) & 42.00\% & -0.052 \({}^{+0.571}_{-0.504}\) & 119.1 \({}^{+22.2}_{-20.9}\) \\ \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) & 45.80\% & -0.05 \({}^{+0.91}_{-0.88}\) & 135.03 \({}^{+11.18}_{-30.3}\) \\ \(\Theta^{\prime}_{b,c}=4\lambda_{c}-3\lambda_{b}-\omega_{b}\) & 41.60\% & 179.9 \({}^{+0.577}_{-0.428}\) & 137.25 \({}^{+9.21}_{-11.85}\) \\ \(\Theta^{\prime}_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{d}\) & 99.8\% & 179.9 \({}^{+13.88}_{-1.11}\) & 90.5 \({}^{+22.19}_{-15.62}\) \\ \hline Kepler-254 & stable = 996 & resonant = 422 & \\ \(\Theta_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{c}\) & 42.40\% & 0.021 \({}^{+0.35}_{-0.29}\) & 118.6 \({}^{+21.48}_{-49.9}\) \\ \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) & 99.20\% & -0.15 \({}^{+2.29}_{-2.22}\) & 65.1\({}^{+4.6}_{-5.0}\) \\ \(\Theta^{\prime}_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{b}\) & 0.00\% & -1.7 \({}^{+1.7}_{-1.81}\) & 87.1 \({}^{+12.34}_{-14.21}\) \\ \hline Kepler-363 & stable = 998 & resonant = 924 & \\ \(\Theta_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{c}\) & 99.2\% & 0.0029 \({}^{+0.224}_{-0.243}\) & 35.1\({}^{+30.0}_{-17.78}\) \\ \(\Theta_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{c}\) & 92.95\% & -0.02 \({}^{+0.54}_{-0.44}\) & 55.1 \({}^{+13.9}_{-13.7}\) \\ \(\Theta^{\prime}_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{b}\) & 0.0\% & - & - \\ \(\Theta^{\prime}_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{d}\) & 98.8\% & 179.97 \({}^{+0.41}_{-0.37}\) & 96.98\({}^{+34.82}_{-35.54}\) \\ \hline Kepler-1542 & stable = 897 & resonant = 0 & \\ \(\Theta_{c,b}=4\lambda_{b}-3\lambda_{c}-\omega_{b}\) & 9.81\% & 0.16 \({}^{+0.66}_{-0.60}\) & 64.5 \({}^{+8.7}_{-24.9}\) \\ \(\Theta_{b,e}=5\lambda_{e}-4\lambda_{b}-\omega_{b}\) & 5.13\% & -0.17 \({}^{+0.88}_{-0.88}\) & 74.1 \({}^{+2.7}_{-2.4}\) \\ \(\Theta_{e,d}=7\lambda_{d}-6\lambda_{e}-\omega_{e}\) & 81.94\% & -0.05 \({}^{+0.88}_{-0.81}\) & 61.2 \({}^{+10.8}_{-16.5}\) \\ \(\Theta^{\prime}_{c,b}=4\lambda_{b}-3\lambda_{c}-\omega_{c}\) & 0.50\% & -5.19\({}^{+3.04}_{-0.85}\) & 132.50\({}^{+3.34}_{-1.50}\) \\ \(\Theta^{\prime}_{b,e}=5\lambda_{e}-4\lambda_{b}-\omega_{e}\) & 2.80\% & -2.47\({}^{+12.97}_{-8.74}\) & 127.21 \({}^{+2.25}_{-2.12}\) \\ \(\Theta^{\prime}_{e,d}=7\lambda_{d}-6\lambda_{e}-\omega_{d}\) & 29.2\% & 1.08\({}^{+11.11}_{-13.93}\) & 131.72\({}^{+7.9}_{-4.75}\) \\ \hline \end{tabular} Note. 
– For each system, the number of simulations out of 1000 that survived 10 Myr, the number of simulations where all planets participate in the chain, then, for each angle, the percentage of simulations where the angle librates and the center and amplitude of the libration. For each system, all three-body angles were circulating.
\end{table}
Table 2: Resonance Results
and for the **ECC** simulations, we damp the eccentricity of all planets. We draw the timescales for the semi-major axis damping (\(\tau_{a}\)) and eccentricity damping (\(\tau_{e}\)) from independent, log-uniform distributions of log \(\tau_{a}\) = U[7, 9] yr, log \(\tau_{e}\) = U[4, 6] yr; log \(\tau_{a}\) = U[6, 9] yr, log \(\tau_{e}\) = U[4, 7] yr; and log \(\tau_{e}\) = U[5, 7] yr for the **LM**, **SM**, and **ECC** suites, respectively. We explore a wide range of damping timescales, representing a wide range of disk conditions, to avoid fine-tuning our simulations.
Figure 1: Example evolution of the orbital periods, eccentricities, inclinations, all four two-body resonant angles, and the three-body resonant angle of the three planets of Kepler-254. We find that the two-body angle \(\Theta_{c,d}\) librates in nearly all of our simulations, the two-body angle \(\Theta_{b,c}\) only librates in approximately 40%, and the corresponding three-body angle circulates in each one. The initial values for this simulation were drawn from independent, normal distributions, as described in Section 2 and summarized in Table 1. We integrate this simulation beyond 10 Myr for visualization purposes.
We integrate each system forward with a timestep of 5% the innermost planet's observed orbital period. After 5\(\times\)10\({}^{6}\) years, we "turn off" the damping effects and integrate for another 0.25 Myr to ensure stability after the gas disk would dissipate. We then study each resulting simulation for librating two- and three-body resonant angles.
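The damping phase itself maps directly onto the REBOUNDx force named above; the following is a minimal sketch for one SM-like draw (`build_simulation` and `planets_wide` stand in for the setup described earlier, and the timescale ranges follow the **SM** suite):

```python
import numpy as np
import rebound
import reboundx

sim = build_simulation(0.943, planets_wide)    # orbits initialized wide of resonance
rebx = reboundx.Extras(sim)
mig = rebx.load_force("modify_orbits_forces")  # exponential a- and e-damping
rebx.add_force(mig)

outer = sim.particles[-1]
outer.params["tau_a"] = -10 ** np.random.uniform(6, 9)  # yr; negative -> inward migration
outer.params["tau_e"] = -10 ** np.random.uniform(4, 7)  # yr; negative -> e-damping

sim.integrate(5e6)                             # damped ("disk") phase
outer.params["tau_a"] = np.inf                 # "turn off" the damping effects
outer.params["tau_e"] = np.inf
sim.integrate(sim.t + 2.5e5)                   # 0.25 Myr undamped stability check
```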
We find we are able to produce a full three-body resonant chain in systems like Kepler-254 and Kepler-363 through all three formation pathways. However, each formation pathway yields unique results, which we discuss in turn below. We summarize the centers and amplitudes of librating angles resulting from each formation pathway in Table 3, and we compare examples from each of these formation pathways in Figures 3 and 4.
Figure 2: Example evolution of the orbital periods, eccentricities, inclinations, all four two-body resonant angles, and the three-body resonant angle of the three planets of Kepler-363. We find that the two-body angles \(\Theta_{b,c}\), \(\Theta_{c,d}\), and \(\Theta^{\prime}_{c,d}\), librate in nearly all of our simulations, but the corresponding three-body angle circulates in each one. The initial values for this simulation were drawn from independent, normal distributions, as described in Section 2 and summarized in Table 1. We integrate this simulation beyond 10 Myr for visualization purposes.
_Short-scale migration:_ For both systems, short-scale migration (**SM** suite) results in the three-body angle \(\phi=\Theta_{c,d}-\Theta_{b,c}\) librating in some of the simulations (34% and 25% for Kepler-254 and Kepler-363, respectively), librating about \(180^{\circ}\), \(\sim\)\(285^{\circ}\), or a third center near \(90^{\circ}\), with moderate amplitudes (\(\sim 10-20^{\circ}\)).
_Long-scale migration:_ Since very few of the **LM** simulations for Kepler-254 remained stable for the full integration time, and with only one simulation in resonance, we are unable to perform any meaningful statistical analysis on this suite. The long-scale migration for Kepler-363 resulted in very few simulations where \(\phi\) librates and only 27% of the stable simulations with a three-body resonant chain.
_Eccentricity-damping:_ We find that eccentricity-damping results in the libration of the two-body angles \(\Theta_{b,c}\), \(\Theta_{c,d}\), and \(\Theta^{\prime}_{c,d}\) for both Kepler-254 and Kepler-363 in about half of the simulations, but very rarely results in the libration of \(\Theta^{\prime}_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{b}\) or of the three-body angle \(\phi\). For Kepler-254, \(\Theta_{b,c}\) and \(\Theta_{c,d}\) each librate about \(0^{\circ}\) with small amplitudes of \(5.96^{+5.23}_{-0.62}\) and \(5.76^{+9.26}_{-1.21}\), respectively, similar to the centers we recover in Section 3.1 but with significantly smaller amplitudes. For Kepler-363, \(\Theta_{b,c}\) and \(\Theta_{c,d}\) each librate about \(0^{\circ}\) with amplitudes of \(4.22^{+2.45}_{-0.47}\) and \(28.32^{+4.05}_{-2.13}\), respectively, similar to the centers we recover in Section 3.2 but, again, with significantly smaller amplitudes.
In Section 3.1, we confirmed the two-body resonance between Kepler-254c and Kepler-254d, but we were unable to confirm a resonance between the inner planet pair in the system. Since each of the formation pathways resulted in the libration of this angle and therefore each pathway is possible given our current data, we cannot select one pathway over another as more probable. We find it likely that the resonant chain of Kepler-363 formed through eccentricity damping, which we discuss in more detail below in Section 4.3.
### Unique Dynamical Configuration
The three planets of Kepler-363 are locked in a three-body resonant chain, where both two-body angles librate and the three-body angle \(\phi=\Theta_{c,d}-\Theta_{b,c}=3\lambda_{d}-4\lambda_{c}+\lambda_{b}\) circulates; the three-body angle even circulates in most of our chain-formation simulations (see Table 3). Typically, the three-body angle will librate if the associated two-body angles librate4, and so we must ask: how could this resonant chain form _without_ the libration of this three-body angle? We also find that the angle \(\Theta^{\prime}_{b,c}=2\lambda_{c}-\lambda_{b}-\omega_{b}\) always circulates in our simulations and the angle \(\Theta^{\prime}_{c,d}=3\lambda_{d}-2\lambda_{c}-\omega_{d}\) always librates in our simulations. We can use all five resonant angles (\(\Theta_{b,c}\), \(\Theta_{c,d}\), \(\Theta^{\prime}_{b,c}\), \(\Theta^{\prime}_{c,d}\), and \(\phi\)) to study the possible formation history of Kepler-363; a likely formation pathway would result in systems with dynamics similar to those we observe: \(\Theta_{b,c}\), \(\Theta_{c,d}\), and \(\Theta^{\prime}_{c,d}\) are librating but \(\Theta^{\prime}_{b,c}\) and \(\phi\) are circulating.
Footnote 4: Although the opposite is not true in the case of pure three-body resonance
_Short-scale migration:_ The angle \(\Theta^{\prime}_{b,c}\) librates in 38% of our **SM** simulations, and \(\Theta^{\prime}_{c,d}\) librates in 82.9% of our **SM** simulations. In addition, the three-body angle \(\phi\) librates in 33% of our simulations. If \(\Theta^{\prime}_{b,c}\) and \(\phi\) are indeed circulating, we find it unlikely that the resonant chain formed through short-scale migration.
_Long-scale migration:_ As discussed above, it is challenging to form this chain through long-scale migration, as the system becomes unstable without large eccentricity damping. However, we still find numerous sets of initial parameters that result in \(\phi\) librating. It is therefore possible that this resonant chain formed through long-scale migration, but such a history requires more fine-tuning of parameters.
_Eccentricity damping:_ From our 500 simulations, only seven (1.4%) result in the libration of \(\Theta^{\prime}_{b,c}\), and only three (0.6%) result in the libration of \(\phi\). Of the seven simulations resulting in the libration of \(\Theta^{\prime}_{b,c}\), one simulation has only this angle librating and all other angles circulating, one simulation does not result in \(\Theta_{c,d}\) librating, one simulation results in all angles librating, and the remaining four simulations result in all two-body angles librating. The angle \(\phi\) librates in one simulation where all angles librate and in two simulations where all other angles circulate. We therefore find that it is challenging for \(\Theta^{\prime}_{b,c}\) and \(\phi\) to librate if this chain was formed without any change in the planets' semi-major axes.
Since we are only able to simulate the formation of resonant chains in systems _similar_ to Kepler-363, we caution against claims of one formation mechanism; however, we find that the angles \(\Theta^{\prime}_{b,c}\) and \(\phi\) do not librate in chains formed with eccentricity-damping when the angles \(\Theta_{b,c}\), \(\Theta_{c,d}\), and \(\Theta^{\prime}_{c,d}\)_do_ librate, resulting in the dynamics we observe. Resonant chains formed through short-scale and long-scale migration both result in the libration of \(\Theta^{\prime}_{b,c}\) and \(\phi\) in the majority of simulations where the other angles librate.
## 5 Conclusion
Planets in mean motion resonance with one another periodically exchange energy and angular momentum, enabling us to constrain the formation history of individual systems and identify indicators of formation history in other systems. Because the confirmation of resonance requires an in-depth study of a system's dynamics, most resonances have not been confirmed. Here, we perform such a dynamical study of five multi-planet
Figure 3: Example evolution of systems like Kepler-363, forming the resonant chain through three formation pathways: eccentricity damping only, short-scale migration, and long-scale migration. The period ratio marked as black dots is the ratio between planets b and c, the period ratio marked as green dots is the ratio between planets c and d, and the vertical red line indicates when we “turn-off” the damping effects. Although each pathway is able to lock the planets into both two-body resonances, both short-scale migration and long-scale migration result in the libration of the three-body angle \(3\lambda_{d}-4\lambda_{c}+\lambda_{b}\), which we find to be circulating.
systems whose period ratios suggest they could be in resonance.
For each system, we run a suite of \(N\)-body simulations, exploring the full range of possible planetary and orbital parameters as constrained by available data. We confirm
Figure 4: Example evolution of systems like Kepler-254, forming the resonant chain through three formation pathways: eccentricity damping only, short-scale migration, and long-scale migration. The period ratio marked as black dots is the ratio between planets b and c, the period ratio marked as green dots is the ratio between planets c and d, and the vertical red line indicates when we “turn-off” the damping effects. Both eccentricity damping and short-scale migration lock the planets into both two-body resonances, while only short-scale migration results in the libration of the three-body angle \(3\lambda_{d}-4\lambda_{c}+\lambda_{b}\), which we find to be circulating. Long-scale migration did not lead to enough simulations remaining stable to yield statistically significant results.
that two planets are in resonance if their critical resonant angle librates in at least 90% of our simulations. Kepler-1542 and K2-32 each contain at least one planet pair that is likely in resonance, but the uncertainties on the planet masses and orbits prohibit us from confirming these resonances. We confirm the 3:2 resonance between Kepler-226c and Kepler-226d, confirm the 3:2 resonance between Kepler-254c and Kepler-254d, and confirm the 1:2:3 resonant chain between the three planets of Kepler-363. For each of these systems, we find that the three-body critical angle \(\phi=\Theta_{c,d}-\Theta_{b,c}=3\lambda_{d}-4\lambda_{c}+\lambda_{b}\) circulates in all of our simulations, even when both \(\Theta_{c,d}\) and \(\Theta_{b,c}\) librate. All five of these systems could benefit from additional data and certainly additional analysis, as their proximity to resonance likely results in measurable TTVs.
We explore the dynamical history of Kepler-254 and Kepler-363, integrating the systems through three potential resonant chain formation pathways: long-scale migration, short-scale migration, and only eccentricity damping. Under our simple migration model, both migration pathways lead to the libration of the three-body angle, suggesting that the resonances in these two systems are more likely to have formed in the absence of migration.
Our methods to confirm or constrain resonances within systems in the absence of high-precision data can be applied to other systems with near-resonant planets and would provide a list of potential new resonances that require further analysis. With the confirmation of new resonances and particularly new resonant chains, we are able to fully leverage the benefits of resonances and constrain the formation history of exoplanetary systems.
We thank the anonymous referee for the constructive review that improved this work. The authors acknowledge use of the ELSA high performance computing cluster at The College of New Jersey for conducting the research reported in this paper. This cluster is funded in part by the National Science Foundation under grant numbers OAC-1826915 and OAC-1828163.
\begin{table}
\begin{tabular}{l l l l l l l l} \hline \hline \multicolumn{1}{c}{ Angle} & \multicolumn{1}{c}{\% librating} & \multicolumn{1}{c}{Center [\({}^{\circ}\)]} & \multicolumn{1}{c}{Amplitude [\({}^{\circ}\)]} & \multicolumn{1}{c}{Angle} & \multicolumn{1}{c}{\% librating} & \multicolumn{1}{c}{Center [\({}^{\circ}\)]} & \multicolumn{1}{c}{Amplitude [\({}^{\circ}\)]} \\ \hline Kepler-254 & **SM** & stable = 481/500 & res = 368/500 & Kepler-363 & **SM** & stable = 474/500 & res = 356/500 \\ \(\phi_{1}\) & 8.11 & \(180.32^{+2.43}_{-2.20}\) & \(15.33^{+18.7}_{-7.98}\) & \(\phi_{1}\) & 11.6 & \(90.99^{+19.23}_{-17.78}\) & \(13.64^{+17.28}_{-9.18}\) \\ & 17.0 & \(81.75^{+28.74}_{-11.12}\) & \(14.07^{+19.00}_{-8.12}\) & & 6.3 & \(180.37^{+10.54}_{-2.58}\) & \(20.86^{+30.08}_{-10.24}\) \\ & 9.1 & \(287.56^{+7.94}_{-6.50}\) & \(22.48^{+19.64}_{-12.91}\) & & 7.4 & \(285.27^{+5.85}_{-12.72}\) & \(20.93^{+11.22}_{-12.03}\) \\ \(\Theta_{b,c}\) & 49.9 & \(0.11^{+3.38}_{-0.85}\) & \(7.95^{+37.17}_{-4.04}\) & \(\Theta_{b,c}\) & 8.6 & \(-48.88^{+14.25}_{-8.56}\) & \(14.05^{+9.50}_{-7.08}\) \\ & 10.8 & \(-48.62^{+11.01}_{-8.02}\) & \(10.64^{+8.53}_{-5.50}\) & & 68.6 & \(0.17^{+16.90}_{-1.06}\) & \(12.09^{+28.53}_{-7.99}\) \\ & 16.2 & \(45.53^{+9.70}_{-10.71}\) & \(11.48^{+8.50}_{-6.27}\) & \(\Theta_{c,d}\) & 82.1 & \(0.02^{+2.63}_{-1.71}\) & \(15.72^{+14.21}_{-15.57}\) \\ \(\Theta_{c,d}\) & 84.0 & \(0.04^{+3.40}_{-1.09}\) & \(7.44^{+20.09}_{-5.34}\) & \(\Theta^{\prime}_{b,c}\) & 13.1 & \(279.32^{+16.34}_{-26.93}\) & \(8.53^{+9.41}_{-4.82}\) \\ \(\Theta^{\prime}_{b,c}\) & 21.2 & \(287.45^{+6.09}_{-35.36}\) & \(7.24^{+8.55}_{-3.98}\) & & 8.4 & \(65.87^{+10.33}_{-6.30}\) & \(11.62^{+6.35}_{-5.09}\) \\ & 11.6 & \(66.59^{+8.83}_{-4.17}\) & \(10.12^{+8.16}_{-5.96}\) & & 7.2 & \(179.04^{+2.10}_{-21.20}\) & \(31.02^{+43.83}_{-21.05}\) \\ & 6.7 & \(179.70^{+2.10}_{-2.93}\) & \(15.55^{+36.56}_{-8.64}\) & \(\Theta^{\prime}_{c,d}\) & 82.9 & \(179.97^{+1.42}_{-1.58}\) & \(11.30^{+18.78}_{-8.21}\) \\ \(\Theta^{\prime}_{c,d}\) & 81.3 & \(179.99^{+14.76}_{-1.02}\) & \(7.78^{+18.20}_{-6.24}\) & & & & \\ \hline Kepler-254 & **ECC** & stable = 500/500 & res = 238/500 & Kepler-363 & **ECC** & stable = 500/500 & res = 229/500 \\ \(\phi_{1}\) & 0.0 & & & \(\phi_{1}\) & 0.6 & & \\ \(\Theta_{b,c}\) & 47.6 & \(-0.017^{+0.587}_{-0.541}\) & \(5.96^{+5.23}_{-0.62}\) & \(\Theta_{b,c}\) & 46.8 & \(0.019^{+0.392}_{-0.412}\) & \(4.22^{+2.45}_{-0.47}\) \\ \(\Theta_{c,d}\) & 62.0 & \(-0.013^{+0.478}_{-0.522}\) & \(5.76^{+9.26}_{-1.21}\) & \(\Theta_{c,d}\) & 47.0 & \(-0.031^{+2.242}_{-1.735}\) & \(28.32^{+4.05}_{-2.13}\) \\ \(\Theta^{\prime}_{b,c}\) & 0.0 & & & \(\Theta^{\prime}_{b,c}\) & 1.4 & & \\ \(\Theta^{\prime}_{c,d}\) & 62.2 & \(180.00^{+0.31}_{-0.32}\) & \(3.69^{+0.61}_{-0.83}\) & \(\Theta^{\prime}_{c,d}\) & 47.2 & \(179.99^{+0.84}_{-0.79}\) & \(15.57^{+14.46}_{-0.89}\) \\ \hline Kepler-254 & **LM** & stable = 13/500 & res = 1/500 & Kepler-363 & **LM** & stable = 73/500 & res = 21/500 \\ \(\phi_{1}\) & 0.2 & & & \(\phi_{1}\) & 1.2 & & \\ \(\Theta_{b,c}\) & 0.2 & & & \(\Theta_{b,c}\) & 6.4 & \(0.28^{+6.81}_{-2.38}\) & \(9.16^{+50.69}_{-5.05}\) \\ \(\Theta_{c,d}\) & 0.2 & & & \(\Theta_{c,d}\) & 6.8 & \(0.19^{+5.94}_{-1.98}\) & \(40.42^{+38.66}_{-38.84}\) \\ \(\Theta^{\prime}_{b,c}\) & 0.2 & & & \(\Theta^{\prime}_{b,c}\) & 1.2 & & \\ \(\Theta^{\prime}_{c,d}\) & 0.2 & & & \(\Theta^{\prime}_{c,d}\) & 8.8 & \(179.99^{+2.97}_{-3.84}\) & \(26.93^{+20.19}_{-23.70}\) \\ \hline \end{tabular} Note. 
– For each system, the number of simulations that survived the full integration, the number of simulations where all planets participate in a 1:2:3 chain, then, for each angle, the percentage of simulations where the angle librates and the center and amplitude of the libration. We do not include center or amplitude data for angles librating in fewer than 5% of simulations.
|
2310.20633 | Defining a New NLP Playground | The recent explosion of performance of large language models (LLMs) has
changed the field of Natural Language Processing (NLP) more abruptly and
seismically than any other shift in the field's 80-year history. This has
resulted in concerns that the field will become homogenized and
resource-intensive. The new status quo has put many academic researchers,
especially PhD students, at a disadvantage. This paper aims to define a new NLP
playground by proposing 20+ PhD-dissertation-worthy research directions,
covering theoretical analysis, new and challenging problems, learning
paradigms, and interdisciplinary applications. | Sha Li, Chi Han, Pengfei Yu, Carl Edwards, Manling Li, Xingyao Wang, Yi R. Fung, Charles Yu, Joel R. Tetreault, Eduard H. Hovy, Heng Ji | 2023-10-31T17:02:33Z | http://arxiv.org/abs/2310.20633v1 | # Defining a New NLP Playground
###### Abstract
The recent explosion of performance of large language models (LLMs) has changed the field of Natural Language Processing (NLP) more abruptly and seismically than any other shift in the field's 80-year history. This has resulted in concerns that the field will become homogenized and resource-intensive. The new status quo has put many academic researchers, especially PhD students, at a disadvantage. This paper aims to define a new NLP playground by proposing 20+ PhD-dissertation-worthy research directions, covering theoretical analysis, new and challenging problems, learning paradigms, and interdisciplinary applications.
## 1 Introduction
It is the best of times. It is the worst of times. We are living in an incredibly exciting yet strange era of Natural Language Processing (NLP) research due to the recent advancements of large language models (LLMs) on various data modalities, from natural language Brown et al. (2020) and programming language Chen et al. (2021); Wang et al. (2023) to vision Radford et al. (2021); Li et al. (2022); Wang et al. (2022) and molecules Edwards et al. (2022); Zeng et al. (2022); Su et al. (2022).
At the core, LLMs produce text sequences word-by-word by computing conditional probability based on context. At a sufficiently large scale, they can answer questions, generate arguments, write poetry, impersonate characters, negotiate contracts and achieve competitive results across a wide variety of standard NLP tasks including entity typing, sentiment analysis, and textual entailment, showcasing "emergent behavior" such as in-context learning Wei et al. (2022).
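Concretely, the engine is nothing more than an autoregressive factorization of sequence probability (standard notation, not specific to any one model):

\[P(w_{1},\dots,w_{T}\mid c)=\prod_{t=1}^{T}P(w_{t}\mid c,w_{1},\dots,w_{t-1})\]

where \(c\) is the conditioning context (the prompt); every capability discussed below emerges from models trained on this next-token objective.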
However, this "moment of breakthrough" received a polarized response in the NLP research community: while some welcomed the progress, others felt lost. Why is NLP so vulnerable to a single advancement?
In retrospect, when NLP adopted the machine learning paradigm in the early 1990s it started along a journey that led to increased homogeneity. The dominant methodology became: (1) Identify a challenge problem or task; (2) Create a dataset of desired input-output instances; (3) Select or define one or more evaluation metrics; and (4) Develop, apply, and refine machine learning models and algorithms to improve performance.
If a challenge did not support the creation of a dataset (e.g., text styles of people in different professions) or metric (e.g., summaries of novels or movies), or worse yet if it was not amenable to a machine learning solution, then mainstream NLP simply did not address it. For a long time, NLG was in this position because its starting point --semantic representations-- was neither standardized, nor easy to produce at scale, nor amenable to direct evaluation. No dataset, no metric -- little attention. Yet multi-sentence NLG starting with deep semantic input, and with output tailored to different audiences, is arguably the most complex task in NLP, since it involves so many aspects of linguistic communication together. As such, it surely deserved the concentrated effort that NLP has bestowed on MT, Speech Recognition, QA, and other major challenges in the past.
Suddenly, within the space of a few months, the landscape changed. NLP encountered an engine that seemingly could do everything the field had worked on for decades. Many subtasks in NLP seemed to become irrelevant overnight: Which grammar formalism to parse into? Which historical structure and focus control model for multi-sentence coherence? Which neural architecture is optimal for information extraction or summarization? None of that matters if the magical engine can do the entire end-to-end language-to-language task seamlessly Sanh et al. (2022); OpenAI (2023).
Dozens of Ph.D. theses lost their point, because their point was a small step in the process that no longer seemed needed. The dominant paradigm is also challenged: instead of setting up benchmarks and then developing models accordingly, people started discovering new abilities of such models (Bubeck et al., 2023) (who knew that LLMs could draw unicorns using TikZ?).
An important constraint is the practicality of the goal. This newer generation of LLMs is beyond the practical reach of all but a small number of NLP researchers. Unless one of the organizations building LLMs provides free access for research --an unlikely occurrence given the estimated six-figure monthly expense to run one-- or a procedure is developed to construct university-sized ones cheaply, the academic NLP community will have to be quite creative in identifying things that either generative LLMs cannot do _in principle_ or applications that can be built without re-training them and at the same time are important and doable _in practice_.
Inspired by the efforts of a group of PhD students (Ignat et al., 2023), we believe it would be a valuable exercise to define new research roadmaps. We believe that while LLMs seemingly close research avenues, they also open up new ones. Current LLMs remain somewhat monolithic, expensive, amnesic, delusional, uncreative, static, assertive, stubborn, and biased black boxes. They still have a surprising deficiency (near-random performance) in acquiring certain types of knowledge (Wang et al., 2023f), knowledge reasoning and prediction. In this paper, we aim to define a new NLP playground by proposing a wide range of PhD-dissertation-worthy research directions to democratize NLP research again. In particular, we cover observations and suggestions along the perspectives of LLM theory (Section 2), challenging new tasks (Section 3), important but understudied learning paradigms (Section 4), proper evaluation (Section 5), and interdisciplinary applications (Section 6).
## 2 Theoretical Analysis of LLMs
There is a growing necessity to open the black box of machine learning models through theoretical analysis. In this section, we advocate for both **mathematical** theories (derived through mathematical analysis) and **experimental** theories (rules and laws induced from extensive experimental observations, such as Ghorbani et al. (2021); Hoffmann et al. (2022)) of LLMs.
### Mechanism Behind Emergent Abilities
LLMs have displayed impressive emergent capabilities such as instruction following, chain-of-thought reasoning, and in-context learning (Brown et al., 2020; Wei et al., 2022; Min et al., 2022; Logan IV et al., 2022; Wei et al., 2021). For example, the ability of **instruction following** enables models to follow novel instructions. For guidance on prompting beyond heuristics, we need a comprehensive understanding of how instructions work. Some initial theories suggest an explanation through Bayesian inference (Jiang, 2023), which relies on strong assumptions without practical insights. Here we advocate for theories on the feasibility of constraining or measuring models' deviation from instructions. A multi-player setting is also important, where one user's prompt is composed with another player's prompt (such as OpenAI's hidden meta instruction) before being fed into the LLM; additional security issues might arise for the first user in such settings.
**Chain-of-thought (CoT)** reasoning is where LLMs tackle complex tasks by generating solutions in a sequential, step-by-step manner. CoT theoretically enhances the computational capacity of Transformer-based models to solve problems exceeding \(\mathcal{O}(n^{2})\) complexity. While some constructive explanations have been suggested (Feng et al., 2023a), they are not fully validated as the underlying mechanism. Importantly, it is worth investigating the verifiability problem of the reasoning chain (whether CoT can be trusted as a valid logic chain) and its calibration (whether LLMs formulate ad-hoc CoTs for arbitrary conclusions).
**In-context learning (ICL)**, where LLMs learn from demonstration examples in-context without parameter updates, has seen explanations based on gradient-descent (Akyurek et al., 2022; von Oswald et al., 2022), kernel regression (Han et al., 2023a) or Bayesian inference (Xie et al., 2023; Jiang, 2023). Important challenges remain and necessitate more comprehensive explanations, such as sensitivity to example order and robustness to perturbed input-output mapping. We hypothesize that a deeper understanding of how LLMs balance algorithmic solutions with implicit language inference can help clarify these questions, which might be approachable by exploring how LLMs disentangle semantic and functional information.
**Model-specific vs. Model-agnostic** is a persistent gap among explanations, raising the question of whether the emergent abilities depend on the Transformer architecture or simply on fitting the pre-training data. With some recent work suggesting that other architectures achieve comparable performance in some domains Peng et al. (2023); Zhai et al. (2021), this open question is important for prioritizing among model design (including other architectures), prompt engineering, and simply carefully collecting larger datasets. To bridge this gap, we also advocate for theoretical frameworks beyond (mixtures of) HMMs to better model language data properties.
### Theoretical Robustness and Transparency
**Robustness** means ensuring that no backdoor designs or adversarial usages can be easily implemented in the model. Although not a novel problem by definition, this issue has new implications and formulations in the LLM era. In a situation where most users do not have access to the pre-training and model-editing details, we call for research into robustness diagnosis _for an arbitrary given LLM_. Despite negative evidence suggesting it may be nearly impossible to prevent adversarial prompting under certain conditions Wolf et al. (2023), we maintain a positive outlook and hope that this result can be overturned under more realistic conditions, such as high computational complexity in searching for adversarial prompts.
**Transparency** in LLMs is concerned with alignment between the model's self-explanations and its internal computational rationale. With empirical studies suggesting that LLMs may not always accurately express their "thoughts" Turpin et al. (2023), computational modeling of LLM intentions becomes essential. The quest for transparency is important for preventing LLMs from generating misleading rationales to humans. We advocate for establishing both positive and negative theorems on counteracting false rationales under different conditions, along with examining associations between "faithfulness" modes and neuron activities in specific architectures.
## 3 New and Challenging Tasks
### Knowledge Acquisition and Reasoning
**Knowledge inside LLMs** The black box property of LLMs poses a significant challenge when it comes to evaluating implicit knowledge within the model. Initial studies have been conducted to elicit/identify Cohen et al. (2023); Shin et al. (2020); Petroni et al. (2019, 2020); Fung et al. (2023); Gudibande et al. (2023); Li et al. (2023) and localize/edit knowledge Dai et al. (2021); Meng et al. (2022); Zhu et al. (2020); Mitchell et al. (2022); De Cao et al. (2021); Hase et al. (2023); Meng et al. (2022); Mitchell et al. (2022). However, our understanding of the knowledge organization within language models (_where_ and _how_ knowledge is stored) is still limited, and it remains uncertain whether full comprehension is achievable. Moreover, existing studies primarily focus on factual or commonsense knowledge, overlooking more complex knowledge such as rules of inference Boolos et al. (2002).
**Large-Scale Knowledge Reasoning** LLMs have demonstrated promising performance across various reasoning tasks Dua et al. (2019); Miao et al. (2020); Cobbe et al. (2021); Yu et al. (2020); Bhagavatula et al. (2020); Talmor et al. (2019) when appropriately prompted, such as through the use of Chain-of-Thought Wei et al. (2022); Chowdhery et al. (2022); Xue et al. (2023); Diao et al. (2023); Wang et al. (2023); Paul et al. (2023) or Program-of-Thought Chen et al. (2022). However, current reasoning benchmarks Cobbe et al. (2021); Ling et al. (2017); Patel et al. (2021); Hosseini et al. (2014); Miao et al. (2020); Koncel-Kedziorski et al. (2016); Talmor et al. (2019); Geva et al. (2021) focus on reasoning with small-scale context, typically consisting of hundreds of words. This level of reasoning falls short when tackling complex tasks, such as scientific research, which demands knowledge from extensive volumes of related literature and domain-specific knowledge bases. Retrieval-augmentation Guu et al. (2020); Khandelwal et al. (2020); Borgeaud et al. (2022); Izacard et al. (2022); Lai et al. (2023) serves as a powerful tool for integrating large-scale contextual knowledge into language models. However, current retrieval methods predominantly rely on semantic similarities, while humans possess the _accommodative_ learning ability (Illeris, 2018) to draw inspiration from semantically dissimilar knowledge and transfer it to the target task. To achieve this, we not only need to extend the input context length, but also need to understand how models organize knowledge and develop more effective knowledge representations and evaluation metrics (Section 5).
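As a point of contrast, the semantic-similarity retrieval that current pipelines rely on reduces to nearest-neighbor search over embeddings followed by prompt concatenation; a minimal sketch (the `embed` function and all names here are placeholders, not a specific system):

```python
import numpy as np

def retrieve(query_vec, doc_vecs, docs, k=5):
    # Cosine-similarity retrieval: precisely the "semantic similarity"
    # criterion that accommodative learning would need to go beyond.
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-9)
    return [docs[i] for i in np.argsort(-sims)[:k]]

def augmented_prompt(question, docs, doc_vecs, embed):
    context = "\n".join(retrieve(embed(question), doc_vecs, docs))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```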
**Faithfulness and Factuality** Ensuring the truthfulness of generation output requires optimal utilization of internal knowledge within the model and external knowledge, which includes the input context, knowledge bases, and open web resources. Access to external knowledge typically relies on the success of information retrieval Lewis et al. (2020); He et al. (2023); Yu et al. (2023, 2023), information extraction Wen et al. (2021); Huang et al. (2023), grounded generation Li et al. (2021, 2022); Gao et al. (2023); Weller et al. (2023); Lai et al. (2023) and knowledge-augmented generation Petroni et al. (2020); Geva et al. (2023). Internal knowledge involves the implicit parametric knowledge stored within the model, the correction and refinement of which is limited to the inference stage Lee et al. (2022); Meng et al. (2022, 2022); Chen et al. (2023). To effectively minimize hallucination and correct factual errors, it is crucial not only to decipher how knowledge is interpreted through model parameter patterns, but also to understand how the model pieces knowledge together and governs the underlying logic during generation. A significant challenge in knowledge-guided generation is defining an appropriate knowledge representation that supports both complex structures and distributed representations. We believe this representation should combine the strength of symbolic-based reasoning to minimize unwarranted inferences, and the flexibility of distributed representations to encode any semantic granularity. Drawing insights from misinformation detection and knowledge comparative reasoning systems could also be one useful dimension of signals for improving faithfulness and factuality Liu et al. (2021); Fung et al. (2021); Wu et al. (2022, 2023).
### Creative Generation
Although people have long envisioned using models for creative writing, this has only become a reality recently, when language generation models could reliably produce fluent text. Compared to previous sections where generated text is a vehicle for knowledge, creative use cases focus more on the style or form of language and encourage open-ended output. 1
Footnote 1: In this section we limit our scope to applications of text generation; however, we fully acknowledge the potential of multi-modal creative generation, such as generating personal avatars, movie clips, and 3D scenes.
**Creative Writing Assistance** Since language models offer conditional generation ability out-of-the-box, they have been adopted by many people in the creative industry for brainstorming or research tools Kato and Goto (2023); Gero et al. (2023); Halperin and Lukin (2023). One key challenge for such tools is promoting creative generation, instead of generating the most probable continuation, which is what language models were trained for. Current LMs have been observed by writers to over-rely on clichés or tropes and produce overly moralistic and predictable endings Chakrabarty et al. (2024). While the plot should be unexpected, details in the story should not go against commonsense (unless it is part of the setting) and should maintain consistency within the story. This requires a model that enables controllability over the level of creativity in its output. Do we need to train a more creative model, or can we fix the problem at the inference stage? On the other hand, the focus on detoxification of LMs through RLHF (reinforcement learning with human feedback) might have left models unable to navigate deeper and morally challenging themes.
Another direction for exploration is how to build better writing tools that work together with humans. Some attempts have been made to allow users to interact through instructions Chakrabarty et al. (2022) or use editing sequences to improve writing quality Schick et al. (2022). These could serve as critical building blocks toward the goal of developing a model that supports different types of input and can improve itself and personalize through interaction. In addition, models can also assist in different stages of writing, such as world-building and reviewing drafts. It remains to be explored where the model is most effective and where human writers should step in and make decisions.
**Interactive Experiences.** Text generation models can not only serve as assistants for writing static scripts but also open up an opportunity to create dynamic and personalized experiences for the user by conditioning on their input. These interactive experiences can be used for education, therapy, game design, or filmmaking. More recently, there have been attempts to connect conversational models with other components such as speech recognition, text-to-speech, and audio-to-face rendering to create an end-to-end immersive experience of interacting with non-playable characters. Another related open area for exploration is to create
emotion-oriented experiences, which is one of the key goals of storytelling (Lugmayr et al., 2017). We should consider creating narratives based on the desired emotional response and the reader's feedback (Brahman and Chaturvedi, 2020; Ziems et al., 2022; Mori et al., 2022).
## 4 New and Challenging Learning Paradigms
### Multimodal Learning
In light of the remarkable progress of the language world, we are now poised to venture into a multitude of modalities that were previously beyond consideration. Some learning signals stem from reading static data, such as images, videos, speech, and more, which will be discussed in this section; while other signals require interacting with the physical world, which will be detailed in Section 4.2.2.
Multimodal encoding, at its core, involves learning the "correspondence" or "alignment" among various modalities, which always faces the challenge of **Granularity Difference** across modalities. This is a new and growing area with several solutions proposed to align across modalities: (1) a hard alignment that enables granularity-aware fusion (Tan and Bansal, 2020; Li et al., 2022; Momeni et al., 2023; Wang et al., 2022, 2023f); (2) a soft alignment that projects the text space onto the vision space (Zhou et al., 2023; Li et al., 2023b; Zhu et al., 2023; Lin et al., 2023). Beyond these semantic alignment challenges, there are further difficulties when it comes to non-semantic abstractions:
**Geometric Reasoning:** Recognizing spatial relationships, such as "_left_", "_right_", "_beside_", "_above_", or "_behind_", requires comprehensive geometric mental simulation, on which existing models consistently make errors (Kamath et al., 2023). Maintaining transformation invariance, regardless of position, rotation, or scale, remains a core challenge. Moreover, current models, predominantly trained on 2D images, inherently miss out on the intricacies of 3D spatial configurations, inhibiting their understanding of depth and of relative object sizes based on distance. To address these challenges, existing efforts augment large models with an agent view to infer spatial layouts, predicting possible navigations from visual and textual cues (Liu et al., 2022; Berrios et al., 2023; Feng et al., 2023b). However, we believe the underlying challenge lies in the missing objective of geometric reasoning: existing pretraining paradigms predominantly focus on semantic alignment between image/video-language pairs, while geometric features (e.g., low-level edges and lines) are largely omitted from the encoded image representation.
**Context Ambiguity:** Accurate understanding should factor in the wide context of temporal dynamics, social dynamics, emotional dynamics, and more. The temporal dimension presents a unique challenge in understanding vision and speech. Existing methods only focus on temporal ordering (Zellers et al., 2021, 2022) and forward/backward generation (Seo et al., 2022; Yang et al., 2023; Cheng et al., 2023). However, temporal dynamics are much more complicated. For instance, a video gesture (like a nod) may correspond to a later affirmation in the speech (Li et al., 2019). Such ambiguity requires reasoning over a wider context with various constraints. Emotion, another yet-underexplored abstract dimension, is conveyed through tone, pitch, and speed in speech, and through expressions or body language in vision. Social norm understanding is likewise challenging, as the same word or facial expression can convey different emotions depending on the context. Thus, potential solutions need to take various contexts into account, including preceding conversations or events, along with causal reasoning.
**Hierarchical Perception:** Human cognition is inherently hierarchical. When processing visual signals, our attention is not uniformly distributed across every pixel but focuses on salient regions that carry the most information, allowing us to quickly identify key features and make sense of our surroundings (Hochstein and Ahissar, 2002; Eickenberg et al., 2017). However, existing models overlook such attention hierarchy and tend to lose focus when asked about visual details (Gao et al., 2023b). To address this challenge, interpreting natural scenes requires hierarchical recognition, from broader contexts down to detailed attribute abstraction. Aligning visual hierarchies with linguistic structures is also important. Further, models need the ability to abstract over details; balancing an abstracted scene understanding against intricate recognition is an ongoing challenge.
### Online Learning
Trained on static corpora, existing models are incapable of keeping themselves updated on new information or learning from interaction history for
self-improvement. To alleviate these issues, this section discusses the need for next-generation models to learn in an _online_ setting.
#### 4.2.1 Updating Information Within Models
A straightforward approach to updating models is to continue training on new data. This is, however, neither efficient, since the new information we actually care about accounts for only a small fraction of the data, nor effective, as fine-tuning on new data might interfere with information already learned by the model. To achieve efficient updates, we would like the model to automatically identify notable information in new data (Yu and Ji, 2023) instead of relying on heavy human selection or preprocessing as in knowledge editing tasks (Dai et al., 2021; Meng et al., 2022a,b; Zhu et al., 2020; De Cao et al., 2021; Hase et al., 2023; Mitchell et al., 2022b). Effectively updating the model requires overcoming the bias toward previously acquired knowledge (Yu and Ji, 2023; Wei et al., 2023) as well as avoiding catastrophic forgetting (McCloskey and Cohen, 1989; Ratcliff, 1990) of learned prior information. This might be achieved by changing the training paradigm to increase model capacity over time (e.g. progressive training (Gong et al., 2019), MoE (Shen et al., 2023)) or by a better understanding of knowledge organization within models (as detailed in Section 3.1) so that edits can be performed with minimal interference.
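One concrete illustration of the forgetting problem, and of a classic mitigation, is elastic weight consolidation (EWC): parameters important for old knowledge are anchored by a quadratic penalty while the model is fine-tuned on new data. Below is a minimal sketch, assuming a toy linear classifier and random tensors standing in for "old" and "new" information; the single-batch Fisher estimate and the penalty strength `lam` are illustrative choices, not tuned values.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(4, 2)
old_x, old_y = torch.randn(32, 4), torch.randint(0, 2, (32,))
new_x, new_y = torch.randn(32, 4), torch.randint(0, 2, (32,))

# 1. Estimate the diagonal Fisher information on the old task: parameters
#    with large squared gradients matter most for the old knowledge.
model.zero_grad()
F.cross_entropy(model(old_x), old_y).backward()
fisher = {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}
anchor = {n: p.detach().clone() for n, p in model.named_parameters()}

# 2. Fine-tune on new data with a penalty that discourages moving
#    important parameters away from their anchored old values.
opt = torch.optim.SGD(model.parameters(), lr=0.1)
lam = 10.0  # consolidation strength (an illustrative assumption)
for step in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(new_x), new_y)
    for n, p in model.named_parameters():
        loss = loss + (lam / 2) * (fisher[n] * (p - anchor[n]) ** 2).sum()
    loss.backward()
    opt.step()
```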
#### 4.2.2 Learning from Continuous Interactions
Interaction is essential in human learning (Jarvis, 2006). Humans learn how to best tackle different tasks by interacting with the **environment**, and they learn social norms from their interactions with other **humans**. Moreover, such interactions are **multi-turn** in nature, allowing humans to iteratively refine their actions for the task at hand _and_ continuously improve their mental model's capability of performing similar tasks in the future.
**Interaction with Environments.** We consider environments a broad category of systems that provide feedback upon actions. The world we live in can be regarded as a typical environment: the laws of physics determine how the world state changes and provide sensory stimuli to the actor (e.g., Ahn et al. (2022)). Training a model (i.e., Embodied AI) that can interact with the physical world through multi-modal input (Driess et al., 2023; Jiang et al., 2023) poses challenges related to multi-modal learning (Section 4.1) as well as unique challenges due to long-horizon planning requirements and dynamic environments. The concept of environments also extends to human-crafted environments (e.g., programming language interpreters (Wang et al., 2023b), embodied simulators (Shridhar et al., 2020)) that provide automated, rule-based feedback for any input. Such artificial environments allow easy collection of automatic feedback, which could prepare models for deployment in the physical world.
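The abstraction shared by all of these settings is the interaction loop: the actor emits an action, the environment returns an observation and feedback, and the resulting trajectory is what the model can learn from. A minimal sketch follows, with an invented toy corridor environment and a random placeholder policy standing in for a learned agent.

```python
import random

class CorridorEnv:
    """Toy environment: the agent starts at position 0 and is rewarded
    for reaching the goal position; every other step has a small cost."""
    def __init__(self, goal: int = 5):
        self.goal, self.pos = goal, 0

    def step(self, action: int):
        self.pos += 1 if action == 1 else -1  # action: 1 = right, 0 = left
        done = self.pos == self.goal
        reward = 1.0 if done else -0.01
        return self.pos, reward, done         # observation, feedback, termination

env, done, trajectory = CorridorEnv(), False, []
while not done and len(trajectory) < 1000:    # cap to keep the demo bounded
    action = random.choice([0, 1])            # placeholder for a learned policy
    obs, reward, done = env.step(action)
    trajectory.append((obs, action, reward))  # experience to learn from later
```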
**Interaction with Humans.** Beyond learning from generic human preferences towards building generalist agents (Ouyang et al., 2022), real-world applications typically require customizable solutions (e.g., personalized agents) that can be created efficiently. We advocate for a new learning paradigm where models can be taught through (multi-modal) interactions with humans, including natural language feedback (Padmakumar et al., 2022; Wang et al., 2023c) and physical demonstration (Lee, 2017). The complexity of such problems may also call for customized retrieval from a large toolset of specialized models and for effective action planning (Qin et al., 2023; Yuan et al., 2023).
## 5 Evaluation
As models become increasingly powerful and multi-purpose, their evaluation has become a growing bottleneck for advancing NLP. We first discuss the question of "what should be evaluated" followed by "how should we measure performance."
### Benchmarks
Language models are known to be multi-task learners, and the new generation of LLMs can achieve impressive performance under few-shot or even zero-shot conditions. This has led to the creation of many general benchmarks such as GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019), MMLU (Hendrycks et al., 2021), Super-NaturalInstructions (Wang et al., 2022a), HELM (Liang et al., 2022), and AGIEval (Zhong et al., 2023). While setting up comprehensive benchmarks is useful, current benchmarks still have the following limitations: (1) they lack diverse and difficult tasks that are important for real-world applications; (2) they only contain static data sets, which are not sufficient for applications that require multi-turn, context-dependent input such as situation-grounded dialog; (3) they have robustness deficiencies; and (4) they lack support for performance analysis.
Although some benchmarks extend to thousands of NLP tasks, most of them are variants of sentence-level tasks, ignoring more challenging tasks such as structured prediction and cross-document reasoning. For example, Li et al. (2023) reported that LLM-based methods obtained 25.2%-68.5% lower performance than state-of-the-art methods based on much smaller models for nearly all of the Information Extraction tasks. Task design should also aim to assist human users with their daily tasks, as exemplified by the most popular tasks among ChatGPT users at ShareGPT 4 being related to planning and seeking advice. Another issue is that benchmarks quickly saturate due to the development of newer models, and thus "live" benchmarks that can be updated over time Kiela et al. (2021) might be worth pursuing.
Footnote 4: [https://sharegpt.com/](https://sharegpt.com/)
To move beyond static data, we believe that simulated environments such as large-scale multi-player game environments can serve as an efficient solution. Games have been used as a way of benchmarking progress of reinforcement learning algorithms Silver et al. (2018); Guss et al. (2021) and also used to collect static datasets in NLP Urbanek et al. (2019); Bara et al. (2021); Lai et al. (2022). Game worlds provide a cheap way to explore different environments and situations, which is necessary for grounded language learning and learning through interaction. Humans can interact with models playing as characters in the game to evaluate their performance, or we can let models interact with each other Park et al. (2023) and evaluate their interaction behavior as a whole.
Finally, we advocate for work on model diagnosis beyond the current brittle paradigm of case studies through manual inspection: methods that help identify which parts of the input the model underperforms on Liu et al. (2021), what the model's behavior patterns are, and to what data this performance can be attributed Ilyas et al. (2022).
### Metrics
Automatic evaluation metrics have been an accelerant for NLP progress in the last 20 years. Heuristic-based metrics Papineni et al. (2002); Lin (2004); Lavie and Agarwal (2007) have been found to correlate weakly with human preferences Liu et al. (2016). As a result, the field has pivoted to model-based metrics which have shown better alignment with human judgment Lowe et al. (2017); Zhang et al. (2020); Sellam et al. (2020); Yuan et al. (2021); Zhong et al. (2022). However such metrics might allow for shortcut approaches or come with biases embedded in the scoring model Sun et al. (2022).
Automatic metrics struggle with open-ended natural language generation problems such as conversation and creative writing tasks due to the absence of ground truth. LLMs present an opportunity to tackle this problem Zheng et al. (2023); Fu et al. (2023); Liu et al. (2023), but they also suffer from certain biases including position, verbosity, and self-enhancement biases (models prefer themselves) that users should be cautious about. We need to develop metrics beyond accuracy and evaluate aspects such as robustness Chen et al. (2023), bias, consistency Chan et al. (2023), informativeness, truthfulness, and efficiency.
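As a concrete illustration of mitigating one of these biases, the sketch below implements the common order-swap test for pairwise LLM judges: a verdict is accepted only if it survives swapping the positions of the two answers. The `judge` callable is a stand-in for a real model call, and the deliberately biased stub shows the test rejecting a position-biased verdict; this is an illustrative protocol, not a complete evaluation pipeline.

```python
def debiased_compare(judge, prompt: str, answer_a: str, answer_b: str) -> str:
    """Query a pairwise judge twice with the answer order swapped; only
    accept a verdict that survives the swap, otherwise declare a tie.
    `judge` is any callable returning "first" or "second"."""
    v1 = judge(prompt, answer_a, answer_b)
    v2 = judge(prompt, answer_b, answer_a)
    if v1 == "first" and v2 == "second":
        return "A"
    if v1 == "second" and v2 == "first":
        return "B"
    return "tie"  # inconsistent verdicts signal position bias

# Stub judge that always prefers the first answer (maximally position-biased).
biased_judge = lambda prompt, a, b: "first"
print(debiased_compare(biased_judge, "Which answer is better?", "x", "y"))
# -> "tie": the swap test correctly refuses the biased verdict.
```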
On the other hand, human evaluation has traditionally been perceived as the more trustworthy evaluation method and a better indicator of the model utility. However, as models improve, it is questionable whether crowdworkers are adequate to serve as assessors (or annotators), particularly in fields such as science, healthcare, or law. Annotator bias Geva et al. (2019); Sap et al. (2022) and disagreement Fornaciari et al. (2021) should also be taken into consideration. If we design our models to be "assistants", a more useful human evaluation might not be to identify which output is more correct, but which output can help the human complete the task more efficiently.
## 6 NLP+X Interdisciplinary Applications
### Human-Centered NLP
As LLMs become ubiquitous in both the research and public spheres, mitigating their potential harms to social groups, both allocational and representational Blodgett et al. (2020), must be a core consideration when using these models. Social bias and stereotypes are a common way for LLMs to materialize these internal defects, so debiasing these models is important for fairness and robustness. Furthermore, LLMs must be aware of the extra-contextual requirement of abiding by the sociocultural norms expected by the user Fung et al. (2023), especially when used as chatbots directly interacting with humans.
Post-hoc debiasing and improving the social awareness of pretrained LLMs are important to this end. Though modern approaches have made great advances in democratizing LLM training, most
builders don't have a need to pretrain their own LLMs, opting to, at most, fine-tune them. Rather than hope that an LLM is unbiased after pretraining, many researchers have discussed the utility in having a separate general debiasing step to account for any unintended associations stemming from pretraining (Yu et al., 2023; Omrani et al., 2023; Yang et al., 2023). Relatively less explored is the complementary requirement of augmenting LLMs with the awareness and ability to abide by sociocultural norms. The crux of the problem is training the model to recognize _what_ behaviors in its training data are the results of sociocultural norms, discover _why_ and _when_ those norms should be followed, and _how_ those norms can be followed (i.e., is it only in a specific way or is this a behavior that can be generalized across situations?).
Another important direction is personalization based on the user, particularly for chatbots. LLMs have an amazing ability to multiplex behavior based on the language context provided in the prompt (Section 2.1), but they do not have the ability to account for the audience apart from what's inferred from text. This poses a problem for personalization because the same context or conversation can have differing levels of appropriateness depending on the audience (e.g., something that one finds relatively harmless may be incredibly offensive to someone else). Thus, we must improve LLMs' ability to infer the personal norms and appropriate behaviors in each individual context independently and act accordingly. This may, in part, involve bridging the gap between distant users who share similar beliefs to decode latent representations (Sun et al., 2023). In parallel, we can also provide users with multi-dimensional controls for generation (Han et al., 2023), including their sentiment, political stance, and moral values, so that they can directly influence the model's language usage.
### NLP for Science
One area with the most potential impact from NLP is science (Hope et al., 2022; Zhang et al., 2023). Although researchers have long been interested in extracting actionable information from the literature (Hersh and Bhupatiraju, 2003; Griffiths and Steyvers, 2004; Li et al., 2016; Wang et al., 2021), this has been challenging due to the variety and complexity of scientific language. With the growing capabilities of NLP techniques, the area now deserves intensified focus, both because of the potential impacts and because of the challenges that will need to be overcome.
One exciting emerging area is jointly learning natural language and other data modalities in the scientific domain (Edwards et al., 2021; Zeng et al., 2022; Edwards et al., 2022; Taylor et al., 2022); here, one of the largest problems of current LLMs, hallucination, becomes a strength for discovering new molecules (Edwards et al., 2022), proteins (Liu et al., 2023), and materials (Xie et al., 2023).
Another noteworthy application is NLP for Medicine. As a particular motivating example, there are an estimated \(10^{33}\) realistic drug-like molecules (Polishchuk et al., 2013). Within these drugs, there are substructures which confer beneficial drug properties, and the knowledge about these properties is reported in millions of scientific papers. However, existing LLMs are pretrained only on unstructured text and fail to capture this knowledge, in part due to inconsistencies in the literature.
Recent solutions for domain-knowledge-empowered LLMs include the development of a lightweight adapter framework to select and integrate structured domain knowledge into LLMs (Lai et al., 2023), data augmentation for knowledge distillation from general-domain LLMs to the scientific domain (Wang et al., 2023), and tool learning frameworks leveraging foundation models to solve more complicated sequential-action problems (Qin et al., 2023; Qian et al., 2023). Overall, future research can explore bespoke architectures, data acquisition techniques, and training methodologies for comprehending the diverse modalities, domain-specific knowledge, and applications within science.
### NLP for Education
LLMs readily capture a vast knowledge of many subjects, and augmenting LLMs with external knowledge naturally leads to improved abilities for eliciting that knowledge to generate lesson plans and materials. However, there are also applications in education which seem distinct from general NLP tasks. In particular, personalizing education and the educational experience with LLMs would allow educators to focus on the more general efforts of high-level teaching. Then, the utility of using language models to educate comes not from the language model's ability to "learn" the appropriate
knowledge but from its ability to find associations. One facet of this challenge comes from identifying and analyzing gaps in a student's understanding or learning. For example, apart from simply scoring essays or responses across discrete dimensions such as fluency or sentence structure, or by identifying keyspans (Mathias and Bhattacharyya, 2020; Takano and Ichikawa, 2022; Fiacco et al., 2022), one could use LLMs to determine which parts of a freeform submission indicate a gap and associate it with a learning goal provided by the teacher, without using specific (and costly to create) gold-labeled responses, so that the student has actionable feedback and can work on self-improvement. As part of this work, we need to accurately identify which portions of the response are written by the student as opposed to copied from an AI assistant. This would ensure that gaps aren't hidden, but would require a longitudinal view of the student's ability. Also, we must be able to ensure that the LLM's recommendations are based on actual details of the student and the text, rather than being general predictions with high priors or based on hallucinations. Furthermore, rather than simplifying original lesson materials (Mallinson et al., 2022; Omelianchuk et al., 2021), we should invest in using LLMs to generate or retrieve materials or scaffolding that _help_ to advance the students' learning rate.
## 7 What We Need
Our overall aim is to combat both the stultification of NLP as a mere evaluation-optimization endeavor and to dispel fears that LLMs and generative AI will shut down the field. As an old saying goes, frequent moves make a tree die but a person prosperous. Just as NLP researchers in the 1980s had to learn about machine learning and then embrace it as a core technique in the field, so we now must explore and embrace LLMs and their capabilities. Machine learning did not 'solve' the challenges of NLP: it did not produce an engine that could learn languages, translate, answer questions, create poetry, and do all the things a child can do. Some people claim that LLMs can do all this, and more. But we are in the first flush of engagement, and have not yet had time to discover all their shortcomings.
Central is the challenge of scale. No child needs to read or hear more than half the internet's English text in order to use language. What reasoning and sensory capabilities do people have that LLMs lack? How can NLP research evolve to model and encompass those? We urgently need global infrastructures to dramatically scale up computing resources, because the open-source models still cannot achieve performance comparable to GPT variants (Gudibande et al., 2023). But we also urgently need deeper thinking about the foundational conceptual models driving our field.
During this unique period when NLP researchers feel uncertain regarding which research problems to pursue, we as a community need a collective effort to systematically change and refine our paper review system and academic success measurements, in order to establish a more inclusive research environment and encourage researchers (particularly those in junior positions) to explore long-term, high-risk topics that are crucial for the entire field. The new challenges also require us to be more open-minded to close collaboration with researchers from other fields, including social science, natural science, computer vision, knowledge representation and reasoning, and human-computer interaction.
### Limitations
In this paper we describe some new or under-explored NLP research directions that remain dissertation-worthy. We propose a broader and more exciting vision of NLP, one that encourages the community to focus on a wider range of challenging and difficult problems with exciting potential impacts for social good. These problems may not always admit of easy datasets and pure machine learning solutions. Our list is not meant to be exhaustive, and we choose these directions as examples. It is up to NLP researchers to uncover the problems and develop novel solutions.
### Ethical Considerations
The research areas listed in this document are a few of the main areas ripe for exploration; additional ones exist. We do not intend for our proposed positions to be forcefully pedagogical. We encourage diverse and deeper investigation of worthy research areas. Within these proposed directions, we acknowledge that some require access to users' personal information (e.g. chatbot personalization in Section 6.1), and some applications might have high impact on users (e.g. using models to assess a student's grasp of knowledge for targeted education
in Section 6.3). The use of LLMs for creative work has also led to concerns about copyright and to regulatory questions over whether AI can be credited as an author. We do not support the use of LLMs for screening or resource allocation purposes without safeguarding measures. Even for lower-risk use cases, we call for more research on the robustness, transparency, and fairness of systems. Finally, we must evaluate the compliance of prompting LLMs with laws and regulations. For instance, in education applications, if we require information about the student, we must refer to laws such as FERPA/DPA/GDPR, especially in an online learning setting.
## Acknowledgements
This work is based upon work supported by U.S. DARPA KAIROS Program No. FA8750-19-2-1004, U.S. DARPA CCU Program No. HR001122C0034, U.S. DARPA ECOLE Program No. #HR00112390060, U.S. DARPA ITM FA8650-23-C-7316, U.S. DARPA SemaFor Program No. HR001120C0123 and U.S. DARPA INCAS Program No. HR001121C0165. The opinions, views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
|
2309.08962 | Dynamic Separation Logic | This paper introduces a dynamic logic extension of separation logic. The
assertion language of separation logic is extended with modalities for the five
types of the basic instructions of separation logic: simple assignment,
look-up, mutation, allocation, and de-allocation. The main novelty of the
resulting dynamic logic is that it allows to combine different approaches to
resolving these modalities. One such approach is based on the standard weakest
precondition calculus of separation logic. The other approach introduced in
this paper provides a novel alternative formalization in the proposed dynamic
logic extension of separation logic. The soundness and completeness of this
axiomatization has been formalized in the Coq theorem prover. | Frank S. de Boer, Hans-Dieter A. Hiep, Stijn de Gouw | 2023-09-16T11:31:05Z | http://arxiv.org/abs/2309.08962v2 | # Dynamic Separation Logic
###### Abstract
This paper introduces a dynamic logic extension of separation logic. The assertion language of separation logic is extended with modalities for the five types of the basic instructions of separation logic: simple assignment, look-up, mutation, allocation, and de-allocation. The main novelty of the resulting dynamic logic is that it allows combining different approaches to resolving these modalities. One such approach is based on the standard weakest precondition calculus of separation logic. The other approach introduced in this paper provides a novel alternative formalization in the proposed dynamic logic extension of separation logic. The soundness and completeness of this axiomatization have been formalized in the Coq theorem prover.
Footnote 1: Email: [email protected]
Footnote 2: Email: [email protected]
Footnote 3: Email: [email protected]
## 1 Introduction
This paper describes a study into the expressive power of separation logic (SL, for short) with regard to the formalization of _weakest preconditions_[7]. To this end, we introduce a novel dynamic logic extension of SL, which we abbreviate by DSL (for Dynamic Separation Logic).
SL [19] extends Hoare logic for the specification and verification of heap manipulating programs in terms of pre- and postconditions. The assertion language of SL features the basic heap assertion (\(x\mapsto e\)), '\(x\) points to \(e\)', which expresses that the variable \(x\) denotes the single allocated memory location, which stores the value of the expression \(e\). The so-called separating conjunction (\(p*q\)) allows splitting the heap, that is, the set of allocated memory locations and their contents, into two disjoint parts, one of which satisfies the conjunct \(p\) while the other satisfies \(q\). The separating implication (\(p\twoheadrightarrow q\)), roughly, holds if every extension of the heap satisfies \(q\) whenever \(p\) holds for the extension itself (separately). For an
introduction to SL and an extensive survey of the literature, intended for a broad audience, see the paper by A. Chargueraud [5].
Dynamic logic [9] generalizes Hoare logics by introducing for each statement of the underlying programming language a corresponding modality, so that the formula \([S]p\) expresses the weakest precondition of the statement \(S\) with respect to the postcondition \(p\). Informally, \([S]p\) is valid if every terminating computation establishes \(p\). In this paper we extend the assertion language of SL with _modalities_ for the five types of the basic instructions of SL: simple assignment, look-up, mutation, allocation, and de-allocation. For any such basic instruction \(S\), we then can introduce in the Hoare logic the axiom
\[\{[S]p\}\ S\ \{p\}\]
which is trivially sound and complete by definition of \([S]p\). In case \(S\) is a simple assignment \(x:=e\) and \(p\) is an assertion in standard SL, we can resolve the weakest precondition \([S]p\), as in first-order dynamic logic, simply by _substituting_ every free occurrence of \(x\) in \(p\) by the expression \(e\).4 In SL we can resolve \([S]p\), for any other basic instruction \(S\), by a formula with a hole \(C_{S}(\cdot)\) in SL itself, such that \(C_{S}(p)\) is equivalent to \([S]p\). For example, the assertion
Footnote 4: After suitable renaming of the bound variables in \(p\) such that no variable of \(e\) gets bound.
\[(\exists y(x\mapsto y))\ast((x\mapsto e)\twoheadrightarrow p)\]
states that the heap can be split into a sub-heap which consists of a single memory cell at the location denoted by \(x\), such that \(p\) holds for every extension of the other part with a single memory cell at the location denoted by \(x\) which contains the value of \(e\). It follows that this assertion is equivalent to \([[x]:=e]p\), where the _mutation_ instruction \([x]:=e\) assigns the value of the expression \(e\) to the heap location denoted by the variable \(x\).
The main contribution of this paper is a complementary approach to resolving \([S]p\), for any basic instruction. In this approach we obtain an alternative characterization of the weakest precondition \([S]p\) by a novel axiomatization of the modalities in DSL. This axiomatization allows for a characterization of \([S]p\)_compositionally_ in terms of the syntactical structure of \(p\).
O'Hearn, Reynolds, and Yang introduced local axioms [15] and show how to derive from these local axioms a weakest precondition axiomatization of the basic instructions in SL, using the frame rule and the separating implication for expressing the weakest precondition. However, the separating implication is actually not needed to prove completeness of the local axioms for simple assignments, look-up, allocation, and de-allocation. We illustrate the expressiveness of DSL by extending this result to the local mutation axiom. We further illustrate the expressiveness of DSL by a novel _strongest postcondition_ axiomatization.
Using the proof assistant Coq, we have formally verified the soundness and completeness proofs of the axiomatization of the DSL modalities. All our results can be readily extended to a programming language involving (sequential) control structures such as loops.
## Acknowledgments
The authors are grateful for the constructive feedback provided by the anonymous referees.
## 2 Syntax and semantics
We follow the presentation of SL in [19]. A heap5 \(h\) is represented by a (finitely-based) _partial_ function \(\mathbb{Z}\rightharpoonup\mathbb{Z}\) and the domain of \(h\) is denoted by \(\textit{dom}(h)\). We write \(h(n)=\bot\) if \(n\not\in\textit{dom}(h)\). The heaps \(h,h^{\prime}\) are disjoint iff \(\textit{dom}(h)\cap\textit{dom}(h^{\prime})=\emptyset\). A heap \(h\) is partitioned into \(h_{1}\) and \(h_{2}\), denoted by \(h=h_{1}\uplus h_{2}\), iff \(h_{1}\) and \(h_{2}\) are disjoint, \(\textit{dom}(h)=\textit{dom}(h_{1})\cup\textit{dom}(h_{2})\), and \(h(n)=h_{i}(n)\) if \(n\in\textit{dom}(h_{i})\) for \(i\in\{1,2\}\).
Footnote 5: All italicized variables are typical meta-variables, and we use primes and subscripts for other meta-variables of the same type, e.g. \(h\), \(h^{\prime}\), \(h^{\prime\prime}\), \(h_{1}\), \(h_{2}\) are all heaps.
\(V\) denotes a countably infinite set of integer variables, with typical element \(x\). A store \(s\) is a total function \(V\to\mathbb{Z}\). We abstract from the syntax of arithmetic expressions \(e\), and Boolean expressions \(b\). By \(\textit{var}(e)\) (resp. \(\textit{var}(b)\)) we denote the finite set of variables that occur in \(e\) (resp. \(b\)). We have the Boolean constants **true** and **false**, and (\(e_{1}=e_{2}\)) is a Boolean expression given arithmetic expressions \(e_{1}\) and \(e_{2}\).
\(\langle x:=e,h,s\rangle\Rightarrow(h,s[x:=s(e)])\),
\(\langle x:=[e],h,s\rangle\Rightarrow(h,s[x:=h(s(e))])\) if \(s(e)\in\mathit{dom}(h)\),
\(\langle x:=[e],h,s\rangle\Rightarrow\textbf{fail}\) if \(s(e)\not\in\mathit{dom}(h)\),
\(\langle[x]:=e,h,s\rangle\Rightarrow(h[s(x):=s(e)],s)\) if \(s(x)\in\mathit{dom}(h)\),
\(\langle[x]:=e,h,s\rangle\Rightarrow\textbf{fail}\) if \(s(x)\not\in\mathit{dom}(h)\),
\(\langle x:=\textbf{cons}(e),h,s\rangle\Rightarrow(h[n:=s(e)],s[x:=n])\) where \(n\not\in\mathit{dom}(h)\).
\(\langle\textbf{dispose}(x),h,s\rangle\Rightarrow(h[s(x):=\bot],s)\) if \(s(x)\in\mathit{dom}(h)\),
\(\langle\textbf{dispose}(x),h,s\rangle\Rightarrow\textbf{fail}\) if \(s(x)\not\in\mathit{dom}(h)\).
By \(s(e)\) we denote the integer value of \(e\) in \(s\), and by \(s(b)\) we denote the Boolean value of \(b\) in \(s\). Following [19] expressions thus do not refer to the heap. By \(s[x:=v]\) and \(h[n:=v]\) we denote the result of updating the value of the variable \(x\) and the location \(n\), respectively. The definition of \(h[n:=v]\) does not require that \(n\in\mathit{dom}(h)\). More specifically, we have
\[h[n:=v](m)=\begin{cases}v&\text{if }n=m\\ h(m)&\text{otherwise}\end{cases}\]
Thus, \(\mathit{dom}(h[n:=v])=\mathit{dom}(h)\cup\{n\}\). For heaps we also define the clearing of a location, denoted by \(h[n:=\bot]\). We have \(h[n:=\bot](m)=\bot\) if \(n=m\), and \(h[n:=\bot](m)=h(m)\) otherwise. Similarly, we have \(\mathit{dom}(h[n:=\bot])=\mathit{dom}(h)\setminus\{n\}\).
Following [19], we have the following basic instructions: \(x:=e\) (simple assignment), \(x:=[e]\) (look-up), \([x]:=e\) (mutation), \(x:=\textbf{cons}(e)\) (allocation), \(\textbf{dispose}(x)\) (de-allocation). Just like [10],
_We will not give a full syntax of [statements], as the treatment of conditionals and looping statements is standard. Instead, we will concentrate on assignment statements, which is where the main novelty of the approach lies._
The successful execution of any basic instruction \(S\) is denoted by \(\langle S,h,s\rangle\Rightarrow(h^{\prime},s^{\prime})\), whereas \(\langle S,h,s\rangle\Rightarrow\textbf{fail}\) denotes a failing execution (e.g. due to access of a 'dangling pointer'). See Figure 1 for their semantics (and see Appendix, Figure A.1, for the full syntax and semantics).
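For readers who prefer executable definitions, the following is a minimal Python rendering of the transitions in Figure 1; it is an illustrative sketch, not part of the paper's formalization. Heaps are dicts (finitely-based partial functions), stores are dicts, failure is signalled by `None`, and the allocator in `cons` deterministically picks the least free nonnegative location, which is one admissible choice of \(n\not\in\mathit{dom}(h)\).

```python
FAIL = None  # result of a failing execution

def assign(h, s, x, v):                 # x := e, with v = s(e)
    return h, {**s, x: v}

def lookup(h, s, x, loc):               # x := [e], with loc = s(e)
    if loc not in h:
        return FAIL                     # dangling pointer: fail
    return h, {**s, x: h[loc]}

def mutate(h, s, x, v):                 # [x] := e, with v = s(e)
    if s[x] not in h:
        return FAIL
    return {**h, s[x]: v}, s

def cons(h, s, x, v):                   # x := cons(e), with v = s(e)
    n = 0
    while n in h:                       # least location not in dom(h)
        n += 1
    return {**h, n: v}, {**s, x: n}

def dispose(h, s, x):                   # dispose(x)
    if s[x] not in h:
        return FAIL
    return {k: w for k, w in h.items() if k != s[x]}, s

h, s = {}, {"x": 0, "y": 0}
h, s = cons(h, s, "x", 7)               # allocate a fresh cell holding 7
h, s = lookup(h, s, "y", s["x"])        # y := [x]
assert s["y"] == 7
assert mutate(h, s, "y", 1) is FAIL     # location 7 was never allocated
```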
We follow [10] in the definition of the syntax and semantics of the assertion language of SL but we use a different atomic 'weak points to' formula (as in [18] and [6]). In DSL we have additionally a modality for each statement \(S\), which has highest binding priority.
\[p,q::=b\mid(e\hookrightarrow e^{\prime})\mid(p\wedge q)\mid(p\to q)\mid(\forall xp)\mid(p\ast q)\mid(p\twoheadrightarrow q)\mid[S]p\]
By \(h,s\models p\) we denote the truth relation of classical SL, see Figure 2. Validity of \(p\) is denoted by \(\models p\). The semantics of DSL extends the semantics of SL by giving semantics to the modality, expressing the weakest precondition. We further have the usual abbreviations: \(\neg p\) denotes \((p\rightarrow\textbf{false})\), \((p\lor q)\) denotes \((\neg p\to q)\) (negation has binding priority over implication), \(p\equiv q\) denotes \((p\to q)\wedge(q\to p)\), and \((\exists xp)\) denotes \(\neg(\forall x(\neg p))\); note that \(x\) is bound in \(p\). By logical connective we mean the connectives \(\neg,\wedge,\vee,\rightarrow,\forall,\exists\), and by separating connective we mean \(\ast\) and \(\twoheadrightarrow\). Further, \((e\hookrightarrow-)\) denotes \(\exists x(e\hookrightarrow x)\) for a fresh \(x\), \(\textbf{emp}\) denotes \(\forall x(x\not\hookrightarrow-)\), and \((e\mapsto e^{\prime})\) denotes \((e\hookrightarrow e^{\prime})\wedge(\forall x((x\hookrightarrow-)\to x=e))\) for a fresh \(x\). We use \(\not\hookrightarrow\) and \(\neq\) as negations of the corresponding predicates as usual; in particular, \((e\not\hookrightarrow-)\) is \(\neg\exists x(e\hookrightarrow x)\). We may drop matching parentheses if doing so would not give rise to ambiguity. Note that \(h,s\models\textbf{emp}\) iff \(\mathit{dom}(h)=\emptyset\), and \(h,s\models(e\mapsto e^{\prime})\) iff \(\mathit{dom}(h)=\{s(e)\}\) and \(h(s(e))=s(e^{\prime})\). An assertion is _first-order_ if its construction does not involve separating connectives or modalities.
The assertion \((e\hookrightarrow e^{\prime})\) is implied by \((e\mapsto e^{\prime})\), and expressing the former using the latter requires the use of separating connectives (i.e. \((e\hookrightarrow e^{\prime})\) is equivalent to \(\textbf{true}\ast(e\mapsto e^{\prime})\)), whereas our definition of \((e\mapsto e^{\prime})\) requires only logical connectives, and thus we use \((e\hookrightarrow e^{\prime})\) as atomic formula.
A specification \(\{p\}\)\(S\)\(\{q\}\) is a triple that consists of a precondition \(p\), a program \(S\), and a postcondition \(q\). Specifications are interpreted in the sense of strong partial correctness, which ensures absence of explicit failures.
Figure 1: Semantics of basic instructions of heap manipulating programs.
\(h,s\models b\) iff \(s(b)=\mathbf{true}\),
\(h,s\models(e\hookrightarrow e^{\prime})\) iff \(s(e)\in\mathit{dom}(h)\) and \(h(s(e))=s(e^{\prime})\),
\(h,s\models(p\wedge q)\) iff \(h,s\models p\) and \(h,s\models q\),
\(h,s\models(p\to q)\) iff \(h,s\models p\) implies \(h,s\models q\),
\(h,s\models(\forall xp)\) iff \(h,s[x:=n]\models p\) for all \(n\),
\(h,s\models(p*q)\) iff \(h_{1},s\models p\) and \(h_{2},s\models q\) for some \(h_{1},h_{2}\) such that \(h=h_{1}\uplus h_{2}\),
\(h,s\models(p\twoheadrightarrow q)\) iff \(h^{\prime},s\models p\) implies \(h^{\prime\prime},s\models q\) for all \(h^{\prime},h^{\prime\prime}\) such that \(h^{\prime\prime}=h\uplus h^{\prime}\),
\(h,s\models[S]p\) iff \(\langle S,h,s\rangle\not\Rightarrow\mathbf{fail}\) and \(h^{\prime},s^{\prime}\models p\) for all \(h^{\prime},s^{\prime}\) such that \(\langle S,h,s\rangle\Rightarrow(h^{\prime},s^{\prime})\).
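To make the clauses of Figure 2 concrete, the sketch below implements a bounded model checker for these connectives: \(\ast\) enumerates all partitions of the heap and \(\twoheadrightarrow\) enumerates all disjoint extensions. Since the real quantifiers range over all integers and all heaps, the universe here is truncated to a small finite range, so this is only an illustrative approximation of the semantics, not a decision procedure; expressions are restricted to variables for brevity.

```python
from itertools import combinations, product

VALS = LOCS = range(3)       # finite universe standing in for the integers

def splits(h):
    """All partitions h = h1 (+) h2 of a heap (a dict)."""
    keys = list(h)
    for r in range(len(keys) + 1):
        for ks in combinations(keys, r):
            yield ({k: h[k] for k in ks},
                   {k: h[k] for k in keys if k not in ks})

def extensions(h):
    """All heaps h' disjoint from h over the bounded universe."""
    free = [l for l in LOCS if l not in h]
    for r in range(len(free) + 1):
        for ks in combinations(free, r):
            for vs in product(VALS, repeat=r):
                yield dict(zip(ks, vs))

def sat(h, s, p):
    """h, s |= p for formulas built from variables only."""
    op = p[0]
    if op == "hooks":        # weak points-to (x 'hook-arrow' y)
        return h.get(s[p[1]]) == s[p[2]]
    if op == "not":
        return not sat(h, s, p[1])
    if op == "and":
        return sat(h, s, p[1]) and sat(h, s, p[2])
    if op == "forall":       # bounded quantification over VALS
        return all(sat(h, {**s, p[1]: v}, p[2]) for v in VALS)
    if op == "sep":          # (p * q): some partition satisfies both parts
        return any(sat(h1, s, p[1]) and sat(h2, s, p[2])
                   for h1, h2 in splits(h))
    if op == "wand":         # (p -* q): all disjoint extensions
        return all(not sat(hx, s, p[1]) or sat({**h, **hx}, s, p[2])
                   for hx in extensions(h))
    raise ValueError(op)

# emp, unfolded as (forall x)(forall y) not (x hooks y):
emp = ("forall", "x", ("forall", "y", ("not", ("hooks", "x", "y"))))
print(sat({}, {}, emp), sat({0: 1}, {}, emp))   # True False
```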
## 3 A sound and complete axiomatization of DSL
In dynamic logic axioms are introduced to simplify formulas in which modalities occur. For example, we have the following basic equivalences **E1-3** for simple assignments.
**Lemma 3.1** (Basic equivalences): _Let \(S\) denote a simple assignment \(x:=e\) and \(\circ\) denote a (binary) logical or separating connective._
\[[S]\mathbf{false}\equiv\mathbf{false}\] (**E1**)
\[[S](p\circ q)\equiv[S]p\circ[S]q\] (**E2**)
\[[S](\forall yp)\equiv\forall y([S]p)\] (**E3**)
_In_ **E3** _we assume that_ \(y\) _does not appear in_ \(S\)_, neither in the left-hand-side of the assignment_ \(S\) _nor in its right-hand-side._
The proofs of these equivalences proceed by a straightforward induction on the structure of \(p\), where the base cases of Boolean expressions and the weak points to predicate are handled by a straightforward extension of the _substitution lemma_ for standard first-order logic. By \(b[e/x]\) we denote the result of replacing every occurrence of \(x\) in the Boolean expression \(b\) by the expression \(e\) (and similar for arithmetic expressions).
**Lemma 3.2** (Substitution lemma):
\[[x:=e]b\equiv b[e/x]\qquad[x:=e](e^{\prime}\hookrightarrow e^{\prime\prime})\equiv(e^{\prime}[e/x]\hookrightarrow e^{\prime\prime}[e/x])\] (**E4**)

**Proof.** This lemma follows from the semantics of the simple assignment modality and the substitution lemma of first-order expressions: \(s(e^{\prime}[e/x])=s[x:=s(e)](e^{\prime})\). Note that expressions do not refer to the heap. \(\Box\)
The above equivalences **E1-3** do not hold in general for the other basic instructions. For example, we have \(\left[x:=[e]\right]\mathbf{false}\equiv\neg(e\hookrightarrow-)\). On the other hand, \(\left[x:=\mathbf{cons}(0)\right]\mathbf{false}\equiv\mathbf{false}\), but \(\left[x:=\mathbf{cons}(0)\right](x\neq 0)\) is not equivalent to \(\neg([x:=\mathbf{cons}(0)](x=0))\), because \(\left[x:=\mathbf{cons}(0)\right](x\neq 0)\) is equivalent to \((0\hookrightarrow-)\) ('zero is allocated'), whereas \(\neg([x:=\mathbf{cons}(0)](x=0))\) expresses that \((n\not\hookrightarrow-)\), for some \(n\neq 0\) (which holds for any finite heap).
The above equivalences **E1-3**, with **E2** restricted to the (standard) logical connectives, _do_ hold for the _pseudo_ instructions \(\langle x\rangle:=e\), a so-called _heap update_, and \(\langle x\rangle:=\bot\), a so-called _heap clear_. These pseudo instructions are defined by the transitions
\[\langle\langle x\rangle:=e,h,s\rangle\Rightarrow(h[s(x):=s(e)],s)\text{ and } \langle\langle x\rangle:=\bot,h,s\rangle\Rightarrow(h[s(x):=\bot],s)\]
Figure 2: Semantics of Dynamic Separation Logic.
In contrast to the mutation and de-allocation instructions, these pseudo-instructions do not require that \(s(x)\in\mathit{dom}(h)\), e.g., if \(s(x)\not\in\mathit{dom}(h)\) then the heap update \(\langle x\rangle:=e\) extends the domain of the heap, whereas \([x]:=e\) leads to failure in that case. From a practical viewpoint, the heap update and heap clear pseudo-instructions are 'lower level' instructions, e.g. in processors that implement virtual memory (where an operating system allocates memory on the fly whenever a program performs a write to a virtual address that is not allocated), and on top of these instructions efficient memory allocation algorithms are implemented, e.g. malloc and free in C. In the following lemma we give an axiomatization in DSL of the basic SL instructions in terms of simple assignments and these two pseudo-instructions. For comparison we also give the standard SL axiomatization [19, 8, 3].
**Lemma 3.3** (Axioms basic instructions):
\[[x:=[e]]p\equiv\exists y((e\hookrightarrow y)\wedge[x:=y]p),\] (**E5**)
\[[[x]:=e]p\equiv\left\{\begin{array}{l}(x\hookrightarrow-)\wedge[\langle x\rangle:=e]p\\ (x\mapsto-)\ast((x\mapsto e)\twoheadrightarrow p)\end{array}\right.\] (**E6**)
\[[x:=\mathbf{cons}(e)]p\equiv\left\{\begin{array}{l}\forall x((x\not\hookrightarrow-)\rightarrow[\langle x\rangle:=e]p)\\ \forall x((x\mapsto e)\twoheadrightarrow p)\end{array}\right.\] (**E7**)
\[[\mathbf{dispose}(x)]p\equiv\left\{\begin{array}{l}(x\hookrightarrow-)\wedge[\langle x\rangle:=\bot]p\\ (x\mapsto-)\ast p\end{array}\right.\] (**E8**)
Note that \([x:=y]p\) in **E5** reduces to \(p[y/x]\) by **E1-4**. For technical convenience only, we require in the axioms for \(x:=\mathbf{cons}(e)\) that \(x\) does not appear in \(e\) (see Section 5 to lift this restriction).
In the sequel **E5-8** refer to the corresponding DSL equivalences. The proofs of these equivalences are straightforward (consist simply of expanding the semantics of the involved modalities) and therefore omitted.
We have the following SL axiomatization of the heap update and heap clear pseudo-instructions.
\[[\langle x\rangle:=e]p\equiv((x\mapsto-)\ast((x\mapsto e)\twoheadrightarrow p))\vee((x\not\hookrightarrow-)\wedge((x\mapsto e)\twoheadrightarrow p))\]
\[[\langle x\rangle:=\bot]p\equiv((x\mapsto-)\ast p)\vee((x\not\hookrightarrow-)\wedge p)\]
This axiomatization thus requires a case distinction between whether or not \(x\) is allocated.
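Continuing the executable sketch from Section 2, the two pseudo-instructions are total: they never fail, regardless of whether \(x\) is allocated. This is an illustrative rendering under the same dict-based heap model as before, not part of the paper's formalization.

```python
def heap_update(h, s, x, v):      # <x> := e: total, may extend dom(h)
    return {**h, s[x]: v}, s

def heap_clear(h, s, x):          # <x> := bottom: total, may shrink dom(h)
    return {k: w for k, w in h.items() if k != s[x]}, s

h, s = {}, {"x": 4}
h, s = heap_update(h, s, "x", 9)  # succeeds even though location 4 was free
h, s = heap_clear(h, s, "x")      # the heap is empty again
assert h == {}
```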
For the complementary approach, we want to resolve the modalities for the heap update and heap clear instructions compositionally in terms of \(p\). What thus remains for a complete axiomatization is a characterization of \([S]b\), \([S](e\hookrightarrow e^{\prime})\), \([S](p\ast q)\), and \([S](p\twoheadrightarrow q)\), where \(S\) denotes one of the two pseudo-instructions. Lemma 3.4 provides an axiomatization in DSL of a heap update.
**Lemma 3.4** (Heap update): _We have the following equivalences for the heap update modality._
\[[\langle x\rangle:=e]b\equiv b,\] (**E9**)
\[[\langle x\rangle:=e](e^{\prime}\hookrightarrow e^{\prime\prime})\equiv(x=e^{\prime}\wedge e^{\prime\prime}=e)\vee(x\neq e^{\prime}\wedge e^{\prime}\hookrightarrow e^{\prime\prime}),\] (**E10**)
\[[\langle x\rangle:=e](p\ast q)\equiv([\langle x\rangle:=e]p\ast q^{\prime})\vee(p^{\prime}\ast[\langle x\rangle:=e]q),\] (**E11**)
\[[\langle x\rangle:=e](p\twoheadrightarrow q)\equiv p^{\prime}\twoheadrightarrow[\langle x\rangle:=e]q,\] (**E12**)
_where \(p^{\prime}\) abbreviates \(p\wedge(x\not\hookrightarrow-)\) and, similarly, \(q^{\prime}\) abbreviates \(q\wedge(x\not\hookrightarrow-)\)._
These equivalences we can informally explain as follows. Since the heap update \(\langle x\rangle:=e\) does not affect the store, and the evaluation of a Boolean condition \(b\) only depends on the store, we have that \(([\langle x\rangle:=e]b)\equiv b\).
Predicting whether \((e^{\prime}\hookrightarrow e^{\prime\prime})\) holds after \(\langle x\rangle:=e\), we only need to make a distinction between whether \(x\) and \(e^{\prime}\) are aliases, that is, whether they denote the same location, which is simply expressed by \(x=e^{\prime}\). If \(x=e^{\prime}\) then \(e^{\prime\prime}=e\) should hold, otherwise \((e^{\prime}\hookrightarrow e^{\prime\prime})\) (note again, that \(\langle x\rangle:=e\) does not affect the values of the expressions \(e,e^{\prime}\) and \(e^{\prime\prime}\)). As a basic example, we compute
\[[\langle x\rangle:=e](y\hookrightarrow-) \equiv\text{(definition $y\hookrightarrow-$)}\] \[[\langle x\rangle:=e]\exists z(y\hookrightarrow z) \equiv\text{(\bf E3)}\] \[\exists z[\langle x\rangle:=e](y\hookrightarrow z) \equiv\text{(\bf E10)}\] \[\exists z((y=x\wedge e=z)\vee(y\neq x\wedge(y\hookrightarrow z))) \equiv\text{(semantics SL)}\] \[y\neq x\rightarrow(y\hookrightarrow-)\]
We use this derived equivalence in the following example:
\[[\langle x\rangle:=e](y\mapsto-) \equiv\text{(definition $y\mapsto-$)}\] \[[\langle x\rangle:=e]((y\hookrightarrow-)\wedge\forall z((z \hookrightarrow-)\to z=y)) \equiv\text{(\bf E2, E3, E9)}\] \[[\langle x\rangle:=e](y\hookrightarrow-)\wedge\forall z([\langle x \rangle:=e](z\hookrightarrow-)\to z=y) \equiv\text{(see above)}\] \[(y\neq x\rightarrow(y\hookrightarrow-))\wedge\forall z((z\neq x \rightarrow(z\hookrightarrow-))\to z=y) \equiv\text{(semantics SL)}\] \[y=x\wedge(\mathbf{emp}\vee(x\mapsto-))\]
Predicting whether \((p*q)\) holds after the heap update \(\langle x\rangle:=e\), we need to distinguish between whether \(p\) or \(q\) holds for the sub-heap that contains the (updated) location \(x\). Since we do not assume that \(x\) is already allocated, we instead distinguish between whether \(p\) or \(q\) holds initially for the sub-heap that does _not_ contain the updated location \(x\). As a simple example, we compute
\[[\langle x\rangle:=e](\mathbf{true}*(x\mapsto-)) \equiv\text{(\bf E9,E11)}\] \[(\mathbf{true}*((x\mapsto-)\wedge(x\not\hookrightarrow-)))\vee((x\not\hookrightarrow-)*[\langle x\rangle:=e](x\mapsto-)) \equiv\text{(see above)}\] \[(\mathbf{true}*((x\mapsto-)\wedge(x\not\hookrightarrow-)))\vee((x\not\hookrightarrow-)*(\mathbf{emp}\vee(x\mapsto-))) \equiv\text{(semantics SL)}\] \[(\mathbf{true}*\mathbf{false})\vee((x\not\hookrightarrow-)*(\mathbf{emp}\vee(x\mapsto-))) \equiv\text{(semantics SL)}\] \[\mathbf{true}\]
Note that this coincides with the above calculation of \([\langle x\rangle:=e](y\hookrightarrow-)\), which also reduces to \(\mathbf{true}\), instantiating \(y\) by \(x\).
The semantics of \((p\twoheadrightarrow q)\) after the heap update \(\langle x\rangle:=e\) involves universal quantification over all disjoint heaps that do not contain \(x\) (because after the heap update \(x\) is allocated). Therefore we simply add the condition that \(x\) is not allocated to \(p\), and apply the heap update to \(q\). As a very basic example, we compute
\[[\langle x\rangle:=0]((y\hookrightarrow 1)\twoheadrightarrow(y\hookrightarrow 1)) \equiv\text{(\bf E12)}\] \[((y\hookrightarrow 1)\wedge(x\not\hookrightarrow-))\twoheadrightarrow[\langle x\rangle:=0](y\hookrightarrow 1) \equiv\text{(\bf E10)}\] \[((y\hookrightarrow 1)\wedge(x\not\hookrightarrow-))\twoheadrightarrow((y=x\wedge 0=1)\vee(y\neq x\wedge y\hookrightarrow 1)) \equiv\text{(semantics SL)}\] \[\mathbf{true}\]
Note that \((y\hookrightarrow 1)\twoheadrightarrow(y\hookrightarrow 1)\equiv\mathbf{true}\) and \([\langle x\rangle:=0]\mathbf{true}\equiv\mathbf{true}\).
**Proof of Lemma 3.4.**
**E9**: \(h,s\models[\langle x\rangle:=e]b\) iff (semantics heap update modality) \(h[s(x):=s(e)],s\models b\) iff (\(b\) does not depend on the heap) \(h,s\models b\)
**E10**: \(h,s\models[\langle x\rangle:=e](e^{\prime}\hookrightarrow e^{\prime\prime})\) iff (semantics heap update modality) \(h[s(x):=s(e)],s\models e^{\prime}\hookrightarrow e^{\prime\prime}\) iff (semantics points-to) \(h[s(x):=s(e)](s(e^{\prime}))=s(e^{\prime\prime})\) iff (definition \(h[s(x):=s(e)]\)) if \(s(x)=s(e^{\prime})\) then \(s(e)=s(e^{\prime\prime})\) else \(h(s(e^{\prime}))=s(e^{\prime\prime})\) iff (semantics assertions) \(h,s\models(x=e^{\prime}\wedge e^{\prime\prime}=e)\vee(x\neq e^{\prime}\wedge e ^{\prime}\hookrightarrow e^{\prime\prime})\)
**E11**: \(h,s\models[\langle x\rangle:=e](p*q)\) iff (semantics heap update modality) \(h[s(x):=s(e)],s\models p*q\). From here we proceed as follows. By the semantics of separating conjunction, there exist \(h_{1}\) and \(h_{2}\) such that \(h[s(x):=s(e)]=h_{1}\uplus h_{2}\), \(h_{1},s\models p\), and \(h_{2},s\models q\). Let \(s(x)\in\mathit{dom}(h_{1})\) (the other case runs similarly). So \(h[s(x):=s(e)]=h_{1}\uplus h_{2}\) implies \(h_{1}(s(x))=s(e)\) and \(h=h_{1}[s(x):=h(s(x))]\uplus h_{2}\). By the semantics of the heap update modality, \(h_{1}(s(x))=s(e)\) and \(h_{1},s\models p\) implies \(h_{1}[s(x):=h(s(x))],s\models[\langle x\rangle:=e]p\). Since \(s(x)\not\in\mathit{dom}(h_{2})\), we have \(h_{2},s\models q\wedge x\not\hookrightarrow-\). By the semantics of separating conjunction we conclude that \(h,s\models[\langle x\rangle:=e]p*q^{\prime}\) (\(q^{\prime}\) denotes \(q\wedge x\not\hookrightarrow-\)).
In the other direction, from \(h,s\models[\langle x\rangle:=e]p*q^{\prime}\) (the other case runs similarly) we derive that there exist \(h_{1}\) and \(h_{2}\) such that \(h=h_{1}\uplus h_{2}\), \(h_{1},s\models[\langle x\rangle:=e]p\) and \(h_{2},s\models q^{\prime}\). By the semantics of the heap update modality it follows that \(h_{1}[s(x):=s(e)],s\models p\). Since \(s(x)\not\in\mathit{dom}(h_{2})\), we have that \(h[s(x):=s(e)]=h_{1}[s(x):=s(e)]\uplus h_{2}\), and so \(h[s(x):=s(e)],s\models p*q\), that is, \(h,s\models[\langle x\rangle:=e](p*q)\).
**E12**: \(h,s\models[\langle x\rangle:=e](p\twoheadrightarrow q)\) iff (semantics of heap update modality) \(h[s(x):=s(e)],s\models p\twoheadrightarrow q\) iff (semantics separating implication) for every \(h^{\prime}\) disjoint from \(h[s(x):=s(e)]\): if \(h^{\prime},s\models p\) then \(h[s(x):=s(e)]\uplus h^{\prime},s\models q\) iff (since \(s(x)\not\in\mathit{dom}(h^{\prime})\)) for every \(h^{\prime}\) disjoint from \(h\): if \(h^{\prime},s\models p\wedge x\not\hookrightarrow-\) then \((h\uplus h^{\prime})[s(x):=s(e)],s\models q\) iff (semantics of heap update modality) for every \(h^{\prime}\) disjoint from \(h\): if \(h^{\prime},s\models p\wedge x\not\hookrightarrow-\) then \(h\uplus h^{\prime},s\models[\langle x\rangle:=e]q\) iff (semantics separating implication) \(h,s\models(p\wedge x\not\hookrightarrow-)\twoheadrightarrow[\langle x\rangle:=e]q\).
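As a sanity check complementing the proof, equivalence **E10** can also be verified exhaustively on a small finite universe. The sketch below is illustrative only (expressions are already evaluated to values); it compares both sides of **E10** over every bounded heap and every choice of \(x\), \(e\), \(e^{\prime}\), \(e^{\prime\prime}\).

```python
from itertools import product

LOCS = VALS = range(3)                    # bounded universe for the check

def hooks(h, loc, val):                   # semantics of weak points-to
    return h.get(loc) == val

def all_heaps():                          # every heap with dom a subset of LOCS
    for bits in product([False, True], repeat=len(LOCS)):
        dom = [l for l, b in zip(LOCS, bits) if b]
        for vs in product(VALS, repeat=len(dom)):
            yield dict(zip(dom, vs))

checked = 0
for h in all_heaps():
    for x, e, e1, e2 in product(VALS, repeat=4):
        lhs = hooks({**h, x: e}, e1, e2)  # [<x> := e](e' hooks e'')
        rhs = (x == e1 and e2 == e) or (x != e1 and hooks(h, e1, e2))
        assert lhs == rhs
        checked += 1
print("E10 verified on", checked, "bounded instances")
```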
The equivalences for the heap clear modality in the following lemma can be informally explained as follows. Since \(\langle x\rangle:=\bot\) does not affect the store, and the evaluation of a Boolean condition \(b\) only depends on the store, we have that \([\langle x\rangle:=\bot]b\equiv b\). For \(e\hookrightarrow e^{\prime}\) to hold after executing \(\langle x\rangle:=\bot\), we must initially have that \(x\neq e\) and \(e\hookrightarrow e^{\prime}\). As a simple example, we have that \(\forall y,z(y\not\hookrightarrow z)\) characterizes the empty heap. It follows that \([\langle x\rangle:=\bot](\forall y,z(y\not\hookrightarrow z))\) is equivalent to \(\forall y,z(\neg(y\neq x\wedge y\hookrightarrow z))\). The latter first-order formula is equivalent to \(\forall y,z(y=x\lor y\not\hookrightarrow z)\). This assertion thus states that the domain consists at most of the location \(x\), which indeed ensures that after \(\langle x\rangle:=\bot\) the heap is empty. To ensure that \(p*q\) holds after clearing \(x\) it suffices to show that the initial heap can be split such that both \(p\) and \(q\) hold in their respective sub-heaps with \(x\) cleared. The semantics of \(p\twoheadrightarrow q\) after clearing \(x\) involves universal quantification over all disjoint heaps that may contain \(x\), whereas before executing \(\langle x\rangle:=\bot\) it involves universal quantification over all disjoint heaps that do _not_ contain \(x\), in case \(x\) is allocated initially. To formalize in the initial configuration universal quantification over all disjoint heaps, we distinguish between all disjoint heaps that do not contain \(x\) and _simulate_ all disjoint heaps that contain \(x\) by interpreting both
\(p\) and \(q\) in \(p\twoheadrightarrow q\) in the context of heap updates \(\langle x\rangle:=y\) with _arbitrary_ values \(y\) for the location \(x\). As a very basic example, consider \([\langle x\rangle:=\bot]((x\hookrightarrow 0)\twoheadrightarrow(x\hookrightarrow 0))\), which should be equivalent to **true**. The left conjunct \(((x\hookrightarrow 0)\wedge(x\not\hookrightarrow-))\twoheadrightarrow[\langle x\rangle:=\bot](x\hookrightarrow 0)\) of the resulting formula after applying **E16** is equivalent to **true** (because \((x\hookrightarrow 0)\wedge(x\not\hookrightarrow-)\) is equivalent to **false**). We compute the second conjunct (in the application of **E10** we omitted some trivial reasoning steps):
\[\forall y([\langle x\rangle:=y](x\hookrightarrow 0)\twoheadrightarrow[\langle x\rangle:=y](x\hookrightarrow 0)) \equiv\text{(\bf E10)}\] \[\forall y(y=0\twoheadrightarrow y=0) \equiv\text{(semantics SL)}\] \[\mathbf{true}\]

**Lemma 3.5** (Heap clear): _We have the following equivalences for the heap clear modality._
\[[\langle x\rangle:=\bot]b\equiv b,\] (**E13**)
\[[\langle x\rangle:=\bot](e\hookrightarrow e^{\prime})\equiv(x\neq e)\wedge(e\hookrightarrow e^{\prime}),\] (**E14**)
\[[\langle x\rangle:=\bot](p\ast q)\equiv[\langle x\rangle:=\bot]p\ast[\langle x\rangle:=\bot]q,\] (**E15**)
\[[\langle x\rangle:=\bot](p\twoheadrightarrow q)\equiv((p\wedge x\not\hookrightarrow-)\twoheadrightarrow[\langle x\rangle:=\bot]q)\wedge\forall y([\langle x\rangle:=y]p\twoheadrightarrow[\langle x\rangle:=y]q),\] (**E16**)
_where \(y\) is fresh._
**Proof.** We consider each equivalence in turn.
**E13**: \([\langle x\rangle:=\bot]b\equiv b\). As above, it suffices to observe that the evaluation of \(b\) does not depend on the heap.
**E14**: \(h,s\models[\langle x\rangle:=\bot](e\hookrightarrow e^{\prime})\) iff (semantics heap clear modality) \(h[s(x):=\bot],s\models e\hookrightarrow e^{\prime}\) iff (semantics points-to) \(s(e)\in\mathit{dom}(h[s(x):=\bot])\) and \(h[s(x):=\bot](s(e))=h(s(e))=s(e^{\prime})\) iff (semantics assertions) \(h,s\models x\neq e\wedge e\hookrightarrow e^{\prime}\).

**E15**: \(h,s\models[\langle x\rangle:=\bot](p\ast q)\) iff (semantics heap clear modality) \(h[s(x):=\bot],s\models p\ast q\) iff (semantics separating conjunction) \(h_{1},s\models p\) and \(h_{2},s\models q\), for some \(h_{1},h_{2}\) such that \(h[s(x):=\bot]=h_{1}\uplus h_{2}\) iff (semantics heap clear modality) \(h_{1},s\models[\langle x\rangle:=\bot]p\) and \(h_{2},s\models[\langle x\rangle:=\bot]q\), for some \(h_{1},h_{2}\) such that \(h=h_{1}\uplus h_{2}\). Note: \(h=h_{1}\uplus h_{2}\) implies \(h[s(x):=\bot]=h_{1}[s(x):=\bot]\uplus h_{2}[s(x):=\bot]\), and, conversely, \(h[s(x):=\bot]=h_{1}\uplus h_{2}\) implies there exist \(h_{1}^{\prime},h_{2}^{\prime}\) such that \(h=h_{1}^{\prime}\uplus h_{2}^{\prime}\) and \(h_{1}=h_{1}^{\prime}[s(x):=\bot]\) and \(h_{2}=h_{2}^{\prime}[s(x):=\bot]\).
**E16**: \(h,s\models[\langle x\rangle:=\bot](p\twoheadrightarrow q)\) iff (semantics heap clear modality) \(h[s(x):=\bot],s\models p\twoheadrightarrow q\).
From here we proceed as follows. First we show that \(h,s\models((p\wedge x\not\hookrightarrow-)\twoheadrightarrow[\langle x\rangle:=\bot]q)\) and \(h,s\models\forall y([\langle x\rangle:=y]p\twoheadrightarrow[\langle x\rangle:=y]q)\) together imply \(h[s(x):=\bot],s\models p\twoheadrightarrow q\). Let \(h^{\prime}\) be disjoint from \(h[s(x):=\bot]\) and \(h^{\prime},s\models p\). We have to show that \(h[s(x):=\bot]\uplus h^{\prime},s\models q\). We distinguish the following two cases.
* First, let \(s(x)\in\mathit{dom}(h^{\prime})\). We then introduce \(s^{\prime}=s[y:=h^{\prime}(s(x))]\). We have \(h^{\prime},s^{\prime}\models p\) (since \(y\) does not occur in \(p\)), so it follows by the semantics of the heap update modality that \(h^{\prime}[s(x):=\bot],s^{\prime}\models[\langle x\rangle:=y]p\). Since \(h^{\prime}[s(x):=\bot]\) and \(h\) are disjoint (which clearly follows from the fact that \(h^{\prime}\) and \(h[s(x):=\bot]\) are disjoint), and since \(h,s^{\prime}\models[\langle x\rangle:=y]p\twoheadrightarrow[\langle x\rangle:=y]q\), we have that \(h\uplus(h^{\prime}[s(x):=\bot]),s^{\prime}\models[\langle x\rangle:=y]q\). Applying again the semantics of the heap update modality, we obtain \((h\uplus(h^{\prime}[s(x):=\bot]))[s(x):=s^{\prime}(y)],s^{\prime}\models q\). We then can conclude this case by observing that \(y\) does not occur in \(q\) and that \(h[s(x):=\bot]\uplus h^{\prime}=(h\uplus(h^{\prime}[s(x):=\bot]))[s(x):=s^{\prime}(y)]\).
* Next, let \(s(x)\not\in dom(h^{\prime})\). So \(h^{\prime}\) and \(h\) are disjoint, and thus (since \(h,s\models(p\wedge x\not\hookrightarrow-)\twoheadrightarrow[\langle x\rangle:=\bot]q\)) we have \(h\uplus h^{\prime},s\models[\langle x\rangle:=\bot]q\), from which we derive \((h\uplus h^{\prime})[s(x):=\bot],s\models q\) by the semantics of the heap clear modality. We then can conclude this case by the observation that \(h[s(x):=\bot]\uplus h^{\prime}=(h\uplus h^{\prime})[s(x):=\bot]\).
Conversely, assuming \(h[s(x):=\bot],s\models p\twoheadrightarrow q\), we first show that \(h,s\models(p\wedge x\not\hookrightarrow-)\twoheadrightarrow[\langle x\rangle:= \bot]q\) and then \(h,s\models\forall y([\langle x\rangle:=y]p\twoheadrightarrow[\langle x \rangle:=y]q)\).
* Let \(h^{\prime}\) be disjoint from \(h\) and \(h^{\prime},s\models p\wedge x\not\hookrightarrow-\). We have to show that \(h\uplus h^{\prime},s\models[\langle x\rangle:=\bot]q\), that is, \((h\uplus h^{\prime})[s(x):=\bot],s\models q\) (by the semantics of the heap clear modality). Clearly, \(h[s(x):=\bot]\) and \(h^{\prime}\) are disjoint, and so \(h[s(x):=\bot]\uplus h^{\prime},s\models q\) follows from our assumption. We then can conclude this case by the observation that \((h\uplus h^{\prime})[s(x):=\bot]=h[s(x):=\bot]\uplus h^{\prime}\), because \(s(x)\not\in dom(h^{\prime})\).
* Let \(h^{\prime}\) be disjoint from \(h\) and \(s^{\prime}=s[y:=n]\), for some \(n\) such that \(h^{\prime},s^{\prime}\models[\langle x\rangle:=y]p\). We have to show that \(h\uplus h^{\prime},s^{\prime}\models[\langle x\rangle:=y]q\). By the semantics of the heap update modality it follows that \(h^{\prime}[s(x):=n],s^{\prime}\models p\), that is, \(h^{\prime}[s(x):=n],s\models p\) (since \(y\) does not occur in \(p\)). Since \(h^{\prime}[s(x):=n]\) and \(h[s(x):=\bot]\) are disjoint, we derive from the assumption \(h[s(x):=\bot],s\models p\twoheadrightarrow q\) that \(h[s(x):=\bot]\uplus h^{\prime}[s(x):=n],s\models q\). Again by the semantics of the heap update modality we have that \(h\uplus h^{\prime},s^{\prime}\models[\langle x\rangle:=y]q\) iff \((h\uplus h^{\prime})[s(x):=n],s^{\prime}\models q\) (that is, \((h\uplus h^{\prime})[s(x):=n],s\models q\), because \(y\) does not occur in \(q\)). We then can conclude this case by the observation that \((h\uplus h^{\prime})[s(x):=n]=h[s(x):=\bot]\uplus h^{\prime}[s(x):=n]\).
\(\Box\)
We denote by \(\mathbf{E}\) the _rewrite system_ obtained from the equivalences **E1**–**E16** by orienting these equivalences from left to right, e.g., equivalence **E1** is turned into a rewrite rule \([S]\mathbf{false}\Rightarrow\mathbf{false}\). The following theorem states that the rewrite system \(\mathbf{E}\) is complete, that is, confluent and strongly normalizing. Its proof is straightforward (using standard techniques) and therefore omitted.
**Theorem 3.6** (Completeness of \(\mathbf{E}\)):
* **Normal form.** _Every standard formula_ \(p\) _of SL is in normal form (which means that it cannot be reduced by the rewrite system_ \(\mathbf{E}\)_)._
* **Local confluence.** _For any two reductions_ \(p\Rightarrow q_{1}\) _and_ \(p\Rightarrow q_{2}\) _(_\(p\) _a formula of DSL) there exists a DSL formula_ \(q\) _such that_ \(q_{1}\Rightarrow q\) _and_ \(q_{2}\Rightarrow q\)_._
* **Termination.** _There does not exist an infinite chain of reductions_ \(p_{1}\Rightarrow p_{2}\Rightarrow p_{3}\cdots\)_._
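To convey how the rewrite system \(\mathbf{E}\) operates, the following small Python sketch (ours, purely illustrative, and unrelated to the Coq artifact of Section 6) represents a fragment of the assertion language as an abstract syntax tree and pushes heap clear modalities inward using **E13**–**E15**; **E16** is omitted for brevity, and the clause distributing the modality over \(\wedge\) relies on the fact that the modalities are deterministic.

```python
# Illustrative fragment of DSL assertions with a heap clear modality.
from dataclasses import dataclass

class Assertion:
    pass

@dataclass
class Bool(Assertion):      # heap-independent Boolean condition b
    text: str

@dataclass
class PointsTo(Assertion):  # e 'hookrightarrow' e'
    e: str
    e2: str

@dataclass
class Neq(Assertion):       # e != e'
    e: str
    e2: str

@dataclass
class And(Assertion):       # p /\ q
    p: Assertion
    q: Assertion

@dataclass
class SepConj(Assertion):   # p * q
    p: Assertion
    q: Assertion

@dataclass
class Clear(Assertion):     # [<x> := bot] p
    x: str
    p: Assertion

def rewrite(a: Assertion) -> Assertion:
    """Push heap clear modalities inward, orienting E13-E15 left to right."""
    if isinstance(a, Clear):
        p = a.p
        if isinstance(p, Bool):         # E13: [<x>:=bot]b => b
            return p
        if isinstance(p, PointsTo):     # E14: => (x != e) /\ (e -> e')
            return And(Neq(a.x, p.e), p)
        if isinstance(p, SepConj):      # E15: distribute over *
            return SepConj(rewrite(Clear(a.x, p.p)), rewrite(Clear(a.x, p.q)))
        if isinstance(p, And):          # deterministic modalities distribute over /\
            return And(rewrite(Clear(a.x, p.p)), rewrite(Clear(a.x, p.q)))
        return a                        # normal form for this fragment
    if isinstance(a, And):
        return And(rewrite(a.p), rewrite(a.q))
    if isinstance(a, SepConj):
        return SepConj(rewrite(a.p), rewrite(a.q))
    return a

# [<x>:=bot]((y -> 0) * b)  rewrites to  ((x != y) /\ (y -> 0)) * b
print(rewrite(Clear("x", SepConj(PointsTo("y", "0"), Bool("b")))))
```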
We now show an example of the interplay between the modalities for heap update and heap clear. We want to derive
\[\{\forall x((x\not\hookrightarrow-)\to p)\}\ x:=\mathbf{cons}(0); \mathbf{dispose}(x)\ \{p\}\]
where statement \(x:=\mathbf{cons}(0);\mathbf{dispose}(x)\) simulates the so-called random assignment [9]: the program terminates with a value of \(x\) that is chosen non-deterministically. First we apply the axiom \(\mathbf{E8}\) for de-allocation to obtain
\[\{(x\hookrightarrow-)\wedge[\langle x\rangle:=\bot]p\}\ \mathbf{dispose}(x)\ \{p\}.\]
Next, we apply the axiom \(\mathbf{E7}\) for allocation to obtain
\[\{\forall x((x\not\hookrightarrow-)\rightarrow[\langle x\rangle:=0]((x \hookrightarrow-)\wedge[\langle x\rangle:=\bot]p))\}\]
\[x:=\mathbf{cons}(0)\]
\[\{(x\hookrightarrow-)\wedge[\langle x\rangle:=\bot]p\}.\]
Applying \(\mathbf{E10}\) (after pushing the heap update modality inside), followed by some basic first-order reasoning, we can reduce \([\langle x\rangle:=0](\exists y(x\hookrightarrow y))\) to **true**. So we obtain

\[\{\forall x((x\not\hookrightarrow-)\rightarrow[\langle x\rangle:=0][\langle x\rangle:=\bot]p)\}\] \[x:=\mathbf{cons}(0)\] \[\{(x\hookrightarrow-)\wedge[\langle x\rangle:=\bot]p\}.\]
In order to proceed we formalize the interplay between the modalities for heap update and heap clear by the following general equivalence:
\[[\langle x\rangle:=e][\langle x\rangle:=\bot]p\equiv[\langle x\rangle:=\bot]p\]
We then complete the proof by applying the sequential composition rule and consequence rule, using the above equivalence and the following axiomatization of the heap clear modality:
\[(x\not\hookrightarrow-)\wedge[\langle x\rangle:=\bot]p\equiv(x\not \hookrightarrow-)\wedge p\]
The above axiomatization can be extended in the standard manner to a program logic for sequential while programs (see [9]), which does not require the frame rule, nor any other adaptation rule besides the consequence rule. For recursive programs, however, one does need more adaptation rules; a further discussion about the use of the frame rule in a completeness proof for recursive programs is outside the scope of this paper.
## 4 Expressiveness of DSL
In this section, we illustrate the expressiveness of DSL in a completeness proof of the local mutation axiom and a novel strongest postcondition axiomatization.
### Completeness local axioms
We consider the completeness of the following local mutation axiom (completeness of the local axioms for the other standard basic instructions has already been established, as observed in the Introduction)
\[\{x\mapsto-\}\ [x]:=e\ \{x\mapsto e\}\]
The proof itself does not make use of the separating implication.
**Theorem 4.1** (Completeness local mutation axiom): _If \(\models\{p\}\ [x]:=e\ \{q\}\) then \(\{p\}\ [x]:=e\ \{q\}\) is derivable using the local mutation axiom, frame rule, and consequence rule._
**Proof.** The problem here is how to compute a 'frame' \(r\) for a given valid specification \(\{p\}\ [x]:=e\ \{q\}\) so that \(p\) implies \((x\mapsto-)*r\) and \((x\mapsto e)*r\) implies \(q\). We show here how the heap update modality can be used to describe such a frame. Let \(\models\{p\}\ [x]:=e\ \{q\}\) and \(r\) denote \(\exists y([\langle x\rangle:=y]p)\) for some fresh \(y\). By the local axiom and the frame rule, we first derive
\[\{(x\mapsto-)*r\}\ [x]:=e\ \{(x\mapsto e)*r\}.\]
Let \(h,s\models p\). To prove that \(h,s\models(x\mapsto-)*r\), it suffices to show that there exists a split \(h=h_{1}\uplus h_{2}\) such that \(h_{1},s\models(x\mapsto-)\) and \(h_{2},s[y:=n]\models[\langle x\rangle:=y]p\), for some \(n\). Since \(\models\{p\}\ [x]:=e\ \{q\}\) we have that \(s(x)\in\mathit{dom}(h)\). So we can introduce the split \(h=h_{1}\uplus h_{2}\) such that \(h_{1},s\models(x\mapsto-)\) and \(h_{2}=h[s(x):=\bot]\). By the semantics of the heap update modality it then suffices to observe that \(h_{2},s[y:=h(s(x))]\models[\langle x\rangle:=y]p\) if and only if \(h_{2}[s(x):=h(s(x))],s\models p\) (\(y\) does not appear in \(p\)), that is, \(h,s\models p\).
On the other hand, we have that \((x\mapsto e)\ast r\) implies \(q\): Let \(h,s\models(x\mapsto e)\ast r\). So there exists a split \(h=h_{1}\uplus h_{2}\) such that \(h_{1},s\models x\mapsto e\) and \(h_{2},s\models r\). Let \(n\) be such that \(h_{2},s[y:=n]\models[\langle x\rangle:=y]p\). By the semantics of the heap update modality again we have that \(h_{2},s[y:=n]\models[\langle x\rangle:=y]p\) if and only if \(h_{2}[s(x):=n],s\models p\) (here \(y\) does not appear in \(p\)). Since \(\models\{p\}\ [x]:=e\ \{q\}\) it then follows that \(h_{2}[s(x):=s(e)],s\models q\), that is, \(h,s\models q\) (note that \(h=h_{2}[s(x):=s(e)]\) because \(h(s(x))=s(e)\) and \(h_{2}=h[s(x):=\bot]\)). \(\Box\)
### Strongest postcondition axiomatization
Before we discuss a novel strongest postcondition axiomatization using the modalities of DSL, it should be noted that in general the semantics of program logics which require absence of certain failures gives rise to an asymmetry between weakest preconditions and strongest postconditions: For any statement \(S\) and postcondition \(q\) we have that \(\models\{\mbox{\bf false}\}\ S\ \{q\}\). However, for any precondition \(p\) which does not exclude failures, there does not exist _any_ postcondition \(q\) such that \(\models\{p\}\ S\ \{q\}\). We solve this by simply requiring that the given precondition does not give rise to failures (see below).
Figure 3 contains our novel strongest postcondition axiomatization SP-DSL, where the main novelty is in the use of the heap update and heap clear modalities in the axiomatization of the mutation, allocation, and de-allocation instruction. It is worthwhile to contrast, for example, the use of the heap clear modality to express freshness in the strongest postcondition axiomatization of the allocation instruction with the following traditional axiom (assuming that \(x\) does not occur free in \(p\)):
\[\{p\}\ x:=\mbox{\bf cons}(e)\ \{p\ast(x\mapsto e)\}\]
where freshness is enforced by the introduction of the separating conjunction (which as such increases the complexity of the postcondition). More specifically, we have the following instance of the allocation axiom in Figure 3 (also making use of that \(x\) does not appear in the precondition)
\[\{y\hookrightarrow 0\}\ x:=\mbox{\bf cons}(1)\ \{[\langle x\rangle:=\bot](y \hookrightarrow 0)\wedge(x\hookrightarrow 1)\}\]
Applying **E14** we obtain
\[\{y\hookrightarrow 0\}\ x:=\mbox{\bf cons}(1)\ \{y\neq x\wedge(y\hookrightarrow 0 )\wedge(x\hookrightarrow 1)\}\]
On the other hand, instantiating the above traditional axiom we obtain
\[\{y\hookrightarrow 0\}\ x:=\mbox{\bf cons}(1)\ \{(y\hookrightarrow 0)\ast(x \mapsto 1)\}\]
which is implicit and requires unraveling the semantics of the separating conjunction. Using the heap clear modality we thus obtain a basic assertion in predicate logic which provides an explicit but simple account of aliasing.
Figure 3: Strongest postcondition axioms of separation logic (SP-DSL), where \(y\) is fresh everywhere and \(x\) does not occur in \(e\) in case of \(x:=\mbox{\bf cons}(e)\).
**Theorem 4.2** (Soundness and completeness SP-DSL): _For any basic instruction \(S\), we have \(\models\{p\}\ S\ \{q\}\) if and only if \(\{p\}\ S\ \{q\}\) is derivable from the axioms in SP-DSL (Figure 3) and (a single application of) the rule of consequence._
Proof.: We showcase the soundness and completeness of the strongest postcondition axiomatization of allocation (soundness and completeness of the strongest postconditions for the mutation and de-allocation instructions follow in a straightforward manner from the semantics of the heap update modality).
* \(\models\{p\}\ x:=\mathbf{cons}(e)\ \{[\langle x\rangle:=\bot](\exists y([x:=y]p))\wedge x\hookrightarrow e\}\): Let \(h,s\models p\). We have to show that \(h[n:=s(e)],s[x:=n]\models[\langle x\rangle:=\bot](\exists y([x:=y]p))\wedge x\hookrightarrow e\), for \(n\not\in\mathit{dom}(h)\). By definition \(h[n:=s(e)],s[x:=n]\models x\hookrightarrow e\). By the semantics of the heap clear modality and existential quantification, it then suffices to show that \(h[n:=\bot],s[x:=n][y:=s(x)]\models[x:=y]p\), which by the semantics of the simple assignment modality boils down to \(h,s[y:=s(x)]\models p\) (note that \(n\not\in\mathit{dom}(h)\)), that is, \(h,s\models p\) (\(y\) does not appear in \(p\)), which holds by assumption.
* \(\models\{p\}\ x:=\mathbf{cons}(e)\ \{q\}\) implies \(\models([\langle x\rangle:=\bot](\exists y([x:=y]p))\wedge(x\hookrightarrow e))\to q\): Let \(h,s\models[\langle x\rangle:=\bot](\exists y([x:=y]p))\wedge(x\hookrightarrow e)\). We have to show that \(h,s\models q\). By the semantics of the heap clear modality we derive from the above assumption that \(h[s(x):=\bot],s\models\exists y([x:=y]p)\). Let \(h[s(x):=\bot],s[y:=n]\models[x:=y]p\), for some \(n\). It follows from the semantics of the simple assignment modality that \(h[s(x):=\bot],s[x:=n]\models p\) (\(y\) does not appear in \(p\)). Since \(s(x)\not\in\mathit{dom}(h[s(x):=\bot])\), we have that \(\langle x:=\mathbf{cons}(e),h[s(x):=\bot],s[x:=n]\rangle\Rightarrow(h[s(x):=s[x:=n](e)],s)\). Since we can assume without loss of generality that \(x\) does not occur in \(e\), we have that \(s[x:=n](e)=s(e)\), and so from the assumption that \(h,s\models x\hookrightarrow e\) we derive that \(h[s(x):=s[x:=n](e)]=h\). From \(\models\{p\}\ x:=\mathbf{cons}(e)\ \{q\}\) we then conclude that \(h,s\models q\).
## 5 Extensions
A straightforward extension concerns the general mutation instruction \([e]:=e^{\prime}\), which allows the use of an arbitrary arithmetic expression \(e\) to denote the updated location. We can simulate this by the statement \(x:=e;\ [x]:=e^{\prime}\), where \(x\) is a fresh variable. Applying the modalities we derive the following axiom
\[\{(e\hookrightarrow-)\wedge[x:=e][\langle x\rangle:=e^{\prime}]p\}\ [e]:=e^{\prime}\ \{p\}\]
where \(x\) is a fresh variable.
Another straightforward extension concerns the allocation \(x:=\mathbf{cons}(e)\) in the case where \(x\) does occur in \(e\). The instruction \(x:=\mathbf{cons}(e)\) can be simulated by \(y:=x\); \(x:=\mathbf{cons}(e[y/x])\), where \(y\) is a fresh variable. Applying the sequential composition rule and the axiom for basic assignments, it is straightforward to derive the following generalized backwards allocation axiom:
\[\{\forall y((y\not\hookrightarrow-)\rightarrow[\langle y\rangle:=e][x:=y]p)\}\ x:=\mathbf{cons}(e)\ \{p\}\]
where \(y\) is fresh.
Reynolds introduced in [19] the allocation instruction \(x:=\mathbf{cons}(\bar{e})\), which allocates a consecutive part of the memory for storing the values of \(\bar{e}\): its semantics is described by
\[\langle x:=\mathbf{cons}(\bar{e}),h,s\rangle\Rightarrow(h[\bar{m}:=s(\bar{e})],s[x:=m_{1}])\]
where \(\bar{e}=e_{1},\ldots,e_{n}\), \(\bar{m}=m_{1},\ldots,m_{n}\), \(m_{i+1}=m_{i}+1\), for \(i=1,\ldots,n-1\), \(\{m_{1},\ldots,m_{n}\}\cap\mathit{dom}(h)=\emptyset\), and, finally,
\[h[\bar{m}:=s(\bar{e})](k)=\left\{\begin{array}{ll}h(k)&\mbox{if $k\not\in\{m_{1}, \ldots,m_{n}\}$}\\ s(e_{i})&\mbox{if $k=m_{i}$ for some $i=1,\ldots,n$.}\end{array}\right.\]
Let \(\bar{e}^{\prime}\) denote a sequence of expressions \(e^{\prime}_{1},\ldots,e^{\prime}_{n}\) such that \(e^{\prime}_{1}\) denotes the variable \(x\) and \(e^{\prime}_{i}\) denotes the expression \(x+(i-1)\), for \(i=2,\ldots,n\). The storage of the values of \(e_{1},\ldots,e_{n}\) can then be modeled by a sequence of heap update modalities \([\langle e^{\prime}_{i}\rangle:=e_{i}]\), for \(i=1,\ldots,n\). We abbreviate such a sequence by \([\langle\bar{e}^{\prime}\rangle:=\bar{e}]\). Assuming that \(x\) does not occur in one of the expressions \(\bar{e}\) (this restriction can be lifted as described above), we have the following generalization of the above backwards allocation axiom
\[\{\forall x\Big(\Big(\bigwedge_{i=1}^{n}(e^{\prime}_{i}\not\hookrightarrow-)\Big)\to[\langle\bar{e}^{\prime}\rangle:=\bar{e}]p\Big)\}\ x:=\mathbf{cons}(\bar{e})\ \{p\}\]
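To make the allocation semantics concrete, the following Python sketch (ours; heaps are modeled as finite dictionaries, and picking the smallest fresh block is merely one way of resolving the nondeterministic choice of \(\bar{m}\)) implements the transition rule for \(x:=\mathbf{cons}(\bar{e})\):

```python
def cons(heap: dict, store: dict, x: str, values: list):
    """Sketch of x := cons(e1,...,en): allocate n consecutive locations
    m, m+1, ..., m+n-1 with {m,...,m+n-1} disjoint from dom(heap),
    store the (already evaluated) values there, and bind x to m."""
    n = len(values)
    m = 0
    while any(m + i in heap for i in range(n)):   # search a fresh block
        m += 1
    new_heap = dict(heap)
    for i, v in enumerate(values):                # h[m_i := s(e_i)]
        new_heap[m + i] = v
    new_store = dict(store)
    new_store[x] = m                              # s[x := m_1]
    return new_heap, new_store

# Allocate two consecutive cells holding 7 and 8 next to an existing cell 0.
h, s = cons({0: 42}, {"y": 0}, "x", [7, 8])
print(h, s)   # {0: 42, 1: 7, 2: 8} {'y': 0, 'x': 1}
```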
#### Recursive predicates
Next we illustrate the extension of our approach to recursive predicates for reasoning about a linked list. Assuming a set of user-defined predicates \(r(x_{1},\ldots,x_{n})\) of arity \(n\), we introduce corresponding basic assertions \(r(e_{1},\ldots,e_{n})\) which are interpreted by (the least fixed point of) a system of recursive predicate definitions \(r(x_{1},\ldots,x_{n}):=p\), where the user-defined predicates only occur positively in \(p\).
If for any recursive definition \(r(x_{1},\ldots,x_{n}):=p\) only the formal parameters \(x_{1},\ldots,x_{n}\) occur free in \(p\), we can simply define \([x:=e]r(e_{1},\ldots,e_{n})\) by \(r(e_{1}[e/x],\ldots,e_{n}[e/x])\). However, allowing global variables in recursive predicate definitions does affect the interpretation of these definitions. As a very simple example, given \(r(y):=x=1\), clearly \(\{r(y)\}\ x:=0\ \{r(y)\}\) is invalid (and so we cannot simply define \([x:=0]r(y)\) by \(r(y[0/x])\)). Furthermore, substituting the parameters of \(r\) clearly does not make sense for modalities with heap modifications (such as mutation, allocation, etc.): as subformulas may depend on the heap, these may require alias analysis _in the definition of \(r\)_.
We illustrate how our dynamic logic works with recursively defined predicates on a characteristic linked list example. In particular, let \(r\) be the recursively defined _reachability_ predicate
\[r(x,y):=x=y\vee\exists z((x\mapsto z)*r(z,y)).\]
We shall prove \(\{r(\mathit{first},y)\}\ \mathit{first}:=\mathbf{cons}(\mathit{first})\ \{r(\mathit{first},y)\}\). To do so, we model \(\mathit{first}:=\mathbf{cons}(\mathit{first})\) by \(u:=\mathit{first};\ \mathit{first}:=\mathbf{cons}(u)\), for some fresh variable \(u\). Thus it is sufficient to show
\[\{r(\mathit{first},y)\}\ u:=\mathit{first};\ \mathit{first}:=\mathbf{cons}(u)\ \{r( \mathit{first},y)\}.\]
We first calculate the weakest precondition of the last assignment: \([\mathit{first}:=\mathbf{cons}(u)]r(\mathit{first},y)\). Using equivalence (**E7**) of Lemma 3.3 we obtain \(\forall\mathit{first}((\mathit{first}\not\hookrightarrow-)\to[\langle\mathit{first}\rangle:=u]r(\mathit{first},y))\).
Next, to simplify the modal subformula \([\langle\mathit{first}\rangle:=u]r(\mathit{first},y)\), we first unfold the definition of \(r\), obtaining \(\mathit{first}=y\vee\exists z((\mathit{first}\mapsto z)*r(z,y))\). By Lemma 3.4 (**E11**), \([\langle\mathit{first}\rangle:=u]((\mathit{first}\mapsto z)*r(z,y))\) reduces to the disjunction of \(((\mathit{first}\mapsto z\wedge\mathit{first}\not\hookrightarrow-)*[\langle\mathit{first}\rangle:=u]r(z,y))\) and \(([\langle\mathit{first}\rangle:=u](\mathit{first}\mapsto z)*(r(z,y)\wedge\mathit{first}\not\hookrightarrow-))\). In the first disjunct, the left-hand side of the separating conjunction asserts that \(\mathit{first}\) is allocated (and points to \(z\)) and that simultaneously \(\mathit{first}\) is not allocated. This clearly is false in every heap, so the whole disjunct reduces to \(\mathbf{false}\). Simplifying the second disjunct (reducing the modality with equivalence (**E10**) of Lemma 3.4) and applying standard logical equivalences yields that the whole subformula is equivalent to
\[\mathit{first}=y\vee(r(u,y)\wedge(\mathit{first}\not\hookrightarrow-)).\]
Applying the allocation axiom and the consequence rule, we obtain
\[\{\forall\mathit{first}((\mathit{first}\not\hookrightarrow-)\to( \mathit{first}=y\lor r(u,y)))\}\]
\[\mathit{first}:=\mathbf{cons}(u)\]
\[\{r(\mathit{first},y)\}.\]
Renaming _first_ to the fresh variable \(f\) does not affect \(r\), so
\[\{\forall f((f\not\hookrightarrow-)\rightarrow(f=y\lor r(u,y)))\}\] \[\mbox{\it first}:=\mbox{\bf cons}(u)\] \[\{r(\mbox{\it first},y)\}\]
can be derived. Also, substituting \(\mathit{first}\) for \(u\) does not affect the definition of \(r\). It then suffices to observe that \(r(\mathit{first},y)\) (trivially) implies \(\forall f((f\not\hookrightarrow-)\rightarrow(f=y\lor r(\mathit{first},y)))\).
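An operational reading of the reachability predicate can be sketched in a few lines of Python (ours; heaps are finite dictionaries from locations to values). The separating conjunction in the recursive clause peels the cell at \(x\) off the heap, so each cell is consumed at most once and the unfolding terminates even on cyclic heaps:

```python
def reaches(heap: dict, x: int, y: int) -> bool:
    """Unfold r(x,y) := x = y \\/ exists z((x |-> z) * r(z,y)).
    The '*' forces the split {x: heap[x]} (+) rest, so the recursion
    follows the unique pointer chain and never revisits a cell."""
    if x == y:                 # base case: x = y is heap-independent
        return True
    if x not in heap:          # (x |-> z) requires x to be allocated
        return False
    rest = {k: v for k, v in heap.items() if k != x}   # remove cell x
    return reaches(rest, heap[x], y)

linked_list = {1: 2, 2: 3, 3: 4}   # 1 -> 2 -> 3 -> 4
print(reaches(linked_list, 1, 4))  # True
print(reaches(linked_list, 2, 1))  # False
```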
## 6 Formalization in Coq
The main motivation behind formalizing results in a proof assistant is to rigorously check hand-written proofs. For our formalization we used the dependently-typed calculus of inductive constructions as implemented by the Coq proof assistant. We have used no axioms other than the axiom of function extensionality (for every two functions \(f,g\) we have that \(f=g\) if \(f(x)=g(x)\) for all \(x\)). This means that we work with an underlying intuitionistic logic: we have not used the axiom of excluded middle for reasoning classically about propositions. However, the decidable propositions (propositions \(P\) for which the excluded middle \(P\vee\neg P\) can be proven) allow for a limited form of classical reasoning.
We formalize the basic instructions of our programming language (assignment, look-up, mutation, allocation, and deallocation) and the semantics of basic instructions. For Boolean and arithmetic expressions we use a shallow embedding, so that those expressions can be directly given as a Coq term of the appropriate type (with a coincidence condition assumed, i.e. that values of expressions depend only on finitely many variables of the store).
There are two approaches in formalizing the semantics of assertions: shallow and deep embedding. We have taken both approaches. In the first approach, the shallow embedding of assertions, we define assertions of DSL by their extension of satisfiability (i.e. the set of heap and store pairs in which the assertion is satisfied), that must satisfy a coincidence condition (assertions depend only on finitely many variables of the store) and a stability condition (see below). The definition of the modality operator follows from the semantics of programs, which includes basic control structures such as the **while**-loop. In the second approach, the deep embedding of assertions, assertions are modeled using an inductive type and we explicitly introduce two meta-operations on assertions that capture the heap update and heap clear modality. We have omitted the clauses for **emp** and \((e\mapsto e^{\prime})\), since these could be defined as abbreviations, and we restrict to the basic instructions.
In the deep embedding we have no constructor corresponding to the program modality \([S]p\). Instead, two meta-operations denoted \(p[\langle x\rangle:=e]\) and \(p[\langle x\rangle:=\bot]\) are defined recursively on the structure of \(p\). Crucially, we have formalized and proved the following lemmas (the details are almost the same as showing that the equivalences hold in the shallow embedding, Lemmas 3.4 and 3.5):
**Lemma 6.1** (Heap update substitution lemma): \(h,s\models p[\langle x\rangle:=e]\) _iff \(h[s(x):=s(e)],s\models p\)._
**Lemma 6.2** (Heap clear substitution lemma): \(h,s\models p[\langle x\rangle:=\bot]\) _iff \(h[s(x):=\bot],s\models p\)._
By also formalizing a deep embedding, we show that the modality operator can be defined entirely on the meta-level by introducing meta-operations on formulas that are recursively defined by the structure of assertions: this captures Theorem 3.6. The shallow embedding, on the other hand, makes it easier to show that our approach can be readily extended to complex programs including **while**-loops.
In both approaches, the semantics of assertions is classical, although we work in an intuitionistic meta-logic. We do this by employing a double negation translation, following the set-up by R. O'Connor [14]. In particular, we have that our satisfaction relation \(h,s\models p\) is stable, i.e. \(\neg\neg(h,s\models p)\) implies \(h,s\models p\). This allows us to do classical reasoning on the image of the higher-order semantics of our assertions.
The source code of our formalization accompanies this paper as a digital artifact (which includes the files shallow/Language.v, shallow/Proof.v, deep/Heap.v, deep/Language.v, and deep/Classical.v). The artifact consists of the following files:
* shallow/Language.v: Provides a shallow embedding of Boolean expressions and arithmetic expressions, and a shallow embedding of our assertion language, as presented in the prequel.
* shallow/Proof.v: Provides proofs of the equivalences (**E1**–**E16**), and additionally standard equivalences for modalities involving complex programs.
* deep/Heap.v: Provides an axiomatization of heaps as partial functions.
* deep/Language.v: Provides a shallow embedding of Boolean expressions and arithmetic expressions, and a deep embedding of our assertion language, on which we inductively define the meta operations of heap update and heap clear. We finally formalize Hoare triples and proof systems using weakest precondition and strongest postcondition axioms for the basic instructions.
* deep/Classical.v: Provides the classical semantics of assertions, and the strong partial correctness semantics of Hoare triples. Further it provides proofs of substitution lemmas corresponding to our meta-operators. Finally, it provides proofs of the soundness and completeness of the aforementioned proof systems.
## 7 Conclusion and related work
To the best of our knowledge no other works exist that study dynamic logic extensions of SL. We have shown how we can combine the standard programming logics in SL with a new DSL axiomatization of both weakest preconditions and strongest postconditions. These new axiomatizations in DSL have the so-called property of _gracefulness_:6 any first-order postcondition gives rise to a first-order weakest precondition (for any basic instruction). This is a property that existing axiomatizations of SL, such as those given by C. Bannister, P. Höfner and G. Klein [4], and M. Faisal Al Ameen and M. Tatsuta [9], lack. (See also [22].) As a simple example, in our approach \([[x]:=0](y\hookrightarrow z)\) can be resolved to the first-order formula
Footnote 6: The term ‘graceful’, coined by J.C. Blanchette [23], comes from higher-order automated theorem proving where it means that a higher-order prover does not perform significantly worse on first-order problems than existing first-order provers that lack the ability to reason about higher-order problems.
\[(x\hookrightarrow-)\wedge((y=x\wedge z=0)\vee(y\neq x\wedge y \hookrightarrow z))\]
by applying the above equivalences **E6** and **E10**. The standard rule for backwards reasoning in [20] however gives the weakest precondition:
\[(x\mapsto-)\ast((x\mapsto 0)\twoheadrightarrow(y\hookrightarrow z))\]
Despite their different formulations, both formulas characterize \([[x]:=0](y\hookrightarrow z)\), and thus must be equivalent. In fact, the equivalence has been proven in our Coq formalization (Section 6). Surprisingly, this however exceeds the capability of all the automated SL provers in the benchmark competition for SL [21]. In particular, only the CVC4-SL tool [18] supports the fragment of SL that includes the separating implication connective. However, from our own experiments with that tool, we found that it produces an incorrect counter-example and reported this as a bug to one of the maintainers of the project [17]. In fact, the latest version, CVC5-SL, reports the same input as 'unknown', indicating that the tool is incomplete. Furthermore, we have investigated whether the equivalence of these formulas can be proven in an interactive tool for reasoning about SL: the Iris project [12]. However, also in that system it is not possible to show the equivalence of these assertions, at least not without adding additional axioms to its underlying model [13]. On the other hand, the equivalence between the above two formulas can be expressed in quantifier-free separation logic, for which a complete axiomatization of all valid formulas has been given in [7].
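The equivalence can also be probed by brute force over a small finite domain. The following self-contained Python sketch (ours, and of course no substitute for the Coq proof) enumerates all heaps and stores over three locations and values and checks that the two preconditions agree; because \(x\mapsto-\) and \(x\mapsto 0\) are precise, the split of the separating conjunction and the quantification of the separating implication each collapse to a single candidate heap:

```python
from itertools import product

LOCS = VALS = range(3)    # a small domain suffices to probe the claim

def heaps():
    """All heaps: partial functions LOCS -> VALS, encoded as dicts."""
    for choice in product([None, *VALS], repeat=len(LOCS)):
        yield {l: v for l, v in zip(LOCS, choice) if v is not None}

def dsl_wp(h, s):
    """(x -> -) /\\ ((y = x /\\ z = 0) \\/ (y != x /\\ y -> z))"""
    return s['x'] in h and ((s['y'] == s['x'] and s['z'] == 0) or
                            (s['y'] != s['x'] and h.get(s['y']) == s['z']))

def classical_wp(h, s):
    """(x |-> -) * ((x |-> 0) -* (y -> z)).
    The only sub-heap of h satisfying (x |-> -) is {x: h[x]}, and the
    only heap at all satisfying (x |-> 0) is {x: 0}; this collapses both
    the '*' split and the '-*' quantifier to one case."""
    if s['x'] not in h:
        return False
    h2 = {l: w for l, w in h.items() if l != s['x']}   # h = {x: h[x]} (+) h2
    extended = {**h2, s['x']: 0}                       # h2 (+) {x: 0}
    return extended.get(s['y']) == s['z']              # y -> z afterwards

assert all(dsl_wp(h, s) == classical_wp(h, s)
           for a, b, c in product(VALS, repeat=3)
           for s in [{'x': a, 'y': b, 'z': c}]
           for h in heaps())
print("the two preconditions agree on the whole finite domain")
```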
In general, the calculation of \([S]p\) by means of a compositional analysis of \(p\), in contrast with the standard approach, does not generate additional _nesting_ of the separating connectives. On the other hand, the compositional analysis generates a case distinction in the definitions of \([\langle x\rangle:=e](p\ast q)\) and \([\langle x\rangle:=\bot](p\twoheadrightarrow q)\). How the combined application of the two approaches works in practice needs to be further investigated. Such an investigation will also involve the use of the modalities for the basic instructions in the generation of the verification conditions of a program (as is done for example in the KeY tool [1] for the verification of Java programs), which makes it possible to _postpone_ and _optimize_ their actual application. For example, the equivalence
\[[x:=e][\langle y\rangle:=e^{\prime}]p\equiv[\langle y\rangle:=e^{\prime}[e/x]] [x:=e]p\]
allows us to resolve the simple assignment modality by 'pushing it inside'.
Other works that investigate weakest preconditions in SL are briefly discussed below. For example, [3] investigates both weakest preconditions and strongest postconditions in SL, also obtained through a transformational approach. However, the transformation uses other separating connectives (like _septraction_), and thus is not graceful. On the other hand, in [13] an alternative logic is introduced which, instead of the separating connectives, extends standard first-order logic with an operator \(\mathit{Sp}(p)\) which captures the parts of the heap the (first-order) formula \(p\) depends on. Thus also [13] goes beyond first-order, and is not graceful. But the main motivation of that work coincides with ours: avoiding unnecessary reasoning about the separating connectives.
Our artifact formalizes the syntax and semantics of programs and assertions of SL. We plan to further extend our formalization to support practical program verification, and investigate how to integrate our approach in Iris [11]: we will consider how DSL can also work for a shallow embedding of SL. Then the generated verification conditions require a proof of the validity of corresponding assertions in SL, which can be discharged by providing a proof directly in Coq. Further, we will investigate the application of DSL to concurrent SL [4] and permission-based SL [2].
|
2309.16584 | Collaborative Distributed Machine Learning | Various collaborative distributed machine learning (CDML) systems, including
federated learning systems and swarm learning systems, with different key
traits were developed to leverage resources for development and use of machine
learning (ML) models in a confidentiality-preserving way. To meet use case
requirements, suitable CDML systems need to be selected. However, comparison
between CDML systems regarding their suitability for use cases is often
difficult. This work presents a CDML system conceptualization and CDML
archetypes to support comparison of CDML systems and introduce scientific and
practical audiences to the principal functioning and key traits of CDML
systems. | David Jin, Niclas Kannengießer, Sascha Rank, Ali Sunyaev | 2023-09-28T16:44:18Z | http://arxiv.org/abs/2309.16584v3 | # A Design Toolbox for the Development of Collaborative Distributed Machine Learning Systems
###### Abstract
To leverage data for the sufficient training of machine learning (ML) models from multiple parties in a confidentiality-preserving way, various collaborative distributed ML (CDML) system designs have been developed, for example, to perform assisted learning, federated learning, and split learning. CDML system designs show different traits, including high agent autonomy, ML model confidentiality, and fault tolerance. Facing a wide variety of CDML system designs with different traits, it is difficult for developers to design CDML systems with traits that match use case requirements in a targeted way. However, inappropriate CDML system designs may result in CDML systems failing their envisioned purposes. We developed a CDML design toolbox that can guide the development of CDML systems. Based on the CDML design toolbox, we present CDML system archetypes with distinct key traits that can support the design of CDML systems to meet use case requirements.
collaborative distributed machine learning (CDML), privacy-enhancing technologies (PETs), assisted learning, federated learning (FL), split learning, swarm learning, multi-agent systems (MAS).
## I Introduction
The training of machine learning (ML) models requires sufficient training data in terms of quantity and quality to make meaningful predictions with little generalization error. Sufficient training data is, however, seldom available from a single party (e.g., a bank or a hospital), which can prevent the adequate training of ML models [1]. Inadequate training of ML models can result in large generalization errors, rendering ML models ineffective [2].
To reduce generalization errors of ML models, developers request training data from multiple third parties. Training data retrievals from third parties are often subject to compliance, social, and technical challenges [3, 4, 5] that hinder the acquisition of sufficient training data. For example, strict data protection laws and regulations prohibit the disclosure of specific kinds of data, such as personal data by the General Data Protection Regulation of the European Union [6] and organizational data by the Healthcare Insurance Portability and Accountability Act of the USA [7]. From a social perspective, privacy behaviors of individuals restrict information flows to third parties based on personal preferences [8], preventing access to their training data. Insufficient computing resources inhibit the transfer of large data sets from data centers to developers in an acceptable time [3, 4]. To reduce generalization errors of ML models by using training data from multiple parties, an ML paradigm is required that solves those challenges.
Collaborative distributed ML (CDML) is an ML paradigm that can be implemented to overcome, in particular, compliance and technical challenges in using data from multiple parties to train ML models [9, 10, 11, 12, 13, 14]. In CDML systems, such as federated learning systems [10], split learning systems [11], and swarm learning systems [14], each party operates at least one quasi-autonomous agent (referred to as agent in the following). Agents in CDML systems train (parts of) ML models on their local training data and self-controlled compute in a distributed manner. Agents only share their locally computed training results (interim results) with other agents, for example, gradients [15], activations [11], and (pseudo-)residuals [12]. Reconstructing training data from interim results is commonly difficult [9]. Using interim results received from other agents, agents improve their local (parts of) ML models. Following the CDML paradigm, parties can keep control over their training data, which can help solve compliance challenges. Moreover, CDML can help to solve technical challenges because large packages of training data are not transferred to single parties to train ML models, saving bandwidth. Furthermore, computational resources for training ML models are distributed across multiple agents, which decreases the amount of computational resources a single party must possess to train ML models.
The potential of CDML to leverage large training data quantities in a confidentiality-preserving and resource-efficient way has sparked enormous interest in practice and research for various use cases with different requirements for CDML systems. For instance, effective next-word prediction in virtual smartphone keyboards requires language models to be trained on a large quantity of heterogeneous training data representative of future user inputs. To meet this requirement, CDML systems must be scalable to involve millions [16] or even billions of agents [10]. Another CDML use case is the prediction of financial risks in portfolio management [17, 18]. Financial institutions rely on ML models to predict investment risks in portfolio management. As customers pay for portfolio management, such ML models are core assets to financial institutions. To protect such core assets, CDML systems must enable collaborative training of ML models
without disclosing ML models to competitors.
To meet different use case requirements, practice and research developed specialized CDML system designs. For instance, federated learning systems are scalable to engage billions of agents to train ML models for next-word prediction [19]. Assisted learning systems are unsuitable for this purpose due to the sequential processing of interim results [17]. Conversely, assisted learning seems to be suitable for training ML models for portfolio management because ML model confidentiality is protected in the learning process. Federated learning requires agents to disclose ML models and, thus, is unsuitable for use cases requiring ML model confidentiality. Developers need to understand how envisioned traits of CDML systems (e.g., high scalability, ML model confidentiality) can be achieved by designing CDML systems in a targeted manner.
The proliferation of a wide variety of specialized CDML system designs introduced a large number of design options (e.g., regarding the structure of interim results and the parts of ML models disclosed to other agents) that constitute the CDML system design space. Developers must select and combine design options from the CDML system design space to design CDML systems with traits that meet use case requirements (e.g., high scalability, ML model confidentiality, and high robustness of the training process). The targeted selection and combination of design options requires developers to thoroughly understand the CDML system design space and the traits arising from the implementation of design options in CDML systems. An insufficient understanding of the CDML system design space can lead developers to select design options that can cause CDML systems to fail their purposes, for example, when ML models for portfolio management are inadvertently leaked in unsuitable training processes. However, literature on CDML systems is scattered; thus, the CDML system design space remains unclear, and so does the question of how envisioned key traits can be achieved through targeted CDML system designs. To support the targeted design of CDML systems suitable for use cases, we ask the following research questions:
_RQ1: What does the CDML system design space look like?_
_RQ2: What are the key traits of principal CDML system designs?_
To answer our research questions, we applied a three-step research approach. First, we developed the CDML design toolbox, which is a conceptualization of the CDML system design space. The CDML design toolbox specifies the fundamentals of CDML systems (e.g., agent roles and their interactions) and design options for the customization of CDML systems (e.g., combinations of agent roles in single agents, communication paths between agents, and types of interim results). For the conceptualization, we analyzed literature on CDML and developed agent-based models in the schemes presented in the Gaia methodology [20]. These schemes are commonly used to develop agent-based models that can serve as blueprints for implementing distributed software systems, such as CDML systems. Then, we tested the validity of the CDML design toolbox by modeling CDML system designs using the CDML design toolbox. Second, we developed CDML archetypes based on commonalities and differences between the modeled CDML systems. Third, we reviewed publications on CDML system designs to extract key traits of the CDML archetypes.
This work has three principal contributions to practice and research. First, by presenting the CDML design toolbox, we offer a consolidated design knowledge base of CDML systems that introduces the main design commonalities of CDML systems and offers design options for the customization of CDML system designs to meet use case requirements. This consolidation of previously scattered design knowledge in agent-based models (e.g., the roles model, the interactions model) facilitates the application of the Gaia methodology for systematically designing custom CDML systems. Moreover, by presenting design options implemented in CDML system designs, the CDML design toolbox helps to compare CDML system designs systematically. Second, by showcasing CDML archetypes, we inform of combinations of design options commonly used in practice and research. The CDML archetypes can be refined to develop blueprints of CDML systems tailored to use cases using the CDML design toolbox, which facilitates designing CDML systems. Third, by presenting key traits of CDML archetypes, we support developers in understanding how design options can be leveraged to achieve specific key traits. By using the CDML archetypes and their key traits, developers are enabled to evaluate CDML system designs in their suitability for use cases before implementing the designs. Thereby, we support the targeted design of CDML systems for use cases.
The remainder of this work is structured into six sections. First, we explain the foundations of CDML, related research on CDML systems, and introduce basic concepts of multi-agent systems (MAS). Second, we describe how we developed the CDML design toolbox, including a brief introduction to the Gaia methodology [20]. Moreover, we describe how we developed CDML archetypes using the CDML design toolbox and how we identified their key traits. Third, we present the CDML design toolbox. Fourth, we describe CDML archetypes and explain how different combinations of design options can lead to key traits of CDML systems. Fifth, we discuss our principal findings and describe the contributions and limitations of this work. Moreover, we give an outlook for future research directions. We conclude with a brief summary of this work and our personal takeaways.
## II Background and Related Research
### _Collaborative Distributed Machine Learning_
CDML combines the ML approaches of collaborative ML (CML) and distributed ML (DML). Leveraging training data from various parties is the focus of CML [21, 22, 23]. In CML systems, training data from multiple parties is used in a centralized or siloed way. In centralized CML, agents send their local data to a central data server that various agents can access to train ML models using the shared training data. To preserve training data confidentiality, data may only be provided to the central data server in encrypted form. The cryptographic techniques used (e.g., homomorphic encryption [23, 24]) allow agents to train ML models on the encrypted data while the plain training data remains
confidential. However, the cryptographic techniques will likely lead the centrally controlled computing system to consume more resources for the ML model training [25]. Overall, agents in centralized CML depend on central data servers. Crashes of central data servers can lead such CML systems to failure.
Distributed ML was developed to accelerate the training of large ML models, such as deep learning models, by distributing training tasks to multiple agents that train (parts of) ML models in parallel. Distributed ML systems can train ML models in two ways [26, 27, 28]: data parallel and model parallel. In data parallel training, partitions of the entire training data set are passed to agents. Each agent trains the same ML model on individual subsets of the whole training data set. In model parallel training, each agent uses identical data but only trains a part of the ML model.
In preparation for DML, training data is usually gathered by a central party that sets up the DML system (e.g., in computing clusters). The central party then identically distributes the gathered training data across agents to achieve a uniform workload for each agent. The uniform workload distribution aims to keep the idle times of agents low so that the ML model training is performed with high computational efficiency [26].
The training process in DML is often coordinated by a central server, called parameter server [29, 30, 31]. After the local training of the ML model, agents transmit their ML model updates to the parameter server. The parameter server stores ML model updates and offers the latest parameters to agents. Agents fetch the parameters to proceed with the local training of the ML model.
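As a minimal illustration of the parameter-server pattern, consider the following Python sketch (ours; a least-squares gradient on synthetic data stands in for arbitrary local training, and all names are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic task: four agents each hold a private shard of noisy linear data.
w_true = np.array([2.0, -1.0])
shards = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    shards.append((X, X @ w_true + 0.1 * rng.normal(size=50)))

w = np.zeros(2)                              # parameters kept by the server
for step in range(200):
    # Each agent fetches w, computes a gradient on its local shard only,
    # and transmits the gradient (not the data) to the parameter server.
    grads = [2 * X.T @ (X @ w - y) / len(y) for X, y in shards]
    w -= 0.05 * np.mean(grads, axis=0)       # server aggregates and updates
print(w)                                     # close to w_true
```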
An alternative to using parameter servers in DML is all-reduce [28, 32, 33]. In all-reduce, all agents have similar roles, thus executing identical tasks. The identical execution of tasks by all agents makes central parameter servers obsolete. Each agent aggregates training results and distributes them to other agents in the DML system. Any agent is notified about ML model updates to proceed with the local training of the latest version of ML models.
In summary, CML centers on the sharing and collaborative use of training data, while DML centers on performance improvements in training ML models. However, DML hardly contributes to overcoming the legal and social challenges related to leveraging training data from multiple parties in a confidentiality-preserving way.
The combination of principles of CML (e.g., leveraging training data from various parties) and DML (e.g., the distributed execution of ML tasks across multiple agents) forms the foundation for CDML. In CDML systems, trainer agents receive ML tasks from other agents and use local training data to accomplish ML tasks. ML tasks specify the objectives pursued with ML models (e.g., next-word prediction) and include information about the approach (e.g., what ML model architecture should be used). This approach can implement DML techniques, which can eventually speed up the training process by parallelization. However, because the training data is usually unknown to participants in the ML system, identical distribution of training data, like in purely DML, is hard to achieve. Thus, the performance benefits targeted in DML systems may not be fully leveraged [34].
### _Related Research on CDML_
As one of the first CDML concepts, federated learning has established training data confidentiality and distributed computing as a fundamental goal pursued when applying the CDML paradigm [16, 35, 10]. Soon after its introduction, various shortcomings of federated learning became apparent. For example, federated learning systems have been shown to be inefficient due to high communication costs [9] and prone to performance bottlenecks caused by the use of a central parameter server [36]. From a security perspective, federated learning systems are prone to failures due to an adversarial central parameter server [9].
To tackle the shortcomings of federated learning, practice and research brought forth other CDML concepts, including swarm learning, split learning, and assisted learning. Like federated learning, swarm learning aims at the collaborative and distributed training of global ML models known to all parties involved in the training process. Unlike federated learning systems, swarm learning systems rely on redundant agents orchestrating the training process in peer-to-peer networks [14]. The redundant execution of tasks in swarm learning systems can make swarm learning systems more robust than federated learning systems [14]. However, the strong redundancies render swarm learning systems usually less resource-efficient and more complex compared to federated learning systems.
In split learning systems [11], agents only train parts of ML models defined by a so-called cut layer. Cut layers indicate the layers of neural networks where the complete neural network is split. Agents only receive the specifications of the cut layer as a kind of interface to input parameters for the training of the rest of the ML model. By only disclosing parts of ML models specified by cut layers, split learning helps to keep (at least parts of) ML models confidential. However, the gain in ML model confidentiality in split learning systems comes at the cost of the training performance of ML models compared to federated learning [37].
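The following numpy sketch (ours; the layer sizes and the cut position are arbitrary assumptions) illustrates the data flow implied by a cut layer: only the activations at the cut layer cross the boundary between the agents, never the raw inputs or the full set of parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy three-layer network split at a cut layer: the data-owning agent
# holds the first layer; another agent holds the remaining two layers.
W_client = rng.normal(size=(8, 4))           # client-side part (layer 1)
W_server1 = rng.normal(size=(4, 4))          # server-side part (layer 2)
W_server2 = rng.normal(size=(4, 1))          # server-side part (layer 3)

def client_forward(x):
    return np.maximum(x @ W_client, 0.0)     # ReLU activations at the cut layer

def server_forward(cut_activations):
    h = np.maximum(cut_activations @ W_server1, 0.0)
    return h @ W_server2                     # prediction

x = rng.normal(size=(2, 8))                  # private inputs stay with the client
smashed = client_forward(x)                  # only this crosses the boundary
print(server_forward(smashed))
# During training, the server would send back the gradient with respect to
# `smashed` at the cut layer, and the client would backpropagate it locally.
```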
In assisted learning [12], the focus on preserving the confidentiality of training data is extended to ML models and even the purposes of ML models. In assisted learning, a user agent requests feedback on statistics relevant to training an ML model from service agents. Such feedback can include residuals of its own ML model. The user agent incorporates feedback received from service agents into its local ML model. This process can be executed repeatedly until the ML model reaches sufficient prediction performance. By enabling agents to decide which agents they want to assist, assisted learning can improve the autonomy of agents. However, the increased autonomy comes with coordination challenges, for example, how to assess the potential of agents to assist in a learning task and in which order agents interact [38].
Various design options for customizing CDML systems to meet use case requirements have been developed, such as
federated learning systems with multiple hierarchical levels. In each hierarchical level, a preprocessing of previous training results is executed by aggregating a subset of training results. The global ML model is then computed from multiple aggregated training results [10]. Another design option for federated learning systems is to form subnetworks to deal with heterogeneous computational resources between trainer agents [39]. Agents with more computing resources (e.g., servers) execute training tasks that consume more computational resources than agents with only a few (e.g., smartphones).
Extant research has started to compare CDML systems to understand their commonalities and differences. Such comparisons are often based on benchmarks, for example, between systems of federated learning, split learning, and SplitFed learning [13] and between systems of federated learning, swarm learning, and decentralized federated learning [40]. CDML system benchmarks commonly offer valuable help in understanding likely CDML system behaviors, especially in terms of performance (e.g., convergence speed of ML models [13], communication cost [13] and prediction performance [40]). Such benchmarks can support practitioners in meeting performance requirements for CDML systems. However, benchmark results are only helpful at a limited scale to understand possible CDML system designs and their key traits, as they seldom explain how CDML system designs lead to different system behaviors. Moreover, benchmark studies only shed light on a few CDML system designs, leaving the entirety of the CDML system design space unknown.
Other works compare CDML system designs. Several design options for federated learning systems were revealed, describing different network topologies for communication (e.g., via central servers and peer-to-peer) and computational schedules [3], such as sequential training of ML models and parallel training synchronized by a central server. Key traits that originate from the different design options are discussed with a focus on confidentiality. Design differences between other CDML systems (e.g., assisted learning systems and split learning systems) remain unknown. In a comparison between federated learning systems, split learning systems, and SplitFed learning systems [13], key traits of those CDML systems are pointed out, with a focus on learning performance, resource consumption, ML model confidentiality, and training data confidentiality. Despite these valuable insights, several design options (e.g., regarding the network topology and computational schedules) and their influences on key traits of CDML systems remain unclear.
Since extant comparisons focus only on selected systems of a few CDML concepts, it is still hard to understand the entirety of the CDML system design space. To help developers design CDML systems that meet use case requirements, the CDML system design space must be understood, including the various CDML concepts, design options, and key traits of CDML system designs. This knowledge of the CDML system design space needs to become available in actionable form.
### _Multi-Agent Systems_
The multi-agent system (MAS) concept [41] offers a theoretical lens to model systems based on agents (e.g., computing nodes) and their interactions in a specified environment [42, 20]. The MAS concept is widely used in computer science to model hardware systems and software systems, especially in the field of artificial intelligence (AI) systems [43, 44]. Since the MAS concept is established to develop blueprints of systems for their implementation [45, 20, 46], it seems to be adequate to represent the CDML system design space in a CDML design toolbox that helps to design, analyze, and advance CDML systems. In the following, we introduce the basic properties of the MAS concept relevant to this work. Important MAS properties are summarized in Table I.
MASs are systems comprised of a population of agents. By design, MASs can limit the population to a finite number of agents or allow an infinite number of agents. Within MASs, agents can form groups, so-called coalitions. Coalitions can comprise entire MAS populations or population partitions. Agents can be part of multiple coalitions at the same time [42, 20]. We consider each CDML system as a coalition within a superordinate MAS. As agents can be part of multiple coalitions, agents can simultaneously participate in multiple CDML systems.
Coalitions can be controlled in a centralized or decentralized way. In centralized coalition control, a single or a few agents coordinate interactions between agents in the coalition, for example, in federated learning systems [16]. In decentralized coalition control, multiple or even all agents have equitable influences on the coordination of the coalition.
In coalitions, there are two common goal structures. Agents can pursue individual goals or common goals. Since agents can be part of multiple coalitions, agents can pursue multiple goals at the same time. For example, an agent may pursue an individual goal in one coalition (e.g., training its own ML model in an assisted learning system) and a common goal in another coalition (e.g., training a shared ML model in a swarm learning system).
Agents can engage in different kinds of interactions to reach their goals in coalitions. They can act in a competitive, cooperative, collaborative, or independent manner. When agents compete with each other, they contend for scarce resources to accomplish their tasks. Cooperative agents support each other in the accomplishment of common goals, where individual agents (or subgroups of agents) work on different tasks. In federated learning systems, for example, some agents only train ML models, while other agents aggregate interim training results [16, 47]. When agents collaborate, each agent is involved in each task to accomplish shared goals. Swarm learning systems are mostly collaborative, as most agents perform similar tasks in the ML model training [14].
MASs and coalitions can differ in their openness to allowing agents to join and leave arbitrarily. Closed MASs only allow specified agents to join. In some federated learning systems, only selected agents are permitted to join the coalitions [10]. Open MASs allow agents to join and leave arbitrarily, for example, in many peer-to-peer learning systems [48, 49].
Population diversity refers to the heterogeneity of agent types in a population. Agent types are sets of roles that are assigned to agents to specify their tasks in a coalition [20]. If many agents in a population have largely different agent types, the population is heterogeneous. For example, hierarchical federated learning systems comprise up to four different agent types that collaborate and execute different tasks in the training of ML models. If most agents have identical agent types, the population is homogeneous. Swarm learning systems, for example, can be considered homogeneous because all agents execute identical tasks in the training of ML models [14].
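To make these MAS properties tangible, the following minimal Python sketch (all identifiers are our own, hypothetical choices) encodes agent types as role sets and lets one agent participate in two coalitions at once:

```python
from dataclasses import dataclass, field

# Hypothetical encoding of the MAS notions above: agents carry a set of
# roles (their agent type) and may belong to several coalitions at once.
@dataclass(frozen=True)
class Agent:
    name: str
    roles: frozenset  # agent type = set of roles, e.g. {"trainer", "updater"}

@dataclass
class Coalition:
    goal: str                      # individual or common goal pursued here
    members: list = field(default_factory=list)

    def is_homogeneous(self) -> bool:
        # population diversity: homogeneous if all members share one agent type
        return len({a.roles for a in self.members}) <= 1

alice = Agent("alice", frozenset({"trainer", "updater"}))
bob = Agent("bob", frozenset({"trainer", "updater"}))

swarm = Coalition("train a shared ML model", [alice, bob])
assisted = Coalition("train alice's own ML model", [alice])  # alice is in two coalitions

print(swarm.is_homogeneous())  # True: identical agent types, as in swarm learning
```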
## III Methods
We applied a three-step research approach to conceptualize the CDML design space (RQ1) and extract key traits of CDML systems originating from different designs (RQ2). First, we conceptualized CDML systems described in literature (Section III-A). Based on the conceptualization, we developed the CDML design toolbox. We modeled CDML systems using the CDML design toolbox to test its applicability. Second, we used the models of the CDML systems to develop CDML archetypes (Section III-B). Third, we extracted traits of CDML system designs from literature. We assigned the CDML system designs, including their traits, to the CDML archetypes and aggregated the traits to key traits (see Section III-C). In the following, we describe our methods in detail.
### _CDML Design Toolbox Development_
To develop the CDML design toolbox, we adopted the Gaia methodology for agent-oriented modeling [20]. Using the structures of the five agent-based models presented in the Gaia methodology (see Section III-A1), we conceptualized CDML systems presented in the literature by applying open coding, axial coding, and selective coding [50] as described in Section III-A2. The literature analysis revealed design options for CDML systems (e.g., agent role distributions, optional communication paths, and structures of training processes). We tested and refined our coding in three iterations by classifying CDML systems into our coding (see Section III-A3).
#### III-A1 The Gaia Methodology
One main purpose of the Gaia methodology is to support the development of agent-based models that can serve as blueprints for the implementation of software systems [20]. The Gaia methodology consists of an analysis stage and a design stage. In the analysis stage, a roles model and an interactions model are developed, enabling an abstract view of a system. This abstract view constitutes the concept level of the system description that enables an analysis of system structures. The roles model describes the tasks and basic processes, including the resources that agents can use. Roles essentially describe the functions that an agent performs within the system. Each role consists of four main aspects: responsibilities, permissions, activities, and protocols. Responsibilities define the functions an agent of a particular role needs to perform. An exemplary responsibility of an agent in the role of an _updater_ in CDML systems could be the aggregation of ML models trained by other agents into a global ML model. Permissions describe which resources are available to agents with specific roles to fulfill their responsibilities. Exemplary resources for agents in the role of _updater_ are information about the ML model to be trained and local training data. Activities are computations that agents perform locally without interaction with other agents. In the case of agents in the _trainer_ role, local training of an ML model is an exemplary activity. Protocols as part of the roles model reference protocol definitions in the interactions model that describe how interactions between agents of specific roles are designed. For example, _updater_ agents must interact with agents with the _trainer_ role to retrieve interim training results and complete the training process.
The interactions model specifies how agents with specific roles interact with each other in a purposeful way. Frequently recurring interactions of agents with other agents, objects, or the environment of the MAS are recorded as interaction patterns. Each interaction pattern is described in a protocol definition. Protocol definitions include six attributes: purpose, initiator, responder, input, output, and processing. The purpose includes a textual description of the meaning of the interaction, for example, "passing an ML model for its training". Interactions originate from an agent (i.e., the initiator) and are directed to an interaction partner (i.e., the responder). To interact, the initiator prepares an input and issues it into the interaction process. The output comprises the information received by the responder at the end of the interaction.
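To make this structure concrete, the following Python sketch encodes a role and a protocol definition as plain dataclasses; the encoding and the field values are illustrative assumptions of ours, not part of the Gaia methodology itself:

```python
from dataclasses import dataclass

# Sketch of the two analysis-stage Gaia models; field values are illustrative.
@dataclass
class Protocol:
    purpose: str
    initiator: str   # role that starts the interaction
    responder: str   # role that answers
    input: str
    output: str
    processing: str

@dataclass
class Role:
    name: str
    responsibilities: list
    permissions: list    # resources the role may use
    activities: list     # local computations, no interaction
    protocols: list      # references into the interactions model

transmit_interim_result = Protocol(
    purpose="passing interim training results for aggregation",
    initiator="trainer",
    responder="updater",
    input="locally computed interim result",
    output="interim result received by the updater",
    processing="updater aggregates the result into the global ML model",
)

updater = Role(
    name="updater",
    responsibilities=["aggregate ML models trained by other agents"],
    permissions=["ML model definition", "received interim results"],
    activities=["updateMLModel", "awaitInterimResults"],
    protocols=[transmit_interim_result],
)
```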
Based on the roles model and the interactions model, envisioned CDML systems can be detailed in the design stage of the Gaia methodology. The design stage centers on the development of an agent model, a service model, and an acquaintance model. These models form the design level of the system representation. In combination, the concept level and the design level form blueprints for the implementation of concrete software systems [20].
The agent model describes the agent types utilized by CDML systems. Agent types are combinations of roles. Moreover, the agent model describes instances of these agent types that will populate the CDML system. The service model describes the main services that are necessary to implement an agent role. The services that an agent can execute depend on its roles and corresponding activities and protocols. The acquaintance model describes communication paths between different agent types in the CDML system. The acquaintance model helps to identify communication bottlenecks that may arise during run-time.
Similar to the structure of the Gaia methodology, the CDML design toolbox comprises an abstract concept level and a more detailed design level. The concept level describes the general design of CDML systems, focusing on their commonalities (e.g., roles and interactions). On the design level, the CDML design toolbox describes design options to customize and differentiate CDML systems.
#### III-A2 Conceptualization of CDML Systems
To develop the CDML design toolbox, we conceptualized CDML systems in three steps: _start set compilation_, _development of an initial version of the CDML design toolbox_, and _test and iterative refinement_. We describe the three steps in more detail in the following.
_Start Set Compilation._ For the development of the CDML design toolbox, we first compiled a start set consisting of publications on CDML systems. To systematize the search for potentially relevant publications, we specified the following inclusion criteria (see Table II): _English language_, _level of detail_, _topic fit_, and _uniqueness_. We excluded publications from the start set that did not meet all inclusion criteria.
After specifying the inclusion criteria, each author independently generated their own set of publications potentially relevant to developing the CDML design toolbox. We searched for publications that cover a large variety of CDML systems and offer detailed descriptions of CDML system designs. Then, we consolidated the independently generated sets of publications into a preliminary start set. The preliminary start set included peer-reviewed scientific publications and grey literature. Next, we applied the inclusion criteria to the publications in the preliminary start set (see Table II). We removed one publication from the preliminary set of relevant literature because it was a duplicate. Based on the full texts of the remaining 29 publications, we independently rated the relevance of each publication for the conceptualization as "relevant", "maybe relevant", or "irrelevant" based on the inclusion criteria (see Table II). Whenever we disagreed about the relevance of publications (e.g., when one author felt the level of detail of a publication was sufficient and another author disagreed), we discussed the relevance of the publication in more detail until we concluded with unanimous decisions to include or exclude the publication from the preliminary start set. This relevance assessment led us to exclude 18 further publications from the preliminary start set. The final start set included eleven publications to be analyzed for the development of the initial version of the conceptualization.
_Development of an Initial Version of the CDML Design Toolbox._ We analyzed the publications in the start set by applying open, axial, and selective coding [50]. In open coding, we extracted aspects of CDML systems relevant to explaining their designs and functioning. After coding the literature in the set of relevant publications, we iteratively refined our coding to achieve mutual exclusiveness between our codes and exhaustiveness of our coding. For example, we merged the codes "client" and "device" into the code "trainer" and the codes "sendParameters" and "sendGradients" into the code "transmitInterimResult".
In axial coding, we extracted relationships between the codes developed in open coding. For example, we identified that the code "transmitInterimResult" can be implemented differently. We coded each implementation (e.g., "activations" and "gradients") and noted the relationship between "transmitInterimResult" and "gradients".
In selective coding, we classified the extracted codes into coding schemes. The coding schemes correspond to five agent-oriented models (i.e., the roles model, the interactions model, the agent model, the preliminary service model, and the acquaintance model) introduced in the Gaia methodology [20]. For example, we classified the code "trainer" as a role in the roles model and the code "transmitInterimResult" as a protocol in the interactions model.
After the analysis, we refined the coding to improve the mutual exclusiveness between codes and the exhaustiveness of our coding. For example, we abstracted the code "aggregator" to "updater" to include CDML systems in which the ML model is updated with and without aggregating interim results.
#### III-A3 Test and Iterative Refinement
We gathered evidence for the external validity of our CDML design toolbox by testing whether CDML systems that we had not used to develop our conceptualization can be successfully modeled with it. To find CDML systems for testing the external validity of our conceptualization, we applied a backward search and a forward search to the set of relevant publications. We decided on the relevance of each publication collated in the backward and forward searches based on the previously used inclusion criteria (see Table II). If a publication met our inclusion criteria, we added the publication to our set of relevant literature.
We again applied open, axial, and selective coding to analyze the new relevant publications. Based on the coding, we classified the CDML systems into the preliminary CDML design toolbox comprised of the agent-based models of the Gaia methodology and the assigned codes.
When we recognized that a CDML system could not be classified into our conceptualization, we refined our conceptualization accordingly and continued with the test and iterative refinement until we had analyzed all relevant CDML publications identified in the last round of backward and forward searches. When our conceptualization needed to be refined, we repeated this third step of our methods, "Test and Refinement". We executed this step three times (see Table III).
During the first iteration, we used four publications from the backward search and five publications from the forward search, presenting eleven CDML systems. When classifying the eleven CDML systems into our conceptualization, we recognized the need for refinements of the CDML design toolbox. For example, we added the role _coordinator_ to map the sampling service of the newly added gossip learning system [49].
During the second iteration, we included one publication from the backward search and eight publications from the forward search. When classifying the nine CDML systems presented in those publications into the conceptualization, we recognized the need to refine our CDML design toolbox. For example, we needed to add new activities and protocols and to revise existing definitions of activities and protocols. For instance, we added the protocol "assignInterimResultRecipient" and redefined the protocol "signalReadiness" so that agents with the roles _trainer_ or _updater_ can execute the protocol.
In the third iteration, we tested the conceptualization based on nine CDML systems presented in nine publications. We did not identify any further need to refine our conceptualization and considered it final. Overall, the conceptualization was successfully tested on 43 CDML systems. 15 of these CDML systems required refinements of our conceptualization.
### _CDML Archetype Development_
Since the concept level of the CDML design toolbox points out commonalities between CDML systems, we focused on the design level to identify CDML archetypes. The design level allows for the differentiation between CDML system designs. We developed an agent model, preliminary service model, and acquaintance model for each CDML system. Using these models, we analyzed the corresponding CDML system designs to identify similarities. Based on the identified similarities, we developed CDML archetypes.
_Agent Model._ We started our analysis by examining role distributions in CDML systems to extract common agent types. To identify agent types and their distribution in CDML systems, we analyzed the agent models of the 43 CDML systems, which we previously used for testing the validity of the CDML design toolbox (see Section III-A2). We developed one agent model for each of the analyzed CDML systems. Next, we compared the individual models with each other to identify similarities and differences between the used agent types and their distribution in the corresponding CDML systems. Based on similarities between the agent models, we classified the 43 CDML systems into 18 groups of CDML systems. Each CDML system was assigned to exactly one group.
_Preliminary Service Model._ We analyzed the grouped CDML systems to reveal similarities in the design options implemented for activities and protocols. For example, CDML systems in a group all use the design option "only interim result definition" for the protocol provideMLTask. If CDML systems associated with different groups showed similar uses of design options, we merged these groups into candidate CDML archetypes. For example, we merged assisted learning systems with split learning systems because both systems use the design option "activations" for the protocol transmitInterimResult. Overall, we merged 18 groups of CDML systems into six candidate CDML archetypes.
_Acquaintance Model and Main Processes._ We analyzed the communication paths of the individual CDML systems using their acquaintance models. Whenever we observed similarities in acquaintance models of CDML systems associated with different groups, we merged the groups. After analyzing the acquaintance models, we merged our six candidate CDML archetypes into four final CDML archetypes (i.e., the confidentiality archetype, the control archetype, the flexibility archetype, and the robustness archetype). Overall, we assigned each of the 43 CDML systems to one of the four CDML archetypes.
### _Identification of Key Traits of CDML Archetypes_
Using the set of relevant publications on CDML systems that we used to develop the CDML design toolbox (see Section III-A2), we performed open coding [50] to extract preliminary traits of CDML systems (e.g., robustness against the participation of malicious agents) that authors point out to highlight strengths and weaknesses of CDML system designs. We noted the referenced CDML systems for all preliminary traits and, in axial coding [50], recorded explanations of how each trait originates from the CDML design. For example, the key trait "communication bottleneck" is referenced in several publications about federated learning systems. This trait originates from the reliance of federated learning systems on a central agent [51, 40, 52]. We added a description of whether the referenced CDML system has a strength or weakness in the respective trait. Our analysis revealed 132 codes representing preliminary traits of 43 CDML systems. Subsequently, we harmonized the preliminary traits in three iterations to ensure mutual exclusiveness and exhaustiveness of our coding [50]. For example, we aggregated the preliminary traits "does not rely on an orchestrator" and "no need to rely on a third party" into the trait "fault-tolerant". This harmonization yielded 38 traits of CDML systems.
Next, we mapped the 38 traits of the CDML systems to their corresponding CDML archetypes. We evaluated which traits of individual CDML systems apply to all CDML systems assigned to corresponding CDML archetypes. We assigned the set of traits shared by all CDML systems associated with a CDML archetype to the corresponding CDML archetype as key traits. For example, we extracted the trait "not reliant on single agents" from literature on blockchain-based federated learning systems. To evaluate whether this trait also applies to all CDML systems of the robustness archetype, we analyzed the CDML systems of the robustness archetype (e.g., swarm learning) regarding their redundancy of agent types. Since all CDML system designs of the robustness archetype show a high redundancy of agent types, "not reliant on single agents" became a key trait of the robustness archetype. We repeated this process for all traits extracted from the literature analysis at the beginning of this step.
## IV The CDML Design Toolbox
Our CDML design toolbox comprises a concept level and a design level. The concept level (see Section IV-A) describes how CDML systems are designed in principle, including agent roles and agent interactions. Roles are assigned to agents in order to specify the activities and protocols to be executed by corresponding agents. After the role assignment, agents keep their roles until the coalition dissolves. Agents do not have to act in all their assigned roles simultaneously but in at least one role. The design level (see Section IV-B) includes design options that developers can use to design CDML systems. Exemplary design options encompass the assignment of agent types (i.e., combinations of roles) to agents in the CDML system and the definition of types of interim results to be transmitted between agents. The design options are presented in an agent model, a preliminary service model, and an acquaintance model. The agent model shows common combinations of agent types used in CDML systems. In the preliminary service model, we describe design options for implementing activities and protocols described in the roles model. The acquaintance model illustrates communication paths between these agent types in existing CDML systems.
To make the models incorporated in our CDML design toolbox tangible, we describe them along the principal CDML life cycle. The CDML life cycle incorporates three sequential phases each CDML system passes through: the initialization phase, the operation phase, and the dissolution phase. In the initialization phase, agents form and initialize a coalition that can become a CDML system. The initialization phase described in this paper focuses on the autonomous formation of CDML systems by agents in MASs. Alternatively, developers can manually initialize CDML systems. However, the manual setup of CDML systems is out of the scope of this work. In the operation phase, agents interact in order to train or execute ML models. In the dissolution phase, the agents end their collaboration and dissolve the CDML system. Because multiple CDML systems may be formed in a single MAS (e.g., in open MAS), these phases can be passed through in parallel. For simplicity, we describe these three phases using the example of the formation of a single coalition that becomes a CDML system and dissolves. We describe variants of the CDML system design (e.g., in terms of numbers of agents with specific roles) in Section IV-B.
### _Concept Level of the CDML Design Toolbox_
The concept level of our CDML design toolbox incorporates a roles model and an interactions model. The roles model comprises role descriptions, activities of agents, and responsibilities. The interactions model includes protocols that specify interactions between agents.
_Initialization Phase._ In the initialization phase, agents form a coalition of at least two agents that aim to collaborate to accomplish an ML task. The formation of coalitions, which can become CDML systems, is triggered by a _configurator_ agent. The _configurator_ agent stores the CDML system specifications about the purpose of the envisioned CDML system (i.e., the general prediction problem that ought to be addressed) and requirements for agents that are sought to join the coalition (e.g., in terms of the needed training data structure). The _configurator_ agent defines (parts of) the initial ML model (activity: defineInitialMLModel) to be trained. Definitions of the (parts of) initial ML models are, for instance, the (first) layers of neural networks, a (sub)set of parameters of linear regressions, activation functions, and the ML model architecture. Moreover, the _configurator_ agent defines the structure and type of interim results (activity: defineInterimResult) to be transmitted between agents in the envisioned CDML system. Interim results are updates that are computed by agents based on local training data and the locally available (part of an) ML model. Then, the _configurator_ agent registers the coalition (activity: registerCoalition) with a repository and starts an application process.
Agents fetch the CDML system specifications from the repository. Based on the CDML system specifications, agents decide whether to participate in the CDML system. Agents that decide to participate submit an application, including the roles they apply for, to the _configurator_ agent (protocol: applyForCoalition). Commonly, agents can apply for the roles _coordinator_, _selector_, _trainer_, and _updater_.
The _configurator_ agent iteratively checks for applications from agents (activity: awaitApplications). Upon application receipt, the _configurator_ agent decides whether to accept or reject the agent for the CDML system (activity: decideOnApplication). Then, the _configurator_ agent responds to the applying agent with an acceptance message or a rejection message (protocol: informApplicant).
When _trainer_ and _updater_ agents join the coalition, the _coordinator_ agent assigns _trainer_ agents to _updater_ agents they will interact with in the operation phase and informs the respective agents about the assignment (protocol: assignInterimResultRecipient). The _trainer_ agent sends its interim result to its assigned _updater_ agent. The _updater_ agent can return interim results to its assigned _trainer_ agent(s) after updating (parts of) the ML model.
The _configurator_ agent sends the ML task (protocol: provideMLTask) to agents in the coalition. ML tasks are a collection of information required to train and update ML models and can include the initial ML model definition and the interim result definition.
At the end of the initialization phase, at least two agents of the coalition must have been assigned the following roles to form a CDML system: _configurator_, _coordinator_, _selector_, _trainer_, and _updater_. Agents may have multiple roles. We describe common combinations of roles on the design level of the CDML design toolbox (see Section IV-B).
After the initialization phase, the _coordinator_ agent handles applications of agents on behalf of the _configurator_ agent, executing the activities awaitApplications and decideOnApplication as well as the protocols applyForCoalition and informApplicant. The _coordinator_ agent sends the ML task to the accepted agents (protocol: provideMLTask). After the initialization of the CDML system, ML models can be trained and executed in the operation phase.
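As an illustration of the application handshake described above, the following Python sketch wires the applyForCoalition and informApplicant protocols to a simple acceptance rule; the message format and the acceptance rule are assumptions for illustration, not prescribed by the toolbox:

```python
# Minimal sketch of the application handshake in the initialization phase.
COALITION_SPEC = {
    "purpose": "image classification",
    "required_data_schema": ("pixels", "label"),
    "open_roles": {"coordinator", "selector", "trainer", "updater"},
}

def apply_for_coalition(agent_name, requested_roles):
    # protocol: applyForCoalition -- the applying agent prepares the input
    return {"applicant": agent_name, "roles": set(requested_roles)}

def decide_on_application(application):
    # activity: decideOnApplication -- accept iff all requested roles are open
    return application["roles"] <= COALITION_SPEC["open_roles"]

def inform_applicant(application):
    # protocol: informApplicant -- acceptance or rejection message
    verdict = "accepted" if decide_on_application(application) else "rejected"
    return {"applicant": application["applicant"], "verdict": verdict}

print(inform_applicant(apply_for_coalition("agent-7", ["trainer"])))
# {'applicant': 'agent-7', 'verdict': 'accepted'}
```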
_Operation Phase._ In the operation phase, agents participate in the training and execution of ML models according to their assigned roles. At the beginning, the _trainer_ agent and the _updater_ agent signal their readiness to the _selector_ agent (protocol: signalReadiness). Agents that have signaled their readiness iteratively check for triggers from the _selector_ agent to execute activities and protocols required to collaboratively train and update ML models (activity: awaitSelectionSignal).
The _selector_ agent selects _trainer_ agents and _updater_ agents (activity: selectAgent) to act in at least one of these roles. Then, the _selector_ agent requests the selected agents to act in the corresponding roles (protocol: announceAgentSelection). Agents that are selected for the role _trainer_ use their locally available (parts of the) ML model and local training data to compute interim results (activity: trainMLModel). The _trainer_ agent sends its interim result to the _updater_ agent (protocol: transmitInterimResult). The _updater_ agent waits until it receives interim results (activity: awaitInterimResults) and then uses the interim results received from _trainer_ agents to compute a new version of the locally available (part of the) ML model (activity: updateMLModel). The execution order of training, updating, and transmitting interim results can vary between CDML systems (see Section IV-B). The procedure outlined in the operation phase is typically executed repeatedly. Protocols and activities may be executed in parallel or sequentially.
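The following Python sketch condenses one such operation-phase round into a few lines; representing ML models as lists of floats, training as a nudge toward the local data mean, and updating as plain averaging are simplifying assumptions of ours:

```python
import random

def train_ml_model(local_model, local_data):
    # activity: trainMLModel -- nudge each parameter toward the local data mean
    mean = sum(local_data) / len(local_data)
    return [p + 0.1 * (mean - p) for p in local_model]

def update_ml_model(interim_results):
    # activity: updateMLModel -- average the received interim results
    return [sum(ps) / len(ps) for ps in zip(*interim_results)]

def training_round(global_model, ready_agents, datasets):
    selected = random.sample(ready_agents, k=2)            # activity: selectAgent
    interim = [train_ml_model(global_model, datasets[a])   # activity: trainMLModel
               for a in selected]                          # protocol: transmitInterimResult
    return update_ml_model(interim)

datasets = {"a1": [1.0, 2.0], "a2": [3.0], "a3": [0.5, 0.5]}
model = [0.0, 0.0]
for _ in range(5):
    model = training_round(model, list(datasets), datasets)
print(model)
```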
_Dissolution Phase._ In the dissolution phase, agents stop executing the processes described in the operation phase. This can be the case if agents decide that (parts of) the ML model(s) have been sufficiently trained or, in cases where other agents are required to execute ML models, that they no longer need to execute the ML model. When agents end their collaboration, the CDML system dissolves.
### _Design Level of the CDML Design Toolbox_
While the concept level of the CDML design toolbox offers an abstract description of CDML system designs, the design level can guide detailed specifications of concrete CDML system designs as follows. The first step in designing CDML systems entails the specification of an agent model (see Section IV-B1) that presents the assignment of agent types to agents. Agent types incorporate all roles that are simultaneously assigned to single agents. The CDML design toolbox offers a set of agent types commonly used in CDML systems in the agent model (see Section IV-B1). Second, developers need to tailor the activities and protocols associated with agent types to the requirements of the envisioned CDML system. In Section IV-B2, the CDML design toolbox offers a range of design options on how activities and protocols can be implemented to develop service models for CDML systems. Finally, the acquaintance model needs to specify communication paths between agents (see Section IV-B3). While some communication paths are integral to all CDML systems (e.g., _trainer_ agents sending interim results to _updater_ agents, see Section IV-B1), others are contingent on the characteristics of CDML systems (e.g., _updater_ agents returning interim results to _trainer_ agents). The CDML design toolbox introduces communication paths necessary to operate CDML systems successfully. This list comprises necessary and optional communication paths and helps developers consider communication efficiency and communication bottlenecks when designing CDML systems.
In the following, we describe the three models (i.e., the agent model, the preliminary service model, and the acquaintance model) that can be utilized to develop CDML systems.
#### IV-B1 Agent Model
Agent types are a combination of roles identified in the roles model that can serve as a blueprint to implement agents in CDML systems. Following the concept level of the CDML design toolbox (see Section IV-A), CDML systems require at least two agents with agent types that in combination comprise the following roles: _configurator_, _coordinator_, _selector_, _trainer_, and _updater_. These roles can be assigned to agents in seven combinations (see Table V), each combination forming an individual agent type. Identical agent types can be assigned to multiple agents, for example, to increase redundancies in the processing of ML tasks [14] or to distribute workload in the processing of ML tasks [10]. First, the _Tra_ agent type only comprises the role _trainer_. Agents of the _Tra_ agent type only train the ML model without updating it with interim results from other agents. The _Tra_ agent type is utilized in CDML systems with only one training round [53].
Second, the _CooSel_ agent type comprises the roles _coordinator_ and _selector_. This agent type is utilized in CDML systems with a peer-to-peer structure. If agent selection and the assignment of _trainer_ agents to _updater_ agents follow a sophisticated rule (e.g., unbiased peer-to-peer sampling service [54]), _CooSel_ agents can be implemented that only focus on the selection and assignment of agents [49, 55].
Third, the _TraUpd_ agent type combines the roles _trainer_ and _updater_. The _TraUpd_ agent type is implemented in many CDML systems since it combines the two main roles accounting for training ML models. _TraUpd_ agents can train ML models and can also incorporate interim results from other agents into their local ML models [35, 47, 56].
Fourth, the _ConTraUpd_ agent type combines the roles _configurator_, _trainer_, and _updater_. The _ConTraUpd_ agent type is mainly used in split learning systems and assisted learning systems. The _configurator_ role is required since agents in these CDML systems define their own ML model [11, 12].
Fifth, the _ConCooSelUpd_ agent type combines the roles _configurator_, _coordinator_, _selector_, and _updater_. _ConCooSelUpd_ agents primarily operate central servers in federated learning systems [35, 47].
Sixth, the _CooSelTraUpd_ agent type combines the roles _coordinator_, _selector_, _trainer_, and _updater_. This agent type has a high degree of autonomy as it can execute all activities and protocols except those of the _configurator_ role. The _CooSelTraUpd_ agent type is used in CDML systems to create a high level of redundancy [57, 14, 58].
Seventh, the _ConCooSelTraUpd_ agent type combines the roles _configurator_, _coordinator_, _selector_, _trainer_, and _updater_. This agent type is assigned to central agents in federated learning (e.g., [59]) that train ML models or a single agent that initiates the ML model to be trained in peer-to-peer-based CDML systems (e.g., the BrainTorrent system [48] and the gossip learning systems [49]).
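The seven agent types can be captured compactly as role sets, as in the following Python sketch; the encoding is ours, and the check that a population covers all five roles mirrors the minimum requirement stated in the concept level above:

```python
# The seven agent types, encoded as role sets (an assumed representation).
# A CDML system is viable only if the union of its agents' roles covers
# all five roles from the roles model.
ROLES = {"configurator", "coordinator", "selector", "trainer", "updater"}

AGENT_TYPES = {
    "Tra": {"trainer"},
    "CooSel": {"coordinator", "selector"},
    "TraUpd": {"trainer", "updater"},
    "ConTraUpd": {"configurator", "trainer", "updater"},
    "ConCooSelUpd": {"configurator", "coordinator", "selector", "updater"},
    "CooSelTraUpd": {"coordinator", "selector", "trainer", "updater"},
    "ConCooSelTraUpd": ROLES,
}

def covers_all_roles(agent_type_names):
    covered = set().union(*(AGENT_TYPES[n] for n in agent_type_names))
    return covered == ROLES

# A federated-learning-style population: one central agent, two trainers.
print(covers_all_roles(["ConCooSelUpd", "TraUpd", "TraUpd"]))  # True
print(covers_all_roles(["Tra", "Tra"]))                        # False
```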
#### IV-B2 Preliminary Service Model
The key activities and protocols introduced at the concept level of the CDML design toolbox (see Table IV) can be implemented based on various design options. It is important to note that the following descriptions do not represent a complete service model [20]. Complete service models are usually highly context-dependent and, thus, out of scope for this work. The following descriptions of design options for the key activities and protocols are intended as a foundation for developing detailed service models.
_Activities._ We identified 12 design options for five key activities. The activity awaitApplications has two design options. First, the agent population awaits agent applications to join the coalition "only during the initialization phase". Applications are ignored when the CDML system is already initialized. For example, in most variants of split learning systems [11], the ML model layers to be trained need to be assigned to agents during the initialization phase, which prevents agents from joining after the initialization phase. Second, the agent population accepts applications "always" [14]. This allows agents to join the CDML system arbitrarily.
The activity selectAgent has three design options. First, agents can be selected for a role "based on votes from other agents" in the CDML system. The _selector_ agent collects the votes of other agents and decides which agents should execute which activities and protocols; for example, all agents in the CDML system can vote on which agent activates the _updater_ role and executes the updating of the ML model (activity: updateMLModel) [14]. Second, agents can be selected "based on agent attributes", for example, based on the size of agents' datasets [53]. Third, agents can be selected "randomly" to activate a role and execute corresponding activities and protocols [48, 60].
The activity awaitInterimResults has two design options. To maintain liveness in CDML systems, the waiting time of agents for interim results can be "response-bound" or "time-bound". If the waiting time of the agents is "response-bound" [61], the _updater_ agent waits for a specified number of interim results before updating the ML model with the interim results received. "Response-bound" waiting for interim results can decrease the liveness of CDML systems if the response threshold is set too high; for example, when an agent with the role _updater_ awaits interim results from all _trainer_ agents but one _trainer_ agent has crashed, the _updater_ agent may theoretically wait infinitely. "Time-bound" waiting tackles this issue [10]. If the waiting time exceeds a specified time bound, the _updater_ agent updates the ML model with all interim results received during the waiting period. However, "time-bound" waiting may lead the _updater_ agent to ignore interim results received too late. Both waiting strategies are sketched in the code example below.
The activity updateMLModel has two design options. First, _updater_ agents can perform "batched updates" [52, 53, 57, 62]. In "batched updates", _updater_ agents use a set of interim results received from _trainer_ agents to update their ML model at one time. Second, _updater_ agents can perform "individual updates" to separately update the ML model for each interim result received from a _trainer_ agent or an _updater_ agent [11, 61].
The activity trainMLModel has three design options. First, _trainer_ agents can "train two complete ML models". In this case, _trainer_ agents compute two separate ML models: a local ML model that learns representations of the training data and a global ML model that is trained on the local ML model instead of the raw training data. An advantage of this approach is that the local ML model can protect confidential attributes from the global ML model, thus improving training data confidentiality. Moreover, the communication efficiency can be improved because the global ML model requires fewer parameters, as the representations learned by the local ML model serve as its foundation [63, 64]. Second, _trainer_ agents can "train one complete ML model". A complete ML model refers to the entire set of parameters comprising the ML model. In most CDML systems, _trainer_ agents store and train one complete ML model [16, 47]. Third, _trainer_ agents can "train a part of an ML model". A part of an ML model refers to a subset of ML model parameters. Exemplary parts of ML models are layers of a neural network or a subset of coefficients of linear regression. Training only a part of an ML model has two main advantages. First, _trainer_ agents require less storage and computing resources. Second, due to _trainer_ agents only having access to a part of the ML model, the complete ML model can remain confidential [11, 12].
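The following Python sketch contrasts the two waiting strategies of awaitInterimResults and applies a batched update to whatever arrived in time; the queue-based message transport and the averaging rule are assumptions for illustration:

```python
import queue
import time

def await_response_bound(inbox, expected):
    # design option "response-bound": blocks until `expected` interim results
    # have arrived -- may wait forever if a trainer agent crashed
    return [inbox.get() for _ in range(expected)]

def await_time_bound(inbox, timeout_s):
    # design option "time-bound": collects whatever arrives within the time
    # bound; interim results arriving later are ignored
    deadline = time.monotonic() + timeout_s
    results = []
    while (remaining := deadline - time.monotonic()) > 0:
        try:
            results.append(inbox.get(timeout=remaining))
        except queue.Empty:
            break
    return results

def batched_update(model, interim_results):
    # design option "batched update": one update step over the whole batch
    return [sum(ps) / len(ps) for ps in zip(model, *interim_results)]

inbox = queue.Queue()
for r in ([1.0, 1.0], [3.0, 5.0]):   # two trainers respond, a third crashed
    inbox.put(r)
received = await_time_bound(inbox, timeout_s=0.1)
print(batched_update([2.0, 3.0], received))  # [2.0, 3.0]
```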
_Protocols._ We identified nine design options for three key protocols. The protocol provideMLTask has two design options. First, the agent with the role _configurator_ can provide "only interim result definitions" to other agents in the CDML system. In this case, the _configurator_ agent only provides the interface between agents (e.g., whether parameters or gradients are exchanged). The exact ML model to be used remains unknown to other agents (e.g., in terms of the ML model architecture and its hyperparameters) [12]. Second, the _configurator_ agent provides both the interim result definition and the initial ML model definition (e.g., [10, 35]).
The protocol announceAgentSelection has two design options. First, the _selector_ agent can announce which agent should activate which role [10, 49]. Second, the _selector_ agent can announce which agents should activate which role and additionally announce the training sample IDs to be used in training [12].
There are five design options for the protocol transmitInterimResult. First, agents can transmit "parameter values" [19, 65]. Parameter values refer to a set of variables or weights that the ML model learns from the training data and that determine how the ML model makes predictions based on the input data. Second, agents can transmit "gradients" [35, 61]. Gradients refer to the directional slopes or change rates of a mathematical function. Third, agents can transmit "activations with labels" [11, 66]. We refer to activations as intermediate outputs of an ML model for a given input. When the ML model is presented with input data, it propagates the data through its layers, applies the learned parameters (weights and biases), and produces an output. We refer to the output as "activations" if it is not the final output of the ML model. If the output stems from the final layer (i.e., the data has passed through all parameters of the ML model), we call the output a prediction. Fourth, agents can transmit "activations without labels" [11, 66]. Fifth, agents can transmit "(pseudo-)residuals" [12]. Residuals refer to the differences between the actual target values and the predicted values generated by an ML model. Pseudo-residuals can be considered intermediate residuals and are often used in boosting algorithms.
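One possible (assumed) way to represent these payload variants is a single tagged message type, as in the following Python sketch:

```python
from dataclasses import dataclass
from typing import List, Optional

# Assumed message type covering the five payload variants of
# transmitInterimResult; `kind` selects the variant a CDML system uses.
@dataclass
class InterimResult:
    kind: str                     # "parameters" | "gradients" | "activations"
                                  # | "activations_with_labels" | "residuals"
    values: List[float]           # payload: weights, slopes, outputs, ...
    labels: Optional[List[int]] = None  # only for "activations_with_labels"

split_learning_msg = InterimResult(
    kind="activations_with_labels", values=[0.7, 0.1], labels=[1])
federated_msg = InterimResult(kind="gradients", values=[0.02, -0.13])
boosting_msg = InterimResult(kind="residuals", values=[0.4, -0.4])
```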
#### IV-B3 Acquaintance Model
Several communication paths between agents are required for the functioning of CDML systems. Some of those communication paths are indispensable in every CDML system; other communication paths only appear in some CDML systems. Based on our concept level of CDML systems (see Section IV-A), we describe indispensable communication paths and optional communication paths (design options) in the following. Since communication paths differ between the lifecycle phases of CDML systems, we describe the communication paths for each phase separately.
_Initialization Phase._ The _configurator_ agent must have a bidirectional communication path to all other agents for two purposes: first, to participate in the coalition application process (protocols: applyForCoalition, informApplicant); second, to provide them with the ML task definition (protocol: provideMLTask).
The _coordinator_ agent must have a unidirectional communication path to the _trainer_ agent to inform the agent to which _updater_ agent they should send their interim results (protocol: assignInterimResultRecipient). This communication path allows for more flexibility by enabling sub-coalitions that form around _updater_ agents [10, 19, 67].
The _coordinator_ agent may have a unidirectional communication path to the _updater_ agents. Via such a communication path, the _coordinator_ agent can inform the _updater_ agents to which _updater_ agents they should send interim results (protocol: assignInterimResultRecipient). This communication path can be used for a hierarchically organized CDML system, in which _updater_ agents communicate with each other to improve their local ML model without using local training data [10, 19, 67].
_Operation Phase._ The _selector_ agent must have a bidirectional communication path to the _trainer_ agent and the _updater_ agent. This communication path enables the _selector_ agent to receive signals that these agents are ready to participate in the training (protocol: signalReadiness) and to inform these agents that they are selected for the training (protocol: announceAgentSelection).
The _trainer_ agent must have a unidirectional communication path to the _updater_ agent to send it interim results (protocol: transmitInterimResult).
The _coordinator_ agent can have a bidirectional communication path to all other agent roles if applications can be received and processed after the initialization phase. In this case, the _coordinator_ agent takes over handling the applications from the _configurator_ agent (protocols: applyForCoalition, informApplicant). Because agents can apply and be admitted to a CDML system after the initialization phase, this communication path enables the CDML system to address issues in the agent population during the operation phase. For example, if it becomes clear during the operation phase that the training data is insufficient, more _trainer_ agents can be admitted to the CDML system.
The _updater_ agent can have unidirectional or bidirectional communication paths with another _updater_ agent to exchange information about their ML model updates (e.g., [19, 10]). This communication path allows for hierarchical structures with more than one _updater_ agent.
The _trainer_ agent can have bidirectional communication paths to the _updater_ agent, for example, to send and receive interim results (protocol: transmitInterimResult). Such bidirectional communication paths are common in CDML systems. In some CDML systems (e.g., one-shot federated learning [53]), the _trainer_ agent sends interim training results to the _updater_ agent without receiving interim results in return.
_Dissolution Phase._ During the dissolution phase, the communication paths between agents are dissolved. Agents that have stored a local ML model can keep it and use it to make predictions on their own.
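An acquaintance model can be represented as a directed graph over agents, as in the following Python sketch; the in-degree heuristic for spotting communication bottlenecks is our own simplification:

```python
# Sketch of an acquaintance model as a set of directed communication paths.
acquaintances = {
    ("trainer-1", "updater-1"),   # transmitInterimResult (indispensable)
    ("trainer-2", "updater-1"),
    ("trainer-3", "updater-1"),
    ("updater-1", "trainer-1"),   # optional: returning interim results
}

def in_degree(agent):
    # number of agents that send messages to `agent`
    return sum(1 for _, dst in acquaintances if dst == agent)

# A single agent receiving from many others hints at a communication bottleneck.
bottlenecks = {dst for _, dst in acquaintances if in_degree(dst) >= 3}
print(bottlenecks)  # {'updater-1'}
```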
## V CDML Archetypes
We developed four CDML archetypes that reflect CDML system designs common in practice and research: the confidentiality archetype, the control archetype, the flexibility archetype, and the robustness archetype. The CDML archetypes are distinguished by their agent models, acquaintance models, and principal functioning, including preliminary service models. Table VI gives an overview of the four CDML archetypes we describe in detail in the following. The coalition-forming phase is outside the scope of the archetype descriptions because developers can set up CDML systems that correspond to the CDML archetypes. For each CDML archetype, we highlight common design variants.
### _Confidentiality Archetype_
The confidentiality archetype is suitable for use cases in which agents want to preserve the confidentiality of ML models, ML tasks, and training data. Agents only store parts of ML models. The full architectures of ML models trained in the confidentiality archetype are not disclosed. Thus, no agent has access to the global ML model. Instead, the global ML model is distributed across several agents, which only store parts of it. ML models are not synchronized coalition-wide during ML model training and for ML model inference. Exemplary CDML systems of the confidentiality archetype are split learning [11, 66, 70], assisted learning [12, 68], gradient assisted learning [17], SplitFed learning [37], FDML [71], hierarchical SplitFed learning [19], and FedLite [72].
#### V-A1 Agent Model
The confidentiality archetype comprises the agent types _ConCooSelUpd_ and _ConTraUpd_. In its basic configuration, the confidentiality archetype comprises one _ConCooSelUpd_ agent and at least one _ConTraUpd_ agent.
#### V-A2 Acquaintance Model
In the confidentiality archetype, the _ConCooSelUpd_ agent can communicate with all _ConTraUpd_ agents on bidirectional communication paths (see Figure 1). _ConTraUpd_ agents do not communicate with each other directly.
#### V-A3 Principal Functioning
In the initialization phase, the _ConCooSelUpd_ agent configures its local part of the ML model and defines the interim results to be transmitted (activities: defineInitialMLModel, defineInterimResult). Local parts of the ML model can be specific layers of a neural network in split learning [11] or just parts of a layer of a neural network in vertical split learning [11] and assisted learning [12]. Examples of interim results include activations of a particular layer of a neural network (e.g., referred to as the cut layer in split learning) [11] or (pseudo-)residuals [17]. The _ConCooSelUpd_ agent then provides _ConTraUpd_ agents with the interim result definition (protocol: provideMLTask; design option: provide only interim result definition). After receiving the interim result definition, _ConTraUpd_ agents individually set up their local parts of the ML model following the interim result definition. For example, the _ConTraUpd_ agents in split learning systems set up the layers of a neural network from the input layer to the cut layer. The number of outputs of the cut layer is set depending on the interim result definition.

Fig. 1: Exemplary acquaintance model of the confidentiality archetype
The operation phase starts with the _ConTraUpd_ agents signaling their readiness to the _ConCooSelUpd_ agent (protocol: signalReadiness) to participate in the subsequent training round. Then, _ConTraUpd_ agents wait for a response (activity: awaitSelectionSignal). The _ConCooSelUpd_ agent decides which _ConTraUpd_ agents to select for the next training round (activity: selectAgent). For example, this selection can be made based on agent attributes or randomly. After the selection, the _ConCooSelUpd_ agent announces its decision to the _ConTraUpd_ agents (protocol: announceAgentSelection). Selected _ConTraUpd_ agents train their parts of the ML model (activity: trainMLModel; design option: train a part of the ML model) and transmit their interim results to the _ConCooSelUpd_ agent (protocol: transmitInterimResult; design option: activations with labels, (pseudo-)residuals). The _ConCooSelUpd_ agent waits for incoming interim results (activity: awaitInterimResults). The _ConCooSelUpd_ agent uses the interim results to update (and train) its local (part of the) ML model (activities: trainMLModel, updateMLModel).
Depending on the implementation, the _ConCooSelUpd_ agent then transmits another interim result back to the _ConTraUpd_ agents (protocol: transmitInterimResult; design option: gradients). _ConTraUpd_ agents use it to update their local part of the ML model. The _ConCooSelUpd_ agent decides how often this process is repeated.
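The following Python sketch condenses such a split training round into a toy example with one linear layer on each side of the cut; the single-layer models, the squared-error loss, and the learning rate are simplifying assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
w_trainer = rng.normal(size=(3, 2))   # ConTraUpd: input layer up to the cut layer
w_updater = rng.normal(size=(2, 1))   # ConCooSelUpd: cut layer up to the output
x, y, lr = rng.normal(size=(4, 3)), rng.normal(size=(4, 1)), 0.1

for _ in range(10):
    # trainer-side forward pass; only activations cross the cut
    activations = x @ w_trainer                  # protocol: transmitInterimResult
    # updater-side forward pass, loss gradient, backward pass to the cut layer
    pred = activations @ w_updater
    grad_pred = 2 * (pred - y) / len(x)
    grad_w_updater = activations.T @ grad_pred
    grad_activations = grad_pred @ w_updater.T   # gradients returned to the trainer
    w_updater -= lr * grad_w_updater             # updateMLModel (updater side)
    # trainer finishes backpropagation on its private part of the ML model
    w_trainer -= lr * (x.T @ grad_activations)   # updateMLModel (trainer side)

print(float(((x @ w_trainer @ w_updater - y) ** 2).mean()))  # final training loss
```

Note that neither side ever sees the other side's weights or, in the updater's case, the raw training data, which is the confidentiality property described above.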
#### V-A4 Key Traits
The confidentiality archetype relies on a strongly hierarchical agent organization and does not synchronize ML models coalition-wide. Because ML models are not synchronized among agents, they can be kept confidential. The main trait of the confidentiality archetype is confidentiality, which entails both training data confidentiality and ML model confidentiality because agents only have access to parts of the ML model. Next to enabling ML model confidentiality, the confidentiality archetype can be very computation-efficient since agents only have to store and compute a part of the ML model, which can be very large [11, 72]. The confidentiality archetype requires fewer training rounds than the control archetype and converges quickly [11, 66]. The confidentiality archetype has high communication costs due to the ML model partitioning and the communication of both activations and gradients [72]. Some CDML systems that correspond to the confidentiality archetype, such as split learning systems (e.g., [11]), can have high idle times of _trainer_ agents since the _trainer_ agents only interact with the _updater_ agents sequentially [37]. Other CDML systems, such as SplitFed learning systems, address this issue by combining elements of split learning and federated learning and can thus reduce the idle times [37]. As no agent has access to the entire ML model, the coalition (or a subset of it) is required to make ML model inferences. Therefore, the coalition can only be dissolved when the ML model is not used anymore.
#### V-A5 Variants of the Confidentiality Archetype
_U-Shaped Split Learning [11]._ U-shaped split learning systems can be used to train neural networks. A selected _ConTraUpd_ agent executes the forward propagation up to a specific layer (i.e., the first cut layer) and only transmits activations to the _ConCooSelUpd_ agent (protocol: transmitInterimResult; design option: activations without labels). The _ConCooSelUpd_ agent continues the forward propagation up to the second cut layer and transmits activations back to the _ConTraUpd_ agent. The _ConTraUpd_ agent completes the forward propagation, starts the backpropagation, and transmits the gradients of the second cut layer to the _ConCooSelUpd_ agent (protocol: transmitInterimResult; design option: gradients). Using these gradients, the _ConCooSelUpd_ agent continues the backpropagation to the first cut layer and transmits the gradients of the first cut layer to the _ConTraUpd_ agent. The _ConTraUpd_ agent executes the backpropagation for the remaining layers and, thus, completes a training round.
### _Control Archetype_
The control archetype is suitable for use cases in which one agent should have control over the DCML system. The control archetype incorporates a hierarchical communication structure with an agent on the top level that controls the training process. The agent on top receives all interim results and synchronizes the training process by deciding on the global ML model to be trained in each training round. Exemplary CDML systems of the control archetype implement variants of federated learning [10, 35, 61, 63], including one-shot federated learning [53], semiFL [59], heteroFL [39], and hierarchical federated learning [51, 52].
#### V-B1 Agent Model
CDML systems belonging to the control archetype comprise the agent types _ConCooSelUpd_ and _TraUpd_. The control archetype comprises one _ConCooSelUpd_ agent and at least one _TraUpd_ agent.
#### V-B2 Acquaintance Model
The acquaintance model of the control archetype has the structure of a tree (see Figure 2). Agents can bidirectionally communicate in a strictly hierarchical manner along the edges of the tree. In its basic form, there are two hierarchical levels (e.g., [10]): a root _ConCooSelUpd_ agent forms the top level of the hierarchy. At least one _TraUpd_ agent resides on the bottom level of the hierarchy. There can be additional levels between the top level and the bottom level (e.g., [51, 52]). The inner nodes of the tree are _ConCooSelUpd_ agents, whereas _TraUpd_ agents represent the leaves.
#### V-B3 Principal Functioning
In the initialization phase, the _ConCooSelUpd_ agent on the top level of the hierarchy defines the initial ML model and interim results (activities: defineInitialMLModel, defineInterimResult). Suppose there are additional _ConCooSelUpd_ agents on lower levels of the acquaintance model. In that case, the initial ML model and interim result definition are propagated to these agents by executing the protocol _provideMLTask_ (design option: ML model definition and interim result definition). _ConCooSelUpd_ agents on lower levels of the acquaintance model can only forward parts of the ML model (i.e., sub-models) to their child nodes. Thus, each _ConCooSelUpd_ agent can individually define the initial ML model and interim results for its descendants (activities: defineInitialMLModel, defineInterimResult).
In the operation phase, _TraUpd_ agents execute the signalReadiness protocol to signal their availability to participate in a training round to their respective parent _ConCooSelUpd_ agent. Then, _TraUpd_ agents wait for a selection signal (activity: awaitSelectionSignal). _ConCooSelUpd_ agents decide which of their child _ConCooSelUpd_ and _TraUpd_ agents to include in a training round. Once a sufficient number of child agents have signaled their readiness to a _ConCooSelUpd_ agent, it signals its readiness to its parent agent and waits for a selection signal (activity: awaitSelectionSignal). This process is repeated recursively throughout the hierarchy until it reaches the root _ConCooSelUpd_ agent. Then, the root _ConCooSelUpd_ agent selects (a subset of) its subordinate agents to participate in the upcoming training round (activity: selectAgent; design option: based on agent attributes or randomly) and announces its selection to its child agents (protocol: announceAgentSelection). Afterward, it transmits the current version of the ML model, or a part thereof, to selected child agents (protocol: transmitInterimResult; design option: gradients or parameter values) and waits for interim results (activity: awaitInterimResults; design option: time-bound or response-bound). This selection process is repeated recursively by descendant _ConCooSelUpd_ agents until it reaches the leaf _TraUpd_ agents. The _TraUpd_ agents update their local ML model based on the interim result received (activity: updateMLModel; design option: batched update) and train it using local training data and self-controlled compute (activity: trainMLModel; design option: train one complete ML model or train a part of the ML model). After training is completed, _TraUpd_ agents initiate the transmitInterimResult protocol (design option: gradients or parameter values) with their respective parent _ConCooSelUpd_ agent as the responder. The parent _ConCooSelUpd_ agent waits until a defined threshold is reached (activity: awaitInterimResults; design option: time-bound or response-bound) and updates its (part of the) ML model based on the interim results received (activity: updateMLModel; design option: batched update). Each _ConCooSelUpd_ agent can decide how often to repeat this training procedure with its descendants. When the desired number of training rounds is completed, _ConCooSelUpd_ agents send the updated (part of the) ML model to their parent nodes (protocol: transmitInterimResult; design option: gradients or parameter values). Once the threshold of the root _ConCooSelUpd_ agent is reached, a coalition-wide training round is completed.

Fig. 2: Exemplary acquaintance model of the control archetype
The procedure described for the operation phase is repeatedly executed until the dissolution phase is initiated by the root _ConCooSelUpd_ agent.
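The recursion through the hierarchy can be sketched compactly in Python; representing ML models as lists of floats and aggregating by plain (FedAvg-style) averaging are simplifying assumptions of ours:

```python
# Recursive aggregation in the control archetype: inner tree nodes stand for
# ConCooSelUpd agents, leaves (strings) for TraUpd agents.
def local_training(model, local_data):
    # TraUpd leaf: one "training" step toward the local data mean
    mean = sum(local_data) / len(local_data)
    return [p + 0.5 * (mean - p) for p in model]

def hierarchical_round(node, model, datasets):
    if isinstance(node, str):                     # leaf: a TraUpd agent
        return local_training(model, datasets[node])
    # inner node: distribute the model, await and average interim results
    interim = [hierarchical_round(child, model, datasets) for child in node]
    return [sum(ps) / len(ps) for ps in zip(*interim)]

# Two mid-level ConCooSelUpd agents, each coordinating two TraUpd agents.
tree = [["t1", "t2"], ["t3", "t4"]]
datasets = {"t1": [1.0], "t2": [2.0], "t3": [3.0], "t4": [4.0]}
model = [0.0]
for _ in range(4):
    model = hierarchical_round(tree, model, datasets)
print(model)  # converges toward the global mean 2.5
```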
#### V-B4 Key Traits
The control archetype implements a strongly hierarchical organizational structure of agents and requires the coalition-wide synchronization of ML models. The combination of these traits leads to organizational structures in which a small fraction of all agents wield the predominant control over the CDML system. The control archetype is suitable for use cases with strict hierarchies where one or a few agents should keep control over the CDML system. The control archetype relies on only one root _ConCooSelUpd_ agent. If this one _updater_ agent crashes, the whole CDML system crashes [48, 49, 52]. Thus, the control archetype is not crash-fault tolerant. The use of multiple _updater_ agents assigned to multiple layers of the hierarchy of the control archetype can make the system tolerant to crashes of single _updater_ agents [19, 52]. If one _updater_ agent crashes, the remaining _updater_ agents can take over the load of aggregating the interim results of the crashed one. However, this redistribution of load to fewer _updater_ agents can drastically reduce the overall performance of the control archetype. The control archetype can be prone to performance bottlenecks due to a few central agents having to execute numerous computationally intensive activities and protocols [52, 53]. Such performance bottlenecks include computation [40] (i.e., during updating) and communication [51, 40] (i.e., sending and receiving interim results). Regarding the predictive performance of the ML model trained collaboratively, the control archetype usually performs better than the confidentiality archetype (e.g., [37]). The ML model usually converges faster than in CDML systems of the flexibility archetype (e.g., [9]). The coalition can be dissolved after training because the coalition is not required to make ML model inferences.
#### V-B5 Variants of the Control Archetype
TraUpd Agents as Tra Agents [53]: _TraUpd_ agents lose their _updater_ role and become _Tra_ agents. In this variant, the interim results are only transmitted from _Tra_ agents to _ConCooSelUpd_ agents. No interim results are transmitted back to _Tra_ agents. _Tra_ agents do not update their local ML models.
ConCooSelUpd Agents as ConCooSelTraUpd Agents [59]: _ConCooSelUpd_ agents gain the _trainer_ role and become _ConCooSelTraUpd_ agents. In these systems, the agents on higher levels of the hierarchy possess training data on their own and use it to train (parts of) the ML model themselves (e.g., [59]). _ConCooSelTraUpd_ agents train the ML model (activity: trainMLModel; design option: train one complete ML model or train a part of the ML model) while waiting for interim results of subordinate agents in the hierarchy.
TraUpd Agents Train Two Complete ML Models [63]: _TraUpd_ agents train two complete ML models locally (activity: trainMLModel; design option: train two complete ML models). _TraUpd_ agents train one ML model on local data. The second ML model is trained on the first ML model. Only the gradients or parameter values resulting from the training of the second ML model are transmitted to the superordinate agent.
### _Flexibility Archetype_
The flexibility archetype is suitable for use cases with communication topologies that can change at run-time [40]. The flexibility archetype offers a high degree of agent autonomy. Agents can arbitrarily join and leave the flexibility archetype without impeding the functioning of the CDML system [40]. In its basic variant, agents can select agents they want to collaborate with. Moreover, agents can decide if and when they execute activities (e.g., trainMLModel or updateMLModel) and protocols (e.g., signalReadiness or transmitInterimResult). The flexibility archetype is weakly hierarchically organized. ML models are not synchronized coalition-wide during ML model training. Exemplary CDML systems of the flexibility
archetype implement gossip learning [49], BrainTorrent [48], and decentralized federated learning [62, 64, 40, 69].
#### V-C1 Agent Model
The flexibility archetype comprises the agent types _ConCooSelTraUpd_ and _CooSelTraUpd_. In its basic configuration, the flexibility archetype comprises one _ConCooSelTraUpd_ agent and at least one _CooSelTraUpd_ agent.
#### V-C2 Acquaintance Model
To participate in the training, agents must establish a bidirectional communication path to at least one other agent (see Figure 3). Other agents include _ConCooSelTraUpd_ agents and _CooSelTraUpd_ agents. Agents decide with which agents they interact on an equitable basis.
#### V-C3 Principal Functioning
In the initialization phase, the _ConCooSelTraUpd_ agent first defines the ML model (activity: defineInitialMLModel) and interim results (activity: defineInterimResult). The _ConCooSelTraUpd_ agent distributes the ML model and the interim result definition to other agents in the CDML system (protocol: provideMLTask; design option: provide initial ML model definition and interim result definition). Agents can join at any time (protocol: applyForCoalition; design option: always).
In the operation phase, each _ConCooSelTraUpd_ and _CooSelTraUpd_ agent trains the ML model locally using local training data and self-controlled computing resources. Afterward, each agent signals its readiness to activate its _updater_ role for the upcoming training round (protocol: signalReadiness) and waits for other agents to signal their readiness (activity: awaitAgentReadiness). Then, at least one agent that signals its readiness is selected (activity: selectAgent) to receive the interim results. Agents are usually selected randomly (design option: randomly), but can also be selected in a targeted manner (design option: based on agent attributes). The selection is announced to the selected agent (protocol: announceAgentSelection). Agents that are selected to activate the role _updater_ wait (activity: awaitInterimResult) until they receive the interim results from other agents using the protocol transmitInterimResult (design option: gradients or parameter values). Lastly, the selected agents use the interim results of other agents to update their local ML model (activity: updateMLModel). The update can entail several interim results (design option: batched update) or only one interim result from another agent (design option: individual update).
This process is repeated until the dissolution phase is initiated. The flexibility archetype dissolves when no agents engage in collaborative training anymore.
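The following minimal sketch illustrates one gossip-style round of the flexibility archetype. The pairwise-averaging receive step and the stand-in local training are illustrative assumptions; actual systems exchange gradients or parameter values as fixed by the interim result definition.

```python
import random

def train_locally(model, data):
    # Stand-in for local training on self-controlled compute.
    mean = sum(data) / len(data)
    return [w + 0.1 * (mean - w) for w in model]

class CooSelTraUpd:
    """Flexibility-archetype agent: trains, picks a peer, pushes its result."""
    def __init__(self, data, model):
        self.data, self.model = data, model

    def gossip_round(self, peers):
        self.model = train_locally(self.model, self.data)  # trainMLModel
        peer = random.choice(peers)                        # selectAgent: randomly
        peer.receive(self.model)                           # transmitInterimResult

    def receive(self, interim):
        # updateMLModel (design option: individual update): pairwise average.
        self.model = [(a + b) / 2 for a, b in zip(self.model, interim)]

agents = [CooSelTraUpd([float(i)], [0.0]) for i in range(4)]
for agent in agents:
    agent.gossip_round([p for p in agents if p is not agent])
print([a.model for a in agents])
```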
#### V-C4 Key Traits
The flexibility archetype is weakly hierarchical and agents store different states of ML models. ML models are not synchronized coalition-wide. Agents have a high degree of autonomy and can individually decide when to train collaboratively and with whom. Moreover, agents can individually decide to activate roles and execute activities and protocols, which leads to agents having little idle time [48].
The flexibility archetype can handle agent crashes better than the control archetype [49]. An agent dropping out of the system may temporarily reduce the performance of the flexibility archetype, but because a new agent can be easily integrated into the training process due to the lack of rules, the flexibility archetype can recover from the agent drop-out [9]. Because agents can largely operate independently of each other, no single agent is vital for the proper functioning of the CDML system. If agents are redundant, they can theoretically replace each other. However, this may not always be possible because the flexibility archetype does not require redundant agents.
The flexibility archetype is not robust against malicious agents. Malicious agents are agents that tamper with training processes and manipulate collaboratively trained ML models [9]. Malicious agents can obfuscate their identities by arbitrarily joining and dropping out of the CDML system and arbitrarily switching their collaboration partners. Such obfuscation can facilitate the engagement of agents in performing malicious activities without detection (e.g., because reputation systems may not be applicable [42]). Moreover, even when malicious agents are identified, it is hard to punish them because rules (e.g., agents that act maliciously are forced to leave the system) are hardly enforceable in the flexibility archetype. The coalition can be dissolved after ML model training because the CDML system is not required to make ML model inferences.
#### V-C5 Variants of the Flexibility Archetype
Additional CooSel Agent [49]: There can be a dedicated _CooSel_ agent (e.g., [49]). The remaining agents lose the _selector_ role and become _ConCooTraUpd_ and _CooTraUpd_ agents. In each training round, the _CooSel_ agent selects a subset of the _ConCooTraUpd_ and _CooTraUpd_ agents to function as the _updater_ (activity: selectAgent; design option: randomly) and assigns each of the remaining agents to one of the agents selected as an _updater_. Each agent then sends its interim result to the agent it was assigned to (protocol: transmitInterimResult; design option: gradients or parameter values).
### _Robustness Archetype_
The robustness archetype is suitable for use cases in which agents may inadvertently drop out of the coalition during ML model training (e.g., due to crashes or network failures) because a large fraction of agents is redundant and, thus, can replace each other. The robustness archetype is weakly hierarchically organized and performs coalition-wide synchronization of the ML model. Exemplary CDML systems of the
Fig. 3: Acquaintance model of the flexibility archetype for an exemplary training round
robustness archetype are the swarm learning system [14] and other blockchain-based CDML systems [57, 65].
#### V-D1 Agent Model
The robustness archetype comprises the agent types _ConCooSelTraUpd_ and _CooSelTraUpd_. In its basic configuration, the robustness archetype comprises one _ConCooSelTraUpd_ agent and at least one _CooSelTraUpd_ agent.
#### V-D2 Acquaintance Model
As illustrated in Figure 4, there can be bidirectional communication paths between all agents in the system. This includes both agents of the type _ConCooSelTraUpd_ and _CooSelTraUpd_.
#### V-D3 Principal Functioning
In the initialization phase of the robustness archetype, the _ConCooSelTraUpd_ agent defines the ML model and interim results and distributes corresponding definitions to other agents in the coalition (protocol: provideMLTask; design option: provide ML model definition and interim result definition). There must always be at least one _CooSelTraUpd_ agent and one _ConCooSelTraUpd_ agent to redundantly execute the roles _coordinator_, _selector_, _trainer_, and _updater_. Additional _CooSelTraUpd_ agents can join at any time (protocol: applyForCoalition; design option: always).
In the operation phase, _ConCooSelTraUpd_ and _CooSelTraUpd_ agents broadcast their readiness to activate their roles _updater_ and _trainer_ for the training in the robustness archetype (protocol: signalReadiness). All agents that received the broadcast individually decide whether the _ConCooSelTraUpd_ or _CooSelTraUpd_ agent should activate the _trainer_ and _updater_ role (activity: selectAgent). Agents broadcast their individual decisions to all agents in the robustness archetype. The final selection of _trainer_ and _updater_ is made through a consensus mechanism (design option: based on votes from other agents). Next, _ConCooSelTraUpd_ and _CooSelTraUpd_ agents start training the ML model using their locally available training data and compute (activity: trainMLModel; design option: train a complete ML model). All selected agents receive identical interim results from agents that trained their ML model (protocol: transmitInterimResult; design option: gradients or parameter values). All agents use the identical interim results to update the ML model (activity: updateMLModel). For the update, all selected _updater_ agents use the results from all other agents (design option: batched update). All agents that computed ML model updates broadcast their new interim results to all agents in the system (protocol: transmitInterimResult).
This process is repeated until the start of the dissolution phase. The dissolution phase starts when no agents engage in the collaborative training anymore.
#### V-D4 Key Traits
The robustness archetype is weakly hierarchical and is designed to train global ML models that are synchronized coalition-wide. Both of these traits culminate in CDML systems where agent types are redundantly assigned to agents. Agents process and store data of the global ML model redundantly, increasing the robustness of CDML systems. The robustness archetype uses a fully connected communication network [40]. Due to the high redundancy of agents, except the agent with the role _configurator_, the robustness archetype does not rely on single agents. This design prevents the robustness archetype from failing if some agents drop out of the CDML system [57], for example, due to crashes and network failures. The robustness archetype allows for the replacement of _updater_ agents after each training round. Agents in the robustness archetype usually require large computational resources, for example, to compute ML model updates based on interim results from all other agents in the CDML system [40]. The coalition can be dissolved after training since the coalition is not required to make ML model inferences.
#### V-D5 Variants of the Robustness Archetype
A subset of agents activates the updater role per training round [14, 57]: Interim results are transmitted to and stored by all agents, but only a subset of agents activate their _updater_ role. From all _ConCooSelTraUpd_ and _CooSelTraUpd_ agents that signal their readiness (protocol: signalReadiness), not all agents are selected (activity: selectAgent; design options: based on agent attributes, based on votes from other agents, or randomly) to activate their _updater_ role in every training round. In some cases, only one agent is selected [14].
## VI Discussion
### _Principal Findings_
In this study, we present a CDML design toolbox, including a concept level and a design level. The concept level of the CDML design toolbox includes five roles (i.e., _configurator_, _coordinator_, _selector_, _trainer_, and _updater_), ten activities (e.g., updateMLModel), and seven protocols (e.g., transmitInterimResult) inherent to CDML systems. On the design level, the CDML design toolbox includes design options to customize CDML systems. For example, the roles _trainer_ and _updater_ can be combined into the agent type _TraUpd_. We present seven agent types and seven mandatory communication paths between these agent types. For example, agents with the role _updater_ can have communication paths among each other. Moreover, the CDML design toolbox presents design options for activities and protocols. Based on common combinations of design options, we present four principal CDML archetypes (i.e., the confidentiality archetype, control archetype, flexibility archetype, and robustness archetype) and their key traits.
The design level of the CDML design toolbox shows different implementations of roles, activities, and protocols in CDML systems that we describe as design options. Different
Fig. 4: Exemplary acquaintance model of the robustness archetype
combinations of design options can lead to different CDML systems. Our results show how CDML systems can be grouped and differentiated on the basis of common combinations of design options and resulting key traits. We observed significant similarities among CDML systems studied by research communities with limited overlap. It turns out that split learning systems and assisted learning systems implement similar design options; for example, they comprise only _ConCooSelUpd_ and _ConTraUpd_ agents. Moreover, swarm learning systems and blockchain-based decentralized federated learning systems have similar design options. For example, both implement the agent types _ConCooSelTraUpd_ and _CooSelTraUpd_ but differ regarding the number of agents with an active _updater_ role each training round.
The presented CDML archetypes and their key traits show that no one-size-fits-all CDML system can be used for every use case. Developers must carefully assess the suitability of CDML systems based on their designs and different traits. For instance, the redundant distribution of roles in swarm learning enhances robustness. However, in use cases where most agents have limited resources, mandating that all agents perform all roles may result in the failure of the CDML system because agents may be assigned roles that exceed their resource capacities. Conversely, the redundancy in distributing agent roles can be better suited for use cases characterized by frequent agent drop-outs. Therefore, the careful assessment of CDML system suitability for use case requirements is mandatory to operate CDML systems successfully.
In the agent model (see Section IV-B1), we present the agent types that we identified in the analyzed publications. The presented agent types represent a subset of the possible combinations of agent roles. For example, we did not identify a _Con_ agent or an _Upd_ agent even though the implementation of such agents could be possible as long as all roles are distributed to agents in CDML systems. CDML systems that assign each agent only one role could also have new traits, including agents requiring fewer resources, that might be useful in many use cases. Because of the theoretical availability of more agent types and combinations of design options, more CDML system designs with different traits may become available in the future.
### _Contributions to Practice and Research_
With this study, we contribute to practice and research in three principal ways. First, by presenting the CDML design toolbox, we offer a consolidated design knowledge base of previously scattered design knowledge of CDML systems. Since the comparison of differences between CDML system designs has focused on a few design aspects (e.g., the training process), the CDML design toolbox enables systematic comparisons between CDML system designs covering a broad set of design options. The agent-based models on the concept level (i.e., the roles model and interactions model) of the CDML design toolbox present the main design commonalities of CDML systems (e.g., the use of specific agent roles and the principal training process). The three agent-based models on the design level (i.e., agent model, service model, and acquaintance model) can guide the systematic comparison between CDML system designs and the customization of CDML system designs to meet use case requirements. Moreover, the developed agent-based models can facilitate the application of the Gaia methodology for developing custom CDML system designs.
Second, by showcasing CDML archetypes, we offer starting points for the combination of design options to develop CDML system designs. The archetypes inform of combinations of design options commonly used in practice and research. The CDML archetypes can be customized by using the design options presented in the CDML design toolbox to develop blueprints of CDML systems. Thereby, in combination, the CDML archetypes and the CDML design toolbox offer actionable help in guiding the design of CDML systems.
Third, by presenting key traits of CDML archetypes, we support developers in deciding on combinations of design options to meet use case requirements. The key traits of CDML archetypes enable developers to choose the most fitting CDML archetype for use cases. Using the selected CDML archetype as a starting point, developers can use the CDML design toolbox and customize the archetype to show additional required traits. By executing this process, developers can evaluate CDML system designs in their suitability for use cases prior to implementing the designs.
### _Limitations_
For the development of the CDML design toolbox, the CDML archetypes, and the identification of key traits, we analyzed publications and CDML systems that we deemed to be representative of the CDML field. With our selection of publications and CDML systems for analysis, we aimed to cover the large spectrum of different CDML system designs. However, the number of publications and CDML systems has increased significantly in the past years, making it impossible to incorporate all publications in our study; instead, we analyzed a representative set of publications. The CDML design toolbox may therefore not cover all CDML system designs.
To conceptualize CDML systems, we strove to extract and understand their key design aspects (e.g., activities, processes, and roles), requiring the resolution of ambiguities, and to set extracted key aspects in relationships (e.g., roles and responsibilities). Although well-suited to conduct such research, qualitative research is inherently prone to subjective biases, for example, because publications are individually interpreted depending on personal conceptions. Despite our efforts to reduce such biases (e.g., through feedback on our results from ML experts), we cannot guarantee that we have completely eliminated them.
The analyzed publications focus on the core training process [11, 40, 48, 49, 53]. Other system components required to operate CDML systems are mostly neglected. By triangulating descriptions of CDML systems based on our coding and intense discussions with ML experts, we aimed to complete fragmented descriptions of CDML systems. Still, the CDML design toolbox may lack aspects not specifically mentioned in the analyzed publications. Similarly, a significant number
of the examined publications lacked sufficient detail in their descriptions of permissions of roles, activities, and protocols. This hindered us from describing the permissions associated with agent roles at the concept level and impeded the development of a complete service model. Instead, we developed a preliminary service model that describes how activities and protocols can be implemented.
### _Future Research_
This work presents a wide range of CDML system designs that address the different requirements of use cases. We noticed that research on CDML systems remains predominantly theoretical, with only a few real-world implementations of CDML systems (e.g., [16]). To gain a more comprehensive understanding of the advantages and limitations of CDML systems in various use cases, future research should prioritize empirical investigations of practical implementations of CDML systems. This research should place particular emphasis on real-world implications, encompassing socio-technical aspects such as human perception and acceptance. The CDML design toolbox offers a foundation for knowledge transfers within the CDML community (e.g., to develop new CDML systems) and across multiple disciplines. In the following, we describe three areas for knowledge transfer that may be particularly interesting for improving CDML systems in future research.
Hyperparameter Optimization: Automated hyperparameter optimization (HPO) has become very important in the development of ML models for manifold purposes [73], such as improving ML model performance and decreasing the necessary computations in the training of ML models. For most automated HPO methods, such as Bayesian optimization [74, 75, 76], the availability of complete training data sets is assumed. This assumption lies at odds with decentralized training data management in CDML systems. Extant automated HPO methods are hardly applicable to CDML systems, which may result in under-optimized ML models trained in CDML systems [73]. The CDML design toolbox can serve as a foundation for future research to identify challenges in performing HPO in CDML systems with different designs and develop corresponding solutions.
Data Confidentiality: The exchange of interim results instead of training data does not guarantee training data confidentiality per se [77]. To protect training data confidentiality, the combination of CDML and other privacy-enhancing technologies (PETs), such as differential privacy and homomorphic encryption, has become promising [78, 56]. Future research should develop guidelines for how to combine the CDML paradigm with other PETs reasonably.
Robustness: Agents may pursue individual goals in CDML systems. However, ensuring the accurate alignment between individual agent goals and the overarching goal of the CDML system is critical. Misalignment can have detrimental consequences, such as the introduction of the free-rider problem [79] and incentivizing agents to poison training data or ML models [80, 81, 82]. The free-rider problem is characterized by agents that provide subpar data while being able to improve their ML model from interim results received from other agents. Integrating robustness measures from diverse fields into CDML systems, such as financial incentives in economics and normative principles in sociology for agent behavior coordination [83, 84, 42, 82], could enhance the robustness of CDML systems against challenges, such as anticipating malicious actions of agents in CDML systems. Future research should extend the CDML design toolbox to include design options that improve the robustness of CDML systems and protect ML model training from malicious agent activity.
## VII Conclusion
This work presents a CDML design toolbox that can be used to guide developers in the development of CDML system designs. Leveraging the CDML design toolbox, we developed four CDML archetypes with different key traits that can guide developers in the design of CDML systems.
The CDML design toolbox is envisioned to offer a foundation for developers to design CDML systems suitable for use cases. With our presentation of design options, we aim to accelerate the design process and develop novel CDML systems that can cover an even wider range of use cases.
During our investigation, we recognized the substantial expansion of the CDML design space through contributions from practice and research. Following federated learning systems, alternative CDML systems, such as split learning systems, assisted learning systems, and gossip learning systems, have moved into the focus of practice and research.
We hope that the CDML design toolbox will support the targeted design of CDML systems suitable for use cases (e.g., by facilitating the use of the Gaia method [20]) so that training of ML models on sufficient training data becomes easier for developers. Owing to the considerable attention that CDML systems have garnered in practice and research and the emergence of novel CDML concepts beyond federated learning, we encourage the advancement of the CDML design toolbox in the future.
## Acknowledgement
We thank Benjamin Sturm, Kathrin Brecker, Marc Zoller, Mikael Beyene, Richard Guse, Simon Warsinsky, and Tobias Dehling for their valuable feedback on this work. This work was supported by funding from the topic Engineering Secure Systems of the Helmholtz Association (HGF) and by KASTEL Security Research Labs.
|
2309.14393 | LLMCarbon: Modeling the end-to-end Carbon Footprint of Large Language
Models | The carbon footprint associated with large language models (LLMs) is a
significant concern, encompassing emissions from their training, inference,
experimentation, and storage processes, including operational and embodied
carbon emissions. An essential aspect is accurately estimating the carbon
impact of emerging LLMs even before their training, which heavily relies on GPU
usage. Existing studies have reported the carbon footprint of LLM training, but
only one tool, mlco2, can predict the carbon footprint of new neural networks
prior to physical training. However, mlco2 has several serious limitations. It
cannot extend its estimation to dense or mixture-of-experts (MoE) LLMs,
disregards critical architectural parameters, focuses solely on GPUs, and
cannot model embodied carbon footprints. Addressing these gaps, we introduce
\textit{\carb}, an end-to-end carbon footprint projection model designed for
both dense and MoE LLMs. Compared to mlco2, \carb~significantly enhances the
accuracy of carbon footprint estimations for various LLMs. The source code is
released at \url{https://github.com/SotaroKaneda/MLCarbon}. | Ahmad Faiz, Sotaro Kaneda, Ruhan Wang, Rita Osi, Prateek Sharma, Fan Chen, Lei Jiang | 2023-09-25T14:50:04Z | http://arxiv.org/abs/2309.14393v2 | # LLMcarbon: Modeling the End-To-End Carbon Footprint of Large Language Models
###### Abstract
The carbon footprint associated with large language models (LLMs) is a significant concern, encompassing emissions from their training, inference, experimentation, and storage processes, including operational and embodied carbon emissions. An essential aspect is accurately estimating the carbon impact of emerging LLMs even before their training, which heavily relies on GPU usage. Existing studies have reported the carbon footprint of LLM training, but only one tool, mlco2, can predict the carbon footprint of new neural networks prior to physical training. However, mlco2 has several serious limitations. It cannot extend its estimation to dense or mixture-of-experts (MoE) LLMs, disregards critical architectural parameters, focuses solely on GPUs, and cannot model embodied carbon footprints. Addressing these gaps, we introduce _LLMCarbon_, an end-to-end carbon footprint projection model designed for both dense and MoE LLMs. Compared to mlco2, LLMCarbon significantly enhances the accuracy of carbon footprint estimations for various LLMs. The source code is released at [https://github.com/SotaroKaneda/MLcarbon](https://github.com/SotaroKaneda/MLcarbon)
## 1 Introduction
Large language models (LLMs) have established their supremacy in addressing a wide spectrum of natural language processing tasks (Brown et al., 2020). However, the proliferation of these models, coupled with increasingly expansive datasets (Sanderson, 2023; Anil et al., 2023), has woven LLM inferences into the fabric of everyday life (Campello de Souza et al., 2023). This surge in LLM adoption has, in turn, exacerbated the already considerable environmental impacts associated with machine learning (ML) (Thompson et al., 2021). For instance, the creation of a transformer with 213 million parameters through neural architecture search has been likened to the carbon dioxide equivalent (CO2eq) emissions of five cars over their entire lifespans (Strubell et al., 2019).
Given the ecological implications of LLMs, it becomes essential for both cloud service providers and regular users to gain a profound understanding of the carbon footprint of emerging LLMs. This awareness is particularly critical before embarking on resource-intensive training endeavors that entail the utilization of thousands of GPUs. During the initial design phase, key parameters such as the LLM's parameter count, hardware configurations, and the energy efficiency of the hosting data center need to be factored into a robust carbon footprint projection model. This model should possess the capability to swiftly and accurately estimate the carbon footprint, encompassing both _operational_ and _embodied_ carbon emissions. Moreover, it should provide valuable insights into metrics like test loss, training duration, and inference latency, all crucial aspects of LLM performance. The existence of such a carbon footprint projection model empowers cloud providers to intelligently explore the trade-off between test loss and carbon footprint when designing new LLMs. Additionally, it encourages everyday users to adopt practices that mitigate LLM carbon footprints by facilitating quantitative comparisons across various LLM configurations.
Currently, _there is a notable void in the availability of a comprehensive end-to-end carbon footprint projection model tailored specifically for LLMs_. Prior research efforts (Henderson et al., 2020; Wu et al., 2022; Anthony et al., 2020; Schwartz et al., 2020; Patterson et al., 2021; Dodge et al., 2022; Strubell et al., 2019; Lakim et al., 2022) have predominantly focused on recording and reporting the carbon footprint associated with the training phase of ML models. To date, only one tool,
mlco2 (Lacoste et al., 2019), has emerged capable of predicting the carbon footprint of an ML task based on parameters like GPU usage, training duration, and data center efficiency. However, mlco2 exhibits several serious limitations. Firstly, it is confined to convolutional neural networks (CNNs) and cannot extend its estimations to include the carbon footprint of LLMs. Secondly, mlco2 neglects crucial architectural aspects of ML models, such as parameter counts, resulting in overestimated projections. Thirdly, it exclusively considers GPUs, disregarding specialized ML hardware like TPUs (Jouppi et al., 2017), and assumes uniform peak computing throughput across GPUs, leading to inaccuracies in its carbon footprint assessments. Lastly, although the embodied carbon footprint of an ML task holds equal significance to its operational carbon footprint (Wu et al., 2022), mlco2 is incapable of modeling the embodied carbon footprint of an LLM based on its hardware resources.
In this paper, we propose an end-to-end carbon footprint projection model, _LLMCarbon_, which can accurately predict the carbon footprint of both dense and MoE LLMs during their training, inference, experimentation, and storage phases. LLMCarbon incorporates critical LLM, hardware, and data center parameters, such as LLM parameter count, hardware type, system power, chip area, and data center efficiency, to model both operational and embodied carbon footprints of an LLM. When validated against Google's published LLM carbon footprints, the results generated by LLMCarbon exhibit differences of only \(\leq 8.2\%\), and thus are more accurate than those of mlco2.
## 2 Background
**LLM Carbon Footprint.** The carbon footprint of a LLM comprises two fundamental components (Gupta et al., 2022): the operational footprint, encompassing emissions stemming from hardware energy consumption, and the embodied footprint, encapsulating emissions arising from hardware manufacturing. Previous investigations (Henderson et al., 2020; Wu et al., 2022; Anthony et al., 2020; Schwartz et al., 2020; Patterson et al., 2022; Dodge et al., 2022; Strubell et al., 2019) have predominantly focused on recording and reporting the operational carbon footprint of various ML tasks. A notable exception is Wu et al. (2022), which delved into the embodied carbon footprint of ML tasks and revealed that within a Meta data center, the embodied carbon footprint of an LLM constitutes \(\sim 50\%\) of its operational carbon footprint.
**Neural Scaling Law**. The Neural Scaling Law (Kaplan et al., 2020) delineates a power-law relationship linking an LLM's test loss to three key factors: the number of model parameters, the scale of the training dataset, and the computational resources utilized during training. This relationship holds across diverse architectures and downstream ML tasks, spanning zero-shot, prompted, and fine-tuned scenarios (Caballero et al., 2023).
**Reducing LLM Carbon Footprint**. Efforts on reducing LLM carbon footprints have been channeled into 4 domains. Firstly, sparse MoE architectures (Fedus et al., 2022) have been proposed to enhance LLM performance by increasing model parameters while maintaining a similar computational load. Secondly, the adoption of specialized ML hardware, such as TPUs (Jouppi et al., 2017), has emerged as a more energy-efficient alternative to power-hungry GPUs. Thirdly, ML-focused data centers have optimized their facilities into large-scale systems, reducing cooling and infrastructure overhead to enhance power usage effectiveness (PUE) (Liu et al., 2020). Lastly, these data centers are transitioning to renewable energy sources like solar and wind power (Acun et al., 2023) to mitigate the operational carbon footprint of LLMs. However, the recent proliferation of ML-specific hardware within these data centers, driven by the diverse demands of ML tasks, is widening the gap between operational and embodied carbon footprints in the near future (Wu et al., 2022).
**Parallelism in LLM Processing**. Effective processing of LLMs necessitates the utilization of multiple computing devices, such as GPUs or TPUs, owing to significant LLM parameter counts. Four types of parallelism, i.e., data, tensor, pipeline, and expert, are commonly employed to enhance hardware efficiency, quantified as actual throughput relative to peak throughput.
* **Data Parallelism**: In data parallelism (Xing et al., 2015), the full LLM model is distributed to each computing device, while the input dataset is divided among these devices. Periodic gradient aggregation ensures that all devices maintain consistent model weights.
* **Tensor Parallelism**: Tensor parallelism (Narayanan et al., 2021) involves distributing an LLM's layers across multiple devices. Within a transformer layer, the self-attention block partitions key, query, and value matrices through column-wise division. The output linear layer directly handles the attention operation's partitioned output, with weight matrix partitioning by rows. In the two-layer MLP, the first layer is divided along columns, and the second along rows. Efficient data
coordination among partitions on different devices is achieved through two all-reduce operations in forward and backward passes.
* **Pipeline Parallelism**: In pipeline parallelism (Narayanan et al., 2021), an LLM's layers are distributed across multiple devices. Each device handles an equal number of layers, and microbatches split a batch for pipelined execution. Synchronous weight updates are ensured through pipelining. However, periodic pipeline flushes to synchronize steps across devices introduce "pipeline bubbles" at batch starts and ends, which need to be minimized for efficient pipeline model parallelism.
* **Expert Parallelism**: Expert parallelism (Kim et al., 2021) is tailored for parallelizing the training of MoE LLMs. This approach involves distributing distinct experts across various devices, enabling parallel execution. However, due to the separation of experts across multiple computing devices, explicit communication using all-to-all primitives becomes essential.
## 3 Related Work
Table 1 provides a comparison between LLMCarbon and existing research endeavors. The predominant focus of prior studies (Henderson et al., 2020; Wu et al., 2022; Anthony et al., 2020; Schwartz et al., 2020; Dodge et al., 2022; Strubell et al., 2019) has been the measurement and reporting of carbon footprints associated with the actual training phase of ML models, denoted as "others" in the table. Notably, only one previous model, mlco2 (Lacoste et al., 2019), possesses the capability to predict the carbon footprint of an LLM task based on metrics like GPU utilization, training duration, and data center efficiency. Nevertheless, mlco2 encounters four significant limitations. Firstly, mlco2 cannot estimate the carbon footprint of LLMs, particularly sparse MoE LLMs. Secondly, it overlooks essential architectural attributes of LLMs, such as LLM parameter count, resulting in exaggerated predictions. Thirdly, mlco2 exclusively considers GPUs and neglects specialized ML hardware like TPUs (Jouppi et al., 2017), assuming uniform peak computing throughput across all GPUs, thereby yielding imprecise carbon footprint estimations. Lastly, mlco2 cannot model the embodied carbon footprint of an LLM based on its hardware configuration.
## 4 LLMCarbon
### Overview
Figure 1 presents an overview of LLMCarbon for predicting the carbon footprint of an LLM. The inputs to LLMCarbon encompass the LLM's architectural description, data center specification, and hardware configuration. To output the LLM's carbon footprint, LLMCarbon employs a series of models, each processing specific input details. LLMCarbon can use the parameter model to determine the LLM's parameter count based on its architectural attributes, or directly accept the LLM's parameter count as input. With the LLM's parameter count and training token count, LLMCarbon calculates the test loss by the neural scaling law (Kaplan et al., 2020), and employs the FLOP model to estimate the volume of FLOPs required for LLM processing. Through the parameter count, LLMCarbon generates the optimal data, tensor, pipeline, and expert parallelism setting. Taking into account the parallelism setting and hardware configuration, LLMCarbon's hardware efficiency model computes the hardware efficiency, representing the real computing throughput divided by the peak computing throughput. Utilizing data center details, hardware efficiency, and FLOP count, LLMCarbon applies the operational carbon model to derive the LLM's operational carbon footprint. Similarly, by considering the hardware configuration, LLMCarbon's embodied carbon model yields the LLM's embodied carbon footprint. The overall carbon footprint of the LLM is then computed by summing both the operational and embodied carbon footprints.
\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
\multirow{2}{*}{**scheme**} & predictive & MoE & architectural & specialized & operational & embodied \\
 & modeling & support & parameters & hardware & carbon & carbon \\
\hline
mlco2 & ✓ & ✗ & ✗ & ✗ & ✓ & ✗ \\
others & ✗ & ✗ & ✗ & ✗ & ✓ & ✓ \\
**LLMCarbon** & ✓ & ✓ & ✓ & ✓ & ✓ & ✓ \\
\hline \hline
\end{tabular}
\end{table}
Table 1: The comparison of LLMCarbon against prior work.
Figure 1: The overview of LLMCarbon.
### Parameter Model
Among all LLM architectural attributes, the LLM parameter count has the largest impact on test loss (Kaplan et al., 2020). To reduce projection errors, LLMCarbon can take the parameter count as direct input, or estimate the parameter count by the parameter model. The parameter model's input comprises the LLM's architectural parameters including the hidden size (\(h\)), the number of layers (\(l\)), the vocabulary size (\(V\)), and the number of experts (\(N_{e}\)). For a dense LLM, we calculate its parameter count (\(P_{d}\)) by Equation 1 (Narayanan et al., 2021). An MoE LLM (Rajbhandari et al., 2022) replaces \(\rho\) (\(\rho\in(0,1]\)) feed-forward layers in its counterpart dense LLM with MoE layers. An MoE layer's parameter count is the sum of the expert parameter count (\(P_{exp}=8h^{2}N_{e}\)) and the self-attention parameter count (\(P_{att}=4h^{2}\)), so the parameter count (\(P_{e}\)) of an MoE LLM can be computed using Equation 2. The parameter model of LLMs adopting an encoder-decoder architecture can be viewed in Appendix A.
\[P_{d}\approx 12lh^{2}+Vh \tag{1}\]
\[P_{e}\approx(1-\rho)P_{d}+\rho(4h^{2}+8h^{2}N_{e})l \tag{2}\]
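As a hedged illustration, Equations 1 and 2 translate directly into the following Python helpers. The sanity check uses the publicly reported GPT-3 architecture (96 layers, hidden size 12288, vocabulary size 50257), which is our addition rather than an input taken from this paper.

```python
def dense_param_count(l, h, V):
    """Equation 1: parameter count of a dense LLM."""
    return 12 * l * h**2 + V * h

def moe_param_count(l, h, V, n_experts, rho):
    """Equation 2: rho of the feed-forward layers are MoE layers,
    each holding n_experts experts (8h^2 N_e) plus self-attention (4h^2)."""
    return (1 - rho) * dense_param_count(l, h, V) \
        + rho * (4 * h**2 + 8 * h**2 * n_experts) * l

# Sanity check at GPT-3 scale: ~1.75e11 parameters.
print(f"{dense_param_count(96, 12288, 50257):.3e}")
```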
### Neural Scaling Law
The neural scaling law (Kaplan et al., 2020) predicts an LLM's test loss based on its parameter count \(P\) and the training dataset size \(D\). For ensuring the comparability of test losses across various models, sizes, and datasets, we adopt the Chinchilla scaling law (Hoffmann et al., 2022) formulated as Equation 3, where \(A\), \(B\), \(\alpha\), \(\beta\), and \(E\) are fitting constants. The test loss \(L\) equals to the summation of an irreducible term \(E\) and a reducible term diminishing through the scaling of \(P\) and \(D\).
\[L(P,D)=\frac{A}{P^{\alpha}}+\frac{B}{D^{\beta}}+E \tag{3}\]
\[TC\approx 6PD \tag{4}\]
\[IC\approx 2PD \tag{5}\]
### Flop Model
The FLOP model receives two inputs: the count of parameters (\(P\)) and the number of tokens (\(D\)) processed during LLM processing. The primary component of FLOPs is the multiply-accumulate operations involving LLM weights and intermediate results. Within our FLOP model, the FLOP count necessary for training a dense LLM (\(TC\)) is estimated using Equation 4. For dense LLM inferences, the FLOP count (\(IC\)) is approximated as per Equation 5. To compute the FLOP count for MoE LLM processing, we input the parameter number of the dense base model (Rajbhandari et al., 2022) of the MoE LLM into Equations 4 and 5, respectively.
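A short sketch of Equations 3-5 follows. The default fitting constants are the values published for the Chinchilla fit (Hoffmann et al., 2022); LLMCarbon would substitute its own fitted constants, so treat the defaults as placeholders.

```python
def chinchilla_loss(P, D, A=406.4, B=410.7, alpha=0.34, beta=0.28, E=1.69):
    """Equation 3: test loss from parameter count P and token count D."""
    return A / P**alpha + B / D**beta + E

def train_flops(P, D):
    """Equation 4: FLOPs to train a dense LLM on D tokens."""
    return 6 * P * D

def inference_flops(P, D):
    """Equation 5: FLOPs for dense LLM inference over D tokens."""
    return 2 * P * D

# GPT-3 check: 6 * 175e9 * 300e9 ~ 3.15e23, close to the 314 zettaFLOPs in Table 4.
print(f"{train_flops(175e9, 300e9):.2e} FLOPs")
```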
### Hardware Efficiency Model
Efficient processing of LLMs relies on achieving high hardware efficiency, which is calculated as the actual computing throughput divided by the peak throughput. This efficiency is largely determined by the optimal configuration of data, tensor, pipeline, and expert parallelism, along with the number of devices used for the task. Using too few or too many devices or improperly configuring parallelism can lead to reduced hardware efficiency. For example, achieving optimal parallelism for GPT-3 with 175 billion parameters requires 1.5K V100 GPUs, resulting in a hardware efficiency of 47% (Narayanan et al., 2021). Conversely, an unoptimized configuration using 10K V100 GPUs yields a substantially lower hardware efficiency of only 19.7% (Patterson et al., 2021).
**Optimal Parallelism Setting.** The optimal parallelism configuration is represented as \((p,t,d,e)\), where each variable corresponds to a degree of pipeline, tensor, data, and expert parallelism, respectively. For dense LLMs, optimal settings are derived from (Narayanan et al., 2021), depicted in Figure 2, where \(e=1\) is omitted. Initially, we increase tensor parallelism (\(t\)) up to \(z\) (e.g., \(z=8\)) when employing \(z\)-device servers (Narayanan et al., 2021), each containing \(z\) interconnected devices. This increment in \(t\) is confined to avoid exceeding communication bandwidth limits.
Once \(z\) is reached, further scaling for larger LLMs involves increasing pipeline parallelism (\(p\)) (Narayanan et al., 2021). However, the product of \(t\) and \(p\) (\(t\cdot p\)) must not exceed a certain threshold to ensure that LLM parameters and intermediate data fit into device memory. The number of devices required to achieve optimal hardware efficiency for dense LLM processing is calculated as \(n=t\cdot p\cdot d\) (Narayanan et al., 2021). A polynomial regression model is used to predict optimal hardware efficiency based on these parameters. For MoE LLMs, the optimal parallelism settings are adopted from (Chen et al., 2023). Assuming 64 experts within an MoE LLM, expert parallelism (\(e\)) is always set to 64, intertwining \(d\) and \(e\) for a uniform expert distribution. To reduce inter-device all-to-all communications, \(d\) is fixed at 1. Scaling MoE LLM parallelism is achieved by increasing pipeline parallelism (\(p\)). The number of devices required for optimal hardware efficiency in MoE LLM processing is also calculated as \(n=t\cdot p\cdot d\). MoE LLMs require fewer devices compared to dense LLMs with equivalent parameter counts due to their lower computational overhead. The optimal hardware efficiency during MoE LLM processing is represented in Figure 5. MoE LLMs achieve \(\sim 80\%\) (Chen et al., 2023) of the optimal hardware efficiency of their dense base models, due to extra host-device memory swaps.
\[\mathit{eff}_{re}=\begin{cases}\gamma_{0}\cdot\frac{\mathit{re}}{n}\cdot \mathit{eff}_{n}&re<n\\ \gamma_{1}\cdot\frac{n}{re}\cdot\mathit{eff}_{n}+\gamma_{2}&re>n\end{cases} \tag{6}\]
\[t=\frac{\mathit{TFLOP}}{n_{dev}\cdot\mathit{FLOP}_{peak}\cdot\mathit{eff}} \tag{7}\]
**Fewer or More Computing Devices**. When the number of computing devices is not equal to \(t\cdot p\cdot d\), the hardware efficiency decreases. The efficiency (\(\mathit{eff}_{re}\)) with \(re\) devices can be calculated using Equation 6, where \(\gamma_{0}\sim\gamma_{2}\) are fitting constants, \(\mathit{eff}_{n}\) means the highest hardware efficiency, and \(n\) indicates the number of devices that can achieve \(\mathit{eff}_{n}\).
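Equation 6 amounts to a simple piecewise scaling of the optimal efficiency, as in the sketch below; the \(\gamma\) constants must be fitted per platform, so the defaults here are placeholders.

```python
def off_optimal_efficiency(re, n, eff_n, g0=1.0, g1=1.0, g2=0.0):
    """Equation 6: hardware efficiency with re devices when n devices
    achieve the optimum eff_n; g0..g2 stand for gamma_0..gamma_2."""
    if re < n:
        return g0 * (re / n) * eff_n
    if re > n:
        return g1 * (n / re) * eff_n + g2
    return eff_n
```

With fitted constants, this is how the model can capture, for example, the drop from 47% hardware efficiency at 1.5K V100 GPUs to 19.7% at 10K GPUs for GPT-3.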
\[\mathit{energy}_{hard}=\sum_{i\in hardware\_set}(P_{i}\cdot\mathit{eff}_{i} \cdot n_{i}\cdot t_{i}) \tag{8}\]
\[\mathit{energy}_{oper}=\mathit{energy}_{hard}\cdot\mathit{PUE} \tag{9}\]
### Operational Carbon Model
By using the FLOP count (\(\mathit{TFLOP}\)), the hardware efficiency (\(\mathit{eff}\)), and the computing device number (\(n_{dev}\)), we can determine the execution time of a device through Equation 7, where \(\mathit{FLOP}_{peak}\) represents the device peak throughput. The total energy (\(\mathit{energy}_{hard}\)) consumed by all hardware units can be calculated using Equation 8, where \(P_{i}\) denotes the peak power of hardware unit \(i\); \(\mathit{eff}_{i}\) represents the hardware efficiency of hardware unit \(i\); \(n_{i}\) indicates the count of hardware unit \(i\); and \(t_{i}\) means the execution time of hardware unit \(i\). Hardware units encompass a range of components, including CPUs, LLM computing devices, memories, SSDs, and others.
\[\mathit{CO2eq}_{oper}=\mathit{energy}_{oper}\cdot\mathit{carb\_int} \tag{10}\]
**PUE**. Power Usage Effectiveness (PUE) (Henderson et al., 2020) serves as the industry standard metric for evaluating a data center's energy efficiency. It is defined as the ratio of the total energy consumption of the data center, including all auxiliary components like cooling, to the energy consumed solely by the computing hardware within the data center. The operational energy (\(\mathit{energy}_{oper}\)) associated with LLM processing can be calculated using Equation 9, where \(\mathit{energy}_{hard}\) denotes the energy used by the computing hardware within a data center, and \(\mathit{PUE}\) indicates the PUE of the specific data center.
\[\mathit{CO2eq}_{emb}=\sum_{i\in hardware\_set}\frac{t_{i}\cdot\mathit{CO2eq}_{chip_{i}}}{\mathit{lifetime}_{i}} \tag{12}\]
**Carbon Intensity**. Carbon intensity is a metric that assesses the environmental impact of a data center's energy consumption. Carbon-free energy (CFE) denotes the proportion of renewable, carbon-free energy utilized within a data center. As a data center increases its utilization of renewable energy, it experiences an increase in CFE and a corresponding decrease in carbon intensity. Table 2 provides insights into the carbon intensity and CFE values for some data centers. The operational carbon footprint (\(\mathit{CO2eq}_{oper}\)) attributed to LLM processing is calculated using Equation 10, where \(\mathit{energy}_{oper}\) represents the operational energy for LLM processing, and \(\mathit{carb\_int}\) denotes the carbon intensity of the specific data center.
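Chaining Equations 7-10 yields the operational carbon estimate end to end. The sketch below reproduces the GPT-3 row of Table 4 using a single device type and the average system power; it is a simplified illustration, not the full LLMCarbon implementation.

```python
def operational_co2(tflop, n_dev, flop_peak, eff, sys_power_w, pue, carb_int):
    """Equations 7-10 chained for a single device type.
    carb_int is in tCO2eq/MWh; returns (training days, tCO2eq)."""
    t = tflop / (n_dev * flop_peak * eff)          # Eq. 7: seconds per device
    energy_hard = sys_power_w * n_dev * t / 3.6e9  # Eq. 8, converted to MWh
    energy_oper = energy_hard * pue                # Eq. 9
    return t / 86400, energy_oper * carb_int       # Eq. 10

# GPT-3 row of Table 4: 314 zettaFLOPs, 10K V100s at 19.7% of 125 TFLOP/s,
# 330 W average system power, PUE 1.1, 0.429 tCO2eq/MWh.
days, co2 = operational_co2(314e21, 10_000, 125e12, 0.197, 330, 1.1, 0.429)
print(f"{days:.1f} days, {co2:.0f} tCO2eq")  # ~14.8 days, ~552 tCO2eq
```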
### Embodied Carbon Model
To quantify the chip's embodied carbon footprint (\(\mathit{CO2eq}_{chip}\)) within a specific hardware unit, Equation 11 is employed, where \(\mathit{area}\) represents the chip's area.
\[\mathit{CO2eq}_{chip}=\mathit{CPA}\cdot\mathit{area} \tag{11}\]
The Carbon emitted Per unit Area
(_CPA_) is contingent on various semiconductor fabrication parameters, including yield, energy consumption per unit area during manufacturing, emissions from chemicals utilized in hardware production, and emissions associated with raw material sourcing for fabrication. Specific values for area and CPA for distinct hardware units are elaborated in Table 3, where area values for CPU, DRAM, SSD, TPU, and GPU are drawn from sources such as (Singh et al., 2020), (Choe, 2021), (Wiki, 2023b), and (Wiki, 2023a). CPA values for Micron, Samsung, and TSMC are extracted from (Garcia Bardon et al., 2020) and (TSMC, 2019). The total embodied carbon footprint (\(\mathit{CO2eq}_{emb}\)) originating from all hardware units involved in LLM processing is assessed using Equation 12, where \(\mathit{CO2eq}_{chip_{i}}\) denotes the chip's embodied carbon footprint for hardware unit \(i\), \(\mathit{lifetime}_{i}\) means the lifespan of hardware unit \(i\), and \(t_{i}\) represents the execution duration of hardware unit \(i\). The hardware units mentioned in Equation 12 include CPUs, LLM computing devices, memories, SSDs, and other components. Notably, Meta's data centers achieve an average utilization rate of \(60\%\) throughout the 5-year lifespan of hardware units (Wu et al., 2022).
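Equations 11 and 12 reduce to a few lines of code. The sketch below amortizes only the GPU share of the XLM setup, using the V100 area and CPA from Table 3; a full accounting would additionally cover CPUs, DRAM, SSDs, and other components, as in the validation below.

```python
def chip_co2(area_cm2, cpa_kg_per_cm2):
    """Equation 11: embodied CO2eq of one chip, in kg."""
    return area_cm2 * cpa_kg_per_cm2

def embodied_co2(units, t_days, lifetime_days=5 * 365):
    """Equation 12: embodied CO2eq (kg) amortized over execution time.
    units is a list of (per_chip_kgCO2eq, count) pairs."""
    return sum(kg * n for kg, n in units) * (t_days / lifetime_days)

# GPU share of the XLM setup: 512 V100s (8.15 cm^2, 1.2 kgCO2/cm^2 from
# Table 3) used for 20.4 days, i.e. ~1.12% of a 5-year lifetime.
v100 = chip_co2(8.15, 1.2)
print(f"{embodied_co2([(v100, 512)], 20.4):.1f} kg CO2eq")  # ~56 kg
```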
### Total Carbon Footprint
The total carbon footprint (\(\mathit{CO2eq}\)) resulting from LLM processing is determined using Equation 13, where \(\mathit{CO2eq}_{oper}\) indicates the operational carbon footprint of the LLM, and \(\mathit{CO2eq}_{emb}\) denotes the embodied carbon footprint of the LLM.
\[\mathit{CO2eq}=\mathit{CO2eq}_{oper}+\mathit{CO2eq}_{emb} \tag{13}\]
## 5 Validation
We employ LLMCarbon to compute the operational footprints of five LLMs, including dense and MoE architectures, developed by Google, OpenAI, and Meta during their training phases. We also compute the operational footprint of another LLM, Noor (Lakim et al., 2022), during its storage phase. To validate the predictions of LLMCarbon, we compare our calculated operational footprint values with the previously published data for these LLMs. Moreover, we utilize LLMCarbon to predict the embodied footprint of an LLM developed by Meta and validate the result by comparing it with the actual embodied footprint data.
### Operational Carbon Footprint Validation
**Training Phase**. Table 4 presents the validation results of LLMCarbon's predictions on the training operational carbon footprint. To validate the training operational carbon footprint estimations yielded by LLMCarbon, we selected five LLMs: T5 (Raffel et al., 2020), GPT-3 (Brown et al., 2020), GShard (Lepikhin et al., 2021), Switch (Fedus et al., 2022), and XLM (Conneau et al., 2020). We list the inputs and outputs of LLMCarbon in Table 4. Within the table, "device TDP (W)" indicates the chip thermal design power of a computing device, while "avg. system power (W)" conveys the average system power per computing device, including TPU/GPU, host CPU, DRAM, and network interface. The inputs on the parameters of LLMs, hardware, and data centers, and the actual training operational carbon footprint values of these LLMs were collected from (Patterson et al., 2021) and (Wu et al., 2022). Since the parameter count of an LLM is considered as an architectural parameter of the LLM in (Patterson et al., 2021) and (Wu et al., 2022), we skipped the parameter model and directly used the parameter count as an input to LLMCarbon. The validation of the parameter model of LLMCarbon can be found in Appendix B. Owing to the adoption of suboptimal parallelism settings, the hardware efficiencies for training these LLMs hover within the range of \(19.7\%\) to \(39\%\), lower than the hardware efficiencies achieved with optimal parallelism configurations. Comparing
\begin{table}
\begin{tabular}{c c c}
\hline \hline
data center & carbon free & carbon intensity \\
name & energy & (\(gCO_{2}eq/kWh\)) \\
\hline
asia-east2 & 28\% & 360 \\
europe-north1 & 91\% & 127 \\
us-central1 & 97\% & 394 \\
us-south1 & 40\% & 296 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: The data center efficiency.
\begin{table}
\begin{tabular}{c c c c}
\hline \hline
hardware & description & unit & CPA \\
\hline
CPU & TSMC 16nm & 147 \(mm^{2}\) & 1 \(kgCO_{2}/cm^{2}\) \\
\hline
DRAM & Micron 18nm & 256 GB & 0.024 \(kgCO_{2}/GB\) \\
\hline
SSD & Samsung 20nm & 32 TB & 0.4 \(kgCO_{2}/GB\) \\
\hline
TPUv3 & TSMC 16nm & 700 \(mm^{2}\) & 1 \(kgCO_{2}/cm^{2}\) \\
TPUv4 & TSMC 7nm & 400 \(mm^{2}\) & 1.6 \(kgCO_{2}/cm^{2}\) \\
\hline
V100 & TSMC 12nm & 815 \(mm^{2}\) & 1.2 \(kgCO_{2}/cm^{2}\) \\
H100 & TSMC 4nm & 814 \(mm^{2}\) & 1.8 \(kgCO_{2}/cm^{2}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: The comparison of embodied carbon footprints.
the predicted operational carbon footprints to actual data, LLMCarbon's projections display disparities of \(\leq 8.2\%\). When predicting the operational carbon footprint during the training of MoE LLMs, LLMCarbon incurs a higher margin of error, due to the intricacy of MoE architectures. In contrast, when compared to actual data, the training operational carbon footprint estimations made by mlco2 (Lacoste et al., 2019) suffer from huge disparities of more than \(69\%\), because mlco2 assumes all devices consistently operate at the peak computing throughput and consume the peak power.
**Inference Phase**. To validate the operational carbon footprint predictions generated by LLMCarbon, we consider the inferences of GPT3 with 175B parameters (Yu et al., 2022). These inferences were carried out on 16 A100 GPUs, using a batch size of 32 and an input size of 128 tokens (Yu et al., 2022). According to the hardware efficiency model, this specific hardware configuration yields a hardware efficiency of 9.26%. Achieving the optimal hardware efficiency for GPT3 requires \(\sim\)1.5K GPUs, which is significantly more than what was used for these inferences. LLMCarbon's predicted latency for this inference batch is 3.1s, while the actual latency for this inference batch is 3s (Yu et al., 2022). We assume the inference experiments took place in a data center with a PUE of 1.1 and a carbon intensity of 0.429 \(CO_{2}eq/KWh\). The difference between the predicted and actual inference operational carbon footprints does not exceed \(+3.3\%\).
**Storage Phase**. The typical power consumption of cloud storage is reported as 11.3W/TB (Posani et al., 2018), while the power consumption for data transfer within a data center is around 1.48W/TB (Baliga et al., 2011). Over a six-month storage phase, the Noor LLM (Lakim et al., 2022) encompasses 32.7TB of storage data, comprising curated data, bulk data, and the model. Additionally, it transfers a data volume of 277.4TB. Based on LLMCarbon's estimations, the storage data energy is predicted as 1.596MWh (compared to the actual 1.69MWh (Lakim et al., 2022)), while the energy consumption attributed to data transfer is projected to be 1.77MWh (compared to 1.8MWh (Lakim et al., 2022)). Notably, the projection accuracy of LLMCarbon regarding the operational energy during the storage phase showcases an error margin of less than 3.6%.
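These storage-phase figures can be cross-checked with a few lines of arithmetic, assuming a 180-day half year:

```python
# Cross-check of the Noor storage-phase estimates over a six-month
# (assumed 180-day) phase: 11.3 W/TB for storage, 1.48 W/TB for transfer.
hours = 180 * 24
storage_mwh = 11.3 * 32.7 * hours / 1e6    # ~1.596 MWh (reported: 1.69 MWh)
transfer_mwh = 1.48 * 277.4 * hours / 1e6  # ~1.774 MWh (reported: 1.8 MWh)
print(f"{storage_mwh:.3f} MWh storage, {transfer_mwh:.3f} MWh transfer")
```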
**Experimentation Phase**. The experimentation phase consists of various training, inference, and storage activities (Wu et al., 2022), and we have validated each of these phases of an LLM in the previous sections.
\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
LLM & T5 & GPT3 & GShard & Switch & XLM \\
\hline
reference & \multicolumn{4}{c}{(Patterson et al., 2021)} & (Wu et al., 2022) \\
developer & Google & OpenAI & Google & Google & Meta \\
type & dense & dense & MoE & MoE & dense \\
parameter \# (B) & 11 & 175 & 619 & 1500 & 0.55 \\
base model param. \# (B) & - & - & 2.3 & 7.41 & - \\
token \# (B) & 500 & 300 & 1K & 2K & 7K \\
\(CO_{2}eq/KWh\) & 0.545 & 0.429 & 0.177 & 0.33 & 0.413 \\
PUE & 1.12 & 1.1 & 1.09 & 1.1 & 1.1 \\
computing device & TPUv3 & V100 & TPUv3 & TPUv3 & V100 \\
device TDP (W) & 450 & 300 & 450 & 450 & 300 \\
avg. system power (W) & 310 & 330 & 288 & 245 & 342 \\
peak TFLOP/s & 123 & 125 & 123 & 123 & 125 \\
achieved TFLOP/s & 45.6 & 24.6 & 48 & 34.4 & 26.5 \\
hardware efficiency & 37\% & 19.7\% & 39\% & 28\% & 21.2\% \\
device \# & 512 & 10K & 1K & 1K & 512 \\
total zettaFLOPs & 40.5 & 314 & 13.3 & 82.2 & 23.9 \\
training days & 20 & 14.8 & 3.1 & 27 & 20.4 \\
\hline
actual \(tCO_{2}eq\) & 46.7 & 552.1 & 4.3 & 59.1 & 39 \\
\hline
mlco2 predicted \(tCO_{2}eq\) & 89.4 & 955.2 & 8.4 & 137.3 & 66.96 \\
mlco2 \(\Delta\) & \(+91.3\%\) & \(+73\%\) & \(+95.3\%\) & \(+132\%\) & \(+69\%\) \\
\hline
**LLMCarbon predicted \(tCO_{2}eq\)** & 45.66 & 553.87 & 4.46 & 63.9 & 37.6 \\
**LLMCarbon \(\Delta\)** & \(\mathbf{-2.22\%}\) & \(\mathbf{+0.32\%}\) & \(\mathbf{+3.8\%}\) & \(\mathbf{+8.2\%}\) & \(\mathbf{-3.54\%}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 4: The validation on the operational carbon footprints of various LLMs.
### Embodied Carbon Footprint Validation
Table 5 presents the validation results of the embodied carbon footprint estimated by LLMCarbon in comparison to the published data of XLM (Wu et al., 2022). To the best of our knowledge, this is the only publicly available data regarding the embodied carbon footprint of an LLM training hardware infrastructure. The setup consists of 512 V100 GPUs organized into 64 8-GPU servers, each equipped with a CPU, a 32TB SSD disk, and a 256GB DRAM main memory system. Using the unit and CPA data from Table 3, we computed the values of \(\mathit{CO2eq}_{\mathit{chip}}\) presented in Table 5. The training duration of XLM is 20.4 days, and Wu et al. (2022) assumed a hardware unit lifetime of 5 years. Consequently, the \(\frac{time}{lifetime}\) values for all hardware units were determined to be \(1.12\%\). Apart from CPU, GPU, SSD, and DRAM, other hardware components (others) such as the motherboard, chassis, and PSU collectively contribute to \(15\%\) (Tannu and Nair, 2022) of the anticipated total embodied carbon footprint. In contrast to the reported embodied carbon footprint of XLM (Wu et al., 2022), the predictions produced by LLMCarbon reveal a disparity of \(-3.05\%\).
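The attribution behind Table 5 reduces to multiplying each unit count by its chip-level footprint and the \(\frac{time}{lifetime}\) fraction. A minimal sketch follows, using the GPU and SSD rows of Table 5 (the dictionary layout and names are illustrative, not LLMCarbon's data structures):

```python
# Minimal sketch of the embodied-carbon attribution used in Table 5:
# each unit's chip-level footprint is amortized over a 5-year lifetime
# and charged for the 20.4-day training run (time/lifetime ~ 1.12%).

time_over_lifetime = 20.4 / (5 * 365)

# (count, per-unit kgCO2eq), taken from the GPU and SSD rows of Table 5:
hardware = {"GPU": (512, 9.78), "SSD": (64, 576.0)}

for name, (count, co2eq_chip_kg) in hardware.items():
    co2eq_emb_t = count * co2eq_chip_kg * time_over_lifetime / 1000  # tCO2eq
    print(f"{name}: {co2eq_emb_t:.3f} tCO2eq")
# -> GPU: 0.056 tCO2eq, SSD: 0.412 tCO2eq, matching Table 5
```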
## 6 Case Studies Using LLMCarbon
We used LLMCarbon to demonstrate the following case studies.
**Large Embodied Carbon Footprint**. The embodied carbon footprint throughout the life-cycle of an LLM is significant. Even when no computing activities occur, the LLM still incurs embodied carbon overhead due to the idle hardware allocated to the LLM. As illustrated in Figure 6, the embodied carbon footprint of an LLM across its entire life-cycle contributes to approximately \(24\%\sim 35\%\) of the overall carbon footprint (including embodied, training, inference, experimentation, and storage carbon footprints) of the LLM. We adopted the ratio between training, inference, and experimentation activities from (Wu et al., 2022). Furthermore, as data centers progressively shift towards adopting renewable energy sources, the embodied carbon footprint of an LLM will dominate the entire life-cycle carbon footprint of the LLM in the near future. For instance, 97% of the operational energy in a Meta data center (Wu et al., 2022) is provided by renewable sources. The embodied carbon footprints of diverse LLMs operating within this data center constitute \(92\%\sim 95\%\) of their entire life-cycle carbon footprints. This underscores the pivotal role of accounting for embodied carbon in the sustainability evaluation of LLMs.
**Optimal Parallelism Setting**. As discussed in Section 5.1, the training processes of the LLMs used in our validation lacked optimized parallelism settings. By using LLMCarbon, we pinpoint the optimal configurations for data, tensor, pipeline, and expert parallelism pertaining to these three LLMs. As illustrated in Figure 6, the adoption of these optimal parallelism settings leads to a noteworthy decrease (i.e., \(16\%\sim 39\%\)) in their operational carbon footprints.
**New Accelerators**. When employing different computing devices for LLM processing, the operational carbon footprints of an LLM tend to differ, while the embodied carbon footprints remain similar.
Figure 6: The carbon footprint of three LLMs in case studies.
Figure 7: The carbon footprint of GPT3 trained by different computing devices.
\begin{table}
\begin{tabular}{l l l l l}
\hline \hline
hardware & number & \(\mathit{CO2eq}_{\mathit{chip}}\) (\(\mathit{kgCO}_{\mathit{2eq}}\)) & \(\frac{time}{lifetime}\) & \(\mathit{CO2eq}_{\mathit{emb}}\) (\(\mathit{tCO}_{\mathit{2eq}}\)) \\
\hline
GPU & 512 & 9.78 & 1.12\% & 0.056 \\
CPU & 64 & 1.47 & 1.12\% & 0.0018 \\
SSD & 64 & 576 & 1.12\% & 0.412 \\
DRAM & 64 & 102.4 & 1.12\% & 0.073 \\
others & 64 & 148.2 & 1.12\% & 0.096 \\
**predicted sum** & & & & 0.64 \\
\hline \hline
\multicolumn{5}{c}{actual 0.66 \(\mathit{tCO}_{\mathit{2eq}}\), \(\boldsymbol{\Delta}-\)3.05\%} \\
\hline \hline
\end{tabular}
\end{table}
Table 5: The embodied carbon footprint validation against Meta XLM.
Figure 7 showcases the outcomes derived from training, inference, and experimentation with three LLMs using V100 GPUs, H100 GPUs, TPUv3, and TPUv4. Their embodied carbon footprints exhibit similarity, as the embodied carbon emissions of SSD and DRAM dominate their total embodied carbon footprints. However, compared to V100 GPUs, the operational carbon footprints of these LLMs are notably curtailed by 71% and 41% when employing H100 and TPUv4 accelerators, respectively. Embracing novel computing devices for LLMs presents a pragmatic path to mitigate their operational carbon footprints.
**Training Carbon Footprint Scaling**. In addition to the LLMs (i.e., T5, GPT3, GShard, Switch, XLM, and Noor) we used in validations, we also included other LLMs in our analysis, such as PaLM (Chowdhery et al., 2022), Gopher (Rae et al., 2021), Chinchilla (Hoffmann et al., 2022), LaMDA (Thoppilan et al., 2022), Jurassic-1 (Lieber et al., 2021), MT-NLG (Smith et al., 2022), Bloom (Scao et al., 2022), YaLM (Yandex, 2022), GLM (Zeng et al., 2023), GLaM (Du et al., 2022), FB-MoE (Artetxe et al., 2021), ST-MoE (Zoph et al., 2022), and PR-MoE (Rajbhandari et al., 2022). Among these LLMs, GShard, Switch, GLaM, FB-MoE, ST-MoE, and PR-MoE use sparse MoE architectures, while the other LLMs adopt dense architectures. We do not aim to directly compare the accuracy and carbon emissions of these original LLMs, since they were trained on different datasets and in different data centers. Instead, we study the test losses and training operational carbon footprints of some new LLM designs adopting the same architectures as these LLMs. We assume these new LLM designs are trained using the same dataset and the same hardware infrastructure in the same data center. We present the test losses and training operational carbon footprints of these LLMs in Figure 8. To compute the test loss, we adopt the fitting constants including \(\alpha=0.34\), \(\beta=0.28\), \(A=406.4\), \(B=410.7\), and \(E=1.69\) for Equation 3 from (Hoffmann et al., 2022). Since the test loss of an MoE LLM with \(P\) parameters is similar to that of its dense counterpart with only \(P/8\) parameters (Rajbhandari et al., 2022), we decreased the \(P\) of MoE LLMs to \(P/8\) in Equation 3. The training processes of all LLMs use their optimal parallelism settings and the corresponding numbers of V100 GPUs hosted by a data center where PUE is 1.1 and \(\mathit{CO_{2}eq}/\mathit{KWh}\) is 0.431. Overall, an LLM with a larger number of parameters and trained on more tokens achieves a lower test loss but also consumes a larger training operational carbon footprint. Compared to dense LLMs, the Pareto front of MoE LLMs is closer to the origin point, indicating that an MoE LLM can obtain a lower test loss for the same training carbon footprint.
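The test-loss side of this analysis follows the standard Chinchilla parametric form \(L(N,D)=E+A/N^{\alpha}+B/D^{\beta}\) with the constants quoted above, with the parameter count reduced to \(P/8\) for MoE designs as stated in the text. A minimal sketch (function and variable names are ours):

```python
# Test loss from the Chinchilla-style scaling law (Hoffmann et al., 2022)
# with the fitting constants quoted in the text; the P/8 reduction for
# MoE models follows the approximation stated above.

A, B, E = 406.4, 410.7, 1.69
ALPHA, BETA = 0.34, 0.28

def test_loss(params_billion: float, tokens_billion: float, moe: bool = False) -> float:
    n = params_billion * 1e9 / (8 if moe else 1)  # effective parameter count
    d = tokens_billion * 1e9                      # training tokens
    return E + A / n**ALPHA + B / d**BETA

print(test_loss(175, 300))               # a dense GPT3-like design
print(test_loss(1500, 2000, moe=True))   # a Switch-like MoE design
```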
## 7 Conclusion
In this paper, we propose LLMCarbon, an end-to-end carbon footprint modeling tool for dense and MoE LLMs, which contribute significantly to carbon emissions during training, inference, experimentation, and storage processes. LLMCarbon can accurately assess the operational and embodied carbon footprints of an LLM, enabling efficient exploration of the design space by considering the trade-off between carbon footprint and test loss. It also promotes the adoption of carbon footprint reduction measures by facilitating quantitative comparisons among various LLM configurations.
|
2309.03837 | Cross-Task Attention Network: Improving Multi-Task Learning for Medical
Imaging Applications | Multi-task learning (MTL) is a powerful approach in deep learning that
leverages the information from multiple tasks during training to improve model
performance. In medical imaging, MTL has shown great potential to solve various
tasks. However, existing MTL architectures in medical imaging are limited in
sharing information across tasks, reducing the potential performance
improvements of MTL. In this study, we introduce a novel attention-based MTL
framework to better leverage inter-task interactions for various tasks from
pixel-level to image-level predictions. Specifically, we propose a Cross-Task
Attention Network (CTAN) which utilizes cross-task attention mechanisms to
incorporate information by interacting across tasks. We validated CTAN on four
medical imaging datasets that span different domains and tasks including:
radiation treatment planning prediction using planning CT images of two
different target cancers (Prostate, OpenKBP); pigmented skin lesion
segmentation and diagnosis using dermatoscopic images (HAM10000); and COVID-19
diagnosis and severity prediction using chest CT scans (STOIC). Our study
demonstrates the effectiveness of CTAN in improving the accuracy of medical
imaging tasks. Compared to standard single-task learning (STL), CTAN
demonstrated a 4.67% improvement in performance and outperformed both widely
used MTL baselines: hard parameter sharing (HPS) with an average performance
improvement of 3.22%; and multi-task attention network (MTAN) with a relative
decrease of 5.38%. These findings highlight the significance of our proposed
MTL framework in solving medical imaging tasks and its potential to improve
their accuracy across domains. | Sangwook Kim, Thomas G. Purdie, Chris McIntosh | 2023-09-07T16:50:40Z | http://arxiv.org/abs/2309.03837v1 | # Cross-Task Attention Network: Improving Multi-Task Learning for Medical Imaging Applications
###### Abstract
Multi-task learning (MTL) is a powerful approach in deep learning that leverages the information from multiple tasks during training to improve model performance. In medical imaging, MTL has shown great potential to solve various tasks. However, existing MTL architectures in medical imaging are limited in sharing information across tasks, reducing the potential performance improvements of MTL. In this study, we introduce a novel attention-based MTL framework to better leverage inter-task interactions for various tasks from pixel-level to image-level predictions. Specifically, we propose a Cross-Task Attention Network (CTAN) which utilizes cross-task attention mechanisms to incorporate information by interacting across tasks. We validated CTAN on four medical imaging datasets that span different domains and tasks including: radiation treatment planning prediction using planning CT images of two different target cancers (Prostate, OpenKBP); pigmented skin lesion segmentation and diagnosis using dermatoscopic images (HAM10000); and COVID-19 diagnosis and severity prediction using chest CT scans (STOIC). Our study demonstrates the effectiveness of CTAN in improving the accuracy of medical imaging tasks. Compared to standard single-task learning (STL), CTAN demonstrated a 4.67% improvement in performance and outperformed both widely used MTL baselines: hard parameter sharing (HPS) with an average performance improvement of 3.22%; and multi-task attention network (MTAN) with a relative decrease of 5.38%. These findings highlight the significance of our proposed MTL framework in solving medical imaging tasks and its potential to improve their accuracy across domains.
Keywords:Multi-Task Learning Cross Attention Automated Radiotherapy
## 1 Introduction
Multi-task learning (MTL) [5] algorithms train deep learning models for two or more tasks simultaneously, using shared parameters between models to encourage beneficial cooperation. MTL provides additional information not by explicitly
Figure 1: (Top) Cross-task attention network (CTAN) and other MTL model architectures: hard parameter sharing (HPS) [1] and multi-task attention network (MTAN) [16]. Similar to the concept of one-to-many mappings from HPS and MTAN, CTAN has one shared encoder linked with decoders for each task. MTAN applies task-specific attention to encoder features for the respective tasks, whereas CTAN uses cross-attention in the encoder and bottleneck layers to transfer task-specific features to task-specific decoders for better task interaction. (Bottom) Summary of four medical imaging datasets with three different task sets used in this study. **The numbers of samples of the train, validation, and test splits are shown below each dataset.** Test datasets without complete segmentation labels and clinical information were excluded from the original datasets in OpenKBP and HAM10000, respectively.
adding more datasets for model training but by implicitly extracting training signals from multiple related tasks within the existing dataset. The various tasks are thought to regularize shared components of the network, leading to improved model performance and generalization. For example, following [2], it is natural to assume that the features learned to delineate a skin lesion from the background may also be relevant for comparing the lesion to its surrounding areas to inform the diagnosis.
Previous studies have demonstrated that learning two relevant tasks can improve model performance using MTL in medical imaging [4, 6, 7, 8, 26, 27]. Sainz et al. show the application and improvement of model performance using MTL in breast cancer screening by training classification and detection of abnormal mammography findings [6]. Chen et al. utilize MTL to improve atrial segmentation and classification using MRI [7]. Weninger et al. propose an MTL framework to improve brain tumour segmentation by jointly training detection of enhancing tumour and image reconstruction using brain MRI [26].
These studies demonstrate the applicability of MTL to improving performance on medical imaging tasks. However, most existing MTL architectures are based on hard-parameter sharing (HPS) [1], which uses a single shared encoder with task-specific decoders in a one-to-many fashion, maximizing encoder regularization between tasks but restricting all tasks to an identical feature set rather than a partially shared one.
Introduced by Liu et al., the multi-task attention network (MTAN) [16] also employs a one-to-many mapping but adds independent task-specific attention mechanisms that, while able to adapt the features of the shared embedding per task, cannot themselves share any information. Following the introduction of MTAN, there have been studies using attention in MTL for automating the binding between task features within the network architectures [17, 28]. However, most existing MTL studies on non-medical images focus on scenarios where all tasks are at the pixel level. This is often impractical in the medical imaging domain, since acquiring pixel-level labels for medical images is costly and labour-intensive. Thus, we focus on solving multi-task learning in hybrid scenarios including both pixel- and image-level tasks by utilizing cross-task attention in MTL on medical imaging datasets.
We hypothesize that by combining the shared-feature abilities of HPS with the flexibility of MTAN through a novel cross-task attention framework that shares task information across the attention mechanisms, we can better utilize inter-task interaction to improve overall performance using MTL. Additionally, cross-attention of the bottleneck features of each task is employed to provide cross-task-dependent information to the decoders of each task. We validated our approach using three distinct pairs of tasks from four medical imaging datasets. CTAN shows broad applicability to mixes of tasks at both the pixel and image levels.
#### Contributions
We propose a novel Cross-Task Attention Network (CTAN), an MTL framework that leverages cross-task attention modules in the encoder and bottleneck layers to capture inter-task interactions (see Fig. 2). Our results demonstrate that CTAN is effective in learning three types of vision tasks, including two pixel-level prediction tasks and one image-level task from various domains. As shown in Fig. 1, we experimented with three different task pairs from four datasets. In addition, we show the performance improvement of CTAN compared to single-task learning (STL) and two widely used MTL baseline architectures, HPS and MTAN.
## 2 Methods and Materials
### Cross-Task Attention Network (CTAN)
CTAN consists of two cross-task attention modules, the cross-task attention encoder (CTAE) and the cross-task attention bottleneck (CTAB) (see Fig. 2). CTAE is employed within the encoder layers: it calculates an attentive mask for each task and combines information targeted at both tasks, enabling the encoder to extract task-specific information. It encodes and decodes the input features to highlight and extract significant features. The attention module in CTAE resembles the attention module in [16], wherein Liu et al. calculate attention maps using one attention block per task and multiply them with the feature maps during a forward pass with data from that task.
Figure 2: Overview of architecture of cross-task attention network (CTAN), including the encoder and two decoders for image-level and pixel-level tasks. Convolution blocks are shown on the right, along with the two cross-task attention modules: (a) Cross-task attention encoder (CTAE), and (b) Cross-task attention bottleneck (CTAB).
However, in CTAE, attention maps are instead multiplied in a cross-task fashion, as shown in Fig. 2-a. This helps the model to integrate the shared features by multiplying the cross-task attentive maps with the features from the shared block, which enables inter-task interaction during training. We denote \(U^{j}\) and \(P^{j}\) as features from the \(j^{th}\) layer of the shared encoder, and \(t\) as the task index. Note that \(P^{j}\) refers to the output of two convolution blocks using \(U^{j}\) as the input. \(S^{j-1}\) denotes the input of the \(j^{th}\) layer in the shared encoder, which is the output of the shared block in the \((j-1)^{th}\) layer for \(j>1\); when \(j=1\), the input image embedding from the 3x3 Conv block is used (see Fig. 2). The task-specific embedded features \(F^{j}_{t}\) result from the concatenation of \(U^{j}\) and \(\hat{A}^{j-1}_{t}\) for \(j>1\) (from \(U^{j}\) alone for \(j=1\)), followed by the task embedding block in Fig. 2. \(F^{j}_{t}\) is then fed into the task-specific attention block to create the attention mask \(A^{j}_{t}\). The output of CTAE, \(\hat{A}^{j}_{t}\), is defined as:
\[\hat{A}^{j}_{t}=Pool(A^{j}_{t^{\prime}}\ \odot\ P^{j}),\ t\in\{\textit{1,2}\}, \tag{1}\]
where \(Pool\) refers to the pooling block (see Fig. 2), \(\odot\) refers to the element-wise multiplication, and \(t^{\prime}\) refers to the task index of the other task trained together. \(\hat{A}^{j}_{t}\) then serves as the input attention mask for the attention block in the next layer, propagating attention across the decoder (\(\hat{A}^{j-1}_{t}\) is set to all zero for the first layer).
We propose CTAB as shown in Fig. 2-b, in which we calculate and multiply cross-task attention of two task-embedded features to task-specific bottleneck representation. We calculate the cross-task attention mask using a \(query\) and a \(key\) and apply the attention mask to a \(value\). Herein, \(value\) and \(key\) are the same task-embedded features, and \(query\) is the embedding of the other task. Thus, the output of CTAB \(\bar{A}_{t}\) is defined as:
\[\bar{A}_{t}=\hat{E}_{t}\ \cdot(\hat{E}^{\top}_{t^{\prime}}\cdot\hat{E}_{t}),\ t \in\{\textit{1,2}\}, \tag{2}\]
where \(\top\) refers to transpose of a matrix, \(\cdot\) refers to matrix multiplication, and \(\hat{E}_{t}\) denotes the task-specific embedded features for task \(t\). The output of CTAB, \(\bar{A}_{t}\), is forwarded to task-specific decoders.
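For concreteness, a minimal PyTorch sketch of Eq. (2) is given below; the feature shape \((C, N)\) with \(N\) spatial positions, and the omission of batching and the task-embedding blocks, are simplifying assumptions on our part rather than the paper's exact implementation:

```python
# A minimal PyTorch sketch of the cross-task attention bottleneck (CTAB),
# Eq. (2): the attention mask (query = other task's embedding, key = value
# = own task's embedding) is applied to each task's bottleneck features.

import torch

def ctab(e_t: torch.Tensor, e_other: torch.Tensor) -> torch.Tensor:
    """e_t, e_other: task-embedded bottleneck features of shape (C, N),
    where N is the number of spatial positions."""
    mask = e_other.transpose(0, 1) @ e_t   # (N, N) cross-task attention mask
    return e_t @ mask                      # (C, N), sent to the decoder of task t

e1, e2 = torch.randn(256, 64), torch.randn(256, 64)
a1, a2 = ctab(e1, e2), ctab(e2, e1)        # one output per task-specific decoder
```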
#### Encoder and Decoder
We utilize a ResNet-50 [12] pre-trained with ImageNet [9] as the encoder backbone, with identical architecture across all experiments. However, we implement different decoders for image-level and pixel-level tasks. For pixel-level tasks such as segmentation and dose prediction, we incorporate skip connections [23] between the encoders and decoders, with three up-sampling blocks using bilinear interpolation (as depicted in Fig. 2), followed by a 1x1 convolution layer with output channels equal to the number of segmentation labels, and a single channel for dose prediction. For image-level tasks, we use decoders with skip connections and four down-sampling layers, with a global average pooling layer [11] and a fully-connected layer at the end. Notably, we introduce skip connections in the classifier to balance model training and address asymmetric decoder issues that arise when training MTL to solve both image-level and pixel-level tasks together. Finally, we use a fully-connected layer with
a sigmoid activation function for binary classification (STOIC) and a softmax function for multi-class classification (HAM10000) as the final output layer.
### Training details
We use the Adam [14] optimizer with a learning rate of \(10^{-4}\) and a weight decay of \(10^{-5}\). We use task-specific losses (see Table 1). Dynamic Weight Averaging [16] was utilized to stabilize the combined training losses of all tasks. A batch size of 32 was used for the Prostate dataset, and 8 for the rest. We conducted experiments using PyTorch (ver 1.9.0) [20] on an NVIDIA A100 GPU with 40GB memory.
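As a concrete illustration of the loss balancing, a minimal sketch of Dynamic Weight Averaging follows; the temperature value and variable names are our assumptions rather than the exact training code:

```python
# A minimal sketch of Dynamic Weight Averaging (Liu et al. [16]): task
# weights are set from the ratio of each task's losses over the last two
# epochs. The temperature T = 2.0 is a common choice in the original work
# and an assumption here, as are the variable names.

import math

def dwa_weights(prev_losses, prev_prev_losses, temperature: float = 2.0):
    """Return one weight per task; weights sum to the number of tasks."""
    ratios = [l1 / l2 for l1, l2 in zip(prev_losses, prev_prev_losses)]
    exps = [math.exp(r / temperature) for r in ratios]
    k = len(ratios)
    return [k * e / sum(exps) for e in exps]

# e.g. segmentation loss plateauing, classification loss still dropping:
print(dwa_weights([0.40, 0.90], [0.41, 1.20]))  # plateauing task gets more weight
```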
### Evaluation
We used task-specific metrics to evaluate the model performance for each task: Dice similarity coefficient for segmentation (%); mean absolute error (Gy) between ground truth and predicted dose distribution maps for dose prediction; accuracy (%) for classification of HAM10000; and the area under the receiver operating characteristic curve (%) for classification of STOIC. Following [15], we define the relative performance of MTL models compared to STL:
\[\Delta_{task}(\%)=100*\frac{(-1)^{l_{i}}(M_{b,i}-M_{m,i})}{M_{b,i}},\ l\in\{ \mathit{0},\mathit{1}\}, \tag{3}\]
where \(i\) denotes the index of the task, and \(m\) and \(b\) refer to the target MTL model and the baseline STL, respectively. \(M\) refers to the task performance metric. \(l\) denotes the metric-specific flag, which is 1 if the metric is higher-the-better and 0 otherwise. We can then calculate the average of the relative differences of all task-specific metrics for each experiment. A positive relative performance indicates that the MTL model performs better than STL.
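A small helper reproducing Eq. (3) is sketched below; the numbers in the usage line are the Prostate/CTAN segmentation entries from Table 2:

```python
# Relative performance of an MTL model vs. the STL baseline, Eq. (3).

def relative_delta(m_baseline: float, m_model: float, higher_better: bool) -> float:
    sign = -1.0 if higher_better else 1.0   # (-1)^l with l = 1 for higher-the-better
    return 100.0 * sign * (m_baseline - m_model) / m_baseline

# Prostate segmentation (Dice, higher-the-better): STL 81.96 vs. CTAN 82.40
print(relative_delta(81.96, 82.40, higher_better=True))   # ~ +0.54 %
```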
\begin{table}
\begin{tabular}{l l l}
\hline \hline
Task & Loss function & Dataset \\
\hline
Segmentation & Combo Loss [18] (weighted combination of Dice Loss and Cross-entropy Loss) & Prostate, OpenKBP, HAM10000 \\
\hline
Dose prediction & Mean absolute error (MAE) Loss [3] & Prostate, OpenKBP \\
\hline
Classification & Cross-entropy Loss & HAM10000, STOIC \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Summary of loss functions for each task. We use combo loss [18] with weights of 0.3 and 0.7 for the Dice loss and the cross-entropy loss, respectively.
### Datasets
We validated our approach using four medical imaging datasets with three different task sets (see Fig. 1-B). The first task set consists of two pixel-level tasks: dose prediction and segmentation of organs at risk (OAR) and clinical target volume (CTV) for prostate (Prostate) and head and neck cancer treatment (OpenKBP) ([https://www.aapm.org/GrandChallenge/OpenKBP](https://www.aapm.org/GrandChallenge/OpenKBP), [3]). Segmentation labels for the Prostate dataset are rectum, bladder, and left and right femur, while brain stem, spinal cord, and left and right parotid are used in OpenKBP. For the second task set, which contains one image-level and one pixel-level task, dermatoscopic images of pigmented skin lesions (HAM10000) ([https://doi.org/10.7910/DVN/DBW86T](https://doi.org/10.7910/DVN/DBW86T), [24]) are used to segment and diagnose skin lesions. The last set has two image-level tasks: classification of COVID-19 and disease severity using chest CT scans (STOIC) ([https://stoic2021.grand-challenge.org](https://stoic2021.grand-challenge.org), [22]).
## 3 Experiments and Results
In Table 2, the results showed that CTAN outperformed STL with an average relative difference of 4.67%. For the Prostate and OpenKBP datasets, which have two different pixel-level tasks, CTAN showed improvements of 2.18% and 1.99%, respectively, over STL. In both datasets, the performance increase for the dose prediction task was larger than that for the segmentation task. Notably, CTAN improved the performance of dose prediction when the task was trained together with segmentation of organs at risk and target volumes, rather than improving the performance of segmentation. For HAM10000, CTAN showed an overall performance improvement with a significant increase in diagnosing skin lesions; however, the performance of segmenting pigmented lesions improved only marginally compared to the classification task. For STOIC, CTAN resulted in an average relative difference of 5.72% over both image-level tasks, with a significant increase in diagnosing severe cases but a decrease in diagnosing COVID-19.
As shown in Table 2, CTAN outperformed both HPS and MTAN, which achieved an average relative improvement of 3.22% and an average relative decrease of 5.38% compared to STL, respectively. Unlike the other MTL baselines, CTAN showed performance improvements regardless of the task groups combined from different task levels. However, there were cases where CTAN did not outperform the other baselines at the single-task level. For instance, for the Prostate dataset's segmentation task, HPS outperformed CTAN with a relative difference of 1.74%, while CTAN showed only a 0.54% increase. Nevertheless, the overall performance gain using CTAN was higher across datasets and tasks, indicating that the cross-task attention mechanisms in CTAN were effective in learning multiple tasks.
## 4 Discussion
Our findings suggest that CTAN can improve the MTL performance across three distinct tasks from four distinct medical imaging datasets by 4.67% on average.
However, the specific performance improvements on each dataset and task can vary. Compared to other tasks, CTAN only marginally improves performance on the segmentation task. This might be due to the faster convergence of segmentation tasks in comparison to others, which may cause them to act more as regularizers whose pixel-level prior knowledge provides local contextual information for the other tasks [21]. In this regard, the results show that CTAN is more effective in utilizing segmentation tasks for learning high-level semantic cues than the other MTL baselines. In particular, CTAN can implicitly learn to avoid dose exposure to OARs and maximize dose to the CTV by training two clinically relevant tasks. This implies a potential to automate dose planning without depending on contouring information prior to predicting the dose distribution. This approach can ensure robustness against the variability of human annotators and improve automated planning quality for clinical care [19].
We observed a performance drop in COVID-19 classification in STOIC due to the intricate nature of the task: diagnosing severity depends on the COVID-19 diagnosis, which causes per-task gradient collisions during training. However, CTAN
\begin{table}
\begin{tabular}{l l l l l l l l}
\hline
Dataset & Method & \(M_{task1}\) & \(\Delta_{task1}\uparrow\) & \(M_{task2}\) & \(\Delta_{task2}\uparrow\) & \(\Delta_{mean}\uparrow\) & Rank \\
\hline
Prostate & STL & 81.96 & & 0.93 & & & 3 \\
& HPS & **83.28** & **1.74\%** & 0.91 & 1.29\% & 1.51\% & 2 \\
& MTAN & 75.47 & -7.92\% & 0.99 & -7.29\% & -7.60\% & 4 \\
& **CTAN** & 82.40 & 0.54\% & **0.89** & **3.82\%** & **2.18\%** & **1** \\
\hline
OpenKBP [3] & STL & 71.29 & & 0.53 & & & 2 \\
& HPS & 70.87 & -0.52\% & 0.53 & 0.31\% & -0.10\% & 3 \\
& MTAN & 66.09 & -7.30\% & 0.56 & -5.29\% & -6.29\% & 4 \\
& **CTAN** & **71.59** & **0.42\%** & **0.51** & **3.56\%** & **1.99\%** & **1** \\
\hline
HAM10000 [24] & STL & 92.83 & & 49.24 & & & 3 \\
& HPS & 92.21 & -0.68\% & 55.49 & 12.69\% & 6.01\% & 2 \\
& MTAN & 92.15 & -0.73\% & 47.08 & -4.37\% & -2.55\% & 4 \\
& **CTAN** & **92.91** & **0.09\%** & **57.85** & **17.49\%** & **8.79\%** & **1** \\
\hline
STOIC [22] & STL & **71.88** & & 55.83 & & & 3 \\
& HPS & 63.84 & -11.18\% & **68.17** & **22.09\%** & 5.45\% & 2 \\
& MTAN & 57.55 & -19.93\% & 61.30 & 9.79\% & -5.07\% & 4 \\
& **CTAN** & 68.73 & -4.38\% & 64.66 & 15.81\% & **5.72\%** & **1** \\
\hline
Average & STL & & & - & & & 3 \\
& HPS & - & -2.66\% & - & 9.09\% & 3.22\% & 2 \\
& MTAN & - & -8.97\% & - & -1.79\% & -5.38\% & 4 \\
& **CTAN** & - & **-0.83\%** & - & **10.17\%** & **4.67\%** & **1** \\
\hline
\end{tabular}
\end{table}
Table 2: Results of task-specific metrics (\(M_{task}\)) and their relative differences to STL (\(\Delta_{task}\)) of STL, HPS, MTAN, and CTAN on four datasets. Higher values are better for all metrics, except for \(M_{task2}\) in the Prostate and OpenKBP datasets. Best and second-best results are bolded and underlined, respectively. Average values are only calculated for the relative performance differences of the MTL methods.
proved to be effective in minimizing the performance drop in COVID-19 classification compared to other MTL methods. This implies that CTAN can selectively learn cross-task attentive features to improve overall performance. Future work could expand the applications of CTAN to other domains, such as videos of natural teeth [13], fundus photography for diagnosing glaucoma [10], or laparoscopic hysterectomy [25], and further investigate what drives the per-dataset variations.
In conclusion, we introduce a novel MTL framework, CTAN, that utilizes cross-task attention to improve MTL performance in medical imaging from multiple levels of tasks by 4.67% compared to STL. Results demonstrate that incorporating inter-task interaction in CTAN enhances overall performance of three medical imaging task sets from four distinct datasets, surpassing STL and two widely-used baseline MTL methods. This highlights CTAN's effectiveness and potential to improve MTL performance in the field of medical imaging.
|
2310.00285 | Optimal Local Measurements in Many-body Quantum Metrology | Quantum measurements are key to quantum metrology. Constrained by
experimental capabilities, collective measurements on a large number of copies
of metrological probes can pose significant challenges. Therefore, the locality
in quantum measurements must be considered. In this work, we propose a method
dubbed as the "iterative matrix partition" approach to elucidate the underlying
structures of optimal local measurements, with and without classical
communications, that saturate the quantum Cram\'er-Rao Bound (qCRB).
Furthermore, we find that while exact saturation is possible for all two-qubit
pure states, it is generically restrictive for multi-qubit pure states.
However, we demonstrate that the qCRB can be universally saturated in an
approximate manner through adaptive coherent controls, as long as the initial
state is separable and the Hamiltonian allows for interaction. Our results
bridge the gap between theoretical proposals and experiments in many-body
metrology and can find immediate applications in noisy intermediate-scale
quantum devices. | Jia-Xuan Liu, Jing Yang, Hai-Long Shi, Sixia Yu | 2023-09-30T07:34:31Z | http://arxiv.org/abs/2310.00285v1 | # Optimal Local Measurements in Many-body Quantum Metrology
###### Abstract
Quantum measurements are key to quantum metrology. Constrained by experimental capabilities, collective measurements on a large number of copies of metrological probes can pose significant challenges. Therefore, the locality in quantum measurements must be considered. In this work, we propose a method dubbed as the "iterative matrix partition" approach to elucidate the underlying structures of optimal local measurements, with and without classical communications, that saturate the quantum Cramer-Rao Bound (qCRB). Furthermore, we find that while exact saturation is possible for all two-qubit pure states, it is generically restrictive for multi-qubit pure states. However, we demonstrate that the qCRB can be universally saturated in an approximate manner through adaptive coherent controls, as long as the initial state is separable and the Hamiltonian allows for interaction. Our results bridge the gap between theoretical proposals and experiments in many-body metrology and can find immediate applications in noisy intermediate-scale quantum devices.
_Introduction.--_ Locality plays a crucial role in various branches of physics, encompassing high energy physics [1; 2; 3], condensed matter physics [4; 5] and quantum information theory [6; 7; 8; 9; 10]. In the context of many-body systems, locality gives rise to the Lieb-Robinson bound [11; 12; 13], which sets an upper limit on the spread of local operators. Despite the recent resurgence of interest in quantum metrology using many-body Hamiltonians [14; 15; 16; 17; 18], the investigation of locality in the sensing Hamiltonian has been undertaken only recently [19; 20; 21].
On the other hand, at the fundamental as well as the practical level, locality in quantum measurements has been largely uncharted in many-body quantum metrology. For example, consider a non-interacting and multiplicative sensing Hamiltonian \(H_{\lambda}=\lambda\sum_{j}h_{j}\), where \(h_{j}\) is the local Hamiltonian defined for the spin at site \(j\) and \(\lambda\) is the estimation parameter. It has been shown in Ref. [22] that if the initial state is prepared in a GHZ (Greenberger-Horne-Zeilinger)-like state, the precision is maximized among all possible initial states and local measurements (LM) suffice to saturate the quantum Cramér-Rao bound (qCRB). However, it is worth emphasizing that, to our best knowledge, even for this non-interacting Hamiltonian, little is known about whether LM can saturate the qCRB for other initial states, not to mention that \(H_{\lambda}\) in general can contain many-body interactions and have generic parametric dependence. Additionally, for pure states, Zhou et al. [23] proved that rank\(-1\) projective local measurements with classical communications (LMCC) can be constructed to saturate the qCRB. However, due to the classical communications between particles, the total number of measurement bases scales exponentially with the number of particles, which requires an exponential amount of experimental resources and is thus difficult to implement.
In contrast, the total number of measurement bases in LM scales linearly with the number of particles, which is feasible for experimental implementation. As such, in this work, we present a systematic study of qCRB-saturating LM. We address the following main questions: (i) Can LM universally saturate the qCRB? (ii) If not, under which circumstances does qCRB-saturating LM exist? (iii) If one allows generic positive operator-valued measure (POVM) LM, the number of measurement bases is unlimited and thus can be made as exponentially large as in LMCC; it is therefore natural to ask whether POVM LM can help in the saturation of the qCRB. (iv) If exact saturation with LM is very restrictive, is it possible to identify regimes where approximate saturation is possible? We shall develop a comprehensive understanding of these questions subsequently.
_The Optimal Measurement Condition. --_ To begin with, we consider a pure quantum state \(\left|\psi_{\lambda}\right\rangle\). The quantum Fisher information (QFI) is given by [24; 25]
\[I=4\left(\left\langle\partial_{\lambda}\psi_{\lambda}|\partial_{\lambda}\psi_{ \lambda}\right\rangle-\left|\left\langle\psi_{\lambda}|\partial_{\lambda}\psi_ {\lambda}\right\rangle|^{2}\right). \tag{1}\]
The optimal measurement condition that can saturate the qCRB is given by [26; 23; 27]
\[\left\langle\pi_{\omega}\right|\mathcal{M}\left|\pi_{\omega}\right\rangle=0, \tag{2}\]
where
\[\mathcal{M}\equiv[\rho_{\lambda},\,L]=2[\rho_{\lambda},\,\partial_{\lambda} \rho_{\lambda}], \tag{3}\]
\(L\) is the symmetric logarithmic derivative defined as \(\partial_{\lambda}\rho_{\lambda}\equiv(\rho_{\lambda}L+L\rho_{\lambda})/2\) with \(\rho_{\lambda}\equiv\left|\psi_{\lambda}\right\rangle\left\langle\psi_{\lambda}\right|\), and the POVM measurement satisfies \(\sum_{\omega}\left|\pi_{\omega}\right\rangle\left\langle\pi_{\omega}\right|=\mathbb{I}\). Here, without loss of generality, we only consider a set of rank\(-1\) POVM operators [27]. We would like to emphasize that in Ref. [23] the optimal condition is divided into two cases according to whether \(\text{Tr}(\rho_{\lambda}\left|\pi_{\omega}\right\rangle\left\langle\pi_{\omega}\right|)\) vanishes or not. Using the results on multi-parameter estimation [28], we argue in Sec. 1 of the Supplemental Material [27] that such a division is unnecessary and that Eq. (2) is the
condition to saturate the qCRB for all types of POVM measurements.
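As a concrete single-qubit illustration (our own numerical sketch, not part of the original derivation), one can verify Eq. (2) for phase estimation on \(|\psi_{\lambda}\rangle=(|0\rangle+e^{\mathrm{i}\lambda}|1\rangle)/\sqrt{2}\): the \(\sigma_{x}\) eigenbasis satisfies the condition at \(\lambda=0\), even though one outcome probability vanishes there, illustrating why the case division is unnecessary:

```python
# Single-qubit numerical check of the optimal measurement condition,
# Eq. (2), for phase estimation on |psi> = (|0> + e^{i lambda}|1>)/sqrt(2).

import numpy as np

lam = 0.0
psi = np.array([1.0, np.exp(1j * lam)]) / np.sqrt(2)
dpsi = np.array([0.0, 1j * np.exp(1j * lam)]) / np.sqrt(2)   # d psi / d lambda

rho = np.outer(psi, psi.conj())
drho = np.outer(dpsi, psi.conj()) + np.outer(psi, dpsi.conj())
M = 2 * (rho @ drho - drho @ rho)                            # Eq. (3)

plus = np.array([1.0, 1.0]) / np.sqrt(2)                     # sigma_x eigenvectors
minus = np.array([1.0, -1.0]) / np.sqrt(2)
for pi in (plus, minus):
    print(abs(pi.conj() @ M @ pi))                           # both ~ 0
```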
_The Iterative Matrix Partition Approach to LMCC and LM.--_ From now on, we shall focus our discussion on pure states of \(N\)-qubit systems and search for optimal LM and LMCC. In this case, the measurement outcome \(\omega\) in Eq. (2) becomes a string of measurement outcomes of the individual qubits, denoted as \(\omega=(\omega_{1},\,\omega_{2},\,\cdots,\,\omega_{N})\). Zhou et al. [23] showed that the optimal projective LMCC can be constructed iteratively through
\[\langle\pi^{(j)}_{\omega_{j},\omega_{1}\cdots\omega_{j-1}}|M^{(j)}_{\omega_{1 }\cdots\omega_{j-1}}|\pi^{(j)}_{\omega_{j},\omega_{1}\cdots\omega_{j-1}}\rangle=0. \tag{4}\]
The superscripts in basis and operators in Eq. (4) indicate the subsystems over which they are defined and
\[M^{(j)}_{\omega_{1}\cdots\omega_{j-1}}\equiv\langle\pi^{(1)}_{\omega_{1}}|\otimes\cdots\otimes\langle\pi^{(j-1)}_{\omega_{j-1},\omega_{1}\cdots\omega_{j-2}}|\,\mathrm{Tr}_{(j+1\text{-}N)}\mathcal{M}\,|\pi^{(1)}_{\omega_{1}}\rangle\otimes\cdots\otimes|\pi^{(j-1)}_{\omega_{j-1},\omega_{1}\cdots\omega_{j-2}}\rangle \tag{5}\]
is an operator defined on the \(j\)-th qubit with \(j\geq 2\), where the subscripts in the "\(\mathrm{Tr}\)" notation indicate the subsystems that are traced over. For \(j=1\), \(M^{(1)}\equiv\mathrm{Tr}_{(2-N)}\mathcal{M}\) and \(|\pi^{(1)}_{\omega_{1}}\rangle\) satisfies \(\langle\pi^{(1)}_{\omega_{1}}\big{|}M^{(1)}\big{|}\pi^{(1)}_{\omega_{1}} \rangle=0\). In Sec. II of the Supplemental Material [27], we show these properties naturally follow from the optimal measurement condition (2) and for optimal projective LM they reduce to
\[\langle\pi^{(j)}_{\omega_{j}}\big{|}M^{(j)}\big{|}\pi^{(j)}_{\omega_{j}} \rangle=0, \tag{6}\]
where \(M^{(j)}\equiv\mathrm{Tr}_{(1\text{-}\not{j}\text{-}N)}\mathcal{M}\), and the slash \(\not{j}\) in the subscript indicates that the \(j\)-th qubit is not traced over. A few comments are in order: (i) Since \(M^{(j)}_{\omega_{1}\cdots\omega_{j-1}}\) and \(M^{(j)}\) are traceless, the measurement basis in Eqs. (4, 6) can be found through the "hollowization" process: a traceless matrix can always be brought to a hollow matrix, i.e., a matrix with zero diagonal entries, through unitary transformations [27, 29, 30]. (ii) While Eq. (4) is also sufficient to guarantee the optimal measurement condition (2), this is no longer true for Eq. (6).
To resolve this issue, we propose the "_iterative matrix partition_" (IMP) approach, which not only produces the LMCC but also illuminates the intuition behind the existence of LM. We denote the local computational basis for the \(j\)-th qubit as \(|e^{(j)}_{\omega_{j}}\rangle\), \(\omega_{j}=1,\,2\). One can compute the \(\mathcal{M}\) operator in this basis (see a tutorial example in [27]). Consider
\[\mathcal{M}=\left[\begin{array}{c|c}M^{(2\text{-}N)}_{11}&M^{(2\text{-}N)}_{12}\\ \hline M^{(2\text{-}N)}_{21}&M^{(2\text{-}N)}_{22}\end{array}\right]\,, \tag{7}\]
where for fixed \(\omega_{1}\) and \(\mu_{1}\), \(M^{(2\text{-}N)}_{\omega_{1}\mu_{1}}\equiv\langle e^{(1)}_{\omega_{1}}\big{|}\mathcal{M}|e^{(1)}_{\mu_{1}}\rangle\) is a \(2^{N-1}\times 2^{N-1}\) matrix that acts on all the qubits except the first one.
Since \(\mathcal{M}\) is anti-Hermitian, so are the diagonal block matrices \(M^{(2\text{-}N)}_{11}\) and \(M^{(2\text{-}N)}_{22}\). Furthermore, since \(\mathcal{M}\) is traceless, the traces of the two diagonal block matrices can also be brought to zero through a unitary transformation on the first qubit (see Observation 3 in [27]). More precisely,
\[\mathcal{M}=\sum_{\omega_{1}\mu_{1}}W^{(2\text{-}N)}_{\omega_{1}\mu_{1}}\,|\pi^{(1)}_{\omega_{1}}\rangle\,\langle\pi^{(1)}_{\mu_{1}}| \tag{8}\]
where \(|\pi^{(1)}_{\omega_{1}}\rangle\equiv U^{(1)}\,|e^{(1)}_{\omega_{1}}\rangle\), \(W^{(2\text{-}N)}_{\omega_{1}\mu_{1}}\equiv U^{(1)}M^{(2\text{-}N)}_{\omega_{1}\mu_{1}}U^{(1)\dagger}\), and \(U^{(1)}\) is chosen such that \(\mathrm{Tr}W^{(2\text{-}N)}_{11}=\mathrm{Tr}W^{(2\text{-}N)}_{22}=0\). Note that \(W^{(2\text{-}N)}_{11}\) and \(W^{(2\text{-}N)}_{22}\) are also anti-Hermitian matrices.
Next, we decompose \(W^{(2\text{-}N)}_{11}\) and \(W^{(2\text{-}N)}_{22}\) in the local computational
Figure 1: LMCC can be constructed through IMP using “block hollowization”, where the trace of the diagonal blocks of a matrix is transformed to zero through local unitary transformations with classical communications. The goal is to perform a full “hollowization” procedure, where all the diagonal matrix elements of the operator \(\mathcal{M}\) are brought to zero. The IMP provides a feasible approach, see details in the main text and the Supplemental Material [27].
basis of the second qubit, i.e.
\[W^{(2\text{-}N)}_{\omega_{1}\omega_{1}}=\sum_{\omega_{2},\,\mu_{2}}M^{(3\text{-}N)}_{\omega_{2}\mu_{2},\,\omega_{1}}\,|e^{(2)}_{\omega_{2}}\rangle\,\langle e^{(2)}_{\mu_{2}}|\,, \tag{9}\]
where \(M^{(3\text{-}N)}_{\omega_{2}\mu_{2},\,\omega_{1}}\), analogous to \(M^{(2\text{-}N)}_{\omega_{1}\mu_{1}}\), is the block matrix representation of \(W^{(2\text{-}N)}_{\omega_{1}\omega_{1}}\) in the local computational basis of the second qubit. For fixed \(\omega_{1}\), one can iterate the "block-hollowization" process for \(W^{(2\text{-}N)}_{\omega_{1}\omega_{1}}\), leading to
\[W^{(2\text{-}N)}_{\omega_{1}\omega_{1}}=\sum_{\omega_{2},\,\mu_{2}}W^{(3\text{-}N)}_{\omega_{2}\mu_{2},\,\omega_{1}}\,|\pi^{(2)}_{\omega_{2},\,\omega_{1}}\rangle\,\langle\pi^{(2)}_{\mu_{2},\,\omega_{1}}|\,, \tag{10}\]
where \(|\pi^{(2)}_{\omega_{2},\,\omega_{1}}\rangle\equiv U^{(2)}_{\omega_{1}}\,|e^{(2)}_{\omega_{2}}\rangle\) and, for fixed \(\omega_{1}\), the diagonal blocks \(W^{(3\text{-}N)}_{\omega_{2}\omega_{2},\,\omega_{1}}\) are traceless and anti-Hermitian.
Iterating this process to the \(N\)-th qubit, we arrive at
\[W^{(N)}_{\omega_{N-1}\omega_{N-1},\,\omega_{1}\cdots\omega_{N-2}}=\sum_{\omega_{N},\,\mu_{N}}M_{\omega_{N}\mu_{N},\,\omega_{1}\cdots\omega_{N-1}}\,|e^{(N)}_{\omega_{N}}\rangle\,\langle e^{(N)}_{\mu_{N}}|, \tag{11}\]
where for fixed \(\omega_{1},\,\cdots,\,\omega_{N-1}\), the \(2\times 2\) matrix with entries \(M_{\omega_{N}\mu_{N},\,\omega_{1}\cdots\omega_{N-1}}\) is anti-Hermitian and traceless. Finally, we perform the "hollowization" and obtain
\[W^{(N)}_{\omega_{N-1}\omega_{N-1},\,\omega_{1}\cdots\omega_{N-2}}=\sum_{\omega_{N},\,\mu_{N}}W_{\omega_{N}\mu_{N},\,\omega_{1}\cdots\omega_{N-1}}\,|\pi^{(N)}_{\omega_{N},\,\omega_{1}\cdots\omega_{N-1}}\rangle\,\langle\pi^{(N)}_{\mu_{N},\,\omega_{1}\cdots\omega_{N-1}}|, \tag{12}\]
where \(|\pi^{(N)}_{\omega_{N},\,\omega_{1}\cdots\omega_{N-1}}\rangle\equiv U^{(N)}_{\omega_{1}\cdots\omega_{N-1}}\,|e^{(N)}_{\omega_{N}}\rangle\) and the diagonal entries \(W_{\omega_{N}\omega_{N},\,\omega_{1}\cdots\omega_{N-1}}\) vanish.
By virtue of Theorem 3, it suffices to focus on projective LM. If optimal projective LM cannot be found, then it is impossible to reach the qCRB by using POVM LM with a large number of measurement bases. In this sense, generic POVM LM does not help in reaching the qCRB. However, this does not exclude their other possible utilities. As we have shown before, in the projective LM basis, applying IMP to the GHZ state leads to the property of self-similarity. It is an interesting open question to search for states that display self-similarity in a generic POVM LM basis, which could lead to non-GHZ-like many-body states that saturate the qCRB.
We consider a pure state \(\ket{\psi_{\lambda}(t)}=U_{\lambda}(t)\ket{\psi_{0}}\) that is generated from a unitary parameter-dependent quantum channel \(U_{\lambda}(t)\) and an initial pure state \(\ket{\psi_{0}}\), where \(U_{\lambda}(t)\) satisfies the Schrödinger equation \(\mathrm{i}\dot{U}_{\lambda}(t)=H_{\lambda}(t)U_{\lambda}(t)\). In this case, the quantum Fisher information is given by
\[I_{\lambda}=4\mathrm{Var}\left(G_{\lambda}(t)\right)_{\ket{\psi_{0}}}, \tag{19}\]
and \(\mathcal{M}\) can be rewritten as
\[\mathcal{M}=-2\mathrm{i}U_{\lambda}(t)[\rho_{0},\,[G_{\lambda}(t),\,\rho_{0}] ]U_{\lambda}^{\dagger}(t), \tag{20}\]
where the metrological generator is defined as [31, 14]
\[G_{\lambda}(t)\equiv\mathrm{i}U_{\lambda}^{\dagger}(t)\partial_{\lambda}U_{ \lambda}(t)=\int_{0}^{t}U_{\lambda}^{\dagger}(s)\partial_{\lambda}H_{\lambda}( s)U_{\lambda}(s)ds. \tag{21}\]
So we have the following theorem [27]:
**Theorem 4**.: _Given a pair of an initial state \(\ket{\psi_{0}}\) and a unitary channel \(U_{\lambda}(t)\), the qCRB of \(\ket{\psi_{\lambda}(t)}\) can be saturated at the instantaneous time \(t\) by LM if and only if_
\[\text{Cov}\left(\mathcal{N}_{\alpha}^{(\mathrm{H})}(t),\,G_{\lambda}(t)\right)_{\ket{\psi_{0}}}=0,\,\forall\alpha\subseteq\mathcal{X}_{N}, \tag{22}\]
_where the set \(\mathcal{X}_{N}\) is the same as in Theorem 2, \(\mathcal{N}_{\alpha}^{(\mathrm{H})}(t)\equiv U_{\lambda}^{\dagger}(t)\mathcal{N}_{\alpha}U_{\lambda}(t)\) is the Heisenberg evolution of \(\mathcal{N}_{\alpha}\), and \(\text{Cov}(A,\,B)_{\ket{\psi_{0}}}\equiv\frac{1}{2}\langle\{A,\,B\}\rangle_{\ket{\psi_{0}}}-\langle A\rangle_{\ket{\psi_{0}}}\langle B\rangle_{\ket{\psi_{0}}}\)._
One can check immediately that the GHZ state with \(\sigma_{x}\)-LM satisfies Theorem 4. Now we are in a position to give a minimal 3-qubit counter-example that fails to saturate the qCRB under LM. Consider \(H_{\lambda}=\lambda H_{0}\), where \(H_{0}\equiv\sum_{\alpha=x,y}(\sigma_{\alpha}^{(1)}\sigma_{\alpha}^{(2)}+\sigma_{\alpha}^{(2)}\sigma_{\alpha}^{(3)})\), and the initial state is the W state, i.e., \(\ket{\psi_{0}}=(\ket{100}+\ket{010}+\ket{001})/\sqrt{3}\). We assume the true value of \(\lambda\) is zero so that \(\ket{\psi_{\lambda}(t)}=\ket{\psi_{0}}\). It should be clarified that although the state does not change over time in this case, it does not mean the parameter cannot be estimated accurately. In fact, it is straightforward to see that the QFI is \(4t^{2}\mathrm{Var}[H_{0}]_{\ket{\psi_{0}}}=32t^{2}/9\), independent of the value of \(\lambda\). In [27], using symmetry arguments, we show that the set of equations determined by Eq. (22) cannot be consistent with each other. Therefore, neither projective LM nor generic POVM LM exists according to Theorem 3.
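The quoted variance is easy to verify numerically; the following sketch (ours, for illustration) reproduces \(\mathrm{Var}[H_{0}]_{\ket{\psi_{0}}}=8/9\) and hence the QFI \(32t^{2}/9\):

```python
# Numerical check of the counter-example: for the W state and
# H0 = sum_{a=x,y} (sigma_a^(1) sigma_a^(2) + sigma_a^(2) sigma_a^(3)),
# Var[H0]_W = 8/9, so the QFI 4 t^2 Var[H0] equals 32 t^2 / 9.

import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
I2 = np.eye(2, dtype=complex)

def kron3(a, b, c):
    return np.kron(np.kron(a, b), c)

H0 = sum(kron3(s, s, I2) + kron3(I2, s, s) for s in (sx, sy))

w = np.zeros(8, dtype=complex)
w[[4, 2, 1]] = 1 / np.sqrt(3)        # |100>, |010>, |001>

var = (w.conj() @ H0 @ H0 @ w - (w.conj() @ H0 @ w) ** 2).real
print(var, 8 / 9)                    # both 0.888...
```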
_Universal Approximate Saturation with Adaptive Control._ -- As one can see from Theorem 4, the saturation of the qCRB with LM can be very restrictive. Nevertheless, we observe that if
\[\mathcal{N}_{\alpha}^{(\mathrm{H})}(t)\ket{\psi_{0}}\propto\ket{\psi_{0}},\, \,\forall\alpha\subseteq\mathcal{X}_{N} \tag{23}\]
is satisfied at time \(t\), then Eq. (22) holds. Note that the case where \(\ket{\psi_{0}}\) is an eigenstate of \(G_{\lambda}(t)\) is trivial as it leads to a vanishing QFI.
To this end, when the initial state is a product of pure states, one can first choose \(\mathcal{N}_{\alpha}(0)\) such that Eq. (23) holds at \(t=0\). As time evolves, \(\mathcal{N}_{\alpha}(t)\) will spread and Eq. (23) will no longer hold. However, one can take advantage of prior knowledge and apply a proper control Hamiltonian such that the dynamics is frozen or at least very slow. That is,
\[\delta H(t)=H_{\lambda}(t)+H_{1}(t), \tag{24}\]
where the control Hamiltonian \(H_{1}(t)=-H_{\lambda_{*}}(t)\) and \(\lambda_{*}\) is our prior knowledge of the estimation parameter. Then \(\mathcal{N}_{\alpha}^{(\mathrm{H})}(t)\) remains close to \(\mathcal{N}_{\alpha}(0)\) for quite a long time as long as \(\lambda_{*}\) is close to \(\lambda\). It is worth noting that in local estimation theory, adaptive estimation is usually exploited, where some refined knowledge of the estimation parameter is known a priori [32, 33, 34]. Quantum control has been explored in quantum metrology before, but aiming at boosting the QFI [35, 36, 18, 37] and overcoming the measurement noise [38, 39]. It is remarkable that the quantum controls here, which facilitate the saturation of the qCRB by LM, are fully consistent with the QFI-boosting controls in Refs. [36, 31, 35]. Finally, we note that as long as \(\lambda_{*}\) is close to \(\lambda\), the metrological generator associated with the dynamics generated by Eq. (24) becomes \(G_{\lambda}(t)=\int_{0}^{t}\partial_{\lambda}H_{\lambda}(s)ds\) and the QFI is still given by Eq. (19). Let us consider the following example, where
\[H_{\lambda}=\lambda S_{z}^{2}, \tag{25}\]
and the initial state is a spin coherent state [40] parameterized by
\[\ket{\psi_{0}}=\bigotimes_{k=1}^{N}\left[\cos\frac{\theta}{2}\ket{0}^{(k)}+e^{i\phi}\sin\frac{\theta}{2}\ket{1}^{(k)}\right]. \tag{26}\]
Equation (25) is nonlinear and non-local. It has been shown previously that precision beyond the shot-noise scaling of classical sensing [14, 19, 41] can be achieved. However, the optimal LM that reaches such a non-classical precision is still missing in the literature. To this end, we apply the coherent control \(H_{1}=-\lambda_{*}S_{z}^{2}\) so that \(\delta H=\delta\lambda S_{z}^{2}\), where \(\delta\lambda\equiv\lambda-\lambda_{*}\). The QFI corresponding to the initial state Eq. (26) is [27]
\[I=4t^{2}\mathrm{Var}[S_{z}^{2}]_{\ket{\psi_{0}}}=4t^{2}\sum_{k=1}^{3}f_{k}(\cos \theta)N^{k}, \tag{27}\]
which scales cubically in \(N\), surpassing the Heisenberg limit. In Fig. 2, the QFI is compared with the classical Fisher information (CFI) associated with the LM (18), where \(\mathbf{n}^{(j)}=(\sin\theta\cos\phi,\,\sin\theta\sin\phi,\,\cos\theta)\). One can readily see that the qCRB is asymptotically saturated as \(\lambda_{*}\) approaches \(\lambda\).
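The cubic scaling can be checked numerically: since \(\ket{\psi_{0}}\) is a product state and \(S_{z}\) is diagonal, the measured value is \(S_{z}=N/2-k\) with \(k\) binomially distributed, so \(\mathrm{Var}[S_{z}^{2}]\) follows from binomial moments. In the sketch below (ours, for illustration), we evaluate at \(\theta=\pi/4\), since the leading \(N^{3}\) term is proportional to \(\cos^{2}\theta\sin^{2}\theta\) and vanishes at \(\theta=\pi/2\):

```python
# Numerical check that the QFI (27) grows as N^3: with |psi_0> a product
# state and S_z diagonal, S_z = N/2 - k where k ~ Binomial(N, p) and
# p = sin^2(theta/2), so Var[S_z^2] reduces to binomial moments.

import math

def var_sz2(n: int, theta: float) -> float:
    p = math.sin(theta / 2) ** 2
    m1 = m2 = 0.0
    for k in range(n + 1):
        w = math.comb(n, k) * p**k * (1 - p) ** (n - k)
        sz2 = (n / 2 - k) ** 2
        m1 += w * sz2
        m2 += w * sz2**2
    return m2 - m1**2

for n in (10, 20, 40, 80):
    qfi_per_t2 = 4 * var_sz2(n, math.pi / 4)
    print(n, qfi_per_t2, qfi_per_t2 / n**3)   # ratio approaches a constant (~1/4)
```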
_Conclusion and outlook._ -- We systematically study optimal LMCC and LM that can saturate the qCRB in many-body sensing. We propose an IMP approach that illuminates the structure of the optimal LMCC and LM and provide several fundamental theorems on the qCRB-saturating optimal LM. We show that under LM, the qCRB can be universally saturated in an approximate way with adaptive control, regardless of the form of the sensing Hamiltonian.
Currently, in the protocols of many-body sensing [14; 15; 16; 42; 43], there is not yet a systematic construction of the optimal LM. Our results fill the gap between theoretical proposals of many-body sensing and their experimental realization. We expect to see their near-term implementation in noisy intermediate-scale quantum devices [44; 45; 46]. Future work includes generalization to qudits, continuous variable systems, and qubit-cavity systems, application to entanglement detection [47; 48; 49] and spin-squeezing [50; 51; 40], investigation of the effect of decoherence, etc.
_Acknowledgement._ --We thank Sisi Zhou for useful communications. JY was funded by the Wallenberg Initiative on Networks and Quantum Information (WINQ). HLS was supported by the NSFC key grants No. 12134015 and No. 92365202. SY was supported by Key-Area Research and Development Program of Guangdong Province Grant No. 2020B0303010001.
|
2310.20155 | MLatom 3: Platform for machine learning-enhanced computational chemistry
simulations and workflows | Machine learning (ML) is increasingly becoming a common tool in computational
chemistry. At the same time, the rapid development of ML methods requires a
flexible software framework for designing custom workflows. MLatom 3 is a
program package designed to leverage the power of ML to enhance typical
computational chemistry simulations and to create complex workflows. This
open-source package provides plenty of choice to the users who can run
simulations with the command line options, input files, or with scripts using
MLatom as a Python package, both on their computers and on the online XACS
cloud computing at XACScloud.com. Computational chemists can calculate energies
and thermochemical properties, optimize geometries, run molecular and quantum
dynamics, and simulate (ro)vibrational, one-photon UV/vis absorption, and
two-photon absorption spectra with ML, quantum mechanical, and combined models.
The users can choose from an extensive library of methods containing
pre-trained ML models and quantum mechanical approximations such as AIQM1
approaching coupled-cluster accuracy. The developers can build their own models
using various ML algorithms. The great flexibility of MLatom is largely due to
the extensive use of the interfaces to many state-of-the-art software packages
and libraries. | Pavlo O. Dral, Fuchun Ge, Yi-Fan Hou, Peikun Zheng, Yuxinxin Chen, Mario Barbatti, Olexandr Isayev, Cheng Wang, Bao-Xin Xue, Max Pinheiro Jr, Yuming Su, Yiheng Dai, Yangtao Chen, Lina Zhang, Shuang Zhang, Arif Ullah, Quanhao Zhang, Yanchi Ou | 2023-10-31T03:41:39Z | http://arxiv.org/abs/2310.20155v1 | # MLatom 3: Platform for machine learning-enhanced computational chemistry simulations and workflows
###### Abstract
Machine learning (ML) is increasingly becoming a common tool in computational chemistry. At the same time, the rapid development of ML methods requires a flexible software framework for designing custom workflows. MLatom 3 is a program package designed to leverage the power of ML to enhance typical computational chemistry simulations and to create complex workflows. This open-source package provides plenty of choice to the users who can run simulations with the command line options, input files, or with scripts using MLatom as a Python package, both on their computers and on the online XACS cloud computing at XACScloud.com. Computational chemists can calculate energies and thermochemical properties, optimize geometries, run molecular and quantum dynamics, and simulate (ro)vibrational, one-photon UV/vis absorption, and two-photon absorption spectra with ML, quantum mechanical, and combined models. The users can choose from an extensive library of methods containing pre-trained ML models and quantum mechanical approximations such as AIQM1 approaching coupled-cluster accuracy. The developers can build their own models using various ML algorithms. The great flexibility of MLatom is largely due to the extensive use of the interfaces to many state-of-the-art software packages and libraries.
## 1 Introduction
Computational chemistry simulations are common in chemistry research thanks to abundant general-purpose software, most of which has started as purely quantum mechanical (QM) and molecular mechanical (MM) packages. More recently, the rise of artificial intelligence (AI)/machine learning (ML) applications for chemical simulations has caused the proliferation of programs mostly focusing on specific ML tasks such as learning potential energy surfaces (PESs).[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17] The rift between the development of the traditional QM and MM packages on the one hand and ML programs on the other hand is bridged to some extent by the higher-level library ASE,[18] which enables usual computational tasks via interfacing heterogeneous software. The further integration of QM, MM, and ML has been prompted by the maturing of ML techniques and is evidenced by the growing trend of incorporating ML methods in the QM and MM computational chemistry software.[19, 20, 21]
Against this backdrop, the MLatom package started in 2013 as a pure stand-alone ML package to provide a general-purpose experience for computational chemists akin to the black-box QM packages.[22] The early MLatom could be used for training, testing, and using ML models and their combinations with QM methods (e.g., \(\Delta\)-learning[23] and learning of Hamiltonian parameters[24]), accurate representation of PES,[25, 26] sampling of points from data sets,[26] ML-accelerated nonadiabatic dynamics,[27] and materials design[28]. The fast pace of method and software development in QM, MM, ML, and other computational science domains led to MLatom 2, which started to include interfaces to third-party packages.[29] Such an approach provided a unique opportunity for the package users to choose one of the many established ML models - similar to the users of the traditional QM software who can choose one of the many QM methods. MLatom 2 could perform training of the ML models, evaluate their accuracy, and then use the models for geometry optimizations and frequency calculations. Special workflows were also implemented, such as acceleration of the absorption UV/vis spectra calculations with ML[30] and prediction of two-photon absorption spectra[31]. In addition, MLatom 2 could be used to perform simulations with the general-purpose AI-enhanced QM method[32] AIQM1 and the universal machine learning potentials of the ANI family,[33, 34, 35, 2] including the accurate scheme developed for calculating heats of formation[36] with uncertainty quantification using these methods.
With time, the need to develop increasingly complex workflows that incorporate ML and QM for a broad range of applications has necessitated the rethink and redesign of MLatom to enable the rapid development of highly customized routines. These additional design
requirements for MLatom to serve not just as a black-box general-purpose package but also as a flexible platform for developers resulted in a significant extension, redesign, and rewrite of the program. The subsequent upgrade has allowed the use of MLatom through the versatile Python API (MLatom PyAPI) and also included the implementation of more simulation tasks such as molecular and quantum dynamics and the support of QM methods and composite schemes based on the combinations of QM and ML models. This upgrade was released[37] as MLatom 3 in 2023 - ten years after the start of the project. During this decade, MLatom went through a drastic transformation from a pure Fortran package to a predominantly Python package with one-third of the code written in Fortran for efficient implementations of critical parts. MLatom 3 comes under the open-source permissive MIT license (modified to request proper citations). Here we give an overview of the capabilities of MLatom 3 and provide examples of its applications.
## 2 Overview
MLatom merges the functionality of typical quantum chemical and other atomistic simulation packages with the capabilities of disparate ML packages, with a strong focus on molecular systems. The user can choose from a selection of ready-to-use QM and ML models
Figure 1: Overview of the MLatom 3 capabilities.
and design and train ML models to perform the required simulations. A bird's-eye view of the MLatom capabilities is given in Figure 1.
One of the current main goals of MLatom is to enable simulation tasks of interest for a computational chemist with generic types of models that can be based on ML, QM, and their combinations (see Section 4). These tasks include single-point calculations, optimization of geometries of minima and transition states (which can be followed by intrinsic reaction coordinate (IRC) analysis[38]), frequency and thermochemical property calculations, molecular and quantum dynamics, rovibrational (infrared (IR) and power) spectra, ML-accelerated UV/vis absorption and two-photon absorption spectra simulations. This part of MLatom is more similar to traditional QM and MM packages but with much more flexibility in model choice and unique tasks. A dedicated Section 5 will give a more detailed account of the simulations.
Enabling the users to create their own ML models was MLatom's original main focus, and it continues to play a major role. MLatom supports a range of carefully selected, representative ML algorithms that can learn a desired property as a function of the 3D atomistic structure. Typically, though not exclusively, these algorithms are used for learning PESs and hence, for simplicity, are often called ML (interatomic) potentials (MLPs)[39, 40, 41, 42, 43]. One particular specialization of MLatom is the original implementation of kernel ridge regression (KRR) algorithms for learning any property as a function of any user-provided input vectors or XYZ molecular coordinates[22]. In addition, the user can create custom multi-component models based on the concepts of \(\Delta\)-learning[23], hierarchical ML[25], and self-correction[26]. These models may consist of ML and QM methods. MLatom provides standardized means for training, hyperparameter optimization, and evaluation of the models, so that switching from one model type to another may need just one keyword change[29]. This allows one to easily experiment with different models and choose the most appropriate for the task.
Data is as important as the choice and training of the ML algorithms. MLatom 3 provides several data structures specialized for computational chemistry needs, mainly based on versatile Python classes for atoms, molecules, molecular databases, and dynamics trajectories. These classes allow not just storing the data in a clearly structured format, but also handling it by, e.g., converting it to different molecular representations and data formats and splitting and sampling the data sets into the training, validation, and test subsets. Because data is a central concept both in the age of data-driven models and in MLatom as a package, we describe data structures in Section 3 before describing models, simulations, and machine learning.
How the user interacts with the program is also important: ideally, the features should be easily accessible and their use intuitive. MLatom calculations can be requested by providing command-line options either directly or through an input file. Alternatively, MLatom can be used as a Python module, which can be imported and used for creating calculation workflows of varying complexity. A side-by-side comparison of these two approaches is given in Figure 2. More examples highlighting different use cases of MLatom are interspersed throughout this article.
MLatom as an open-source package can be conveniently installed via PyPI, i.e., simply using the command pip install mlatom or from the source code available on GitHub at [https://github.com/dralgroup/mlatom](https://github.com/dralgroup/mlatom). To additionally facilitate access to AI-enhanced computational chemistry, MLatom can be conveniently used in the XACS cloud computing service at [https://XACScloud.com](https://XACScloud.com) whose basic functionality is free for non-commercial uses such as education and research. Cloud computing eliminates the need for program installation and might be particularly useful for users with limited computational resources.
Figure 2: Side-by-side comparison of the usage of MLatom in both command-line mode and via Python API for a common task of geometry optimization with one of the pre-trained ML models, ANI-1ccx.
## 3 Data
In MLatom, everything revolves around operations on data: databases and data points of different types such as an atom, molecule, molecular database, and molecular trajectory (Figure 3). They are implemented as Python classes that contain many useful properties and provide different tools to load and dump these data-type objects using different formats. For example, the key type is a molecule, which can be loaded from an XYZ file or a SMILES string and then automatically parsed into the constituent atom objects. Atom objects contain information about the nuclear charge and mass as well as nuclear coordinates. A molecule object is assigned a charge and multiplicity. The information about molecular and atomic properties can be passed to perform simulations, e.g., MD, with models that update and create new molecule objects with calculated quantum mechanical properties such as energies and energy gradients.
See Figure 2 for an example of loading a molecule object init_mol from the file init.xyz, used as the initial guess for the geometry optimization, returning an optimized geometry as a new molecule object final_mol, which is saved into the final.xyz file. Data objects can be directly accessed and manipulated via MLatom Python API. When using the MLatom in the command-line mode, many similar operations are done under the hood so
Figure 3: Overview of different data types in MLatom.
that the user often just needs to prepare input files in standard formats such as files with XYZ coordinates.
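For illustration, here is a minimal PyAPI sketch of the molecule handling and geometry optimization workflow described above. The class and attribute names follow the descriptions in this section and in Figure 2, but the exact signatures are assumptions and may differ between versions:

```python
import mlatom as ml

# Load the initial geometry; names follow the description of Figure 2
# (the exact PyAPI calls are assumptions and may differ between versions)
init_mol = ml.data.molecule.from_xyz_file('init.xyz')
init_mol.charge, init_mol.multiplicity = 0, 1

# Each molecule object is parsed into constituent atom objects
for atom in init_mol.atoms:
    print(atom.element_symbol, atom.xyz_coordinates)

# Optimize with a pre-trained method and dump the result to final.xyz
model = ml.models.methods(method='ANI-1ccx')
geomopt = ml.optimize_geometry(model=model, initial_molecule=init_mol)
final_mol = geomopt.optimized_molecule
final_mol.write_file_with_xyz_coordinates('final.xyz')
```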
Molecule objects can be combined into a molecular database, or created by parsing one; the database has, e.g., functions to split it into the different subsets needed for training and validation of ML models. The databases can be loaded and dumped in plain text (i.e., several files including XYZ coordinates, labels, and XYZ derivatives), JSON, and npz formats. Another data type is the molecular trajectory, which consists of steps containing molecules and other information. Molecular trajectory objects are created during geometry optimizations and MD simulations; in the latter case, a step is a snapshot of the MD trajectory, containing information about the time, nuclear coordinates and velocities, atomic numbers and masses, energy gradients, kinetic, potential, and total energies, and, if available, dipole moments and other properties. The trajectories can be loaded and dumped in JSON, H5MD,[44] and plain text formats.
Molecules for which XYZ coordinates are provided can be transformed into several supported descriptors: inverse internuclear distances and their version normalized relative to the equilibrium structure (RE)[26], the Coulomb matrix,[45, 46] and their variants.[29]
MLatom also has separate statistics routines to calculate different error measures and perform other data analyses.[29] Routines for preparing common types of plots, such as scatter plots and spectra, are available too.
## 4 Models and methods
Any simulation needs a model that provides the required output for a given input. The architecture and algorithms behind the models can be designed by an expert or chosen from the available selection. ML models typically require training to find their parameters before they can be used for simulations. Some of these models, such as the universal MLPs of the ANI family,[33, 34, 2, 35] are already pre-trained, so the user does not have to train them. This is similar to QM methods, commonly used out-of-the-box without tuning their parameters. In MLatom, we call a _method_ any model that can be used out-of-the-box for simulations. Both pre-trained ML models and QM methods belong to the methods in MLatom's terminology, which is reflected in the keyword names. This model type also includes hybrid pre-trained ML and QM methods. Below, we overview the models available in MLatom at the time of writing, the selection of available methods and models with provided architectures that need to be trained, and the ways to design custom models (Figure 4).
### Methods
MLatom provides access to a broad range of methods through interfaces to many third-party state-of-the-art software packages:
* Pre-trained ML models:
* Universal potentials ANI-1ccx[34], ANI-1x[33], ANI-2x[35], ANI-1x-D4, and ANI-2x-D4. ANI-1ccx is the most accurate and approaches gold-standard CCSD(T) accuracy. We have seen an example of its use in geometry optimization in Figure 2. The other methods approach the density functional theory (DFT) level. ANI-1ccx and ANI-1x are limited to CHNO elements, while ANI-2x can be used for CHNOFClS elements. We allow the user to use D4-dispersion-corrected universal ANI potentials that might be useful for noncovalent complexes. The D4 correction[47] is taken for the \(\omega\)B97X functional[48] used to generate the data for pre-training ANI-1x and ANI-2x. ANI models are provided via an interface to TorchANI[2] and D4 corrections via the interface to dftd4[49]. These methods are limited to predicting energies and forces for neutral closed-shell compounds in their ground state. MLatom reports uncertainties for calculations with these methods based on the standard deviation between neural network (NN) predictions[36].
Figure 4: Overview of different model types in MLatom.
* Special ML-TPA model for predicting the two-photon absorption (TPA) cross sections [31].
* Hybrid QM/ML methods: AIQM1, AIQM1@DFT, and AIQM1@DFT* [32]. More transferable and accurate than pre-trained ML models but slower (the speed of semi-empirical QM methods which are still much faster than DFT). AIQM1 is approaching gold-standard CCSD(T) accuracy, while AIQM1@DFT and AIQM1@DFT* target the DFT accuracy for neutral, closed-shell molecules in their ground state. All these methods are limited to the CHNO elements. AIQM1 and AIQM1@DFT include explicit D4-dispersion corrections for \(\omega\)B97X functional while AIQM1@DFT* does not. They also include modified ANI-type networks and modified semi-empirical QM method ODM2 [50] (ODM2*, provided by either the MNDO [51] or Sparrow [52] program). These methods can also be used to calculate charged species, radicals, excited states, and other QM properties such as dipole moments, charges, oscillator strengths, and nonadiabatic couplings. MLatom reports uncertainties for calculations with these methods based on the standard deviation between NN predictions [36].
* A range of established QM methods from _ab initio_ (e.g., HF, MP2, coupled cluster, _etc._) to DFT (e.g., B3LYP [53, 54], \(\omega\)B97X [48], etc.) via interfaces to PySCF [55] and Gaussian [55].
* A range of semi-empirical QM methods (GFN2-xTB [56], OM2 [57], ODM2 [50], AM1 [58], PM6 [59], _etc._) via interfaces to the xtb [60], MNDO [51], and Sparrow [52] programs.
* A special composite method CCSD(T)*/CBS [34] extrapolating CCSD(T) to the complete basis set via an interface to Orca [61, 62]. This method is relatively fast and accurate. It allows the user to check the quality of calculations with other methods and generate robust reference data for ML. This method was used to generate the reference data for AIQM1 and ANI-1ccx.
### Available standard models needing training
The field of MLPs is very rich in models. Hence, the user can often choose one of the popular MLP architectures reported in the literature rather than developing a new one. MLatom provides a toolset of MLPs of different types (see Ref. [39] for an overview and Ref. [29] for implementation details). These supported types can be categorized in a simplified scheme as
* Models based on kernel methods (KMs)[63], either with global descriptors, to which (p)KREG,[63, 26] sGDML,[65] and KRR-CM[45, 46] belong, or with local descriptors, represented only by GAP[66]-SOAP[67].
* Models based on neural networks (NNs), either with fixed local descriptors, to which ANI-type MLPs[2] and DPMD[68] belong, or with learned local descriptors, represented by PhysNet[69] and DeepPot-SE[70].
Any of these models can be trained and used for simulations, e.g., geometry optimizations or dynamics. MLatom also supports hyperparameter optimization with many algorithms including grid search,[22] Bayesian optimization via the hyperopt package,[71, 72] and standard optimization algorithms available in SciPy[73]. Generalization errors of the resulting models can also be evaluated in standard ways (hold-out and cross-validation). More on this in a dedicated Section 6.
### Custom models based on kernel methods
MLatom also provides the flexibility of training custom models based on kernel ridge regression (KRR) for a given set of input vectors **x** or XYZ coordinates and any labels **y**.[74, 75] If XYZ coordinates are provided, they can be transformed into one of the several supported descriptors (e.g., inverse internuclear distances and their version normalized relative to the equilibrium structure (RE), and the Coulomb matrix). The user can choose one of the implemented kernel functions, including the linear,[75, 22, 76] Gaussian,[75, 22, 76] exponential,[75, 76] Laplacian,[75, 22, 76] and Matérn[75, 76, 22] as well as periodic[76, 78, 79] and decaying periodic[76, 78, 80] functions, which are summarized in Table 1. These kernel functions \(k\big{(}\textbf{x},\textbf{x}_{j};\textbf{h}\big{)}\) are the key components required to solve the KRR problem of finding the regression coefficients \(\alpha\) of the approximating function \(\hat{f}(\textbf{x};\textbf{h})\) of the input vector \(\textbf{x}\):[74, 75]
\[\hat{f}(\textbf{x};\textbf{h})=\sum_{j=1}^{N_{\text{tr}}}\alpha_{j}k\big{(} \textbf{x},\textbf{x}_{j};\textbf{h}\big{)}. \tag{1}\]
The kernel function, which in most cases has hyperparameters **h** to tune, can be viewed as measuring the similarity between the input vector **x** and each of the \(N_{\text{tr}}\) training points \(\textbf{x}_{j}\) (both vectors must be of the same length \(N_{x}\)). In addition to the hyperparameters in the kernel function, all KRR models have at least one more hyperparameter, the regularization parameter \(\lambda\), which is used during training to improve generalizability. A minimal numerical illustration of training and using such a model follows Table 1.
**Table 1:** Summary of the available kernel functions for solving the kernel ridge regression problem (Eq. 1) as implemented in MLatom.

| Kernel function | Formula | Hyperparameters in kernel function |
| --- | --- | --- |
| Linear | \(k(\mathbf{x},\mathbf{x}_{j})=\mathbf{x}^{\mathsf{T}}\mathbf{x}_{j}\) | none |
| Gaussian | \(k(\mathbf{x},\mathbf{x}_{j})=\exp\left(-\frac{1}{2\sigma^{2}}\sum_{s}(x_{s}-x_{j,s})^{2}\right)\) | \(\sigma>0\) (length scale) |
| Exponential | \(k(\mathbf{x},\mathbf{x}_{j})=\exp\left(-\frac{1}{\sigma}\left[\sum_{s}(x_{s}-x_{j,s})^{2}\right]^{1/2}\right)\) | \(\sigma>0\) (length scale) |
| Laplacian | \(k(\mathbf{x},\mathbf{x}_{j})=\exp\left(-\frac{1}{\sigma}\sum_{s}\lvert x_{s}-x_{j,s}\rvert\right)\) | \(\sigma>0\) (length scale) |
| Matérn | \(k(\mathbf{x},\mathbf{x}_{j})=\exp\left(-\frac{1}{\sigma}\left[\sum_{s}(x_{s}-x_{j,s})^{2}\right]^{1/2}\right)\sum_{k=0}^{n}\frac{(n+k)!}{(2n)!}\binom{n}{k}\left(\frac{2}{\sigma}\left[\sum_{s}(x_{s}-x_{j,s})^{2}\right]^{1/2}\right)^{n-k}\) | \(\sigma>0\) (length scale), \(n\) (order) |
| Periodic | \(k(\mathbf{x},\mathbf{x}_{j})=\exp\left(-\frac{2}{\sigma^{2}}\sin^{2}\left\{\frac{\pi}{p}\left[\sum_{s}(x_{s}-x_{j,s})^{2}\right]^{1/2}\right\}\right)\) | \(\sigma>0\) (length scale), \(p\) (period) |
| Decaying periodic | \(k(\mathbf{x},\mathbf{x}_{j})=\exp\left(-\frac{1}{2\sigma^{2}}\sum_{s}(x_{s}-x_{j,s})^{2}-\frac{2}{\sigma_{p}^{2}}\sin^{2}\left\{\frac{\pi}{p}\left[\sum_{s}(x_{s}-x_{j,s})^{2}\right]^{1/2}\right\}\right)\) | \(\sigma,\sigma_{p}>0\) (length scales), \(p\) (period) |
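To make Eq. 1 concrete, below is a minimal self-contained numerical sketch of KRR training and prediction with the Gaussian kernel from Table 1, implemented directly in NumPy rather than through MLatom's interfaces:

```python
import numpy as np

def gaussian_kernel(X1, X2, sigma):
    # k(x, x_j) = exp(-sum_s (x_s - x_{j,s})^2 / (2 sigma^2)), computed pairwise
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_train(X_train, y_train, sigma, lam):
    # Solve (K + lambda I) alpha = y for the regression coefficients of Eq. 1
    K = gaussian_kernel(X_train, X_train, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X_train)), y_train)

def krr_predict(X_new, X_train, alpha, sigma):
    # f(x; h) = sum_j alpha_j k(x, x_j; h)
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

# Toy example: learn y = sin(x) from 20 noisy one-dimensional points
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(20, 1))
y = np.sin(X).ravel() + 0.01 * rng.normal(size=20)
alpha = krr_train(X, y, sigma=1.0, lam=1e-6)
print(krr_predict(np.array([[0.5]]), X, alpha, sigma=1.0))  # close to sin(0.5)
```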
### Composite models
Often, it is beneficial to combine several models. One example of such composite models is based on \(\Delta\)-learning [23], where a low-level QM method is used as a baseline that is corrected by an ML model to approach the accuracy of the target higher-level QM method. Another example is ensemble learning [81], where multiple ML models are created and their predictions are averaged during the simulations to obtain more robust results and to enable the query-by-committee strategy of active learning [82]. Both of these concepts can also be combined in more complex workflows, as exemplified by the AIQM1 method [32], which uses an NN ensemble as the correcting \(\Delta\)-learning model and a semi-empirical QM method as the baseline. To easily implement these workflows, MLatom allows the construction of composite models as model trees; see an example for AIQM1 in Figure 5.
Other examples of possible composite models are hierarchical ML [25], which combines several (correcting) ML models trained on (differences between) QM levels, and self-correction [26], where each next ML model corrects the prediction of the previous one. A schematic sketch of the \(\Delta\)-learning composite is given after Figure 5.
Figure 5: Composite models can be constructed as a model tree in MLatom. Here, an example is shown for the AIQM1 method, where the root parent node comprises 3 children: the semi-empirical QM method ODM2*, the NN ensemble, and an additional D4 dispersion correction. The NN ensemble is in turn a parent of 8 ANI-type NN children. Predictions of parents are obtained by applying an operation (‘average’ or ‘sum’) to the children’s predictions. The code snippets are shown, too.
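The following schematic sketch illustrates the model-tree idea behind such composite models. It mimics the structure of Figure 5 but is not MLatom's actual API; the leaf prediction functions are hypothetical stand-ins:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ModelNode:
    # A node either wraps a predict function (leaf) or combines its children
    children: List['ModelNode'] = None
    predict_fn: Callable[[object], float] = None
    operation: str = 'sum'  # 'sum' or 'average', as in Figure 5

    def predict(self, molecule) -> float:
        if self.predict_fn is not None:
            return self.predict_fn(molecule)
        values = [child.predict(molecule) for child in self.children]
        return sum(values) if self.operation == 'sum' else sum(values) / len(values)

# Hypothetical leaves: a semi-empirical baseline and an NN-ensemble correction
baseline = ModelNode(predict_fn=lambda mol: -40.0)           # ODM2*-like baseline energy
nn_ensemble = ModelNode(operation='average',
                        children=[ModelNode(predict_fn=lambda mol, s=s: 0.01 * s)
                                  for s in range(8)])         # 8 NN children
delta_model = ModelNode(operation='sum', children=[baseline, nn_ensemble])
print(delta_model.predict(molecule=None))                     # baseline + averaged correction
```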
## 5 Simulations
MLatom supports a range of simulation tasks such as single-point calculations, geometry optimizations, frequency and thermochemistry calculations, molecular and quantum dynamics, and one- and two-photon absorption and (ro)vibrational spectra simulations (Figure 6). Most of them can use any model that provides energies and energy derivatives (gradients and Hessians).
### Single-point calculations
Single-point calculations are calculations of quantum mechanical properties -- mostly energies and energy gradients, but also Hessians, charges, dipole moments, _etc._ -- for a single geometry. These calculations are very common in ML research in computational chemistry as they are used both to generate the reference data with QM methods for training and validating ML and to make inferences with ML to validate the trained model and generate required data for new geometries. MLatom is a convenient tool to perform single-point calculations not just for a single geometry, as in many QM packages, but for data sets with many geometries.
Figure 6: Overview of simulation tasks in MLatom. The inset in one-photon UV/vis spectra is reproduced from Ref. [29] under the CC-BY-4.0 license.
### Geometry optimizations
Locating stationary points on a PES, such as energy minima and transition states, is crucial for understanding molecular structure and reactivity. Hence, geometry optimizations are among the most important and frequent tasks in computational chemistry. MLatom can locate energy minima and transition states (TS) with any model providing energies and gradients. An example of geometry optimization is given in Figure 2. Hessians are also required for the Berny TS optimization algorithm. Once the TS is located, the user can follow the intrinsic reaction coordinate (IRC)[38] to check its nature. Geometry optimizations can be performed with many algorithms provided by the interfaces to SciPy[73], ASE[18], or Gaussian[55]. TS search can be performed with the dimer method[83] in ASE and the Berny algorithm[84] in Gaussian. IRC calculations can only be performed with the interface to Gaussian.
The seamless integration of the variety of QM and ML methods for performing geometry optimizations is advantageous because it allows the use of methods from interfaced programs that do not implement some of these simulation tasks themselves. For example, MLatom can be used to perform a TS search with the GFN2-xTB method via the interface to the xtb program, while the latter program offers no TS search. Similarly, Sparrow, which provides access to many semi-empirical methods, can only be used for single-point calculations. Since analytical gradients and Hessians are not available for many models and implementations, MLatom also implements finite-difference numerical differentiation, further expanding the applicability of the models for geometry optimizations.
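As an illustration of such numerical differentiation, here is a generic central-difference sketch (not MLatom's internal routine) that turns any energy function into a gradient provider:

```python
import numpy as np

def numerical_gradient(energy_fn, coords, step=1e-4):
    """Central-difference gradient of a scalar energy function.

    energy_fn: callable mapping an (N, 3) coordinate array to an energy.
    coords:    (N, 3) array of Cartesian coordinates.
    """
    grad = np.zeros_like(coords)
    for i in range(coords.shape[0]):
        for j in range(coords.shape[1]):
            displaced = coords.copy()
            displaced[i, j] += step
            e_plus = energy_fn(displaced)
            displaced[i, j] -= 2.0 * step
            e_minus = energy_fn(displaced)
            grad[i, j] = (e_plus - e_minus) / (2.0 * step)
    return grad

# Toy harmonic potential as a stand-in for a real QM/ML energy model
harmonic = lambda x: 0.5 * np.sum(x ** 2)
print(numerical_gradient(harmonic, np.ones((2, 3))))  # gradient equals the coordinates
```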
### Frequency calculations
Simulation of vibrational frequencies is another common and important task in computational chemistry, as it is useful to additionally verify the nature of stationary points, visualize molecular vibrations, calculate the zero-point vibrational energy (ZPE) and thermochemical properties, as well as obtain spectroscopic information that can be compared to experimental vibrational spectra. These calculations can be performed within the rigid-rotor harmonic approximation via an adapted TorchANI implementation[2] and the Gaussian[55] interface. The latter also allows the calculation of anharmonic frequencies using the second-order perturbative approach[85].
Similarly to geometry optimizations, MLatom can perform these simulations with any model -- ML, QM, or their combination -- that provides energies. The calculations also need Hessians; wherever available, the analytical Hessian is used. If it is unavailable, a semi-analytical (with analytical gradients) or fully numerical Hessian can be calculated.
### Thermochemistry calculations
Thermochemical properties such as enthalpies, entropies, and Gibbs free energies can be derived from frequency calculations. In turn, enthalpies can be used to calculate heats (enthalpies) of formation. MLatom uses a scheme analogous to those employed in _ab initio_[86] and semi-empirical QM calculations[50] to derive heats of formation:
\[\Delta H_{\mathrm{f},T}=\left[\sum_{A}\Delta H_{\mathrm{f},T}(A)\right]-\Delta H_{\mathrm{at},T} \tag{2}\]
where \(\Delta H_{\mathrm{f},T}(A)\) are the experimental enthalpies of formation of the free atoms A, and \(\Delta H_{\mathrm{at},T}\) is the atomization enthalpy. In AIQM1 and ANI-1ccx, we use the same \(\Delta H_{\mathrm{f},T}(A)\) values as other semi-empirical QM methods, i.e., 52.102, 170.89, 113.00, and 59.559 kcal/mol for the elements H, C, N, and O, respectively.[51]
The atomization enthalpy \(\Delta H_{\mathrm{at},T}\) can be obtained from the difference between the molecular absolute enthalpy \(H_{T}\) and the atomic absolute enthalpies \(H_{T}(A)\):

\[\Delta H_{\mathrm{at},T}=\left[\sum_{A}H_{T}(A)\right]-H_{T}. \tag{3}\]
Analogous to _ab initio_ methods, harmonic-oscillator and rigid-rotor approximations are explicitly considered in the calculation of absolute enthalpies:
\[H_{T}=E_{\mathrm{tot}}+\mathrm{ZPVE}+E_{\mathrm{trans},T}+E_{\mathrm{rot},T}+E_{\mathrm{vib},T}+RT, \tag{4}\]
\[H_{T}(A)=E(A)+E_{\mathrm{trans},T}(A)+RT, \tag{5}\]
where \(E_{\mathrm{tot}}\) and \(E(A)\) are the total energies of the molecule and the free atom, respectively, and ZPVE is the zero-point vibrational energy. \(E_{\mathrm{trans},T}\), \(E_{\mathrm{rot},T}\), and \(E_{\mathrm{vib},T}\) are the translational, rotational, and vibrational thermal contributions, and \(R\) is the gas constant.
The scheme requires the knowledge of free atom energies \(E(A)\). Any model able to calculate them can be used for predicting heats of formation. This is straightforward for QM methods but not for ML-based models that are usually trained on molecular species. We have previously fitted free atom energies (see Table 2) for AIQM1 and ANI-1ccx methods to the experimental data set.[32, 36] As a result, both methods can provide heats of formation close to
chemical accuracy at speeds orders of magnitude higher than those of alternative high-accuracy QM methods. In addition, we provide an uncertainty quantification scheme based on the deviation of the NN predictions in these methods to tell the users when the predictions are confident. This proved useful for finding errors in the experimental data set of heats of formation [36].
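To illustrate the arithmetic of Eqs. 2 and 3, here is a short sketch; only the experimental atomic enthalpies of formation are taken from the text above, while the absolute enthalpies are made-up placeholder values:

```python
# Hypothetical absolute enthalpies (kcal/mol) for a CH4-like molecule; only
# the experimental atomic Delta H_f values are taken from the text above.
H_f_atoms = {'H': 52.102, 'C': 170.89}     # experimental, kcal/mol
H_T_atoms = {'H': -310.0, 'C': -23720.0}   # placeholder absolute enthalpies (Eq. 5)
H_T_molecule = -25000.0                    # placeholder molecular enthalpy (Eq. 4)

atoms = ['C', 'H', 'H', 'H', 'H']
dH_at = sum(H_T_atoms[a] for a in atoms) - H_T_molecule   # Eq. 3
dH_f = sum(H_f_atoms[a] for a in atoms) - dH_at           # Eq. 2
print(f"Delta H_at = {dH_at:.1f} kcal/mol, Delta H_f = {dH_f:.1f} kcal/mol")
```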
An example of using MLatom to calculate heats of formation with the AIQM1 and B3LYP/6-31G* methods is shown in Figure 7. AIQM1 is both faster and more accurate than B3LYP, as can be seen by comparing the values with the experiment. This is also consistent with our previous benchmark [36].
Figure 7: Calculation of heats of formation of ethylene with AIQM1 and B3LYP/6-31G* (from the interface to PySCF) compared to the experiment [87].
**Table 2:** The atomic energies (in Hartree) of AIQM1 and ANI-1ccx used in heats of formation calculations [32, 36].

| Element | AIQM1 | ANI-1ccx |
| --- | --- | --- |
| H | -0.50088038 | -0.50088088 |
| C | -37.79221710 | -37.79199048 |
| N | -54.53360298 | -54.53379230 |
| O | -75.00986203 | -75.00968205 |
### Molecular dynamics
Molecular dynamics propagates nuclear motion based on the equations of motion of classical mechanics.[88] This requires the knowledge of the forces acting on the nuclei, which for conservative forces are typically derived as the negative of the potential energy gradients (i.e., the negative of the derivatives of the model for potential energies). Due to the high computational cost of QM-based dynamics, MD is most commonly used with molecular mechanics force fields,[89] but calculations based on QM methods are often possible in variants called _ab initio_ or Born-Oppenheimer MD (BOMD).[88] The proliferation of ML potentials makes it possible to perform BOMD-quality dynamics at a cost comparable to molecular mechanics force fields, i.e., much faster than commonly used DFT-based BOMD.[39, 40, 41, 42, 43] For example, the AIQM1 method is faster than DFT, and the IR spectra obtained from AIQM1 MD are of higher quality (Figure 8).[90]
MLatom has a native implementation of MD supporting any kind of model that provides forces, not necessarily conservative [90]. Currently, simulations in the NVE and NVT ensembles [92], based on the velocity Verlet algorithm [93], are possible. NVT simulations can be carried out with the Andersen [92, 94] and Nosé-Hoover [95, 96] thermostats. Trajectories can be saved in different formats, including plain text, JSON, and the more compact H5MD [29] database format. The Nosé-Hoover thermostat is a deterministic thermostat that couples the system to a thermal bath through extra terms in the Hamiltonian. Its theory and implementation details are described elsewhere [90]. Here, we briefly mention the relevant methodology [92, 94] used in the Andersen thermostat. In this thermostat, the system is coupled to a heat bath by stochastically changing the velocity of each atom. The changing frequency (or collision frequency) is controlled by a tunable parameter \(\nu\). The collisions follow a Poisson distribution, so that the probability of changing the velocity of each atom during a time step \(\Delta t\) is \(\nu\Delta t\). If an atom collides, a new velocity is assigned to it, sampled from the Maxwell-Boltzmann distribution at the target temperature \(T\).
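A minimal sketch of the Andersen velocity-resampling step described above (generic NumPy in reduced units, not MLatom's implementation):

```python
import numpy as np

def andersen_collisions(velocities, masses, T, nu, dt, kB=1.0, rng=None):
    """Stochastically resample velocities from the Maxwell-Boltzmann
    distribution, with per-atom collision probability nu * dt."""
    rng = rng or np.random.default_rng()
    collide = rng.random(len(masses)) < nu * dt
    # Each Cartesian velocity component is Gaussian with variance kB*T/m
    sigma = np.sqrt(kB * T / masses[collide])[:, None]
    velocities = velocities.copy()
    velocities[collide] = sigma * rng.normal(size=(collide.sum(), 3))
    return velocities

# Example: 5 atoms at target temperature T = 1 (reduced units)
v = np.zeros((5, 3))
m = np.ones(5)
print(andersen_collisions(v, m, T=1.0, nu=0.5, dt=0.1))
```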
MD trajectories can be propagated in parallel, dramatically speeding up the calculations. In addition, we made an effort to better integrate the KREG model, implemented in Fortran, into the main Python-based MLatom code, which makes MD with KREG very efficient.
Note that MD can also be propagated without forces using the concept of 4D-spacetime AI atomistic models, which directly predict nuclear configurations as a function of time [79]. Our realization of this concept, called the GICnet model, is currently available in a publicly available development version of MLatom [79].
The above implementations can propagate MD on an adiabatic potential energy surface, i.e., typically, for ground-state dynamics. Nonadiabatic MD based on trajectory surface hopping algorithms can also be performed with the help of MLatom, currently via the Newton-X [96] interface to MLatom [97, 98, 27]. MLatom also supports quantum dissipative dynamics, as described in Section 5.6.
### Quantum dissipative dynamics
It is often necessary and beneficial to treat the entire system quantum mechanically while also including environmental effects [100]. This is possible via many quantum dissipative dynamics (QD) algorithms, and an increasing number of ML techniques have been suggested to accelerate such simulations [98]. MLatom allows performing several unique ML-accelerated QD simulations using either a recursive scheme based on KRR [101], the conceptually different AIQD approach [102] predicting the trajectories as a function of time, or the OSTL technique [103] outputting the entire trajectories in one shot. These approaches are enabled via an interface to the specialized program MLQD [104].
In the recursive KRR scheme, a KRR model is trained, establishing a map between future and past dynamics. This KRR model, when provided with a brief snapshot of the current dynamics, can be leveraged to forecast future dynamics. In the AIQD approach, a convolution neural network (CNN) model is trained mapping simulation parameters and time to the corresponding system's state. Using the trained CNN model, the state of the system can be predicted at any time without the need to explicitly simulate the dynamics. Similarly, the ultra-fast OSTL method utilizes CNN-based architecture and, based on simulation parameters, predicts future dynamics of the system's state up to a predefined time in a single shot. In addition, as optimization is a key component in training, users can optimize both KRR and CNN models using MLatom's grid search functionality for KRR and Bayesian optimization via the hyperopt[71] library for CNN. Moreover, we also incorporate the auto-plotting functionality, where the predicted dynamics is plotted against the provided reference trajectory.
### Rovibrational (infrared and power) spectra
Rovibrational spectra can be calculated in several ways with MLatom. The simplest one is by performing frequency calculations on an optimized molecular geometry. This requires any model providing Hessians and, preferably, dipole moments. Another one is performing molecular dynamics simulations with any model providing energy gradients and, then, post-processing the trajectories.
Both frequency calculations and the MD-based approach require the model to also provide dipole moments in order to calculate absorption intensities. If no dipole moments are provided, only frequencies are available or, in the case of MD, only power spectra rather than IR spectra can be obtained. The IR spectra are obtained via the fast Fourier transform of the autocorrelation function of the dipole moment[104, 105] using our own implementation[90]. The power spectra only need the fast Fourier transform[105], which is also implemented[79] in MLatom.
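A minimal sketch of this post-processing step is given below (generic NumPy; the windowing and unit conversions a production implementation would apply are omitted):

```python
import numpy as np

def ir_spectrum(dipoles, dt):
    """IR intensity from the autocorrelation of the dipole moment.

    dipoles: (n_steps, 3) dipole moment along an MD trajectory.
    dt:      MD time step.
    Returns frequencies and (unnormalized) intensities.
    """
    mu = dipoles - dipoles.mean(axis=0)                # remove the static dipole
    # Autocorrelation, summed over the Cartesian components
    acf = sum(np.correlate(mu[:, k], mu[:, k], mode='full') for k in range(3))
    acf = acf[len(acf) // 2:]                          # keep non-negative lags
    spectrum = np.abs(np.fft.rfft(acf))
    freqs = np.fft.rfftfreq(len(acf), d=dt)
    return freqs, spectrum

# Toy trajectory: a dipole oscillating at 0.05 inverse time units
t = np.arange(4096) * 1.0
mu = np.stack([np.cos(2 * np.pi * 0.05 * t),
               np.zeros_like(t), np.zeros_like(t)], axis=1)
freqs, spec = ir_spectrum(mu, dt=1.0)
print(freqs[np.argmax(spec[1:]) + 1])                  # peak near 0.05
```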
We have previously shown[90] that the high quality of the AIQM1 method results in rather accurate IR spectra obtained from MD simulations compared to spectra obtained with a representative DFT (which is also substantially slower; see example in Figure 8) or a semi-empirical QM method.
### One-photon UV/vis absorption spectra
UV/vis absorption spectra simulations are computationally intensive because they require calculating excited-state properties. In addition, better-quality spectra can be obtained via the nuclear ensemble approach (NEA)[107], which necessitates the calculation of excited-state properties for thousands of geometries to reach high precision. MLatom implements an interpolation ML-NEA scheme[30] that improves the precision of the spectra at a fraction of the computational cost of traditional NEA simulations. Currently, the ML-NEA calculations are based on interfaces to Newton-X[96] and Gaussian[55] and utilize the sampling of geometries from a harmonic Wigner distribution[108]. This scheme also automatically determines the optimal number of required reference calculations, providing a user-friendly, black-box implementation of the algorithm[29].
### Two-photon absorption
Beyond one-photon absorption, MLatom has an implementation of a unique ML approach for calculating two-photon absorption (TPA) cross sections of molecules based just on their SMILES strings[44] (converted into the required descriptors using the interface to RDKit[109]) and solvent information[31]. This ML-TPA approach is very fast, with accuracy comparable to much more computationally intensive QM methods. We provide an ML model pre-trained on experimental data. ML-TPA was tested in real laboratory settings and shown to provide good estimates for new molecules not present in the training experimental database.
## 6 Machine learning
In Sections 4 and 5, we discussed the supported types of models and how they can be applied to simulations. Here, we briefly overview the general considerations for training and validating ML models with MLatom. The models share MLatom's standard conventions for input, output, training, hyperparameter optimization, and testing, which makes it convenient to switch from one model to another and to benchmark them.
### Training
To create an ML model, the user has to prepare the data and then choose and train the model. MLatom provides many tools for the different stages of this process. The model can be either chosen from a selection of provided types of ML models with pre-defined architectures or customized based on available algorithms and preset models. Once a model is chosen, it must
be trained, and, in many cases, it is advisable or even required (particularly in the case of the kernel methods) to optimize its hyperparameters, which can be done as explained in Section 6.2.
For training, the data set should be appropriately prepared. MLatom has strict naming conventions for data set splits to avoid any confusion when changing and comparing different model types. All the data that is used directly or indirectly for creating an ML model is called the training set. This means that the validation set, which can be used for hyperparameter optimization or early stopping during NN training, is a subset of the training set. Thus, the part of the training set remaining after excluding the validation set is called the sub-training set and is actually used for training the model, i.e., optimizing the model parameters (weights in NN terminology and regression coefficients in kernel-method terminology).
MLatom can split the training data set into the sub-training and validation data subsets or create a collection of these subsets via cross-validation [24, 29]. The sampling into the subsets can be performed randomly or using furthest-point or structure-based sampling.
In the case of kernel methods, the final model in MLatom is typically trained on the entire training set after the hyperparameter optimization. This is possible because the kernel methods have a closed, analytical solution to finding their regression coefficients, and after hyperparameters are appropriately chosen, overfitting can be mitigated to a great extent. In the case of NNs, the final model is the one trained on the sub-training set because it would be too dangerous to train on the entire training set without any validation subset to check for the signs of overfitting.
#### 6.1.1 Training pre-defined types of ML models
Most pre-defined types of ML models, such as the ANI-type or KREG models, expect XYZ molecular coordinates as input. These should either be provided by the user or can be obtained using MLatom's conversion routines, e.g., from SMILES strings [10], which rely on OpenBabel [111]'s Pybel API. These models have a default set of hyperparameters but, especially in the case of kernel methods such as KREG, it is still strongly advised to optimize them. The models can, in principle, be trained on any molecular property. Most often, they are used to learn PESs and, hence, require energy labels in the training set. The PES model accuracy can be greatly improved if the energy gradients are also provided for training; thus, the increased training time is usually justified [39, 112]. An example of training and testing the KREG and DPMD models on a data set with energies and energy gradients for the urea
molecule in the WS22 database[113] is shown in Figure 9. The KREG model is both faster to train and more accurate, which is a typical situation for small-size molecular databases, while for larger databases, NN-based models might be preferable[39].
Figure 9: Side-by-side comparison of the usage of MLatom in both command-line mode and via Python API for training and testing the KREG and DeepPot-SE models on a 1000-point data set of the urea molecular PES randomly sampled from the WS22 database. Hyperparameter optimization, which is required for the KREG model, is also shown. Calculations were run on 36 Intel(R) Xeon(R) Gold 6240 CPUs @ 2.60GHz.
#### 6.1.2 Designing and training custom ML models
MLatom users can also create models for any set of input vectors and labels using a variety of KRR kernel functions. In this case, hyperparameter optimization is strongly advised too. In all other aspects, training such KRR models is similar to training the pre-defined models, i.e., the preparation of the data set is also performed by splitting it into the required subsets for training and validation.
Importantly, the user can construct models of varying complexity by using a model tree implementation. Special cases of such composite models are \(\Delta\)-learning and self-correcting models and they can be trained similarly to other ML models by supplying input vectors or XYZ coordinates and labels. In the case of \(\Delta\)-learning, the user needs to supply the baseline values. For other, more complicated models, the user must train and combine each component separately.
### Hyperparameter optimization
The performance of ML models strongly depends on the chosen hyperparameters, such as the regularization parameters and the number of layers in NNs. Hence, it is often necessary to optimize the hyperparameters to achieve reasonable results and to improve the accuracy. Hyperparameter optimization commonly requires multiple trainings, making it an expensive endeavor, and care must be taken in balancing performance and cost.
MLatom can optimize hyperparameters by minimizing the validation loss using one of the many available algorithms. The validation loss is usually based on the error for the validation set, which can be computed on a single hold-out validation set or as a combined cross-validation error.
For a few hyperparameters, a robust grid search on the log or linear scale can be used to find optimal values. It is a common choice for kernel methods (see Figure 9 for an example of optimizing the hyperparameters of the KREG model, which is a kernel method; a minimal stand-alone sketch is given below). For a larger number of hyperparameters, other algorithms are recommended instead. Popular choices are Bayesian optimization with the Tree-structured Parzen Estimator (TPE)[72] and many SciPy optimizers.
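Here is a minimal self-contained sketch of such a logarithmic grid search for a Gaussian-kernel KRR model, minimizing the hold-out validation RMSE (an illustration of the procedure, not MLatom's implementation):

```python
import numpy as np

def k_gauss(X1, X2, s):
    # Pairwise Gaussian kernel matrix
    return np.exp(-((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1) / (2 * s ** 2))

def fit(X, y, s, lam):
    # Regression coefficients alpha from (K + lambda I) alpha = y
    return np.linalg.solve(k_gauss(X, X, s) + lam * np.eye(len(X)), y)

def val_rmse(Xs, ys, Xv, yv, s, lam):
    pred = k_gauss(Xv, Xs, s) @ fit(Xs, ys, s, lam)
    return np.sqrt(np.mean((pred - yv) ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(60, 1)); y = np.sin(X).ravel()
Xs, ys, Xv, yv = X[:40], y[:40], X[40:], y[40:]  # sub-training / validation split
# Log-scale grids for the length scale sigma and the regularization lambda
best = min(((s, lam, val_rmse(Xs, ys, Xv, yv, s, lam))
            for s in 2.0 ** np.arange(-2, 5)
            for lam in 10.0 ** np.arange(-10, -1, 2.0)),
           key=lambda t: t[2])
print("sigma=%.3g lambda=%.3g RMSE=%.3g" % best)
```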
The choice of the validation loss also matters. In most cases, MLatom minimizes the root-mean-squared error (RMSE) for the labeled data. However, when multiple labels are provided, i.e., energies and energy gradients for learning PES, the choice should be made on
how to combine them in the validation loss. By default, MLatom calculates the geometric mean of the RMSEs for energies and gradients[29]. The users can also choose a weighted sum of RMSEs, but in this case, they must choose the weight. In addition, the user can supply MLatom with any custom validation loss function, which can be arbitrarily complicated.
### Evaluating models
Once the model has been trained, it is common to evaluate its generalization ability before deploying it in production simulations. MLatom provides dedicated options for such evaluations. The simplest and one of the most widespread approaches is calculating the error for an independent hold-out test set not used in the training. To emphasize, in MLatom terminology, the test set has no overlap with the training set, which might consist of the sub-training and validation subsets[29]. Alternatively, cross-validation and its variant leave-one-out cross-validation are recommended whenever computationally affordable, especially for small data sets. MLatom provides a broad range of error measures for the test set, including RMSE, mean absolute error (MAE), mean signed error, the Pearson correlation coefficient, the R\({}^{2}\) value, outliers, etc.[29] The testing can be performed together with the training and hyperparameter optimization for most models, including \(\Delta\)-learning and self-correcting models.
Since the errors depend on the size of the training set, the learning curves showing this dependence are very useful for comparing different models[29]. MLatom can generate the learning curves, which have been instrumental in preparing guidelines for choosing the ML interatomic potential[39].
## Summary
MLatom 3 is a unique software package combining machine learning and quantum mechanical models for accelerating and improving the accuracy of computational chemistry simulations. It can be used as a black-box package accepting input files with a simple structure or as a transparent Python module enabling custom workflows. MLatom provides access to pre-trained models such as AIQM1 and ANI-1ccx aiming at coupled-cluster-level accuracy, which makes them more accurate and much faster than common DFT approaches for ground-state properties of closed-shell organic molecules. Another special pre-trained model can be used to simulate two-photon absorption spectra.
The user of MLatom has an option to create their own models. Pre-defined ML architectures of the ANI-type, KREG, PhysNet, GAP-SOAP, DPMD, or sGDML make it easier. Alternatively, the custom models of varying complexity and based on combinations of
both ML and QM models, such as \(\Delta\)-learning, can be easily built with the package. MLatom provides a toolset for training, hyperparameter optimization, and performance analysis of the models.
This wide variety of models can be used for single-point calculations on large data sets, geometry optimizations, calculation of rovibrational (frequencies, IR spectra) and thermochemical (enthalpies, entropies, heats of formation) properties, molecular dynamics, and UV/vis absorption spectra. The ML models can also be trained and used for quantum dissipative dynamics simulations.
The richness of MLatom functionality is available open source and can be exploited on the XACS cloud computing service. The package is accompanied by extensive and detailed manuals and tutorials that are developed and improved in close connection with teaching computational chemistry and machine learning in regular workshops and university courses.
## Data availability
No data was generated for this article.
## Code availability
The MLatom code is open-source and available both on GitHub ([https://github.com/dralgroup/mlatom](https://github.com/dralgroup/mlatom)) and PyPI (i.e., it can be installed via the command pip install mlatom). The simulations can also be run on the MLatom@XACS cloud computing service at [https://XACScloud.com](https://XACScloud.com).
## Author contributions
P.O.D. is the lead designer, developer, and maintainer of MLatom. F.G. is co-maintaining the MLatom package, implemented the interfaces to third-party machine learning packages (PhysNet, DeePMD-kit, TorchANI, and GAP-SOAP) and to hyperopt, wrote the code for learning curves, and made numerous other improvements in MLatom. Y.F.H. co-implemented the KREG model, implemented molecular dynamics and vibrational spectra simulations, and improved many other parts of the code such as the interfaces. P.Z. implemented AIQM1 and the ANI family of models (ANI-1ccx, ANI-2x, ANI-1x, and their dispersion-corrected variants) through interfaces to third-party packages (MNDO, TorchANI, Sparrow) as well as geometry optimizations and frequency and thermochemistry simulations via interfaces to Gaussian, ASE, and TorchANI. Y.X.X.C. implemented the interfaces to PySCF and Orca and extended the thermochemical calculations to many methods. M.B. contributed to planning the implementation of MLPs and the methodology behind the ML-NEA approach. O.I. contributed to the research involving AIQM1 methods and ANI universal potentials. C.W. led the development of the ML-TPA methodology. B.X.X. implemented the ML-NEA approach and the initial argument parsing routines. M.P.J. helped implement the interfaces to TorchANI, PhysNet, DeePMD-kit, and Newton-X. Y.S., Y.D., and Y.T.C. implemented the ML-TPA approach. L.Z. implemented routines for nonadiabatic dynamics and extensions of the MNDO interface to excited-state properties. S.Z. contributed to the atomic properties collection and implemented some of the NN-based approaches. A.U. interfaced MLQD to MLatom. Q.Z. contributed to the program documentation and tests. Y.O. contributed to the plotting routines. P.O.D. wrote the original manuscript, and all authors revised and commented on the manuscript. F.G., Y.F.H., Y.X.X.C., and P.O.D. prepared the figures.
## Acknowledgments
P.O.D. acknowledges funding by the National Natural Science Foundation of China (No. 22003051 and funding via the Outstanding Youth Scholars (Overseas, 2021) project), the Fundamental Research Funds for the Central Universities (No. 20720210092), and via the Lab project of the State Key Laboratory of Physical Chemistry of Solid Surfaces. This project is supported by Science and Technology Projects of Innovation Laboratory for Sciences and Technologies of Energy Materials of Fujian Province (IKKEM) (No: RD2022070103). M.B. and M.P.J. are financially supported by the European Union's Horizon 2020 research and innovation program under ERC advanced grant (grant agreement No 832237, SubNano). He also acknowledges the Centre de Calcul Intensif d'Aix-Marseille. O.I. acknowledges support from the National Science Foundation (NSF) CHE-2154447. O.I. acknowledges Extreme Science and Engineering Discovery Environment (XSEDE) Award CHE200122, which is supported by NSF Grant Number ACI-1053575. C.W. acknowledges funding support from the National Key R&D Program of China (2021YFA1502500), the National Natural Science Foundation of China (22071207, 22121001, 21721001, and 22003051), NFFTBS (no. J1310024), and the Fundamental Research Funds for the Central Universities (nos. 20720220128 and 20720220011). |
2309.14387 | Exploring Robot Morphology Spaces through Breadth-First Search and
Random Query | Evolutionary robotics offers a powerful framework for designing and evolving
robot morphologies, particularly in the context of modular robots. However, the
role of query mechanisms during the genotype-to-phenotype mapping process has
been largely overlooked. This research addresses this gap by conducting a
comparative analysis of query mechanisms in the brain-body co-evolution of
modular robots. Using two different query mechanisms, Breadth-First Search
(BFS) and Random Query, within the context of evolving robot morphologies using
CPPNs and robot controllers using tensors, and testing them in two evolutionary
frameworks, Lamarckian and Darwinian systems, this study investigates their
influence on evolutionary outcomes and performance. The findings demonstrate
the impact of the two query mechanisms on the evolution and performance of
modular robot bodies, including morphological intelligence, diversity, and
morphological traits. This study suggests that BFS is both more effective and
efficient in producing highly performing robots. It also reveals that
initially, robot diversity was higher with BFS compared to Random Query, but in
the Lamarckian system, it declines faster, converging to superior designs,
while in the Darwinian system, BFS led to higher end-process diversity. | Jie Luo | 2023-09-25T06:46:19Z | http://arxiv.org/abs/2309.14387v1 | # Exploring Robot Morphology Spaces through Breadth-First Search and Random Query
###### Abstract
Evolutionary robotics offers a powerful framework for designing and evolving robot morphologies, particularly in the context of modular robots. However, the role of query mechanisms during the genotype-to-phenotype mapping process has been largely overlooked. This research addresses this gap by conducting a comparative analysis of query mechanisms in the brain-body co-evolution of modular robots. Using two different query mechanisms, Breadth-First Search (BFS) and Random Query, within the context of evolving robot morphologies using CPPNs and robot controllers using tensors, and testing them in two evolutionary frameworks, Lamarckian and Darwinian systems, this study investigates their influence on evolutionary outcomes and performance. The findings demonstrate the impact of the two query mechanisms on the evolution and performance of modular robot bodies, including morphological intelligence, diversity, and morphological traits. This study suggests that BFS is both more effective and efficient in producing highly performing robots. It also reveals that initially, robot diversity was higher with BFS compared to Random Query, but in the Lamarckian system, it declines faster, converging to superior designs, while in the Darwinian system, BFS led to higher end-process diversity.
evolutionary robotics, artificial life, morphological evolution, query mechanism, CPPN, mapping, breadth-first search
## I Introduction
Evolutionary robotics empowers the design and evolution of robot morphologies through a process of genotype-to-phenotype mapping. In the context of modular robots, the challenge lies in determining the presence or absence of specific components at precise positions within the robot body and in striking a balance between exploring and exploiting the design space.
Several genotype-to-phenotype mapping techniques have been employed in various research studies, including L-systems [1], CPPNs (Compositional pattern-producing networks) [2, 3, 4], and Direct Mapping [5]. However, scant attention has been given to the query mechanism utilized in these mapping processes, despite its pivotal role in shaping the resultant robot bodies.
This research aims to address the open research area of investigating different query mechanisms in the field of evolutionary robotics. The primary objective is to conduct a comparative analysis of query mechanisms and their influence on the evolution and performance of modular robot bodies. These investigations focus on understanding how different query mechanisms affect the key characteristics of evolved robot morphologies in evolutionary robot systems.
To achieve this objective, we design and implement an experimental setup where we evolve modular robot morphologies using CPPNs with one commonly used query mechanism: Breadth-First Search (BFS) [6] and compare it with our design: Random Query [7]. We test these two query mechanisms on two evolutionary systems to evolve both the body and brain.
The main contributions of this research are threefold. Firstly, we provide a comprehensive analysis of the influence of two different query mechanisms on the evolution and performance of modular robot morphologies.
Secondly, we contribute to the understanding of genotype to phenotype mapping in modular robotics by highlighting the importance of the query mechanism and its impact on the diversity and complexity of evolved robot morphologies. Our findings can inform the development of more effective approaches for evolving robot bodies and contribute to the advancement of adaptive and versatile robotic systems.
Finally, we evaluate the efficiency and convergence properties of the query mechanisms, considering the computational resources required for generating desirable robot body configurations. This analysis provides valuable insights for researchers and practitioners working on evolutionary robotics, enabling them to make informed decisions regarding the choice of query mechanism based on their specific requirements and constraints.
Overall, this research enhances our understanding of query mechanisms in genotype to phenotype mapping for modular robots and sheds light on key aspects of evolutionary robotics.
## II Evolution+Learning
A search space comprises distinct layers that stack upon one another. At its foundational level lies the phenotype space; one layer above resides the genotype space, which may not always have a straightforward one-to-one correspondence with the phenotype layer. Numerous factors influence the search process, including reproduction operators and selection mechanisms, among others. Our particular focus is on how the query mechanisms employed for mapping the body genotype to the robot's morphology impact the exploration of the morphological search space.
### _Robot Phenotype_
#### II-A1 Robot Morphology
We adopt RoboGen's components as the robot body's phenotype. RoboGen [8] is a popular
open-source platform for evolving robots, offering modular components: a core component, one or more brick components, and active hinges. The phenotype follows a tree structure, with the core module as the root node, enabling 3D morphologies through 90-degree rotations.
#### II-A2 Robot Controller
We employ Central Pattern Generators (CPGs) for driving modular robots, a proven method for controlling various robot types [3, 9]. Each robot joint has an associated CPG consisting of three neurons: an \(x_{i}\)-neuron, a \(y_{i}\)-neuron, and an \(out_{i}\)-neuron. The \(x_{i}\) and \(y_{i}\) neuron states change over time by multiplying the activation value of the opposing neuron by a corresponding weight: \(\dot{x}_{i}=w_{i}y_{i}\) and \(\dot{y}_{i}=-w_{i}x_{i}\). To simplify, we set \(w_{x_{i}y_{i}}\) equal to \(-w_{y_{i}x_{i}}\), denoting their absolute value as \(w_{i}\). Initial states of all \(x\) and \(y\) neurons are \(\frac{\sqrt{2}}{2}\) to create a sine wave with an amplitude of 1, matching joint rotation limits.
To allow complex output patterns, we implement connections between neighboring joint CPGs. For the \(i_{th}\) joint and \(\mathcal{N}_{i}\) as the set of neighboring joint indices, with \(w_{ij}\) representing the connection weight between \(x_{i}\) and \(x_{j}\) (also set to \(-w_{ji}\)), the system of differential equations becomes:
\[\begin{split}\dot{x}_{i}&=w_{i}y_{i}+\sum_{j\in \mathcal{N}_{i}}w_{ji}x_{j}\\ \dot{y}_{i}&=-w_{i}x_{i}\end{split} \tag{1}\]
Due to this addition, \(x\) neurons are no longer bounded within \([-1,1]\). To handle this, we use the hyperbolic tangent function (_tanh_) as the activation function for \(out_{i}\)-neurons.
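For illustration, a minimal numerical sketch of this CPG network is given below (Euler integration of Eq. 1 for two coupled joints; the weight values are arbitrary and chosen only for demonstration):

```python
import numpy as np

def simulate_cpg(w_internal, w_coupling, steps=10000, dt=1e-3):
    """Euler-integrate two coupled CPGs and return tanh-squashed outputs.

    w_internal: internal weights w_i of the two oscillators.
    w_coupling: coupling weight w_12 between x_1 and x_2 (with w_21 = -w_12).
    """
    x = np.full(2, np.sqrt(2) / 2)    # initial states giving a unit-amplitude sine
    y = np.full(2, np.sqrt(2) / 2)
    outputs = np.empty((steps, 2))
    for t in range(steps):
        dx = w_internal * y + np.array([w_coupling * x[1], -w_coupling * x[0]])
        dy = -w_internal * x
        x, y = x + dt * dx, y + dt * dy
        outputs[t] = np.tanh(x)       # out-neuron activations, bounded in (-1, 1)
    return outputs

out = simulate_cpg(w_internal=np.array([1.0, 1.5]), w_coupling=0.2)
print(out[-1])
```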
### _Robot Genotype_
#### II-B1 Body Genotype
The phenotype of bodies is encoded in a Compositional Pattern Producing Network (CPPN) which was introduced by Stanley [2] and has been successfully applied to the evolution of both 2D and 3D robot morphologies in prior studies [10]. The structure of the CPPN has four inputs and five outputs. The first three inputs are the x, y, and z coordinates of a component, and the fourth input is the distance from that component to the core component in the tree structure. The first three outputs are the probabilities of the modules being a brick, a joint, or empty space, and the last two outputs are the probabilities of the module being rotated 0 or 90 degrees. For both module type and rotation the output with the highest probability is always chosen; randomness is not involved.
#### II-B2 Brain Genotype
We utilize an array-based structure for the brain's genotypic representation to map the CPG weights. This is achieved via direct encoding, a method chosen specifically for its potential to enable reversible encoding in future stages. We have seen how every modular robot can be represented as a 3D grid in which the core module occupies the central position and each module's position is given by a triple of coordinates. When building the controller from our genotype, we use the coordinates of the joints in the grid to locate the corresponding CPG weight. To reduce the size of our genotype, instead of the 3D grid, we use a simplified 2D grid in which the third dimension is removed. As a result, some joints might end up with the same coordinates; these are handled by a dedicated same-coordinate connection weight (see below).
Since our robots have a maximum of 10 modules, every robot configuration can be represented in a grid of \(21\times 21\). Each joint in a robot can occupy any position of the grid except the center. For this reason, the possible positions of a joint in our morphologies are exactly \((21\cdot 21)-1=440\). We can represent all the internal weights of every possible CPG in our morphologies as a \(440\)-long array. When building the phenotype from this array, we can simply retrieve the corresponding weight starting from a joint's coordinates in the body grid.
To represent the external connections between CPGs, we need to consider all the possible neighbours a joint can have. In the 2-dimensional grid, the number of cells in a distance-2 neighbourhood for each position is represented by the Delannoy number \(D(2,2)=13\), including the central element. Each one of the neighbours can be identified using the relative position from the joint taken into consideration. Since our robots can assume a 3D position, we need to consider an additional connection for modules with the same 2D coordinates.
To conclude, for each of the \(440\) possible joints in the body grid, we need to store 1 internal weight for its CPG, 12 weights for external connections, and 1 weight for connections with CPGs at the same coordinate for a total of 14 weights. The genotype used to represent the robots' brains is an array of size \(440\times 14\). An example of the brain genotype of a "+" shape robot is shown in Figure 2.
It is important to notice that not all the elements of the genotype matrix are going to be used by each robot. This means that their brain's genotype can carry additional information that could be exploited by their children with different morphologies.
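To make the coordinate-to-weight lookup concrete, here is a minimal sketch assuming one plausible row-major layout of the \(21\times 21\) grid with the center cell skipped; the exact indexing used in our implementation may differ.

```python
import numpy as np

GRID = 21           # at most 10 modules, so coordinates lie in [-10, 10]
N_JOINTS = GRID * GRID - 1   # 440 possible joint positions (center excluded)
N_WEIGHTS = 14      # 1 internal + 12 distance-2 neighbours + 1 same-coordinate

def joint_row(x, y):
    """Map a joint's 2D grid coordinates to its row in the genotype array
    (row-major layout; the center cell (0, 0) holds the core and is skipped)."""
    assert (x, y) != (0, 0) and -10 <= x <= 10 and -10 <= y <= 10
    idx = (x + 10) * GRID + (y + 10)
    return idx if idx < (GRID * GRID) // 2 else idx - 1  # skip the center

genotype = np.random.uniform(-1, 1, size=(N_JOINTS, N_WEIGHTS))
w_internal = genotype[joint_row(1, 0), 0]  # CPG weight of the joint at (1, 0)
```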
### _Query Mechanisms_
The query mechanism is a critical aspect of the genotype-to-phenotype translation process in designing robot bodies. It serves as the bridge between the genetic information encoded in the genotypes (such as a CPPN, L-system, or array) and the actual physical characteristics of the robot. Essentially, the
Fig. 1: Brain phenotype (CPG network) of a ”+” shape robot. In our design, the topology of the brain is determined by the topology of the body.
query mechanism is a technique used to extract information from the genotypic representation to determine the composition and arrangement of modules in the resulting robot body.
To produce the phenotypes of the robot bodies, the core component is generated at the origin. Then, two different mechanisms are used to query the CPPN-based genotypes:
Breadth-First Search: an algorithm for searching a tree data structure for a node that satisfies a given property [11]. It starts at the tree root and explores all nodes at the present depth prior to moving on to the nodes at the next depth level. We move outwards from the core component until there are no open sockets (breadth-first exploration), querying the CPPN network to determine whether a module will be placed at each location, its type, and its rotation. If a module would be placed in a location already occupied by a previous module, the module is simply not placed and the branch ends there.
Random Query: an algorithm for searching a tree data structure for a node randomly with a given number of queries. All open sockets have an equal chance of being randomly selected to be queried, in no specific order. The CPPN network determines the type and rotation of each module. If a module would be placed in a location already occupied by a previous module, then this module is not expressed in the body. Nine queries are applied.
For both methods, the coordinates of each module are integers; a module attached to the front of the core module will have coordinates (0,1,0). We stop when ten modules have been created.
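The sketch below illustrates the breadth-first variant; the `cppn` callable and the `open_sockets` helper are hypothetical stand-ins for the real genotype network and the type-dependent socket logic. Random Query differs only in replacing the FIFO frontier with a uniformly random choice among all open sockets and stopping after a fixed number of queries (nine here).

```python
from collections import deque

def open_sockets(pos, dist=0):
    """Candidate attachment slots around a module (simplified to four
    in-plane neighbours; real modules expose type-dependent sockets)."""
    x, y, z = pos
    return [((x + dx, y + dy, z), dist + 1)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def bfs_query(cppn, max_modules=10):
    """Breadth-first query of the body genotype. `cppn` is assumed to map
    (x, y, z, core_distance) to a (module, rotation) pair, already taking
    the highest-probability outputs."""
    core = (0, 0, 0)
    body = {core: ("core", 0)}
    frontier = deque(open_sockets(core))
    while frontier and len(body) < max_modules:
        pos, dist = frontier.popleft()
        if pos in body:
            continue                    # slot occupied: the branch ends here
        module, rotation = cppn(*pos, dist)
        if module == "empty":
            continue
        body[pos] = (module, rotation)
        frontier.extend(open_sockets(pos, dist))   # explore outwards
    return body
```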
### _Learning Algorithm_
We use Reversible Differential Evolution (RevDE) [12] as the learning algorithm because it has proven to be effective in previous research [3]. This method works as follows:
1. Initialize a population with \(\mu\) samples (\(n\)-dimensional vectors), \(\mathcal{P}_{\mu}\).
2. Evaluate all \(\mu\) samples.
3. Apply the reversible differential mutation operator and the uniform crossover operator. _The reversible differential mutation operator_: Three new candidates are generated by randomly picking a triplet from the population, \((\mathbf{w}_{i},\mathbf{w}_{j},\mathbf{w}_{k})\in\mathcal{P}_{\mu}\), then all three individuals are perturbed by adding a scaled difference.
4. Perform a selection over the population based on the fitness value and select \(\mu\) samples.
5. Repeat from step (2) until the maximum number of iterations is reached.
As explained above, we apply RevDE here as a learning method for 'newborn' robots. In particular, it will be used to optimize the weights of the CPGs of our modular robots for the tasks during the Infancy stage.
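For reference, a sketch of one reversible differential mutation step is shown below, following the update rule of [12]; the scale factor is an assumed hyperparameter, not a value prescribed by our setup.

```python
import numpy as np

def revde_mutation(population, scale=0.5, rng=np.random.default_rng(0)):
    """One reversible differential mutation: a randomly drawn triplet
    (w_i, w_j, w_k) yields three new candidates, each perturbed by a
    scaled difference of the others (update of [12])."""
    i, j, k = rng.choice(len(population), size=3, replace=False)
    w_i, w_j, w_k = population[i], population[j], population[k]
    v_1 = w_i + scale * (w_j - w_k)
    v_2 = w_j + scale * (w_k - v_1)
    v_3 = w_k + scale * (v_1 - v_2)
    return np.stack([v_1, v_2, v_3])

population = np.random.default_rng(1).normal(size=(10, 440 * 14))
candidates = revde_mutation(population)   # three new weight vectors
```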
Algorithm 1 displays the pseudocode of the complete integrated process of evolution and learning. With the highlighted yellow code it is the Lamarckian system; without it, the Darwinian system. Note that, for the sake of generality, we distinguish two types of quality testing depending on the context: evolution or learning.
```
1:INITIALIZE robot population
2:EVALUATE each robot
3:while not STOP-EVOLUTION do
4: SELECT parents;
5: RECOMBINE+MUTATE parents' bodies;
6: MUTATE parents' brains;
7: CREATE offspring robot body;
8: CREATE offspring robot brain;
9: INITIALIZE brain(s) for the learning process;
10:while not STOP-LEARNING do
11: ASSESS offspring;
12: GENERATE new brain for offspring;
13:endwhile
14: EVALUATE offspring with the learned brain;
15: UPDATE brain genotype
16: SELECT survivors / UPDATE population
17:endwhile
```
**Algorithm 1** Evolution+Learning
### _Task and Fitness function_
Point navigation is a closed-loop control task that needs feedback (coordinates) from the environment to be passed to the controller to steer the robot. The coordinates are used to obtain the angle between the current position and the target. If the target is on the right, the right joints are slowed down, and vice versa.
A robot is spawned at the centre of a flat arena (10 \(\times\) 10 m\({}^{2}\)) to reach a sequence of target points \(P_{1},...,P_{N}\). In each evaluation, the robot has to reach as many targets in order
Fig. 2: Brain genotype to phenotype mapping of a ”+” shape robot. The left image (brain phenotype) shows the schema of the ”+” shape robot with the coordinates of its joints in the 2D body grid. The right image (brain genotype) is the distance 2 neighbour of the joint at (1,0). The coordinates reported in the neighbourhood are relative to this joint. The CPG weight of the joint is highlighted in purple and its 2-distance neighbours are in blue.
as possible. Success in this task requires the ability to move fast to reach one target and then quickly change direction to another target in a short duration. A target point is considered to be reached if the robot gets within 0.01 meters from it. Considering the experimental time, we set the simulation time per evaluation to be 40 seconds which allows robots to reach at least 2 targets \(P_{1}(1,-1),P_{2}(0,-2)\).
The data collected from the simulator is the following:
* The coordinates of the core component of the robot at the start of the simulation, approximately \(P_{0}(0,0)\);
* The coordinates of the robot at the end of the simulation \(P_{T}(x_{T},y_{T})\);
* The coordinates of the target points \(P_{1}(x_{1},y_{1})\)... \(P_{n}(x_{n},y_{n})\).
* The coordinates of the robot, sampled during the simulation at 5Hz, allow us to plot and approximate the length of the path \(L\).
The fitness function for this task is designed to maximize the number of targets reached and minimize the path followed by the robot to reach the targets.
\[F=\sum_{i=1}^{k}dist(P_{i},P_{i-1})+\big{(}dist(P_{k+1},P_{k})-dist(P_{T},P_{k+1})\big{)}-\omega\cdot L \tag{2}\]
where \(k\) is the number of target points reached by the robot at the end of the evaluation, and \(L\) is the length of the path travelled. The first term of the function is the sum of the distances between the target points the robot has reached. The second term applies only when the robot has not reached all the targets; it measures the progress toward the next unreached target. The last term penalizes longer paths, and \(\omega\) is a constant scalar that is set to 0.1 in the experiments. E.g., if a robot has just reached both targets, the maximum fitness value will be \(dist(P_{1},P_{0})+(dist(P_{2},P_{1})-dist(P_{2},P_{2}))-0.1\cdot L=\sqrt{2}+\sqrt{2}-0.2\sqrt{2}\approx 2.54\), where \(L\) is the shortest path length through \(P_{1}\) and \(P_{2}\), equal to \(2\sqrt{2}\).
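A minimal sketch of this fitness computation is given below; function and variable names are ours, and the second term is applied only while unreached targets remain.

```python
import numpy as np

def fitness(k, path, targets, omega=0.1):
    """Fitness of Eq. (2). k: number of targets reached; path: (T, 2) array
    of positions sampled at 5 Hz; targets: [P_0, P_1, ..., P_n]."""
    dist = lambda p, q: float(np.linalg.norm(np.asarray(p) - np.asarray(q)))
    f = sum(dist(targets[i], targets[i - 1]) for i in range(1, k + 1))
    if k + 1 < len(targets):   # progress toward the next unreached target
        f += dist(targets[k + 1], targets[k]) - dist(path[-1], targets[k + 1])
    L = np.linalg.norm(np.diff(path, axis=0), axis=1).sum()  # path length
    return f - omega * L

# The example from the text: a straight path through P_1(1,-1) and P_2(0,-2).
path = np.array([[0.0, 0.0], [1.0, -1.0], [0.0, -2.0]])
print(fitness(2, path, [(0, 0), (1, -1), (0, -2)]))  # ~2.54
```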
## III Experimental Setup
The stochastic nature of evolutionary algorithms requires multiple runs under the same conditions and a sound statistical analysis [13]. We perform 10 runs for each query mechanism and evolutionary system, namely BFS Darwinian, BFS Lamarckian, Random Query Darwinian, and Random Query Lamarckian; in total, 40 experiments.
Each experiment consists of 30 generations with a population size of 50 individuals and 25 offspring. A total of \(50+(25\cdot(30-1))=775\) morphologies and controllers are generated, and then the learning algorithm RevDE is applied to each controller. For RevDE we use a population of 10 controllers for 10 generations, for a total of \((10+30\cdot(10-1))=280\) performance assessments.
The fitness measures used to guide the evolutionary process are the same as the performance measure used in the learning loop. For this reason, we use the same test process for both. The tests for the task of point navigation use 40 seconds of evaluation time with two target points at the coordinates of \((1,-1)\) and \((0,-2)\).
All the experiments are run with Revolve2, a wrapper around the MuJoCo simulator, on a 64-core Linux computer, where each takes approximately 7 hours to finish.
The code for replicating this work and carrying out the experiments is available online: [https://shorturl.at/aES26](https://shorturl.at/aES26).
## IV Results
To compare the effects of BFS and Random Query, we consider two generic performance indicators, efficacy and efficiency; we also look into the robots' morphologies.
### _Robot Performance_
#### IV-A1 Efficacy
the average fitness in the final generation. Figure 3 shows that both query mechanisms can produce robots able to solve the task, but robots queried by BFS are approximately 20% better. Moreover, by around generation 14 the Lamarckian system had already significantly exceeded the result that the Darwinian system produced only by the end of the evolutionary process. This holds true for both query mechanisms.
#### IV-A2 Efficiency
how much effort is needed to reach a given quality threshold (fitness level). It is calculated as the number of solution evaluations until the quality threshold is reached.
BFS in the Lamarckian system is the most efficient, as it finds the best solution (maximum fitness) fastest (Figure 3).
### _Robot Morphologies_
#### IV-B1 Morphological intelligence
in this research, we consider a special property of robot morphology: morphological intelligence. Morphology influences how the brain learns: some bodies are more suitable for the brain to learn with than others. We therefore define the intelligence of a body as a measure of how well it facilitates the brain to learn and achieve tasks. To quantify this, we ran an extra experiment, using the fixed bodies of the 50 initial robots from the first generation of each run and evolving only their brains with the two methods; we then calculate the learning delta of each experiment, defined as the fitness value after the parameters
\begin{table}
\begin{tabular}{l|c|l} \hline \hline Parameters & Value & Description \\ \hline Population size & 50 & Number of individuals per generation \\ Offspring size & 25 & Number of offspring produced per generation \\ Generations & 30 & Termination condition for each run \\ Learning trials & 280 & Number of evaluations performed by RevDE on each robot \\ Tournament size & 2 & Number of individuals used in parent selection (k-tournament) \\ Repetitions & 10 & Number of repetitions per experiment \\ \hline \hline \end{tabular}
\end{table} TABLE I: Main experiment parameters
were learned minus the fitness value before the parameters were learned. We finally quantify morphological intelligence by the delta of the learning delta of each method, i.e., the learning delta of the evolved body minus the learning delta of the fixed body. In Figure 4, we see that the average learning delta of both methods with evolved bodies grows steadily across the generations. This effect has been discovered previously in [3, 14], with different tasks, a different learning method and a different representation, so the current results provide additional support that lifetime learning leads the evolutionary search towards morphologies with increasing learning potential. In contrast, the average learning deltas of both methods with fixed bodies show no significant change, which indicates low morphological intelligence in the fixed robot bodies. The morphological intelligence in the Lamarckian system is 30% greater than that in the Darwinian system, as indicated by the higher delta of the learning delta. The delta of the learning delta in BFS is about 75% higher than in Random Query, which indicates more morphological intelligence in the bodies produced by BFS.
#### IV-B2 Diversity
the morphological variety of each population using tree-edit distance. It is measured in two steps: firstly, the measure of difference between any two robots, denoted as d(x,y); and secondly, the measure of diversity within a population, which is represented by the average distance along the evolutionary process.
Figure 5 demonstrates that initially, robots generated by BFS exhibit greater diversity compared to those generated by Random Query. Moreover, the morphological diversity of the Lamarckian system using BFS diminishes at a notably faster rate than the other three methods, indicating a convergence toward superior body designs at a faster pace. In the case of the Darwinian system, employing BFS led to a higher diversity value at the conclusion of the evolutionary process.
#### IV-B3 Morphological traits
We additionally examine the morphological characteristics of the robots, delving into eight specific traits (further information on the measurements can be found in [15]).
Figure 8 illustrates that the differences among robots generated by two evolutionary systems are notably larger when employing the Random Query method across all morphological traits, except for branching and symmetry, as opposed to using BFS.
Except for 'rel_num_bricks', the values of all the other morphological traits from BFS are higher than those from Random Query. This means that robots produced by BFS are much more symmetrical and have more branching, more hinges, and fewer bricks compared to the ones produced by Random Query.
Furthermore, a PCA analysis (Figure 7) employing these identical eight traits reveals no difference in the morphologies generated by the two evolutionary systems using BFS (subplot a). When employing the Random Query approach, there is a slight variation in the clustering circles (subplot b).
Hence, when applying the same query mechanism, the distinctions in the robots produced by the two evolutionary systems are marginal, whereas the differences in the robot bodies resulting from the two query mechanisms are considerable.
This is also supported by Figure 6 which displays the 10 best robots produced by each method. The morphologies of the best-performing robots using BFS mainly converged into a "+" shape, while using the Random Query, the morphologies predominantly converge into an "L" shape, irrespective of the evolution system used. The best morphologies evolved by BFS from both evolution systems typically feature three or four limbs, primarily consisting of hinges with either no bricks or just one. In contrast, those generated through the Random Query method tend to have a relatively higher likelihood of containing one or two bricks and consist of only two limbs.
## V Conclusions and Future Work
In this research, we investigated the influence of two different query mechanisms used in genotype to phenotype mapping
Fig. 3: Mean (lines) and maximum (dots) fitness over 30 generations (averaged over 10 runs) for Lamarckian system in purple and Darwinian system in blue. Subfigure (a) exhibits mean average fitness for robots produced with BFS, and Subfigure (b) is for Random Query. The bands indicate the 95% confidence intervals (\(\pm 1.96\times SE\), Standard Error).
within two evolutionary robotics systems. Based on our analysis, we draw the following conclusions:
Firstly, the choice of query mechanism significantly affects the evolution and performance of modular robot bodies. Robots
queried by BFS exhibited approximately 20% better efficacy in solving the given task. Additionally, BFS in the Lamarckian system demonstrated superior efficiency, finding the best solution faster compared to Random Query.
Secondly, the query mechanism plays a crucial role in shaping the morphological intelligence of evolved robot bodies. Our experiments showed that morphological intelligence, measured as the ability of the body to facilitate learning in the brain, was significantly higher in robots produced by BFS. This highlights the importance of the query mechanism in determining the learning potential and adaptability of the evolved robot morphologies.
Furthermore, our analysis revealed that the query mechanism influenced the diversity and morphological traits of the evolved robot bodies. Robots produced by BFS exhibited higher diversity initially. In the Lamarckian system, it declines faster, converging to superior designs, while in the Darwinian system, BFS led to higher end-process diversity. Regarding morphological traits, for the same query mechanism, the distinctions in the robots produced by the two evolutionary systems are marginal, whereas the differences in the robot bodies resulting from the two query mechanisms are considerable.
In conclusion, BFS offers a systematic and deterministic approach, ensuring the exploration of every possible branch of the genotype tree. This results in increased stability and efficiency. By contrast, the Random Query approach, in theory, introduces variability that might lead to innovative body designs, which was the primary rationale behind our initial choice. However, our experimental results do not definitively showcase any discernible advantages. As we move forward, there is scope to explore alternative query mechanisms within various evolutionary frameworks.
|
2310.00291 | Coexistence of insulating phases in confined fermionic chains with a
Wannier-Stark potential | We study fermions on a finite chain, interacting repulsively when residing on
the same and on nearest-neighbor sites, and subjected to a Wannier-Stark
linearly-varying potential. Using the density matrix renormalization-group
numerical technique to solve this generalized extended Hubbard model, the
ground state exhibits a staircase of (quasi) plateaus in the average local site
density along the chain, decreasing from being doubly-filled to empty as the
potential increases. These `plateaus' represent locked-in commensurate phases
of charge density waves together with band and Mott insulators. These phases
are separated by incompressible regions with incommensurate fillings. It is
suggested that experimental variations of the slope of the potential and of the
range of the repulsive interactions will produce such a coexistence of phases
which have been individually expected theoretically and observed experimentally
for uniform systems. | N. Aucar Boidi, K. Hallberg, A. Aharony, O. Entin-Wohlman | 2023-09-30T07:48:44Z | http://arxiv.org/abs/2310.00291v2 | # Coexistence of insulating phases in confined fermionic chains with a Wannier-Stark potential
###### Abstract
We study fermions on a finite chain, interacting repulsively when residing on the same and on nearest-neighbor sites, and subjected to a Wannier-Stark linearly-varying potential. Using the density matrix renormalization-group numerical technique to solve this generalized extended Hubbard model, the ground state exhibits a staircase of (quasi) plateaus in the average local site density along the chain, decreasing from being doubly-filled to empty as the potential increases. These 'plateaus' represent locked-in commensurate phases of charge density waves together with band and Mott insulators. These phases are separated by incompressible regions with incommensurate fillings. It is suggested that experimental variations of the slope of the potential and of the range of the repulsive interactions will produce such a coexistence of phases which have been individually expected theoretically and observed experimentally for uniform systems.
_Introduction.--_ The complexity of quantum many-body systems originates from the interplay of strong interactions, quantum statistics, and the large number of quantum-mechanical degrees of freedom. This interplay generates a multitude of phases, e.g., insulating commensurate charge (CDW) and spin (SDW) density waves and compressible (metallic) phases. This complexity already shows up in one dimension, in which one can use (and test) a variety of theoretical and experimental tools for their study. The simplest picture for interacting particles in one dimension (1D) is given by the Hubbard Hamiltonian, which includes interactions, \(U\), only between particles residing on the same lattice site [1]. This interaction competes with the kinetic [nearest-neighbor (nn) tunneling] energy, \(t\), resulting for instance, in antiferromagnetic structures [2].
However, this simple Hamiltonian cannot reproduce certain phases, like charge density-waves. Those are generated _e.g._ by the _extended Hubbard model_, which also includes nn interactions, \(V\). Its one-dimensional version reveals a rich phase diagram, which includes the band and Mott insulating phase [3], SDW and CDW and metallic phases [4; 5; 6; 7]. It has also been used to describe data collected in experiments performed on chains of cold atoms [8; 9]. In higher dimensions, it has been used to describe bulk and edge states in electronic insulators [10].
An exact analytic solution of the extended Hubbard Hamiltonian, in particular on a finite chain (the system amenable to cold-atom experiments), has not yet been found. It has been studied by a variety of numerical and approximate methods (e.g., Refs. [11; 12; 13; 14; 15; 16]), emphasizing the half-filled case, where one finds (for fermions in 1D), the insulating Mott antiferromagnetic phase [3] and CDW phases.
Experiments on cold-atom arrays naturally involve finite samples. Numerical calculations performed on such systems used various boundary conditions: hard walls, periodic and open boundaries, or potentials representing confining harmonic traps [17; 9; 18]. These works concentrate mostly on the region around the 'center' of the confined structure, whose details are usually not sensitive to the particular form of the boundaries, and so its possible structures are determined by \(U,\;V\) and particle density \(n\). Remarkably, experiments (e.g., on cold atoms) have observed some of the theoretically predicted phases [19; 20]. Less attention has been paid to the structures near the 'edges' of the samples and to their dependence on the details of the boundary conditions, in particular when the confinement is achieved by varying site energies. Such a confining scheme has been recently considered, using the self-consistent Hartree-Fock approximation, for the two-dimensional extended Hubbard Hamiltonian, and found coexistence of various structures (phases) near the free ends of the samples [10].
In this Letter we generalize the extended Hubbard Hamiltonian to a 1D fermionic chain, confined by a _linear potential_, which mimics either edge configurations in bulk systems or cold-atom arrays placed in an electric field. Such a potential can be produced by a longitudinal electric field, as in the Wannier-Stark model [21].
Given the complex nature of the many-body problem associated with our system, we resort to one of the most accurate numerical methods for correlated systems, the density matrix renormalization-group (DMRG) [22; 23; 24; 25; 26; 27], which uses quantum information to keep the most relevant states. As we show, the linear potential generates in the ground state the simultaneous existence of segments in which different phases coexist, each of which having been observed separately before, on long uniform chains. Our results are presented by plots of the local quantum-averaged density on the sites \(i\) on the chain, \(\langle n_{i}\rangle\), the nn density-density correlations \(\langle n_{i}n_{i+1}\rangle\) and the
nn spin-spin correlations \(\langle s^{z}_{i}s^{z}_{i+1}\rangle\), (e.g., Fig. 2). Instead of a smooth decrease, the local average of \(\langle n_{i}\rangle\) shows flat steps, corresponding to locked-in Mott or CDW structures (e.g., \(212121\dots\), \(101010\dots\), [28]). These locked-in steps are similar to those observed for commensurate wave vectors in the devil's staircase [29; 30]. Between these steps, \(\langle n_{i}\rangle\) decreases more smoothly, representing incommensurate regions, which can be thought of as 'domain walls' with varying lengths [31]. As shown below, the local density of states on these intermediate sites exhibits small energy gaps, which imply that they are incompressible (insulating), in spite of having incommensurate fillings. We will refer to them hereafter as incompressible incommensurate-filling phases 'IIF'. The specific sequence of phases, and their sizes, can be modified experimentally, e.g., by changing the slope of the potential. Neighboring structures in a sequence are often also neighboring in the phase diagrams found for uniform systems (which are not subjected to the linear potential).
_Model.--_ We study the generalized 1D extended Hubbard Hamiltonian
\[\mathcal{H}= -t\sum_{i,\sigma}\big{(}c^{\dagger}_{i,\sigma}c_{i+1,\sigma}+{ \rm h.c.}\big{)}+\sum_{i}(\mu_{i}-\mu)n_{i}\] \[+U\sum_{i}\big{(}n_{i,\uparrow}-1/2\big{)}\big{(}n_{i,\downarrow} -1/2\big{)}\] \[+V\sum_{i}\big{(}n_{i}-1\big{)}\big{(}n_{i+1}-1\big{)}\, \tag{1}\]
where \(i\) is the site index, \(i=0,\dots,L-1\) (we consider an odd number of sites without loss of generality). Here, \(\mu\) is the fixed external chemical potential, \(c^{\dagger}_{i,\sigma}\) creates an electron with spin \(\sigma(=\uparrow,\downarrow)\) at site \(i\), \(n_{i,\sigma}=c^{\dagger}_{i,\sigma}c_{i,\sigma}\), \(n_{i}=n_{i,\uparrow}+n_{i,\downarrow}\), while \(U\) and \(V\) are the repulsive interactions between electrons on the same and nn sites, respectively (see Fig. 1). The site-dependent local energy (the Wannier-Stark potential) \(\mu_{i}\) describes a linear external potential,
\[\mu_{i}=\mu_{0}[i/i_{c}-1]. \tag{2}\]
The site \(i_{c}=(L-1)/2\) represents the center of the 'edge', where \(\mu_{i_{c}}=0\). The particular form of \(\mathcal{H}\) was chosen so that at \(\mu=0\) (up to a constant energy) it is particle-hole symmetric when \(i\to L-1-i\) and \(n_{i}\to 2-n_{i}\). In that case we always have \(n_{i_{c}}=1\).
For an infinite chain, \(\mu_{i}\) is large and negative at large and negative \(i\), and therefore we expect all the sites there to be filled, i.e., \(n_{i}=n_{i,\uparrow}+n_{i,\downarrow}=2\). Similarly, \(\mu_{i}\) is large and positive at large and positive \(i\), and therefore we expect all the sites there to be empty, i.e., \(n_{i}=0\), as drawn in Fig. 1. For a finite chain, as we use here, this is still expected for a large slope, \(\mu_{0}\gg 1\), when the whole 'edge' between the fully-occupied and empty 'phases' is confined within the chain. Indeed, this is confirmed by our calculations. However, the 'end' trivial phases disappear for small slopes, for which the observed structures depend on the open boundaries.
_Results.--_ Unless otherwise stated, we use \(U/t\to U=10\), \(\mu=0\) and \(L=41\). All energies are measured in units of \(t\). The Hamiltonian is diagonalized exploiting the DMRG technique, with around \(m=500\) states and \(4\) to \(6\) finite-size sweeps, which leads to a precision of around \(10^{-10}\) in the energy. For a very steep potential (\(\mu_{0}\to\infty\)) we obtain only two coexisting 'phases': a completely filled band (\(n_{i}=2\)) up to the center point \(i_{c}\), and completely empty sites (\(n_{i}=0\)) above that point, as expected. Both regions are incompressible and insulating. As the slope \(\mu_{0}\) decreases (but remains large), these two 'phases' remain near the two ends of the system, but new structures ('phases') appear between them, in which \(\langle n_{i}\rangle\) decreases gradually from \(2\) to \(0\). Figure 2 presents typical results, for three values of \(V\). Note the electron-hole symmetry between the two sides of Figs. 2(a-c), which follows directly from Eq. (1) at \(\mu=0\).
For \(V=0\) (i.e., the simplest Hubbard Hamiltonian, left column in Fig. 2), the system shows the following phases: for large (but finite) values of \(\mu_{0}\) it is a band insulator at both extremes, completely filled on the left and completely empty on the right. In the region located symmetrically around the center point \(i_{c}\), we find a Mott-insulating state (one particle per site, \(\langle n_{i}\rangle=1\)), and an antiferromagnetic spin-spin correlation function, Fig. 2(g). As seen in this figure, the spin correlation function, \(\langle s^{z}_{i}s^{z}_{i+1}\rangle\simeq-0.14\) (note: \(s^{z}_{i}\equiv(n_{i,\uparrow}-n_{i,\downarrow})/2\), the \(z-\)direction is arbitrarily chosen), agrees with its value of the infinite Mott phase [23]. The three insulating commensurate phases are separated by IIF regions with very small but finite gaps, see Fig. 3. These regions differ from the compressible regions found in Ref. [10], possibly because Ref. [10] explores 2D systems using the mean-field approximation. As \(\mu_{0}\) decreases, the band insulating phases on both ends disappear and the Mott region grows, as estimated below. These results are also consistent with the behavior of the density-density correlations, which vary between \(4\) on the left, via \(1\) in the Mott phase, to \(0\) on the right, Fig. 2(d).
For \(V=3\) (middle column in Fig. 2) the above three
Figure 1: Schematic representation of the system considered, for \(L=9\) sites.
insulating 'phases' are supplemented by two regions with an incipient (doped) CDW order on the two sides of the Mott 'phase', with local mean fillings 'quasi-plateaus' around \(\overline{\langle n_{i}\rangle}\simeq 1.5\) and \(\overline{\langle n_{i}\rangle}\simeq 0.5\) (quarter filling of holes and of electrons, respectively). The bar indicates a local average over a few sites. Unlike the uniform case \(\mu_{i}=0\), the local average fillings in these regions are not exactly \(1.5\) and \(0.5\). Rather, they can be fitted by \(\langle n_{i}\rangle=A-Bi+C\cos(i\pi)\) (note that \(i\) is the site number!). The oscillating term corresponds to a CDW, with a wave vector \(q=\pi\) (our lattice constant is \(1\)) and structures \(212121\dots\) or \(101010\dots\)[28]. However, the term \(-Bi\) represents a linear decrease of the actual average, presumably in response to the linear potential. Without this linear 'background', such a CDW is consistent with the results of the density-density and spin-spin correlations and with previous results for the doped (non-half-filled) 1D extended Hubbard model [32] in a uniform potential, \(\mu_{i}=0\), for which there is a transition from a Tomonaga-Luttinger liquid to a CDW phase for intermediate values of \(2t\leq V<U/2\) and large values of \(U\) (\(U\gg t\)). In those cases this CDW phase is insulating and incompressible. As we discuss below, we also find that, in spite of the varying average local densities, the local density of states has a (small) gap at the Fermi energy, which is consistent with an incompressible state. As before, when \(\mu_{0}\) decreases, the Mott region grows, the incipient CDW regions move towards the boundaries and the band-insulating regions disappear.
For \(V=6\) (right column in Fig. 2) the Mott region disappears and is replaced by a half-filled CDW, \(202020\dots\). For large \(\mu_{0}\)'s this phase exists in the center and coexists with doped CDW's at both sides, with fillings \(\overline{\langle n_{i}\rangle}\simeq 1.5\) and \(\overline{\langle n_{i}\rangle}\simeq 0.5\) respectively (black diamonds in Fig. 2(c)). This coexistence of two different CDW's has not been seen before and constitutes a situation which could be observed in cold-atom experiments. As before, the doped CDW's are accompanied by a very small gradual decrease of the local average occupation -'quasi-plateaus', presumably due to the slope in the potential. When \(\mu_{0}\) is lowered, the half-filled CDW occupies the whole chain. This is expected, since it is well known that when \(V>U/2\) and for a half-filled system, the uniform chain undergoes a transition from a Mott phase to a CDW [7; 32]. The results are consistent with the behavior of the density-density and spin-spin correlations. It is interesting to see a finite value of the spin-spin correlations at the phase boundaries between the half-filled and doped CDW's. It is also interesting to see that for \(V=3\) the average occupation \(\overline{\langle n_{i}\rangle}\), and the amplitude of the incipient CDW decrease gradually towards the central Mott or CDW region, but this decrease becomes abrupt for \(V=6\). The width of the IIF region (domain wall) between the two CDW phases seems to shrink to zero above some 'critical' value of \(V\).
The above results exhibited 'plateaus' only for \(1/2\), \(1/4\) and \(3/4\) fillings. We expect similar 'plateaus', corresponding to other simple fraction, e.g., \(1/8\). However, to see these one would need a much larger number of sites, and this is not possible with our present computer capabilities. Note, though, that calculations with a smaller number of sites do still show similar steps for these commensurate fillings.
_Local Density of States.--_ To further explore the different phases, we have calculated the local, site-dependent density of states (LDOS) using the lesser and greater Green's functions; see details in Ref. [33]. In Fig. 3 we show the LDOS for particular sites of the chain for differ
Figure 3: (color online) Top: Local density profile \(\langle n_{i}\rangle\) showing the sites where the local density of states (LDOS) has been calculated, for \(V=0\) (\(\mu_{0}=10\)), \(V=3\) (\(\mu_{0}=16\)) and \(V=6\) (\(\mu_{0}=20\)). Bottom: LDOS showing gaps at the Fermi energy (at \(\omega=E_{F}=0\)) for all cases, using \(\eta=0.01\) (Eq. S1 in [33]).
Figure 2: (color online) (a)-(c): The local density \(\langle n_{i}\rangle\); (d)-(f): the nn density-density correlations \(\langle n_{i}n_{i+1}\rangle\); (g)-(i): the nn spin-spin correlations \(\langle s_{i}^{z}s_{i+1}^{z}\rangle\), for \(V=0,3,6\) and different values of \(\mu_{0}\). The black diamonds in (c) indicate the mean value between neighboring sites.
ent parameters. We observe that there is always a gap at \(E_{F}=0\), even for the partially filled sites (we have added the filling profile for comparison). The gaps corresponding to these sites are smaller than the corresponding gaps of the fully formed CDW (see the \(V=6\) case) and much smaller than those of the Mott region (see Fig. 4). These gaps indicate that these regions are incompressible (non-metallic). This is not a finite size effect (since we would have a finite LDOS at \(E_{F}\) for fractional densities), but a consequence of the linear potential. We also observe that the LDOS consists of a series of peaks separated by minigaps, a possible indication of Stark discretization [21].
Figure 4 shows a heatplot of the local density of states along the chain for \(V=0\), \(\mu=0\) and \(\mu_{0}=10\). The Fermi energy is marked by a white (dashed) line at \(\omega=0\). As the Hamiltonian is particle-hole symmetric around the middle of the chain, the density of states for the right half of the chain (\(20\leq i\leq 40\), not shown) is inverted as a function of \(\omega\) (see [33] for details). As mentioned above (Fig. 3), we always find a gap at \(E_{F}\), indicating an incompressible state. This gap is more than an order of magnitude smaller than the Mott gap. We also see a structure in the Hubbard bands in the form of three main substructures which evolve along the chain sites. Each substructure extends to around three neighboring sites, also an indication of Stark localization which requires future study [21].
An interesting result for \(V=0\) is the existence of a (negative) high-energy localized state in the IIF region (clearly seen in the density of states plots at the left of the chain, Fig. S2 in Ref. [33]). We can see a small and narrow peak at energies around \(\omega\sim-14\) for the first sites of this region, which evolves to higher energies (following the increase of \(\mu\)), while we approach the Mott region, increasing its width. This state is reminiscent of the lower Hubbard band for the left regions. A similar state is seen for the right half of the chain which is reminiscent of the upper Hubbard band (not shown). More results for the density of states, together with some calculations in the atomic limit, are presented in Ref. [33].
_Size of the Mott region.--_ In the electron-hole symmetric case (and \(V=0\)), the upper and lower Hubbard bands are centered at \(\pm\frac{U}{2}\), respectively, each with a total width of 4. For \(\mu=0\), the size of the Mott region can be estimated by recalling that the Mott insulating state requires the local \(\mu_{i}\) to lie within the Mott gap, i.e., \(-U/2+2<\mu_{i}<U/2-2\). The lower limit \(\mu_{\rm min}=-\frac{U}{2}+2\) yields, by Eq. (2), \(i_{\rm min}\mu_{0}=i_{c}(\mu_{0}-\frac{U}{2}+2)\), while at \(\mu_{\rm max}=\frac{U}{2}-2\) one finds \(i_{\rm max}\mu_{0}=i_{c}(\mu_{0}+\frac{U}{2}-2)\). Consequently, assuming that the width of the Hubbard bands is not modified by the presence of the confining potential, the size of the Mott region is:
\[L_{\rm Mott}=i_{\rm max}-i_{\rm min}=(U/2-2)\,(L-1)/\mu_{0}. \tag{3}\]
As the confining potential slightly increases the width of the Hubbard bands (not shown), the gap in-between them and \(L_{\rm Mott}\) are slightly overestimated.
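As a quick numerical sanity check of Eq. (3), the short script below counts the sites satisfying the Mott condition \(-U/2+2<\mu_{i}<U/2-2\) and compares the count with the prediction; the parameters follow the text (energies in units of \(t\)).

```python
import numpy as np

U, L = 10.0, 41
i_c = (L - 1) / 2
for mu_0 in (8.0, 10.0, 16.0):
    mu = mu_0 * (np.arange(L) / i_c - 1.0)   # Wannier-Stark potential, Eq. (2)
    mott_sites = np.sum(np.abs(mu) < U / 2 - 2)
    predicted = (U / 2 - 2) * (L - 1) / mu_0  # Eq. (3)
    print(f"mu_0={mu_0}: counted {mott_sites} sites, predicted {predicted:.1f}")
```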
To compare Eq. (3) with our numerical results, we have estimated the size of the Mott region by defining its boundaries at the points where the linear fits of the numerical derivative of the local occupation intercept zero for each value of \(\mu_{0}\), using the results shown in Fig. 2(a). This procedure reveals that indeed the size of the Mott region is proportional to \(1/\mu_{0}\) (see Fig. S1 in Ref. [33]), and it shrinks to zero for very steep potentials.
_Changing the global chemical potential.--_ The coexisting phases are robust against changes in the global chemical potential \(\mu\). In Fig. 5 we show our results for two cases, \(V=0\) (with coexisting band and Mott insulators, separated by intermediate IIF regions), and \(V=4\) (with CDW's and Mott insulators). The different phases shift towards the right or the left with respect to their position for \(\mu=0\) but otherwise are not changed, except for the regions close to the boundaries where they are affected by the open boundaries.
_Discussion.--_ In this paper we study the one-dimensional extended Hubbard model, subject to a linearly-varying Wannier-Stark potential on a finite chain, applying the density-matrix renormalization group. We find an interesting sequence of several insulating electronic phases in the ground state, in which regions with commensurate charge density waves coexist with band and Mott insulating phases. These regions are separated by incompressible domain walls with incommensurate fillings, which were not reported before. The results are summarized in Fig. 6. Further research is needed to define whether these incompressible walls are due to the Stark many-body localization [34]. The steeper the slope of the external potential, the narrower the domain walls. These phases and domain walls can be moved around by vary
Figure 4: Heatplot of the local density of states at different sites (\(6\leq i\leq 20\)) for \(\mu_{0}=10\) and \(V=0\). The Fermi energy is marked by a white (dashed) line at \(\omega=0\)
ing a global chemical potential, thus providing a possible functionality of this kind of systems. Cold-atom chains placed in an external electric field are suggested as experimental realizations of our system.
_Acknowledgments.--_ NAB and KH acknowledge support from ICTP through the STEP and Associates Programmes, respectively, and from the PICT 2018-01546 grant of the ANPCyT. The authors thank Carlos Balseiro for useful discussions.
|
2310.20141 | Contrastive Difference Predictive Coding | Predicting and reasoning about the future lie at the heart of many
time-series questions. For example, goal-conditioned reinforcement learning can
be viewed as learning representations to predict which states are likely to be
visited in the future. While prior methods have used contrastive predictive
coding to model time series data, learning representations that encode
long-term dependencies usually requires large amounts of data. In this paper,
we introduce a temporal difference version of contrastive predictive coding
that stitches together pieces of different time series data to decrease the
amount of data required to learn predictions of future events. We apply this
representation learning method to derive an off-policy algorithm for
goal-conditioned RL. Experiments demonstrate that, compared with prior RL
methods, ours achieves $2 \times$ median improvement in success rates and can
better cope with stochastic environments. In tabular settings, we show that our
method is about $20 \times$ more sample efficient than the successor
representation and $1500 \times$ more sample efficient than the standard (Monte
Carlo) version of contrastive predictive coding. | Chongyi Zheng, Ruslan Salakhutdinov, Benjamin Eysenbach | 2023-10-31T03:16:32Z | http://arxiv.org/abs/2310.20141v2 | # Contrastive Difference Predictive Coding
###### Abstract
Predicting and reasoning about the future lie at the heart of many time-series questions. For example, goal-conditioned reinforcement learning can be viewed as learning representations to predict which states are likely to be visited in the future. While prior methods have used contrastive predictive coding to model time series data, learning representations that encode long-term dependencies usually requires large amounts of data. In this paper, we introduce a temporal difference version of contrastive predictive coding that stitches together pieces of different time series data to decrease the amount of data required to learn predictions of future events. We apply this representation learning method to derive an off-policy algorithm for goal-conditioned RL. Experiments demonstrate that, compared with prior RL methods, ours achieves \(2\times\) median improvement in success rates and can better cope with stochastic environments. In tabular settings, we show that our method is about \(20\times\) more sample efficient than the successor representation and \(1500\times\) more sample efficient than the standard (Monte Carlo) version of contrastive predictive coding.
**Code**: [https://github.com/chongyi-zheng/td_infonce](https://github.com/chongyi-zheng/td_infonce)
**Website**: [https://chongyi-zheng.github.io/td_infonce](https://chongyi-zheng.github.io/td_infonce)
## 1 Introduction
Learning representations is important for modeling high-dimensional time series data. Many applications of time-series modeling require representations that not only contain information about the contents of a particular observation, but also about how one observation relates to others that co-occur in time. Acquiring representations that encode temporal information is challenging, especially when attempting to capture long-term temporal dynamics: the frequency of long-term events may decrease with the time scale, meaning that learning longer-horizon dependencies requires larger quantities of data.
In this paper, we study contrastive representation learning on time series data - positive examples co-occur nearby in time, so the distances between learned representations should encode the likelihood of transiting from one representation to another. Building on prior work that uses the InfoNCE [79, 67] loss to learn representations of time-series data effectively, we will aim to build a temporal difference version of this loss. Doing so may allow us to optimize this objective with fewer samples, may enable us to stitch together pieces of different time series data, and may enable us to perform counterfactual reasoning - we should be able to estimate which representations we would have learned, if we had
Figure 1: **TD InfoNCE** is a nonparametric version of the successor representation. _(Top)_ The distances between learned representations indicate the probability of transitioning to a set of randomly-sampled states. _(Bottom)_ We update these representations so they assign high likelihood to _(a)_ the next state and _(b)_ states likely to be visited after the next state. See Sec. 3 for details.
collected data in a different way. After a careful derivation, our resulting method can be interpreted as a non-parametric form of the successor representation [15], as shown in Fig. 1.
The main contribution of this paper is a temporal difference estimator for InfoNCE. We then apply this estimator to develop a new algorithm for goal-conditioned RL. Experiments on both state-based and image-based benchmarks show that our algorithm outperforms prior methods, especially on the most challenging tasks. Additional experiments demonstrate that our method can handle stochasticity in the environment more effectively than prior methods. We also demonstrate that our algorithm can be effectively applied in the offline setting. Additional tabular experiments demonstrate that TD InfoNCE is up to \(1500\times\) more sample efficient than the standard Monte Carlo version of the loss and that it can effectively stitch together pieces of data.
## 2 Related Work
This paper will study the problem of self-supervised RL, building upon prior methods on goal-conditioned RL, contrastive representation learning, and methods for predicting future state visitations. Our analysis will draw a connection between these prior methods, a connection which will ultimately result in a new algorithm for goal-conditioned RL. We discuss connections with unsupervised skill learning and mutual information in Appendix B.
Goal-conditioned reinforcement learning.Prior work has proposed many frameworks for learning goal-conditioned policies, including conditional supervised learning [16; 32; 36; 19; 54; 65; 81], actor-critic methods [2; 59; 10], semi-parametric planning [68; 25; 26; 22; 62; 36], and distance metric learning [89; 63; 18]. These methods have demonstrated impressive results on a range of tasks, including real-world robotic tasks [55; 78; 95]. While some methods require manually-specified reward functions or distance functions, our work builds upon a self-supervised interpretation of goal-conditioned RL that casts this problem as predicting which states are likely to be visited in the future [23; 24; 7].
Contrastive representation learning.Contrastive learning methods have become a key tool for learning representations in computer vision and NLP [14; 76; 79; 66; 88; 67; 87; 92; 40; 71; 12; 84; 30]. These methods assign similar representations to positive examples and dissimilar representations to negative examples or outdated embeddings [35]. The two main contrastive losses are based on binary classification ("NCE") and ranking ("InfoNCE") losses [56]. Modern contrastive learning methods typically employ the ranking-based objective to learn representations of images [12; 84; 41; 93], text [53; 44; 71] and sequential data [64; 77]. Prior works have also provided theoretical analysis for these methods from the perspective of mutual information maximization [52; 70], noise contrastive estimation [37; 56; 86; 3], and the geometry of the learned representations [88]. In the realm of RL, prior works have demonstrated that contrastive methods can provide effective reward functions and auxiliary learning objectives [49; 50; 39; 13; 60; 61], and can also be used to formulate the goal-reaching problem in an entirely self-supervised manner [55; 18; 23; 24]. Our method will extend these results by building a temporal difference version of the "ranking"-based contrastive loss; this loss will enable us to use data from one policy to estimate which states a different policy will visit.
Temporal difference learning and successor representation.Another line of work studies using temporal difference learning to predict states visited in the future, building upon successor representations and successor features [15; 4; 5; 7]. While learning successor representation using temporal difference bears a similarity to the typical Q-Learning algorithm [91; 27; 58] in the tabular setting, directly estimating this quantity is difficult with continuous states and actions [43; 4; 85; 7]. To lift this limitation, we will follow prior work [24; 23; 85] in predicting the successor representation indirectly: rather than learning a representation whose coordinates correspond to visitation probabilities, we will learn state representations such that their inner product corresponds to a visitation probability. Unlike prior methods, we will show how the common InfoNCE objective can be estimated in a temporal difference fashion, opening the door to off-policy reasoning and enabling our method to reuse historical data to improve data efficiency.
Method
We start by introducing notation and prior approaches to the contrastive representation learning and the goal-conditioned RL problems. We then propose a new self-supervised actor-critic algorithm that we will use in our analysis.
### Preliminaries
We first review prior work in contrastive representation learning and goal-conditioned RL. Our method (Sec. 3) will use ideas from both.
Contrastive representation via InfoNCE.Contrastive representation learning aims to learn a representation space, pushing representations of positive examples together and pushing representations of negative examples away. InfoNCE (also known as contrastive predictive coding) [79, 45, 67, 41] is a widely used contrastive loss, which builds upon noise contrastive estimation (NCE) [37, 56]. Given the distribution of data \(p_{\mathcal{X}}(x),p_{\mathcal{Y}}(y)\) over data \(x\in\mathcal{X},y\in\mathcal{Y}\) and the conditional distribution of positive pairs \(p_{\mathcal{Y}|\mathcal{X}}(y|x)\) over \(\mathcal{X}\times\mathcal{Y}\), InfoNCE loss is defined as
\[\mathcal{L}_{\text{InfoNCE}}(f)\triangleq\mathbb{E}_{\begin{subarray}{c}x \sim p_{\mathcal{X}}(x),y^{(1)}\sim p_{\mathcal{Y}|\mathcal{X}}(y|x)\\ y^{(2:N)}\sim p_{\mathcal{Y}}(y)\end{subarray}}\left[\log\frac{e^{f(x,y^{(1)})}}{\sum_{i=1}^{N}e^{f(x,y^{(i)})}}\right], \tag{1}\]
where \(f:\mathcal{X}\times\mathcal{Y}\mapsto\mathbb{R}\) is a parametric function. Following prior work [24, 88, 85], we choose to parameterize \(f(\cdot,\cdot)\) via the inner product of representations of data \(f(x,y)=\phi(x)^{\top}\psi(y)\), where \(\phi(\cdot)\) and \(\psi(\cdot)\) map data to \(\ell_{2}\) normalized vectors of dimension \(d\). We will call \(f\) the _critic function_ and \(\phi\) and \(\psi\) the _contrastive representations_. The Bayes-optimal critic for the InfoNCE loss satisfies [70, 56, 67]
\[\exp\left(f^{\star}(x,y)\right)=\frac{p(y\mid x)}{p(y)c(x)},\]
where \(c(\cdot)\) is an arbitrary function. We can estimate this arbitrary function using the optimal critic \(f^{\star}\) by sampling multiple negative pairs from the data distribution:
\[\mathbb{E}_{p(y)}\left[\exp\left(f^{\star}(x,y)\right)\right]=\int p(y)\frac{p(y\mid x)}{p(y)c(x)}dy=\frac{1}{c(x)}\underbrace{\int p(y\mid x)dy}_{=1}=\frac{1}{c(x)}. \tag{2}\]
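A minimal NumPy sketch of this loss with in-batch negatives is shown below; treating each \(y^{(i)}\) as the positive for \(x^{(i)}\) and as a negative for every other row is a common batching convention, not something prescribed by Eq. (1).

```python
import numpy as np
from scipy.special import logsumexp

def info_nce(phi_x, psi_y):
    """InfoNCE (Eq. 1) with the inner-product critic f(x, y) = phi(x)^T psi(y).

    phi_x, psi_y -- (N, d) l2-normalized representations; psi_y[i] is the
    positive for phi_x[i] and a negative for every other row.
    Returns a loss (negative of the Eq. 1 objective) to minimize.
    """
    logits = phi_x @ psi_y.T                                    # (N, N)
    log_softmax = logits - logsumexp(logits, axis=1, keepdims=True)
    return -np.mean(np.diag(log_softmax))   # positives on the diagonal

rng = np.random.default_rng(0)
phi = rng.normal(size=(128, 16)); phi /= np.linalg.norm(phi, axis=1, keepdims=True)
psi = rng.normal(size=(128, 16)); psi /= np.linalg.norm(psi, axis=1, keepdims=True)
print(info_nce(phi, psi))   # ~log(128) for uninformative random features
```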
Reinforcement learning and goal-conditioned RL.We will consider a Markov decision process defined by states \(s\in\mathcal{S}\), actions \(a\in\mathcal{A}\), rewards \(r:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\mapsto\mathbb{R}\). Using \(\Delta(\cdot)\) to denote the probability simplex, we define an initial state distribution \(p_{0}:\mathcal{S}\mapsto\Delta(\mathcal{S})\), discount factor \(\gamma\in(0,1]\), and dynamics \(p:\mathcal{S}\times\mathcal{A}\mapsto\Delta(\mathcal{S})\). Given a policy \(\pi:\mathcal{S}\mapsto\Delta(\mathcal{A})\), we will use \(p_{t}^{\pi}(s_{t+}\mid s,a)\) to denote the probability density of reaching state \(s_{t+}\) after exactly \(t\) steps, starting at state \(s\) and action \(a\) and then following the policy \(\pi(a\mid s)\). We can then define the discounted state occupancy measure [42, 94, 23, 24, 95] starting from state \(s\) and action \(a\) as
\[p^{\pi}(s_{t+}\mid s,a)\triangleq(1-\gamma)\sum_{t=1}^{\infty}\gamma^{t-1}p_{t }^{\pi}(s_{t+}\mid s,a). \tag{3}\]
Prior work [15] has shown that this discounted state occupancy measure follows a recursive relationship between the density at the current time step and the future time steps:
\[p^{\pi}(s_{t+}\mid s,a)=(1-\gamma)p(s^{\prime}=s_{t+}\mid s,a)+\gamma\mathbb{ E}_{\begin{subarray}{c}s^{\prime}\sim p(s^{\prime}|s,a)\\ a^{\prime}\sim\pi(a^{\prime}|s^{\prime})\end{subarray}}\left[p^{\pi}(s_{t+} \mid s^{\prime},a^{\prime})\right]. \tag{4}\]
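In the tabular setting, both Eq. (3) and the recursion of Eq. (4) can be verified directly. The sketch below works with a state-level occupancy (the transition matrix is already marginalized over \(\pi(a\mid s)\)); the 5-state chain is an illustrative toy, not one of our benchmarks.

```python
import numpy as np

n_states, gamma = 5, 0.9
P = np.zeros((n_states, n_states))   # random walk on a chain
for s in range(n_states):
    P[s, max(s - 1, 0)] += 0.5
    P[s, min(s + 1, n_states - 1)] += 0.5

# Closed form of the geometric series (1 - gamma) sum_t gamma^{t-1} P^t:
occupancy = (1 - gamma) * P @ np.linalg.inv(np.eye(n_states) - gamma * P)
assert np.allclose(occupancy.sum(axis=1), 1.0)   # rows are distributions
# The state-level analogue of the Eq. (4) recursion holds:
assert np.allclose(occupancy, (1 - gamma) * P + gamma * P @ occupancy)
```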
For goal-conditioned RL, we define goals \(g\in\mathcal{S}\) in the same space as states and consider a goal-conditioned policy \(\pi(a\mid s,g)\) and the corresponding goal-conditioned discounted state occupancy measure \(p^{\pi}(s_{t+}\mid s,a,g)\). For evaluation, we will sample goals from a distribution \(p_{g}:\mathcal{S}\mapsto\Delta(\mathcal{S})\). Following prior work [23, 74], we define the objective of the goal-reaching policy as maximizing the probability of reaching desired goals under its discounted state occupancy measure while commanding the same goals:
\[\max_{\pi(\cdot|\cdot,\cdot)}\mathbb{E}_{p_{g}(g),p_{0}(s),\pi(a|s,g)}\left[p^ {\pi}(s_{t+}=g\mid s,a,g)\right]. \tag{5}\]
In tabular settings, this objective is the same as maximizing expected returns using a sparse reward function \(r(s,a,s^{\prime},g)=(1-\gamma)\delta(s^{\prime}=g)\)[24]. Below, we review two strategies for estimating the discounted state occupancy measure. Our proposed method (Sec. 3.2) will combine the strengths of these methods while lifting their respective limitations.
Contrastive RL and C-Learning.Our focus will be on using contrastive representation learning to build a new goal-conditioned RL algorithm, following a template set in prior work [24, 23]. These _contrastive RL_ methods are closely related to the successor representation [15]: they aim to learn representations whose inner products correspond to the likelihoods of reaching future states. Like the successor representation, representations from these contrastive RL methods can then be used to represent the Q function for any reward function [57]. Prior work [24] has shown how both NCE and the InfoNCE losses can be used to derive Monte Carlo algorithms for estimating the discounted state occupancy measure. We review the Monte Carlo InfoNCE loss below. Given a policy \(\pi(a\mid s)\), consider learning contrastive representations for a state and action pair \(x=(s,a)\) and a potential future state \(y=s_{t+}\). We define the data distribution to be the joint distribution of state-action pairs \(p_{\mathcal{X}}(x)=p(s,a)\) and the marginal distribution of future states \(p_{\mathcal{Y}}(y)=p(s_{t+})\), representing either the distribution of a replay buffer (online) or the distribution of a dataset (offline). The conditional distribution of positive pairs is set to the discounted state occupancy measure for policy \(\pi\), \(p_{\mathcal{Y}|\mathcal{X}}(y\mid x)=p^{\pi}(s_{t+}\mid s,a)\), resulting in a Monte Carlo (MC) estimator
\[\mathcal{L}_{\text{MC InfoNCE}}(f)=\mathbb{E}_{\begin{subarray}{c}(s,a)\sim p (s,a),s^{(1)}_{t+}\sim p^{\pi}(s_{t+}\mid s,a)\\ s^{(2;N)}_{t+}\sim p(s_{t+})\end{subarray}}\left[\log\frac{e^{f(s,a,s^{(1)}_{t +})}}{\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}\right] \tag{6}\]
and an optimal critic function satisfying
\[\exp(f^{\star}(s,a,s_{t+}))=\frac{p^{\pi}(s_{t+}\mid s,a)}{p(s_{t+})c(s,a)}. \tag{7}\]
This loss estimates the discounted state occupancy measure in a Monte Carlo manner. While conceptually simple, computing this estimator requires sampling future states from the discounted state occupancy measure of the policy \(\pi\), i.e., on-policy data. Such an estimate is potentially sample inefficient because collecting samples for different policies is expensive. That is, we cannot share experiences collected by one policy with the learning of the discounted state occupancy measure of another policy.
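As a concrete illustration of Eq. 6, the sketch below evaluates the MC InfoNCE loss for one batch. It assumes the critic has already been evaluated into an \(N\times N\) logits matrix with the positive future state of row \(i\) placed in column \(i\); this batch layout and the function name are our own, not the authors' released code.

```python
import numpy as np
from scipy.special import logsumexp

def mc_infonce_loss(logits):
    """Eq. 6 for a batch of N rows, where logits[i, j] = f(s_i, a_i, s_{t+}^(j)).

    Column i of row i holds the positive future state sampled from
    p^pi(s_{t+} | s_i, a_i); the other columns act as marginal negatives.
    """
    log_softmax = logits - logsumexp(logits, axis=1, keepdims=True)
    return -np.mean(np.diag(log_softmax))  # minimizing this maximizes Eq. 6

# Usage with random logits, just to show the shapes involved.
rng = np.random.default_rng(0)
loss = mc_infonce_loss(rng.normal(size=(8, 8)))
```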
In the same way that temporal difference (TD) algorithms tend to be more sample efficient than Monte Carlo algorithms for reward maximization [82], we expect that TD contrastive methods are more sample efficient at estimating probability ratios than their Monte Carlo counterparts. Given that the InfoNCE tends to outperform the NCE objective in other machine learning disciplines, we conjecture that our TD InfoNCE objective will outperform the TD NCE objective [23] (see experiments in Sec. 4).
### Temporal Difference InfoNCE
In this section, we derive a new loss for estimating the discounted state occupancy measure for a fixed policy. This loss will be a temporal difference variant of the InfoNCE loss. We will use **temporal difference InfoNCE (TD InfoNCE)** to refer to our loss function.
In the off-policy setting, we aim to estimate the discounted state occupancy measure of the policy \(\pi\) given a dataset of transitions \(\mathcal{D}=\{(s,a,s^{\prime})_{i}\}_{i=1}^{D}\) collected by another behavioral policy \(\beta(a\mid s)\). This setting is challenging because we do not obtain samples from the discounted state occupancy measure of the target policy \(\pi\). Addressing this challenge involves two steps: _(i)_ expanding the MC estimator (Eq. 6) via the recursive relationship of the discounted state occupancy measure (Eq. 4), and _(ii)_ estimating the expectation over the discounted state occupancy measure via importance sampling. We first use the identity from Eq. 4 to express the MC InfoNCE loss as the sum of a
next-state term and a future-state term:
\[\mathbb{E}_{\begin{subarray}{c}(s,a)\sim p(s,a)\\ s^{(2:N)}_{t+}\sim p(s_{t+})\end{subarray}}\Bigg{[}(1-\gamma)\underbrace{\mathbb{E}_{s^{(1)}_{t+}\sim p(s^{\prime}|s,a)}\left[\log\frac{e^{f(s,a,s^{(1)}_{t+})}}{\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}\right]}_{\mathcal{L}_{1}(f)}+\gamma\underbrace{\mathbb{E}_{\begin{subarray}{c}s^{\prime}\sim p(s^{\prime}|s,a),\,a^{\prime}\sim\pi(a^{\prime}|s^{\prime})\\ s^{(1)}_{t+}\sim p^{\pi}(s_{t+}|s^{\prime},a^{\prime})\end{subarray}}\left[\log\frac{e^{f(s,a,s^{(1)}_{t+})}}{\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}\right]}_{\mathcal{L}_{2}(f)}\Bigg{]}.\]
While this estimate is similar to a TD target for Q-Learning [91, 27], the second term requires sampling from the discounted state occupancy measure of policy \(\pi\). To avoid this sampling, we next replace the expectation over \(p^{\pi}(s_{t+}\mid s^{\prime},a^{\prime})\) in \(\mathcal{L}_{2}(f)\) by an importance weight,
\[\mathcal{L}_{2}(f)=\mathbb{E}_{\begin{subarray}{c}s^{\prime}\sim p(s^{\prime}|s,a),\,a^{\prime}\sim\pi(a^{\prime}|s^{\prime})\\ s^{(1)}_{t+}\sim p(s_{t+})\end{subarray}}\left[\frac{p^{\pi}(s^{(1)}_{t+}\mid s^{\prime},a^{\prime})}{p(s^{(1)}_{t+})}\log\frac{e^{f(s,a,s^{(1)}_{t+})}}{\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}\right].\]
If we could estimate the importance weight, then we could easily estimate this term by sampling from \(p(s_{t+})\). We will estimate this importance weight by rearranging the expression for the optimal critic (Eq. 7) and substituting our estimate for the normalizing constant \(c(s,a)\) (Eq. 2):
\[\frac{p^{\pi}(s^{(1)}_{t+}\mid s,a)}{p(s^{(1)}_{t+})}=c(s,a)\cdot\exp\left(f^{ \star}(s,a,s^{(1)}_{t+})\right)=\frac{e^{f^{\star}(s,a,s^{(1)}_{t+})}}{ \mathbb{E}_{p(s_{t+})}\left[e^{f^{\star}(s,a,s_{t+})}\right]}. \tag{8}\]
We will use \(w(s,a,s^{(1:N)}_{t+})\) to denote our estimate of this, using \(f\) in place of \(f^{\star}\) and using a finite-sample estimate of the expectation in the denominator:
\[w(s,a,s^{(1:N)}_{t+})\triangleq\frac{e^{f(s,a,s^{(1)}_{t+})}}{\frac{1}{N}\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}} \tag{9}\]
This weight accounts for the effect of the discounted state occupancy measure of the target policy. Additionally, it corresponds to the categorical classifier that InfoNCE produces (up to the constant \(N\)). Taken together, we can now substitute the importance weight in \(\mathcal{L}_{2}(f)\) with our estimate in Eq. 9, yielding a temporal difference (TD) InfoNCE estimator
\[\mathcal{L}_{\text{TD InfoNCE}}(f)\triangleq\mathbb{E}_{\begin{subarray}{c}(s,a)\sim p(s,a)\\ s^{(2:N)}_{t+}\sim p(s_{t+})\end{subarray}}\left[(1-\gamma)\mathbb{E}_{s^{(1)}_{t+}\sim p(s^{\prime}|s,a)}\left[\log\frac{e^{f(s,a,s^{(1)}_{t+})}}{\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}\right]\right.\] \[\left.+\gamma\mathbb{E}_{\begin{subarray}{c}s^{\prime}\sim p(s^{\prime}|s,a),\,a^{\prime}\sim\pi(a^{\prime}|s^{\prime})\\ s^{(1)}_{t+}\sim p(s_{t+})\end{subarray}}\left[\lfloor w(s^{\prime},a^{\prime},s^{(1:N)}_{t+})\rfloor_{\text{sg}}\log\frac{e^{f(s,a,s^{(1)}_{t+})}}{\sum_{i=1}^{N}e^{f(s,a,s^{(i)}_{t+})}}\right]\right], \tag{10}\]
where \(\lfloor\cdot\rfloor_{\text{sg}}\) indicates that the gradient of the importance weight should not affect the gradient of the overall objective. As shown in Fig. 1, we can interpret the first term as pulling together the representations of the current state-action pair \(\phi(s,a)\) and the next state \(\psi(s^{\prime})\); the second term pulls the representation of the current state-action pair \(\phi(s,a)\) toward the (weighted) predictions from the future states \(\psi(s_{t+})\). Importantly, the TD InfoNCE estimator is equivalent to the MC InfoNCE estimator for the optimal critic function: \(\mathcal{L}_{\text{TD InfoNCE}}(f^{\star})=\mathcal{L}_{\text{MC InfoNCE}}(f^{\star})\).
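The following numpy sketch (our own illustration, not the authors' released code) assembles Eqs. 9 and 10 for one batch of precomputed logits; in an autodiff framework the weight \(W\) would additionally be wrapped in a stop-gradient (e.g., `detach`), matching \(\lfloor\cdot\rfloor_{\text{sg}}\).

```python
import numpy as np
from scipy.special import softmax, logsumexp

def td_infonce_loss(F_next, F_future, F_w, gamma):
    """Batch sketch of Eq. 10. All inputs are N x N logits matrices:
    F_next[i, j]   = f(s_i, a_i, .), with column i the sampled next state s'_i;
    F_future[i, j] = f(s_i, a_i, s^(j)) over futures s^(j) ~ p(s_{t+});
    F_w[i, j]      = f(s'_i, a'_i, s^(j)), evaluated with a target network.
    """
    N = F_next.shape[0]
    log_sm = lambda F: F - logsumexp(F, axis=1, keepdims=True)
    W = N * softmax(F_w, axis=1)        # Eq. 9; held constant under autodiff
    loss_next = -np.mean(np.diag(log_sm(F_next)))
    # Soft-label cross entropy: averages the single-sample second term of
    # Eq. 10 over which column plays the role of s^(1).
    loss_future = -np.mean((W * log_sm(F_future)).sum(axis=1)) / N
    return (1 - gamma) * loss_next + gamma * loss_future
```

Up to the constant factor \(N\), the second term above is the same soft-label cross entropy used in line 7 of Algorithm 1 below.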
Convergence and connections. In Appendix A, we prove that optimizing a variant of the TD InfoNCE objective is equivalent to performing one step of policy evaluation with a new Bellman operator; thus, repeatedly optimizing this objective yields the correct discounted state occupancy measure. This analysis considers the tabular setting and assumes that the denominators of the softmax functions and \(w\) in Eq. 10 are computed using an exact expectation. We discuss the differences between TD InfoNCE and C-learning [23] (a temporal difference estimator of the NCE objective) in Appendix E.2. Appendix C discusses how TD InfoNCE corresponds to a nonparametric variant of the successor representation.
```
1:Input contrastive representations \(\phi_{\theta}\) and \(\psi_{\theta}\), target representations \(\phi_{\bar{\theta}}\) and \(\psi_{\bar{\theta}}\), and goal-conditioned policy \(\pi_{\omega}\).
2:for each iteration do
3: Sample \(\{(s_{t}^{(i)},a_{t}^{(i)},s_{t+1}^{(i)},g^{(i)},s_{t+}^{(i)})\}_{i=1}^{N}\sim\) replay buffer / dataset, \(a^{(i)}\sim\pi(a\mid s_{t}^{(i)},g^{(i)})\).
4: Compute \(F_{\text{next}}\), \(F_{\text{future}}\), \(F_{\text{goal}}\) using \(\phi_{\theta}\) and \(\psi_{\theta}\).
5: Compute \(\tilde{F}_{\text{w}}\) using \(\phi_{\bar{\theta}}\) and \(\psi_{\bar{\theta}}\).
6:\(W\gets N\cdot\textsc{StopGrad}\left(\textsc{SoftMax}(\tilde{F}_{\text{w}})\right)\)
7:\(\mathcal{L}(\theta)\leftarrow(1-\gamma)\mathcal{CE}(\text{logits}=F_{\text{ next}}\), \(\text{labels}=I_{N})+\gamma\mathcal{CE}(\text{logits}=F_{\text{future}},\text{ labels}=W)\)
8:\(\mathcal{L}(\omega)\leftarrow\mathcal{CE}(\text{logits}=F_{\text{goal}}, \text{labels}=I_{N})\)
9: Update \(\theta,\omega\) by taking gradients of \(\mathcal{L}(\theta),\mathcal{L}(\omega)\).
10: Update \(\bar{\theta}\) using an exponential moving average.
11:Return\(\phi_{\theta}\), \(\psi_{\theta}\), and \(\pi_{\omega}\).
```
**Algorithm 1** Temporal Difference InfoNCE
### Goal-conditioned Policy Learning
The TD InfoNCE method provides a way to estimate the discounted state occupancy measure. This section shows how this estimator can be used to derive a new algorithm for goal-conditioned RL. This algorithm alternates between _(1)_ estimating the occupancy measure using the TD InfoNCE objective and _(2)_ optimizing the policy to maximize the likelihood of the desired goal under the estimated occupancy measure. Pseudo-code is shown in Algorithm 1, additional details are in Appendix D.1, and code is available online.1
Footnote 1: [https://github.com/chongyi-zheng/td_infonce](https://github.com/chongyi-zheng/td_infonce)
While our TD InfoNCE loss in Sec. 3.2 estimates the discounted state occupancy measure for policy \(\pi(a\mid s)\), we can extend it to the goal-conditioned setting by replacing \(\pi(a\mid s)\) with \(\pi(a\mid s,g)\) and \(f(s,a,s_{t+})\) with \(f(s,a,g,s_{t+})\), resulting in a goal-conditioned TD InfoNCE estimator. This goal-conditioned TD InfoNCE objective estimates the discounted state occupancy measure of _any_ future state for a goal-conditioned policy commanding _any_ goal. Recalling that the discounted state occupancy measure corresponds to the Q function [24], the policy objective is to select actions that maximize the likelihood of the commanded goal:
\[\mathbb{E}_{\begin{subarray}{c}p_{g}(g),p_{0}(s)\\ \pi(a|s,g)\end{subarray}}\left[\log p^{\pi}(s_{t+}=g\mid s,a,g)\right]=\mathbb{E}_{\begin{subarray}{c}g\sim p_{g}(g),s\sim p_{0}(s)\\ a\sim\pi(a|s,g),s_{t+}^{(1:N)}\sim p(s_{t+})\end{subarray}}\left[\log\frac{e^{f^{*}(s,a,g,s_{t+}=g)}}{\sum_{i=1}^{N}e^{f^{*}(s,a,g,s_{t+}^{(i)})}}\right]. \tag{11}\]
In practice, we optimize both the critic function and the policy for one gradient step iteratively, using our estimated \(f\) in place of \(f^{*}\).
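For completeness, here is a matching sketch of the policy objective (Eq. 11, line 8 of Algorithm 1), again assuming precomputed logits; the matrix name `F_goal` and its batch layout are our own convention, with `F_goal[i, j] = f(s_i, a_i, g_i, s^(j))` and the commanded goal \(g_i\) placed in column \(i\).

```python
import numpy as np
from scipy.special import logsumexp

def policy_loss(F_goal):
    """Cross entropy against identity labels: the policy is trained so that
    the goal it is commanding becomes its most likely future state."""
    log_softmax = F_goal - logsumexp(F_goal, axis=1, keepdims=True)
    return -np.mean(np.diag(log_softmax))
```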
## 4 Experiments
Our experiments start with comparing goal-conditioned TD InfoNCE to prior goal-conditioned RL approaches on both online and offline goal-conditioned RL (GCRL) benchmarks. We then analyze the properties of the critic function and the policy learned by this method. Visualizing the representations learned by TD InfoNCE reveals that linear interpolation corresponds to a form of planning. Appendix E.2 ablates the difference between TD InfoNCE and a prior temporal difference method based on NCE. All experiments show means and standard deviations over three random seeds.
### Comparing to Prior Goal-conditioned RL methods
We compare TD InfoNCE to four baselines on an online GCRL benchmark [69] containing four manipulation tasks for the Fetch robot. The observations and goals of those tasks can be either a state of the robot and objects or a \(64\times 64\) RGB image. We will evaluate using both versions. The first baseline, Quasimetric Reinforcement Learning (QRL) [89], is a state-of-the-art approach that uses quasimetric models to learn the optimal goal-conditioned value functions and the corresponding policies. The second baseline is contrastive RL [24], which estimates the discounted state occupancy measure using \(\mathcal{L}_{\text{MC InfoNCE}}\) (Eq. 6). Our third baseline is the goal-conditioned behavioral cloning (GCBC) [16; 19; 32; 54; 80; 81]. We also include a comparison with an off-the-shelf actor-critic
algorithm augmented with hindsight relabeling [2, 51, 73, 75] to learn a goal-conditioned policy (DDPG + HER).
We report results in Fig. 2 (left), and defer the full learning curves to Appendix Fig. 7. These results show that TD InfoNCE matches or outperforms all baselines on all tasks, both for state and image observations. On the more challenging tasks (pick & place (state / image) and slide (state / image)), TD InfoNCE achieves a \(2\times\) median improvement relative to the strongest baseline. On the most challenging tasks, image-based pick & place and slide, TD InfoNCE is the only method achieving non-negligible success rates. We speculate that this is because TD InfoNCE estimates the discounted state occupancy measure more accurately, a hypothesis we investigate in Sec. 4.3.
Among these baselines, QRL is the strongest. Unlike TD InfoNCE, the derivation of QRL assumes deterministic dynamics. This difference motivates us to study whether TD InfoNCE continues to achieve high success rates in environments with stochastic noise. To study this, we compare TD InfoNCE to QRL on a variant of the Fetch benchmark where observations are corrupted with probability \(0.1\). As shown in Fig. 2 (right), TD InfoNCE maintains high success rates while the performance of QRL decreases significantly, suggesting that TD InfoNCE copes better with stochasticity in the environment.
### Evaluation on Offline Goal Reaching
We next study whether the good performance of TD InfoNCE transfers to the setting without any interaction with the environment (i.e., offline RL). We evaluate on AntMaze tasks from the D4RL benchmark [28]. The results in Table 1 show that TD InfoNCE outperforms most baselines on most tasks. See Appendix D.3 for details.
### Accuracy of the estimated discounted state occupancy measure
This section tests the hypothesis that our TD InfoNCE loss is more accurate and sample efficient than alternative Monte Carlo methods (namely, contrastive RL [24]) in predicting the discounted state occupancy measure. We use the tabular setting so that we can compute the ground truth. We compare TD InfoNCE to three baselines. Successor representations [15] can also be learned in a TD manner, though they can be challenging to apply beyond tabular settings. C-learning is similar
\begin{table}
\begin{tabular}{l|c c c c c c c} \hline \hline & TD InfoNCE & QRL & Contrastive RL & GCBC & DT & IQL & TD3 + BC \\ \hline umaze-v2 & **85.8 \(\pm\) 0.9** & \(77.2\pm 2.3\) & \(79.8\pm 1.4\) & \(65.4\) & \(65.6\) & **87.5** & \(78.6\) \\ umaze-diverse-v2 & **92.1 \(\pm\) 1.1** & \(79.4\pm 1.5\) & \(77.6\pm 2.8\) & \(60.9\) & \(51.2\) & \(62.2\) & \(71.4\) \\ medium-play-v2 & **87.5 \(\pm\) 1.2** & \(74.9\pm 1.9\) & \(72.6\pm 2.9\) & \(58.1\) & \(1.0\) & \(71.2\) & \(10.6\) \\ medium-diverse-v2 & **82.3 \(\pm\) 2.8** & \(73.1\pm 1.1\) & \(71.5\pm 1.3\) & \(67.3\) & \(0.6\) & \(70.0\) & \(3.0\) \\ large-play-v2 & \(47.3\pm 2.9\) & **52.3 \(\pm\) 3.2** & \(48.6\pm 4.4\) & \(32.4\) & \(0.0\) & \(39.6\) & \(0.2\) \\ large-diverse-v2 & **56.2 \(\pm\) 3.8** & \(50.9\pm 4.6\) & **54.1 \(\pm\) 5.5** & \(36.9\) & \(0.2\) & \(47.5\) & \(0.0\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation on offline D4RL AntMaze benchmarks.
Figure 2: **Evaluation on online GCRL benchmarks. _(Left)_ TD InfoNCE performs similarly to or outperforms all baselines on both state-based and image-based tasks. _(Right)_ On stochastic versions of the state-based tasks, TD InfoNCE outperforms the strongest baseline (QRL). Appendix Fig. 7 shows the learning curves.**
to TD InfoNCE in that it uses a temporal difference method to optimize a contrastive loss, but differs in using a binary cross entropy loss instead of a softmax cross entropy loss. Contrastive RL is the MC counterpart of TD InfoNCE. We design a \(5\times 5\) gridworld with 25 states and 5 actions (up, down, left, right, and no-op) and collect 100K transitions using a uniform random policy, \(\mu(a\mid s)=\textsc{Unif}(\mathcal{A})\). We evaluate each method by measuring the absolute error between the predicted probability \(\hat{p}\) and the ground truth probability \(p^{\mu}\), averaged over all pairs of \((s,a,s_{t+})\):
\[\frac{1}{|\mathcal{S}||\mathcal{A}||\mathcal{S}|}\sum_{s,a,s_{t+}}|\hat{p}(s_{ t+}\mid s,a)-p^{\mu}(s_{t+}\mid s,a)|.\]
For the three TD methods, we compute the TD target in a SARSA manner [82]. For those methods estimating a probability ratio, we convert the prediction to a probability by multiplying by the empirical state marginal. Results in Fig. 3 show that TD methods achieve lower errors than the Monte Carlo method, while TD InfoNCE converges faster than C-Learning. Appendix E.1 discusses why all methods plateau above zero.
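A small sketch of this evaluation protocol (the function and variable names are ours): the predicted ratio is converted into a probability by multiplying by the empirical state marginal, then scored with the average absolute error defined above.

```python
import numpy as np

def occupancy_error(ratio_pred, marginal, p_true):
    """ratio_pred[s, a, g] estimates p(s_t+ = g | s, a) / p(s_t+ = g);
    marginal[g] is the empirical state marginal; p_true[s, a, g] is the
    exact occupancy measure p^mu, computable in closed form via Eq. 3."""
    p_hat = ratio_pred * marginal[None, None, :]  # ratio -> probability
    return np.mean(np.abs(p_hat - p_true))        # average over (s, a, s_t+)
```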
Our next experiment studies sample efficiency. We hypothesize that the softmax in the TD InfoNCE loss may provide more learning signal than alternative methods, allowing it to achieve lower error on a fixed budget of data. To test this hypothesis, we run experiments with dataset sizes from 1K to 10M on the same gridworld, comparing TD InfoNCE to the same set of baselines. We report results in Fig. 3, with error bars showing one standard deviation, after training each approach for 50K gradient steps. These results suggest that methods based on temporal difference learning predict more accurately than the Monte Carlo method when provided with the same amount of data. Compared with its Monte Carlo counterpart, TD InfoNCE is \(1500\times\) more sample efficient (\(6.5\times 10^{3}\) vs \(10^{7}\) transitions). Compared with the only other TD method applicable in continuous settings (C-learning), TD InfoNCE achieves a comparable loss with \(130\times\) less data (\(7.7\times 10^{4}\) vs \(10^{7}\) transitions). Even compared with the strongest baseline (successor representations), which makes assumptions (tabular MDPs) that our method avoids, TD InfoNCE achieves a comparable error rate with almost \(20\times\) fewer samples (\(5.2\times 10^{5}\) vs \(10^{7}\) transitions).
### Does TD InfoNCE enable off-policy reasoning?
The explicit temporal difference update (Eq. 10) in TD InfoNCE resembles the standard Bellman backup, motivating us to study whether the resulting goal-conditioned policy is capable of performing dynamic programming with offline data. To answer this question, we conduct two experiments on the same gridworld environment as in Sec. 4.3, comparing TD InfoNCE to contrastive RL (i.e., Monte Carlo InfoNCE). Fig. 4 shows that TD InfoNCE successfully stitches together pieces of different trajectories to find a route between unseen (state, goal) pairs. Fig. 5 shows that TD InfoNCE can perform off-policy reasoning, finding a path that is shorter than the average path demonstrated in the dataset. See Appendix D.4 for details.
Figure 3: **Estimating the discounted state occupancy measure in a tabular setting.**_(Left)_ Temporal difference methods have lower errors than the Monte Carlo method. Also note that our TD InfoNCE converges as fast as the best baseline (successor representation). _(Right)_ TD InfoNCE is more data efficient than other methods. Using a dataset of size 10M, TD InfoNCE achieves an error rate \(25\%\) lower than the best baseline; TD InfoNCE also matches the performance of C-learning with \(130\times\) less data.
### Representation Interpolation
Prior work has shown that representations from self-supervised learning can reflect the geometry of the underlying data [88, 3]. We study this property for the representations learned by TD InfoNCE, interpolating between the learned representations of 29-dimensional observations from the offline AntMaze medium-play-v2 task. We visualize this interpolation in Fig. 6, using nearest-neighbors to retrieve the 29-dim observation with the most similar representation. These results suggest that the learned representations are structured so that linear interpolation corresponds to planning a path from one state to another. See Appendix E.3 for details.
## 5 Conclusion
This paper introduced a temporal difference estimator for the InfoNCE loss. Our goal-conditioned RL algorithm based on this estimator outperforms prior methods in both online and offline settings, and is capable of handling stochasticity in the environment dynamics. While we focused on a specific type of RL problem (goal-conditioned RL), in principle the TD InfoNCE estimator can be used to drive policy evaluation for arbitrary reward functions. One area for future work is to determine how it compares to prior off-policy evaluation techniques.
While we focused on evaluating the TD InfoNCE estimator on control tasks, it is worth noting that the MC InfoNCE objective has been previously applied to NLP, audio, video settings; one intriguing and important question is whether the benefits of TD learning seen on these control tasks translate into better representations in these other domains.
Limitations. One limitation of TD InfoNCE is complexity: compared with its Monte Carlo counterpart, our method is more complex and requires more hyperparameters. It is also worth noting that even TD InfoNCE struggles to solve the most challenging control tasks with image observations. On the theoretical front, our convergence proof uses a slightly modified version of our loss (replacing a sum with an expectation), a gap that would be good to resolve in future work.
Acknowledgments. We thank Ravi Tej and Wenzhe Li for discussions and feedback on drafts of the paper. We thank Raj Ghugare for sharing code. We thank Tongzhou Wang for providing performance of baselines in online GCRL experiments.
Figure 4: **Stitching trajectories in a dataset. The behavioral policy collects “Z” style trajectories. Unlike the Monte Carlo method (contrastive RL), our TD InfoNCE successfully “stitches” these trajectories together, navigating between pairs of (start, goal) states unseen in the training trajectories. Appendix Fig. 8 shows additional examples.**
Figure 5: **Searching for shortcuts in skewed datasets.**_(Left)_ Conditioned on different initial states and goals, we collect datasets with \(95\%\) long paths (dark) and \(5\%\) short paths (light). _(Center)_ TD InfoNCE infers the shortest path, _(Right)_ while contrastive RL fails to find this path. Appendix Fig. 9 shows additional examples.** |
2302.00080 | Rainbow Hamilton cycle in hypergraph system | In this paper, we develop a new rainbow Hamilton framework, which is of
independent interest, settling the problem proposed by Gupta, Hamann,
M\"{u}yesser, Parczyk, and Sgueglia when $k=3$, and draw the general conclusion
for any $k\geq3$ as follows. A $k$-graph system $\textbf{H}=\{H_i\}_{i\in[n]}$
is a family of not necessarily distinct $k$-graphs on the same $n$-vertex set
$V$, moreover, a $k$-graph $H$ on $V$ is rainbow if $E(H)\subseteq
\bigcup_{i\in[n]}E(H_i)$ and $|E(H)\cap E(H_i)|\leq1$ for $i\in[n]$. We show
that given $\gamma> 0$, sufficiently large $n$ and an $n$-vertex $k$-graph
system $\textbf{H}=\{H_i\}_{i\in[n]}$ , if
$\delta_{k-2}(H_i)\geq(5/9+\gamma)\binom{n}{2}$ for $i\in[n]$ where $k\geq3$,
then there exists a rainbow tight Hamilton cycle. This result implies the
conclusion in a single graph, which was proved by Lang and Sanhueza-Matamala
[$J. Lond. Math. Soc., 2022$], Polcyn, Reiher, R\"{o}dl and Sch\"{u}lke [$J.
Combin. Theory \ Ser. B, 2021$] independently. | Yucong Tang, Bin Wang, Guanghui Wang, Guiying Yan | 2023-01-31T20:23:00Z | http://arxiv.org/abs/2302.00080v1 | # Rainbow Hamilton cycle in hypergraph system
###### Abstract.
In this paper, we develop a new rainbow Hamilton framework, which is of independent interest, settling the problem proposed by Gupta, Hamann, Muyesser, Parczyk, and Sgueglia when \(k=3\), and draw the general conclusion for any \(k\geq 3\) as follows. A \(k\)-graph system \(\boldsymbol{H}=\{H_{i}\}_{i\in[n]}\) is a family of not necessarily distinct \(k\)-graphs on the same \(n\)-vertex set \(V\), moreover, a \(k\)-graph \(H\) on \(V\) is rainbow if \(E(H)\subseteq\bigcup_{i\in[n]}E(H_{i})\) and \(|E(H)\cap E(H_{i})|\leq 1\) for \(i\in[n]\). We show that given \(\gamma>0\), sufficiently large \(n\) and an \(n\)-vertex \(k\)-graph system \(\boldsymbol{H}=\{H_{i}\}_{i\in[n]}\), if \(\delta_{k-2}(H_{i})\geq(5/9+\gamma)\binom{n}{2}\) for \(i\in[n]\) where \(k\geq 3\), then there exists a rainbow tight Hamilton cycle. This result implies the conclusion in a single graph, which was proved by Lang and Sanhueza-Matamala [_J. Lond. Math. Soc., 2022_], Polcyn, Reiher, Rodl and Schulke [_J. Combin. Theory Ser. B, 2021_] independently.
## 1. Introduction
Finding Hamilton cycles in graphs is one of the central topics in graph theory and extremal combinatorics, with a long history. Dirac's classical theorem [13] states that every \(n\)-vertex graph with minimum degree at least \(n/2\), \(n\geq 3\), contains a Hamilton cycle. There are also many extensions of Dirac's theorem to hypergraphs.
### Hamilton cycles in hypergraphs
Let \([a,b]\), \(a,b\in\mathbb{Z}\), denote the set \(\{a,a+1,\ldots,b\}\); the set \([1,n]\) is abbreviated as \([n]\). Given a \(k\)-graph \(H\) with a set \(S\) of \(d\) vertices (\(d\in[k-1]\)), we define \(\deg_{H}(S)\) to be the number of edges containing \(S\) (the subscript \(H\) is omitted if it is clear from the context), and the relative degree \(\overline{\deg}(S)\) to be \(\deg(S)/\binom{n-d}{k-d}\). The _minimum \(d\)-degree_ \(\delta_{d}(H)\) of a \(k\)-graph \(H\) is the minimum of \(\deg(S)\) over all sets \(S\) of \(d\) vertices, and the _minimum relative \(d\)-degree_, written \(\overline{\delta}_{d}(H)\), is defined analogously with \(\overline{\deg}(S)\) in place of \(\deg(S)\).
Katona and Kierstead [22] defined a type of cycle in hypergraphs, which has been studied extensively. A \(k\)-graph is called an \(\ell\)-cycle if its vertices can be ordered cyclically such that each of its edges consists of \(k\) consecutive vertices and every two consecutive edges (in the natural order of the edges) share exactly \(\ell\) vertices. In \(k\)-graphs, a \((k-1)\)-cycle is often called a tight cycle. We say that a \(k\)-graph contains a Hamilton \(\ell\)-cycle if it contains an \(\ell\)-cycle as a spanning subhypergraph. Unless otherwise specified, a tight cycle is referred to simply as a cycle.
Katona and Kierstead [22] gave a sufficient condition for finding a Hamilton cycle in a \(k\)-graph with minimum \((k-1)\)-degree: every \(n\)-vertex \(k\)-graph \(H\) with \(\delta_{k-1}(H)>(1-1/(2k))n+4-k-5/(2k)\) admits a Hamilton cycle. They conjectured that the bound on the minimum \((k-1)\)-degree can be reduced to roughly \(n/2\), which was confirmed asymptotically by Rodl, Rucinski and Szemeredi in [41, 42]. The same authors gave the exact version for \(k=3\) in [43].
**Theorem 1.1** ([42, 43]).: _Let \(k\geq 3,\gamma>0\) and \(H\) be an \(n\)-vertex \(k\)-graph, where \(n\) is sufficiently large. If \(\delta_{k-1}(H)\geq(1/2+\gamma)n\), then \(H\) contains a Hamilton cycle. Furthermore, when \(k=3\) it is enough to have \(\delta_{2}(H)\geq\lfloor n/2\rfloor\)._
More generally, Kuhn and Osthus [25] and Zhao [46] noted that it is much more difficult to determine the minimum \(d\)-degree condition for tight Hamilton cycle for \(d\in[k-2]\). Based on the results of Cooley and Mycroft [11], Glebov, Person and Weps [16], Rodl and Rucinski [39] and Rodl, Rucinski, Schacht and Szemeredi [40], Reiher, Rodl, Rucinski, Schacht, and Szemeredi [37] gave the asymptotic version when \(d=k-2\) and \(k=3\), while Polcyn, Reiher, Rodl, Rucinski, Schacht, and Schulke [35] gave the asymptotic version when \(d=k-2\) and \(k=4\). Glebov, Person and Weps [16] proved the minimum relative \(d\)-degree condition for a tight Hamilton cycle is a function of \(k\). The best general bound was given by Lang and Sanhueza-Matamala [27], Polcyn, Reiher, Rodl and Schulke [36] independently. They proved the following theorem.
**Theorem 1.2** ([27, 36]).: _Let \(k\geq 3\), \(\gamma>0\) and \(H\) be an \(n\)-vertex \(k\)-graph, where \(n\) is sufficiently large. If \(\delta_{k-2}(H)\geq(5/9+\gamma)\binom{n}{2}\), then \(H\) contains a Hamilton cycle._
A construction due to Han and Zhao [19] showed that the constant \(5/9\) appearing in the above theorem is optimal. For more background, we refer the readers to the recent surveys of Kuhn and Osthus [25], Rodl and Rucinski [38], Simonovits and Szemeredi [45] and Zhao [46].
### Rainbow settings in hypergraph systems
A \(k\)-graph system \(\textbf{{H}}=\{H_{i}\}_{i\in[n]}\) is a family of not necessarily distinct \(k\)-graphs on the same \(n\)-vertex set \(V\), moreover, a \(k\)-graph \(H\) on \(V\) is rainbow if \(E(H)\subseteq\bigcup_{i\in[n]}E(H_{i})\) and \(|E(H)\cap E(H_{i})|\leq 1\) for \(i\in[n]\). Let \(|H|\) denote the size of the vertex set of \(H\).
The study of rainbow structures in graph systems has attracted much attention. Aharoni, DeVos, Maza, Montejano, and Samal [1] conjectured the following: for \(|V|=n\geq 3\) and an \(n\)-vertex graph system \(\textbf{{G}}=\{G_{i}\}_{i\in[n]}\) on \(V\), if \(\delta(G_{i})\geq n/2\) for each \(i\in[n]\), then there exists a rainbow Hamilton cycle with edge set \(\{e_{1},\ldots,e_{n}\}\) such that \(e_{i}\in E(G_{i})\) for \(i\in[n]\). This was recently verified asymptotically by Cheng, Wang and Zhao [9], and completely by Joos and Kim [21]. In [6], Bradshaw, Halasz, and Stacho strengthened the Joos-Kim result by showing that, given an \(n\)-vertex graph system \(\textbf{{G}}=\{G_{i}\}_{i\in[n]}\) with \(\delta(G_{i})\geq n/2\) for \(i\in[n]\), \(\textbf{{G}}\) has exponentially many rainbow Hamilton cycles. Similarly, a degree condition of Moon and Moser [34] for Hamiltonicity in bipartite graphs has been generalized to the rainbow setting by Bradshaw in [5]. Generally, for each graph \(F\), let \(\delta_{F}\) be the smallest real number \(\delta\geq 0\) such that, for each \(\varepsilon>0\), there exists some \(n_{0}\) such that, for every \(n\geq n_{0}\) with \(|F|\) dividing \(n\), if an \(n\)-vertex graph \(G\) has minimum degree at least \((\delta+\varepsilon)n\), then \(G\) contains an \(F\)-factor. Cheng, Han, Wang, and Wang [7] proved that the minimum degree bound \(\delta_{K_{r}}\) is asymptotically sufficient for the existence of a rainbow \(K_{r}\)-factor in graph systems. Montgomery, Muyesser and Pehova [33] generalized this conclusion to every \(F\) satisfying \(\delta_{F}\geq 1/2\) or containing a bridge.
In hypergraph systems, Cheng, Han, Wang, Wang and Yang [8] proved that given \(k\geq 3,\gamma>0\), sufficiently large \(n\) and an \(n\)-vertex \(k\)-graph system \(\textbf{{H}}=\{H_{i}\}_{i\in[n]}\), if \(\delta_{k-1}(H_{i})\geq(1/2+\gamma)n\) for \(i\in[n]\), then there exists a rainbow tight Hamilton cycle. There are also some works on rainbow
subgraphs, see [2, 14, 20, 23, 26, 29, 28, 30, 31, 7, 12]. Recently, Gupta, Hamann, Muyesser, Parczyk and Sgueglia [18] gave a unified approach to this problem. However, they mentioned that "there is a well-known (uncoloured) Dirac-type result whose rainbow version is missing" (given a \(3\)-graph system \(\textbf{{H}}=\{H_{i}\}_{i\in[n]}\) satisfying a minimum vertex degree condition on each \(H_{i}\), does \(H\) admit a rainbow Hamilton cycle?) and that "it would be an interesting challenge to obtain this result". This problem presents a genuine technical barrier. In this paper, we develop a new rainbow Hamilton framework, whose uncolored version was first established in [27], and prove the following general result.
**Theorem 1.3**.: _For every \(k\geq 3,\gamma>0\), there exists \(n_{0}\) such that the following holds for \(n\geq n_{0}\). Given a \(k\)-graph system \(\textbf{{H}}=\{H_{i}\}_{i\in[n]}\), if \(\delta_{k-2}(H_{i})\geq(5/9+\gamma){n\choose 2}\) for \(i\in[n]\), then **H** admits a rainbow Hamilton cycle._
### Notation and preliminaries
We call a hypergraph \(H\) a \((1,k)\)-graph if \(V(H)\) can be partitioned into \(V_{1}\) and \(V_{2}\) such that every edge contains exactly one vertex of \(V_{1}\) and \(k\) vertices of \(V_{2}\). Given a partition \(V(H)=V_{1}\cup V_{2}\), a \((1,d)\)-subset \(S\) of \(V(H)\) contains one vertex in \(V_{1}\) and \(d\) vertices in \(V_{2}\). Let \(\delta_{1,d}(H):=\min\{\deg_{H}(S):S\text{ is a }(1,d)\text{-subset of }V(H)\}\) for \(d\in[k-1]\).
A \(k\)-partite graph is a graph whose vertices are (or can be) partitioned into \(k\) different independent sets. Given a \((k+1)\)-partite \((k+1)\)-graph \(H\) with \(V(H)=V_{0}\cup V_{1}\cup\cdots\cup V_{k}\). A \((k+1)\)-uniform sequentially path \(P\) of _length_\(t\) in \(H\) is a \((k+1)\)-graph with vertex set \(V(P)=C(P)\cup I(P)\) where \(C(P)=\{c_{1},\ldots,c_{t-k+1}\}\subseteq V_{0}\), \(I(P)=\{v_{1},\ldots,v_{t}\}\subseteq V_{1}\cup\cdots\cup V_{k}\) and edge set \(\{e_{1},\ldots,e_{t-k+1}\}\) such that \(e_{i}=\{c_{i},v_{i},\ldots,v_{i+k-1}\}\) for \(i\in[t-k+1]\). Denote the length of \(P\) by \(\ell(P)\). We call \(c_{1},\ldots,c_{t-k+1}\) the _colors_ of \(P\) and \(v_{1},\ldots,v_{t}\) the _points_ of \(P\). For convenience, we use \((C(P),I(P))\) to denote the above sequentially tight path. Furthermore, if \((v_{1},\ldots,v_{t})\) is a cyclically ordered set, then we call this sequentially path a _sequentially cycle_. A \((k+1)\)-uniform sequentially walk is an ordered set of points with an ordered set of colors such that the \(i_{th}\)\(k\) consecutive points along with the \(i_{th}\) color forms an edge. Note that the points, edges and colors in a sequentially walk are allowed to be repeated. The length of a sequentially walk is its number of points.
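For concreteness, here is a toy instance of the definition above (our own illustration, not taken from [27]): for \(k=3\) and \(t=4\), a sequentially path \(P=(C(P),I(P))\) with colors \(C(P)=\{c_{1},c_{2}\}\subseteq V_{0}\) and points \(I(P)=\{v_{1},v_{2},v_{3},v_{4}\}\) has exactly \(t-k+1=2\) edges,

\[e_{1}=\{c_{1},v_{1},v_{2},v_{3}\},\qquad e_{2}=\{c_{2},v_{2},v_{3},v_{4}\},\]

so consecutive edges share \(k-1=2\) points and each edge carries its own color.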
Before we give the proof of Theorem 1.3, we use the following definitions, which are similar to those in [27].
**Definition 1.4** (Sequentially Hamilton cycle threshold).: _The minimum \((1,k-2)\)-degree threshold for sequentially Hamilton cycles, denoted by \(thc_{k-2}(k)\), is the smallest number \(\delta>0\) such that, for every \(\varepsilon>0\), there exists an \(n_{0}\in\mathbb{N}\) such that every \((1,k)\)-graph \(H\) on \([n]\cup V\) with minimum degree \(\delta_{1,k-2}(H)\geq(\delta+\varepsilon){n\choose 2}\) contains a sequentially Hamilton cycle where \(|V|=n\geq n_{0}\)._
**Definition 1.5** (Sequentially tight connectivity).: _A subgraph \(H^{\prime}\) of a \((1,k)\)-graph \(H\) is sequentially tightly connected, if any two edges of \(H^{\prime}\) can be connected by a sequentially walk. A sequentially tight component of \(H\) is an edge maximal sequentially tightly connected subgraph._
Given \(\mathbf{b}\): \(V(H)\rightarrow[0,1]\), we define a \(\mathbf{b}\)-_fractional matching_ to be a function \(\mathbf{w}\): \(E(H)\rightarrow[0,1]\) such that \(\sum_{e:v\in e}\mathbf{w}(e)\leq\mathbf{b}(v)\) for every vertex \(v\in V(H)\). Moreover, if equality holds for every vertex, then we call \(\mathbf{w}\) perfect. Denote the maximum size of a \(\mathbf{b}\)-fractional matching by \(\nu(H,\mathbf{b})=\max_{\mathbf{w}}\sum_{e\in E(H)}\mathbf{w}(e)\), where \(\mathbf{w}\) ranges over all \(\mathbf{b}\)-fractional matchings. It is well known that perfect matchings are closely related to their fractional counterparts. In particular, when \(\mathbf{b}\equiv 1\), a \(\mathbf{b}\)-fractional matching is simply called a _fractional matching_. The _density_ of a \(\mathbf{b}\)-fractional matching is \(\sum_{e\in E(H)}\mathbf{w}(e)/|V(H)|\).
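Since a maximum \(\mathbf{b}\)-fractional matching is the optimum of a linear program, it can be computed directly for small instances; the following short Python sketch does so with `scipy`. The toy \(3\)-graph and the vertex weights here are ours, chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 3-graph on vertices {0,...,4} (illustrative edges and weights b(v)).
edges = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (0, 3, 4)]
b = np.array([1.0, 0.9, 1.0, 0.8, 1.0])

# Vertex-edge incidence matrix: A[v, j] = 1 iff vertex v lies in edge j.
A = np.zeros((len(b), len(edges)))
for j, e in enumerate(edges):
    for v in e:
        A[v, j] = 1.0

# Maximize sum_e w(e) subject to sum_{e: v in e} w(e) <= b(v), 0 <= w <= 1.
res = linprog(c=-np.ones(len(edges)), A_ub=A, b_ub=b,
              bounds=[(0, 1)] * len(edges), method="highs")
nu = -res.fun  # the value nu(H, b); res.x holds the optimal edge weights
```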
Besides, we require the following characterization. Given a \(k\)-graph \(H\), we say that \(H\) is \(\gamma\)-_robustly matchable_ if the following holds. For every vertex weight \(\mathbf{b}\): \(V(H)\to[1-\gamma,1]\), there is an edge weight \(\mathbf{w}\): \(E(H)\to[0,1]\) with \(\sum_{e:v\in e}\mathbf{w}(e)=\mathbf{b}(v)/(k-1)\) for every vertex \(v\in V(H)\). Note that a \(\gamma\)-robustly matchable \(k\)-graph \(H\) admits a \(\mathbf{b}\)-fractional matching of size \(\sum_{v\in V(H)}\mathbf{b}(v)/k(k-1)\) for every vertex weighting \(\mathbf{b}\): \(V(H)\to[1-\gamma,1]\). The following definition plays an important role in our proof.
**Definition 1.6** (Link graph).: _Consider a \((1,k)\)-graph \(H\) on \(V(H)=[n]\cup V\) where \(|V|=n\) and a set \(S\) of \((1,\ell)\)-subset of \(V(H)\). We define the link \((k-\ell)\)-graph of \(S\) in \(H\) as the graph \(L_{H}(S)\) with vertex set \(V\) and edge set \(\{X:X\cup S\in E(H)\}\) for \(\ell\in[0,k-1]\). If \(H\) is clear, then we simply write \(L(S)\)._
Let \(H=(V,E)\) be a \(k\)-graph and \(V^{\prime}\subseteq V\). The _induced subgraph_ \(H[V^{\prime}]\) of \(H\) is the \(k\)-graph with vertex set \(V^{\prime}\) whose edge set consists precisely of the edges of \(H\) with all \(k\) vertices in \(V^{\prime}\). We usually abbreviate \(H[V^{\prime}]\) by \(H^{\prime}\).
**Definition 1.7** (Rainbow Hamilton framework).: _Let \(\alpha,\gamma,\delta\) be positive constants. Suppose \(R\) is a \((1,k)\)-graph on \([t]\cup V\) where \(|V|=t\), we call a subgraph \(H\) of \(R\) an \((\alpha,\gamma,\delta)\)-rainbow Hamilton framework, if \(H\) has the following properties._
1. \(H_{i}:=H[\{i\}\cup V]\) _is sequentially tightly connected for_ \(i\in[t]\)_,_
2. \(H_{i}\) _contains a sequentially closed walk of length 1 mod_ \(k\) _for_ \(i\in[t]\)_,_
3. \(H_{W_{i}}:=H[[t(i-1)/k+1,ti/k]\cup V]\) _is_ \(\gamma\)_-robustly matchable for_ \(i\in[k]\)_,_
4. _For every color_ \(i\in[t]\)_, there are at least_ \((1-\alpha)t\) _points_ \(v\in V\) _such that_ \(\{i,v\}\) _has relative_ \((1,1)\)_-degree at least_ \(1-\delta+\gamma\)_,_
5. \(L_{H}(\{i\})\) _and_ \(L_{H}(\{j\})\) _intersect in an edge for each_ \(i,j\in[t]\)_._
We write \(x\ll y\) to mean that for any \(y\in(0,1]\), there exists an \(x_{0}\in(0,1)\) such that for all \(x\leq x_{0}\), the subsequent statements hold. Hierarchies with more constants are defined similarly to be read from right to left.
**Definition 1.8** (Rainbow Hamilton framework threshold).: _The minimum \((1,k-2)\)-degree threshold for \((1,k)\)-uniform rainbow Hamilton framework, denoted by \(rhf_{k-2}(k)\), is the smallest value of \(\delta\) such that the following holds._
_Suppose \(\varepsilon,\alpha,\gamma,\mu>0\) and \(t\in\mathbb{N}\) with \(1/t\ll\varepsilon\ll\alpha\ll\gamma\ll\mu\). If \(R\) is a \((1,k)\)-graph on \([t]\cup V\) where \(|V|=t\), with minimum relative \((1,k-2)\)-degree at least \(\delta+\mu\) and a set \(I\subseteq E(R)\) of at most \(\varepsilon t\binom{t}{k}\) perturbed edges, then \(R\) contains an \((\alpha,\gamma,\delta)\)-rainbow Hamilton framework \(H\) that avoids the edges of \(I\)._
We transform the problem of bounding the sequentially Hamilton cycle threshold to bound the rainbow Hamilton framework threshold.
**Theorem 1.9** (Framework Theorem).: _For \(k\geq 3\), we have \(thc_{k-2}(k)\leq rhf_{k-2}(k)\)._
Let the _shadow graph_\(\partial_{j}(H)\) of \((1,k)\)-graph \(H\) at level \(j\) be the \((1,j)\)-graph on \([n]\cup V\) whose edges are \((1,j)\)-sets contained in the edges of \(H\) for \(j\in[k]\).
**Definition 1.10** (Vicinity).: _Given a \((1,k)\)-graph \(R\) on \([t]\cup V\), we say that \(\mathcal{C}_{i}=\{C_{S}\subseteq L(S):S\in\partial_{k-2}(R)\text{ and }i\in S\}\) for each \(i\in[t]\) is a \((k-2)\)-vicinity. We define the \((1,k)\)-graph \(H\) generated by \(\mathcal{C}_{i}\) as the subgraph of \(R\) with vertex set \(V(H)=\{i\}\cup V\) and edge set_
\[E(H)=\bigcup_{i\in S,S\in\partial_{k-2}(R)}\{A\cup S:A\in C_{S}\}.\]
Besides, we need the following structures.
**Definition 1.11** (Switcher).: _A switcher in a graph \(G\) is an edge \(ab\) such that \(a\) and \(b\) shares a common neighbor in \(G\)._
Note that a switcher, together with a common neighbor of its endpoints, forms a triangle.
**Definition 1.12** (Arc).: _Let \(R_{i}\) be a \((1,k)\)-graph on \(\{i\}\cup V\) with \((k-2)\)-vicinity \(\mathcal{C}_{i}=\{C_{S}:S\in\partial_{k-2}(R_{i})\}\). We say that a \((1,k+1)\)-tuple \((i,v_{1},\dots,v_{k+1})\) is an arc for \(\mathcal{C}_{i}\) if the following holds._
* \(\{i,v_{1},\dots,v_{k-2}\}\in\partial_{k-2}(R_{i})\) _with_ \(\{v_{k-1},v_{k}\}\in C_{\{i,v_{1},\dots,v_{k-2}\}}\)_._
* \(\{i,v_{2},\dots,v_{k-1}\}\in\partial_{k-2}(R_{i})\) _with_ \(\{v_{k},v_{k+1}\}\in C_{\{i,v_{2},\dots,v_{k-1}\}}\)_._
**Definition 1.13** (Rainbow Hamilton vicinity).: _Let \(\gamma,\delta>0\). Suppose that \(R\) is a \((1,k)\)-graph on \([t]\cup V\), let \(R_{i}:=R[\{i\}\cup V]\). We say that a family \(\mathcal{C}=\{\mathcal{C}_{i}:i\in[t]\}\) of \((k-2)\)-vicinities where \(\mathcal{C}_{i}=\{C_{S}:S\in\partial_{k-2}(R_{i})\}\) is \((\gamma,\delta)\)-rainbow Hamilton if for any \(S,S^{\prime}\in\partial_{k-2}(R_{i})\) and \(T\in\partial_{k-2}(R_{j})\) where \(i\neq j\), the followings hold,_
1. \(C_{S}\) _is tightly connected,_
2. \(C_{S}\) _and_ \(C_{S^{\prime}}\) _intersect in an edge,_
3. \(C_{S}\) _has a switcher and the vicinity_ \(\mathcal{C}_{i}\) _has an arc for_ \(i\in[t]\)_,_
4. \(C_{S}\) _has a fractional matching of density_ \((1+1/k)(1/(k+1)+\gamma)\)_,_
5. \(C_{S}\) _has edge density at least_ \(1-\delta+\gamma\)_,_
6. \(C_{S}\) _and_ \(C_{T}\) _intersect in an edge._
**Definition 1.14** (Perturbed degree).: _Let \(\alpha,\delta>0\). We say that a \((1,k)\)-graph \(R\) has \(\alpha\)-perturbed minimum relative \((1,k-2)\)-degree at least \(\delta\) if the followings hold for \(j\in[k-2]\)._
1. _every edge of_ \(\partial_{j}(R)\) _has relative degree at least_ \(\delta\) _in_ \(R\)_,_
2. \(\overline{\partial_{j}(R)}\) _has edge density at most_ \(\alpha\)_, where_ \(\overline{\partial_{j}(R)}\) _denotes the complement of_ \(\partial_{j}(R)\)_,_
3. _each_ \((1,j-1)\)_-tuple of_ \(\partial_{j-1}(R)\) _has relative degree less than_ \(\alpha\) _in_ \(\overline{\partial_{j}(R)}\)_._
**Definition 1.15** (Rainbow Hamilton vicinity threshold).: _The minimum \((1,k-2)\)-degree threshold for \((1,k)\)-uniform rainbow Hamilton vicinities, denoted by \(rhv_{k-2}(k)\), is the smallest value \(\delta>0\) such that the following holds. Let \(\alpha,\gamma,\mu>0\), \(t\in\mathbb{N}\) with \(1/t\ll\alpha\ll\gamma\ll\mu\) and \(R\) be a \((1,k)\)-graph on \([t]\cup V\). If each \(R_{i}:=R[\{i\}\cup V]\) has \(\alpha\)-perturbed minimum relative \((1,k-2)\)-degree at least \(\delta+\mu\) for \(i\in[t]\), then \(R\) admits a family of \((\gamma,\delta)\)-rainbow Hamilton \((k-2)\)-vicinities._
**Theorem 1.16** (Vicinity Theorem).: _For \(k\geq 3\), \(rhf_{k-2}(k)\leq rhv_{k-2}(k)\)._
Combining Theorem 1.16 with Theorem 1.9, we just need to prove the following theorem, and we can obtain Theorem 1.3.
**Theorem 1.17**.: _For \(k\geq 3\), \(rhv_{k-2}(k)\leq 5/9\)._
We use the following concentration inequalities.
**Proposition 1.18** (Chernoff's inequality [4]).: _Suppose that \(X\) has the binomial distribution and \(0<a<3/2\), then \(\Pr(|X-\mathbb{E}X|\geq a\mathbb{E}X)\leq 2e^{-a^{2}\mathbb{E}X/3}\)._
**Proposition 1.19** (McDiarmid's inequality [32]).: _Suppose \(X_{1},\ldots,X_{m}\) are independent Bernoulli random variables and \(b_{i}\in[0,B]\) for \(i\in[m]\). Suppose that \(X\) is a real-valued random variable determined by \(X_{1},\ldots,X_{m}\) such that altering the value of \(X_{i}\) changes \(X\) by at most \(b_{i}\) for \(i\in[m]\). For all \(\lambda>0\), we have_
\[\Pr(|X-\mathbb{E}X|>\lambda)\leq 2\exp\left(\frac{-2\lambda^{2}}{B\Sigma_{i=1 }^{m}b_{i}}\right).\]
### Organisation of the paper
The paper is organised as follows. In Section 2, we show how a rainbow Hamilton vicinity yields a rainbow Hamilton framework. In Section 3, we show that the minimum degree condition guarantees a rainbow Hamilton vicinity. We review the hypergraph regularity method in Section 4. In Section 5, Theorem 1.9 is proved via the absorption method and the almost cover lemma, whose details appear in Section 6 and Section 7, respectively. We conclude the paper with a discussion in Section 8. For the proofs of the absorption lemma and the almost cover lemma, we develop a new rainbow Hamilton framework, and it would be of great interest to tackle the rainbow Hamilton cycle embedding problem under other conditions. In the proof of the absorption lemma, a method widely popularised by Rodl, Rucinski and Szemeredi [41], our innovation is a strategy for absorbing a color set and a point set simultaneously: an absorber can be divided into two parts, one for the color set and the other for the point set. The almost cover lemma is obtained by regularity tools. However, connecting the end-pairs of paths arising in the proof requires more involved changes. The traditional connecting lemma asserts that every pair of disjoint pairs of vertices is joined by a relatively short tight path, but there might be pairs of vertices that are not contained in any hyperedge at all, as the following example shows. Consider a \(3\)-graph system \(\boldsymbol{H}=\{H_{i}\}_{i\in[n]}\) with \(V(H_{i})=V=X\cup Y\) where \(|X|<\frac{1}{3}n\), in which each \(H_{i}\) has edge set \(E=\{e\in V^{(3)}:|X\cap e|\neq 2\}\); it is easy to check that this \(3\)-graph system satisfies the degree condition of Theorem 1.3, but every tight path starting with a pair of vertices in \(X\) is bound to stay in \(X\). We overcome this obstacle in Section 6.
## 2. From vicinity to framework
Our goal in this section is to prove Theorem 1.16. We need the following lemmas.
**Lemma 2.1**.: _Let \(R_{i}\) be a \((1,k)\)-graph on \(\{i\}\cup V\) with a \((k-2)\)-vicinity \(\mathcal{C}_{i}=\{C_{S}:S\in\partial_{k-2}(R_{i})\}\) for \(i\in[t]\). For every \(S,S^{\prime}\in\partial_{k-2}(R_{i})\), if the vicinity \(\mathcal{C}_{i}\) has an arc for \(i\in[t]\), \(C_{S}\) and \(C_{S^{\prime}}\) intersect, \(C_{S}\) is tightly connected and has a switcher, then the vertex spanning subgraph \(H\) of \(R_{i}\) generated by \(\mathcal{C}_{i}\) is sequentially tightly connected and contains a sequentially closed walk of length 1 mod \(k\)._
**Lemma 2.2**.: _Let \(\gamma,\alpha,\delta>0\) such that \(1/t\ll\alpha,\gamma\ll 1/k\). Let \(R\) be a \((1,k)\)-graph on \([t]\cup V\) where \(|V|=t\) and each \(R_{i}\) has \(\alpha\)-perturbed minimum relative \((1,k-2)\)-degree at least \(\delta\). Let
\(\mathcal{C}=\{\mathcal{C}_{i}:i\in[t]\}\) be a family of \((k-2)\)-vicinities where \(\mathcal{C}_{i}=\{C_{S}:S\in\partial_{k-2}(R_{i})\}\). If for every \(S\in\partial_{k-2}(R)\), \(C_{S}\) has a fractional matching of density \((1+1/k)(1/(k+1)+\gamma)\), then the graph \(H\subseteq R\) generated by \(\mathcal{C}_{W_{i}}:=\{\mathcal{C}_{j}:j\in[t(i-1)/k+1,ti/k]\}\) is \(\gamma\)-robustly matchable for each \(i\in[k]\)._
**Lemma 2.3**.: _Let \(t,k\in\mathbb{N},i\in[t]\) and \(\delta,\alpha,\varepsilon>0\) with \(1/t\ll\varepsilon\ll\alpha\ll\delta,1/k\). Let \(R_{i}\) be a \((1,k)\)-graph on \(\{i\}\cup V\) with minimum relative \((1,k-2)\)-degree at least \(\delta\) where \(|V|=t\). Let \(I\) be a subgraph of \(R_{i}\) with edge density at most \(\varepsilon\), there exists a vertex spanning subgraph \(R^{\prime}_{i}\subseteq R_{i}-I\) of \(\alpha\)-perturbed minimum relative \((1,k-2)\)-degree at least \(\delta-\alpha\)._
Proof of Theorem 1.16.: Let \(\delta=rhv_{k-2}(k)\) and \(\varepsilon,\alpha,\gamma>0\) with \(t_{0}\in\mathbb{N}\) such that
\[1/t\ll\varepsilon\ll\alpha\ll\alpha^{\prime}\ll\gamma\ll\mu\ll\delta,1/k.\]
Moreover, the constants \(t,\varepsilon,\alpha,\mu\) are compatible with the constant hierarchy given by Definition 1.15, the constants \(t,\varepsilon,2\alpha,\mu\) satisfy the conditions of Lemma 2.2, and \(t,\varepsilon,\alpha,\delta\) satisfy the conditions of Lemma 2.3.
Given a \((1,k)\)-graph \(R_{i}\) on \(\{i\}\cup V\) with minimum relative \((1,k-2)\)-degree at least \(\delta+2\mu\) and a set \(I\) of at most \(\varepsilon\binom{t}{k}\) perturbed edges. We start by selecting a subgraph of \(R_{i}\). By Lemma 2.3, we obtain a vertex spanning subgraph \(R^{\prime}_{i}\subseteq R_{i}-I\) of \(\alpha\)-perturbed minimum relative \((1,k-2)\)-degree at least \(\delta+\mu\).
By Definition 1.15, \(R^{\prime}:=\bigcup_{i\in[t]}R^{\prime}_{i}\) has a family of \((2\gamma,\delta)\)-rainbow Hamilton \((k-2)\)-vicinities \(\mathcal{C}=\{\mathcal{C}_{i}:i\in[t]\}\) where \(\mathcal{C}_{i}=\{C_{S}:S\in\partial_{k-2}(R_{i})\}\). Each \(\mathcal{C}_{i}\) generates a \((1,k)\)-graph \(G_{i}\). Let \(H=\bigcup_{i\in[t]}G_{i}\). Note that \(G_{i}\) does not contain the edges of \(I\) and \(V(G_{i})=V(R^{\prime}_{i})\). By Lemmas 2.1 and 2.2, \(H\) satisfies (F1)-(F3). For \(k\geq 4\), by repeatedly applying Definition 1.14, we deduce that all but at most \(\alpha t\) \((1,1)\)-sets of \(V(R^{\prime}_{i})\) are contained in at least \((1-2\alpha)^{k-3}\binom{|V^{\prime}|-1}{k-3}\geq(1-2(k-3)\alpha)\binom{|V^{\prime}|-1}{k-3}\) many \((1,k-2)\)-sets in \(\partial_{k-2}(R^{\prime}_{i})\). Note that \(\partial_{k-2}(R^{\prime}_{i})=\partial_{k-2}(G_{i})\). This implies that all but at most \(\alpha t\) \((1,1)\)-tuples of \(V(G_{i})\) have relative degree at least \(1-2(k-3)\alpha\) in \(\partial_{k-2}(G_{i})\). Moreover, every \((1,k-2)\)-set in \(\partial_{k-2}(G_{i})\) has relative degree at least \(1-\delta+2\gamma\) in \(G_{i}\), since \(G_{i}\) is generated from a \((2\gamma,\delta)\)-rainbow Hamilton \((k-2)\)-vicinity (Definition 1.13). Thus, for each color \(i\in[t]\), there are at least \((1-\alpha)t\) points \(v\in V\) such that \(\{i,v\}\) has relative \((1,1)\)-degree at least \(1-\delta+\gamma\), which implies (F4) for \(k\geq 4\). For \(k=3\), by Definition 1.13, every \((1,1)\)-set has relative degree at least \(1-\delta+2\gamma\) in \(G_{i}\), which implies (F4) for \(k=3\). Besides, it is obvious that (V6) implies (F5), so we obtain an \((\alpha,\gamma,\delta)\)-framework, as desired.
### The Proof of Lemma 2.1
We define a _directed edge_ in a \(k\)-graph to be a \(k\)-tuple whose vertices correspond to an underlying edge. Note that the directed edges \((a,b,c),(b,c,a)\) corresponds to the same underlying edge \(\{a,b,c\}\). Given a \(k\)-graph system \(\textbf{{H}}=\{H_{i}\}_{i\in[n]}\) on vertex set \(V\), we consider the hypergraph \(H\) with vertex set \([n]\cup V\) and edge set \(\{\{i\}\cup e:e\in E(H_{i}),i\in[n]\}\). Define a directed edge to be a \((1,k)\)-tuple \((i,v_{1},\ldots,v_{k})\) with \(k\) points corresponding to an underlying edge \(\{v_{1},\ldots,v_{k}\}\) in \(H_{i}\). Given a \(k\)-tuple \(\overrightarrow{S}:=(v_{1},\ldots,v_{k})\), abbreviated as \(v_{1}\cdots v_{k}\), we use \(\overrightarrow{S}\subseteq V\) to mean that the corresponding \(k\)-set of \(\overrightarrow{S}\) is a subset of \(V\). Similarly, given a family \(F\) of \(k\)-sets and a \(k\)-tuple \(\overrightarrow{S}\), we use \(\overrightarrow{S}\in F\) to denote that the corresponding \(k\)-set of \(\overrightarrow{S}\) is an element of \(F\). Let
\(\overrightarrow{S}=(v_{1},\ldots,v_{k})\), \(\overrightarrow{S}\setminus\{v_{i}\}\) is the \((k-1)\)-tuple \((v_{1},\ldots,v_{i-1},v_{i+1},\ldots,v_{k})\) for \(i\in[k]\), \(\{v^{\prime}_{i}\}\cup\overrightarrow{S}\setminus\{v_{i}\}\) is the \(k\)-tuple \((v_{1},\ldots,v_{i-1},v^{\prime}_{i},v_{i+1},\ldots,v_{k})\).
**Definition 2.4** (Strong Connectivity).: _A hypergraph is called strongly connected, if every two directed edges lie on a sequentially walk._
**Claim 2.5**.: _If \(G\) is a tightly connected graph containing a switcher, then \(G\) is strongly connected._
Proof.: Let \(ab\) be a switcher in \(G\); by Definition 1.11, \(a\) and \(b\) share a common neighbor \(c\). It suffices to show that \((a,b)\) and \((b,a)\) lie on a common tight walk \(W\): indeed, given any two directed edges \(D_{1}\) and \(D_{2}\) of \(G\), tight connectivity yields walks \(W_{1}\) and \(W_{2}\) starting from \(D_{1}\) and \(D_{2}\), respectively, and ending with \(\{a,b\}\), so that \(W_{1}WW_{2}\) (traversing \(W_{2}\) in reverse) is a tight walk starting from \(D_{1}\) and ending with \(D_{2}\). Finally, \(aba\) is a tight walk from \((a,b)\) to \((b,a)\), as desired.
Next, we want to show that switchers can control the length of sequentially walks.
**Proposition 2.6**.: _If \(G\) is a tightly connected graph containing a switcher, then \(G\) has a closed tight walk of odd length._
**Proposition 2.7**.: _Let \(R\) be a \((1,k)\)-graph with a subgraph \(H\) which is generated by \(\mathcal{C}_{i}\). Suppose that \(\mathcal{C}_{i}\) satisfies the conditions of Lemma 2.1, for any \((1,k-2)\)-tuple \(\overrightarrow{S}\in\partial_{k-2}(H)\) and two directed edges \(D_{1},D_{2}\in C_{\overrightarrow{S}}\), there exists a sequentially walk \(W\) of length 0 mod \(k\) in \(H\) starting from \(\overrightarrow{S}D_{1}\) and ending with \(\overrightarrow{S}D_{2}\)._
Proof.: Let \(\mathcal{C}_{i}=\{C_{\overrightarrow{S}}:\overrightarrow{S}\in\partial_{k-2}(R)\text{ and }i\in\overrightarrow{S}\}\) and \(\overrightarrow{S}=\{i\}\cup\overrightarrow{S}^{\prime}\) where \(\overrightarrow{S}^{\prime}\) is a \((k-2)\)-tuple. By Proposition 2.6, there is a closed tight walk \(W_{1}\) of odd length in \(C_{\overrightarrow{S}}\). By Claim 2.5, there is a tight walk \(W_{2}\) starting from \(D_{1}\), ending with \(D_{2}\) and containing \(W_{1}\) as a subwalk. Let \(\ell(W_{2})=p\). We obtain \(W_{3}\) from \(W_{2}\) by replacing \(W_{1}\) with \(p+1\) mod 2 copies of \(W_{1}\) (that is, we delete \(W_{1}\) if \(p\) is odd and keep it if \(p\) is even); since \(W_{1}\) has odd length, \(W_{3}\) is a tight walk of even length in \(C_{\overrightarrow{S}}\) starting from \(D_{1}\) and ending with \(D_{2}\).
Suppose that \(W_{3}=(a_{1},a_{2},\ldots,a_{2m})\), so that \(D_{1}=(a_{1},a_{2})\) and \(D_{2}=(a_{2m-1},a_{2m})\). Note that \((i\ldots i,\overrightarrow{S}^{\prime}a_{1}a_{2}\overrightarrow{S}^{\prime}a_{3}a_{4}\cdots\overrightarrow{S}^{\prime}a_{2m-1}a_{2m})\) is a sequentially walk in \(H\). Moreover, it has length 0 mod \(k\), as desired.
**Proposition 2.8**.: _Let \(R\) be a \((1,k)\)-graph with a subgraph \(H\) that is generated by \(\mathcal{C}_{i}\). Suppose \(\mathcal{C}_{i}\) satisfies the conditions of Lemma 2.1, we consider directed edges \(\overrightarrow{S},\overrightarrow{T}\in\partial_{k-2}(H)\) and \(D_{1}\in C_{\overrightarrow{S}}\), \(D_{2}\in C_{\overrightarrow{T}}\). If \(\overrightarrow{S}\) and \(\overrightarrow{T}\) differ in exactly one coordinate, then there is sequentially walk of length 0 mod \(k\) in \(H\) starting from \(\overrightarrow{S}D_{1}\) and ending with \(\overrightarrow{T}D_{2}\)._
Proof.: Let \(\overrightarrow{S}=(i,v_{1}\ldots v_{i}\ldots v_{k-2})\) and \(\overrightarrow{T}=(i,v_{1}\ldots u_{i}\ldots v_{k-2})\) where \(u_{i}\neq v_{i}\). By Definition 1.13, there is a directed edge \(D_{3}\) in \(C_{\overrightarrow{S}}\cap C_{\overrightarrow{T}}\), thus \((ii,\overrightarrow{S}\setminus\{i\}D_{3}\overrightarrow{T}\setminus\{i\})\) is a sequentially walk in \(H\). By Proposition 2.7, there is a sequentially walk \(W_{1}\) of length 0 mod \(k\) starting from \(\overrightarrow{S}D_{1}\) and ending with \(\overrightarrow{S}D_{3}\), \(W_{2}\) of length 0 mod \(k\) starting from \(\overrightarrow{T}D_{3}\) and ending with \(\overrightarrow{T}D_{2}\), \((C(W_{1})C(W_{2}),I(W_{1})I(W_{2}))\) is the desired walk.
**Proposition 2.9**.: _Let \(R\) be a \((1,k)\)-graph with a subgraph \(H\) that is generated by \(\mathcal{C}_{i}\). Suppose \(\mathcal{C}_{i}\) satisfies the conditions of Lemma 2.1, we consider directed edges \(\overrightarrow{S},\overrightarrow{T}\in\partial_{k-2}(H)\) and \(D_{1}\in C_{\overrightarrow{S}}\), \(D_{2}\in C_{\overrightarrow{T}}\). There is a sequentially walk of length 0 mod \(k\) in \(H\) starting from \(\overrightarrow{S}D_{1}\) and ending with \(\overrightarrow{T}D_{2}\)._
Proof.: Let \(r\in[k-2]\) be the number of indices where \(\overrightarrow{S}\) and \(\overrightarrow{T}\) differ. If \(r=1\), the result follows from Proposition 2.8. Suppose the result is known for \(r-1\). By Definition 1.13, there exists an edge \(pq\) in \(C_{\overrightarrow{S}}\cap C_{\overrightarrow{T}}\).
Suppose that \(\overrightarrow{S}\) and \(\overrightarrow{T}\) differ in the \(i\)th coordinate; replacing the \(i\)th coordinate vertex of each with \(p\), we obtain \(\overrightarrow{S}^{\prime}\) and \(\overrightarrow{T}^{\prime}\). Note that \(\overrightarrow{S}^{\prime}\), \(\overrightarrow{T}^{\prime}\in\partial_{k-2}(H)\). Choose \(D_{1}^{\prime}\in C_{\overrightarrow{S}^{\prime}}\). By Proposition 2.8, there is a sequentially walk \(W_{1}\) of length 0 mod \(k\) from \(\overrightarrow{S}D_{1}\) to \(\overrightarrow{S}^{\prime}D_{1}^{\prime}\); similarly, there is a sequentially walk \(W_{3}\) of length 0 mod \(k\) from \(\overrightarrow{T}^{\prime}D_{2}^{\prime}\) to \(\overrightarrow{T}D_{2}\) where \(D_{2}^{\prime}\in C_{\overrightarrow{T}^{\prime}}\). Since \(\overrightarrow{S}^{\prime}\) and \(\overrightarrow{T}^{\prime}\) differ in at most \(r-1\) coordinates, by induction there is a sequentially walk \(W_{2}\) from \(\overrightarrow{S}^{\prime}D_{1}^{\prime}\) to \(\overrightarrow{T}^{\prime}D_{2}^{\prime}\) of length 0 mod \(k\). Thus, \((C(W_{1})C(W_{2})C(W_{3}),I(W_{1})I(W_{2})I(W_{3}))\) is the desired walk.
Proof of Lemma 2.1.: Consider any two edges \(X\) and \(Y\) of \(H\). Since \(H\) is generated by \(\mathcal{C}_{i}\), we may write \(X=S\cup A\) and \(Y=T\cup B\) where \(A\in C_{S}\) and \(B\in C_{T}\). The desired walk connecting them is then given by Proposition 2.9.
Next, we need to show that \(H\) contains a closed walk of length 1 mod \(k\). Since \(\mathcal{C}_{i}\) admits an arc \((i,v_{1},\ldots,v_{k+1})\), by Proposition 2.9, there is a sequentially walk \(W\) of length 0 mod \(k\) from \((i,v_{2},\ldots,v_{k+1})\) to \((i,v_{1},\ldots,v_{k})\). Thus, \((C(W)i,I(W)v_{k+1})\) is a closed walk of length 1 mod \(k\).
### The proof of Lemma 2.2
In this section, we prove Lemma 2.2. The following claim appears in [27]; we use a corollary of it in this paper.
**Claim 2.10**.: [27] _Let \(H\) be a \(k\)-graph and \(\boldsymbol{b}:V(H)\to[0,1]\). Suppose that there exists \(m\leq\sum_{v\in V(H)}\boldsymbol{b}(v)/k\) such that for every \(v\in V(H)\), the link graph \(L_{H}(\{v\})\) has a \(\boldsymbol{b}\)-fractional matching of size \(m\), then \(H\) has a \(\boldsymbol{b}\)-fractional matching of size \(m\)._
**Corollary 2.1**.: _Let \(H\) be a \(k\)-graph, \(\alpha\in[0,1)\) and \(\boldsymbol{b}:V(H)\to[0,1]\). Suppose that there exists \(m\leq\sum_{v\in V(H)}\boldsymbol{b}(v)/k\) such that for all but at most \(\alpha|V(H)|\) isolated vertices \(v\), the link graph \(L_{H}(\{v\})\) has a \(\boldsymbol{b}\)-fractional matching of size \(m\), then \(H\) has a \(\boldsymbol{b}\)-fractional matching of size \(m\)._
Proof.: We first delete the isolated vertices of \(H\) and obtain a subgraph \(H^{\prime}\) of \(H\). Then \(L_{H^{\prime}}(\{v\})\) has a \(\boldsymbol{b}\)-fractional matching of size \(m\) for every \(v\in V(H^{\prime})\), so by Claim 2.10, \(H^{\prime}\) has a \(\boldsymbol{b}\)-fractional matching \(\boldsymbol{w}\) of size \(m\). Assign an arbitrary weight \(\boldsymbol{b}^{\prime}(u)\in[0,1]\) to each isolated vertex \(u\) of \(H\), and set \(\boldsymbol{b}^{\prime}(v)=\boldsymbol{b}(v)\) for each non-isolated vertex \(v\) of \(H\). Then \(H\) has a \(\boldsymbol{b}^{\prime}\)-fractional matching \(\boldsymbol{w}\) of size \(m\), since \(\sum_{e\ni u}\boldsymbol{w}(e)=0\) for any isolated vertex \(u\) and \(E(H^{\prime})=E(H)\).
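To illustrate Corollary 2.1 (a toy instance with \(k=2\), included only for orientation and not part of the formal argument), let \(H\) be the graph on \(\{u,v,w,x\}\) with \(E(H)=\{uv,vw\}\) and \(\boldsymbol{b}\equiv 1\). Then
\[\boldsymbol{w}(uv)=\boldsymbol{w}(vw)=\tfrac{1}{2}\]
is a \(\boldsymbol{b}\)-fractional matching of size \(1\), since \(\sum_{e\ni v}\boldsymbol{w}(e)=1=\boldsymbol{b}(v)\) and the corresponding sums at \(u\) and \(w\) equal \(1/2\). The isolated vertex \(x\) satisfies \(\sum_{e\ni x}\boldsymbol{w}(e)=0\) regardless of the weight \(\boldsymbol{b}^{\prime}(x)\) assigned to it, which is exactly why the proof may extend \(\boldsymbol{b}\) to the isolated vertices for free.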
**Proposition 2.11**.: _Let \(R\) be a \((1,k)\)-graph on \([n/k]\cup V\) where \(|V|=n\), \(\gamma>0\), \(\alpha\in[0,1)\), and \(\boldsymbol{b}:[n/k]\cup V\to[1-\gamma,1]\). Suppose that there exists \(m\leq\sum_{v\in V(R)}\boldsymbol{b}(v)/(k+1)\) such that for each \(c\in[n/k]\) and all but at most \(\alpha n\) vertices \(v\in V\), the link graph \(L_{R}(\{c,v\})\) has a \(\boldsymbol{b}\)-fractional matching of size \(m\). Then \(R\) has a \(\boldsymbol{b}\)-fractional matching of size \(m/k\)._
Proof.: Applying Corollary 2.1 with \(H=L_{R}(\{c\})\) for each \(c\in[n/k]\), we obtain that \(L_{R}(\{c\})\) has a \(\mathbf{b}\)-fractional matching of size \(m\) for every \(c\in[n/k]\).
Next, we want to construct a \(\mathbf{b}\)-fractional matching of size \(m/k\) for \(R\). For each \(c\in[n/k]\), let \(\mathbf{w}_{c}:E(L_{R}(\{c\}))\to[0,1]\) be such that \(\sum_{e\ni v,e\in L_{R}(\{c\})}\mathbf{w}_{c}(e)\leq\mathbf{b}(v)\) for every \(v\) and \(\sum_{e\in L_{R}(\{c\})}\mathbf{w}_{c}(e)=m\). Let \(\mathbf{w}(f)=\frac{1}{n}\mathbf{w}_{c}(e)\) for \(e\in L_{R}(\{c\})\) and \(f=e\cup\{c\}\), \(c\in[n/k]\). Thus, we have \(\sum_{f\in E(R)}\mathbf{w}(f)=\sum_{c\in[n/k]}\sum_{e\in L_{R}(\{c\})}\frac{1}{n}\mathbf{w}_{c}(e)=\frac{m}{k}\). It is easy to see that \(\sum_{f\ni c}\mathbf{w}(f)=\sum_{e\in L_{R}(\{c\})}\frac{1}{n}\mathbf{w}_{c}(e)=\frac{m}{n}\leq\frac{1}{k}\leq\mathbf{b}(c)\). Moreover, \(\sum_{f\ni v}\mathbf{w}(f)=\sum_{c\in[n/k]}\sum_{e\ni v,e\in L_{R}(\{c\})}\frac{1}{n}\mathbf{w}_{c}(e)\leq\sum_{c\in[n/k]}\frac{1}{n}\mathbf{b}(v)=\frac{\mathbf{b}(v)}{k}\leq\mathbf{b}(v)\) for \(v\in V\), as desired.
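As a sanity check on the scaling in the above proof (an illustrative computation, not part of the formal argument), take \(k=2\), \(n=4\), \(\boldsymbol{b}\equiv 1\) and \(m=2\leq\sum_{v\in V(R)}\boldsymbol{b}(v)/(k+1)=6/3\). Each link \(L_{R}(\{c\})\), \(c\in[2]\), carries a fractional matching \(\mathbf{w}_{c}\) of size \(2\), and \(\mathbf{w}(f)=\frac{1}{4}\mathbf{w}_{c}(e)\) yields
\[\sum_{f\in E(R)}\mathbf{w}(f)=2\cdot\frac{2}{4}=1=\frac{m}{k},\qquad\sum_{f\ni c}\mathbf{w}(f)=\frac{2}{4}\leq\mathbf{b}(c),\qquad\sum_{f\ni v}\mathbf{w}(f)\leq\frac{2}{4}=\frac{\mathbf{b}(v)}{k},\]
so all three constraints are satisfied.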
We use the following results of [27] directly.
**Proposition 2.12**.: _[_27_]_ _Let \(H\) be a \(k\)-graph and \(m\leq v(H)/k\). If for every vertex \(v\) of \(V(H)\), \(L_{H}(\{v\})\) has a fractional matching of size \(m\), then \(H\) has a fractional matching of size \(m\)._
**Proposition 2.13**.: _Let \(d\in[k-2]\), \(k\geq 3\), and \(\alpha,\gamma,\delta>0\) such that \(\alpha,\gamma\ll 1/k\). Let \(R\) be a \((1,k)\)-graph on \([t]\cup V\) with \(\alpha\)-perturbed minimum \((1,k-2)\)-degree \(\delta\), where \(|V|=t\). If for every \(S\in\partial_{d}(R)\) the link graph \(L(S)\) contains a fractional matching of size at least \((1+1/k)(1/(k+1)+\gamma)t\), then for every \(S^{\prime}\in\partial_{1}(R)\) the link graph \(L(S^{\prime})\) contains a fractional matching of size at least \((1+1/k)(1/(k+1)+\gamma)t\)._
Proof.: We prove it by induction on \(d\). The base case \(d=1\) is trivial. Suppose that, for a given \(d\in[2,k-2]\), the conclusion holds for all \(d^{\prime}<d\). Let \(S\subseteq V(R)\) be a \((1,d-1)\)-set in \(\partial_{d-1}(R)\). Consider any vertex \(s^{\prime}\) in \(\partial_{1}(L_{R}(S))\); then \(S\cup\{s^{\prime}\}\) is an edge in \(\partial_{d}(R)\). By assumption, \(L_{R}(S\cup\{s^{\prime}\})\) has a fractional matching of size at least \((1+1/k)(1/(k+1)+\gamma)t\). Thus \(L_{R^{\prime}}(\{s^{\prime}\})\) contains a fractional matching of size at least \((1+1/k)(1/(k+1)+\gamma)t\) for any vertex \(s^{\prime}\) of \(V\), where \(R^{\prime}\) is the subgraph of \(L_{R}(S)\) induced on the non-isolated vertices of \(L_{R}(S)\).
By Definition 1.14, \(S\) has at most \(\alpha t\) neighbors in \(\overline{\partial_{d}(R)}\). It follows that \(v(R^{\prime})=|\partial_{1}(L_{R}(S))|\geq(1-\alpha)t\) and \((1+1/k)(1/(k+1)+\gamma)t\leq v(R^{\prime})/(k-d+1)\), since \(\alpha,\gamma\ll 1/k\). By Proposition 2.12, together with the fact that \(L_{R^{\prime}}(\{s^{\prime}\})\) contains a fractional matching of size at least \((1+1/k)(1/(k+1)+\gamma)t\) for any vertex \(s^{\prime}\) of \(V\), we obtain that \(R^{\prime}\) (and thus \(L_{R}(S)\)) contains a fractional matching of size \((1+1/k)(1/(k+1)+\gamma)t\). Since \(S\) was arbitrary, for any \(S\in\partial_{d-1}(R)\), \(L_{R}(S)\) contains a fractional matching of size \((1+1/k)(1/(k+1)+\gamma)t\). Hence, we are done by the induction hypothesis.
_The proof of Lemma 2.2._ Suppose that \(V(H)=[t/k]\cup V^{\prime}\) where \(|V^{\prime}|=t\). By assumption, \(C_{S}\) contains a fractional matching of size \((1+1/k)(1/(k+1)+\gamma)t\) for every \(S\in\partial_{k-2}(H)\), and \(C_{S}\) is a subgraph of \(L_{H}(S)\). By Proposition 2.13, \(L_{H}(\{i,v\})\) contains a fractional matching of size \((1+1/k)(1/(k+1)+\gamma)t\) for every \(\{i,v\}\in\partial_{1}(H)\).
We want to show that \(H\) is \(\gamma\)-robustly matchable. Given a vertex weight \(\mathbf{b}:[t/k]\cup V^{\prime}\to[1-\gamma,1]\), we have to find a \(\mathbf{b}\)-fractional matching \(\mathbf{w}\) such that \(\sum_{e\ni v}\mathbf{w}(e)=\mathbf{b}(v)/k\) for any vertex \(v\in V(H)\); that is, we need to find a \(\mathbf{b}\)-fractional matching of size \(\sum_{v\in V(H)}\mathbf{b}(v)/k(k+1)\). Given \(i\in[t/k]\), there are at most \(\alpha t\) isolated \((1,1)\)-tuples by Definition 1.14. For any non-isolated \((1,1)\)-tuple \((i,v)\) of \(V(H)\), let \(\mathbf{x}\) be a fractional matching in \(L_{H}(\{i,v\})\) of size at least \((1+1/k)(1/(k+1)+\gamma)t\) and let \(\mathbf{w}^{\prime}=(1-\gamma)\mathbf{x}\); since \(1-\gamma\leq\mathbf{b}(v)\) for any \(v\in V(H)\), \(\mathbf{w}^{\prime}\) is a \(\mathbf{b}\)-fractional matching in \(L_{H}(\{i,v\})\). Moreover, \(\mathbf{w}^{\prime}\) has size at least \((1-\gamma)(1+1/k)(1/(k+1)+\gamma)t\geq(1+1/k)t/(k+1)\geq\sum_{v\in V(H)}\mathbf{b}(v)/(k+1)\), since \(1/t\ll\gamma\ll 1/k\). We may assume that \(\mathbf{w}^{\prime}\) has size exactly \(\sum_{v\in V(H)}\mathbf{b}(v)/(k+1)\). By Proposition 2.11, we obtain that \(H\) has a \(\mathbf{b}\)-fractional matching of size \(\sum_{v\in V(H)}\mathbf{b}(v)/k(k+1)\), as desired.
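For the inequality \((1-\gamma)(1+1/k)(1/(k+1)+\gamma)t\geq(1+1/k)t/(k+1)\) used above, note the elementary verification
\[(1-\gamma)\left(\frac{1}{k+1}+\gamma\right)-\frac{1}{k+1}=\gamma\left(\frac{k}{k+1}-\gamma\right)\geq 0,\]
which holds whenever \(\gamma\leq k/(k+1)\), and in particular under our assumption \(\gamma\ll 1/k\).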
### The proof of Lemma 2.3
We directly use the following claim, which appears in [27].
**Claim 2.14**.: _[_27_]_ _Let \(t,d,k\) be integers with \(d\in[k-1]\) and \(\delta,\varepsilon,\alpha>0\) with \(1/t\ll\varepsilon\ll\alpha\leq\delta,1/k\). Let \(R\) be a \(k\)-graph on \(t\) vertices with minimum relative \(d\)-degree \(\overline{\delta_{d}(R)}\geq\delta\). Let \(I\) be a subgraph of \(R\) of edge density at most \(\varepsilon\). Then there exists a spanning subgraph \(R^{\prime}\subseteq R-I\) of \(\alpha\)-perturbed minimum relative \(d\)-degree at least \(\delta-\alpha\)._
The \((1,k)\)-graph \(R_{i}\) on \(\{i\}\cup V\) with minimum relative \((1,k-2)\)-degree at least \(\delta\) is equivalent to a \(k\)-graph \(R^{\prime}_{i}\) on \(V\) with minimum relative \((k-2)\)-degree at least \(\delta\). Thus, by Claim 2.14, we obtain Lemma 2.3.
## 3. Obtaining vicinity
In this section, we determine the \((k-2)\)-vicinity threshold of \((1,k)\)-graphs. Lovász's formulation of the Kruskal-Katona theorem states that, for any \(x>0\), if \(G\) is a \(k\)-graph with \(e(G)\geq\binom{x}{k}\) edges, then \(e_{j}(G)\geq\binom{x}{j}\) for every \(j\in[k]\) (Theorem 2.14 in [15]). By approximating the binomial coefficients, the authors of [27] deduce the following variant.
**Lemma 3.1** (Kruskal-Katona theorem).: _[_27_]_ _Let \(1/t\ll\varepsilon\ll 1/k\) and let \(G\) be a graph on \(t\) vertices with edge density \(\delta\). Then \(\partial(G)\) has at least \((\delta^{1/2}-\varepsilon)t\) vertices._
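For instance (a numerical illustration only), if \(G\) has edge density \(\delta=4/9\), then Lemma 3.1 guarantees that \(\partial(G)\) covers at least \((2/3-\varepsilon)t\) vertices; it is precisely this instance of the bound, via the identity \(4/9+(4/9)^{1/2}=1+1/9>1\), that is invoked in the proof of Lemma 1.17 below.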
**Proposition 3.2**.: _Let \(t\in\mathbb{N}\) and \(\mu,\delta,\varepsilon>0\) with \(1/t\ll\varepsilon\ll\delta\) and \(\delta+\delta^{1/2}>1+\varepsilon\). Let \(R_{i}\) be a \((1,k)\)-graph on \(\{i\}\cup V\) where \(|V|=t\) with a subgraph that is generated by a \((k-2)\)-vicinity \(\mathcal{C}_{i}\). Suppose that each \(C_{S}\in\mathcal{C}_{i}\) has edge density at least \(\delta+\mu\). Then \(\mathcal{C}_{i}\) admits an arc._
Proof.: Consider an arbitrary set \(S=\{i,v_{1},\ldots,v_{k-2}\}\in\partial_{k-2}(R_{i})\). By averaging, there is a vertex \(v_{k-1}\) with relative vertex degree at least \(\delta\) in \(C_{S}\). Set \(S^{\prime}=\{i,v_{2},\ldots,v_{k-1}\}\); then \(S^{\prime}\in\partial_{k-2}(R_{i})\). Moreover, \(C_{S^{\prime}}-\{v_{1}\}\) has edge density at least \(\delta+\mu/2\), since deleting a single vertex changes the edge density by \(O(1/t)\leq\mu/2\). By Lemma 3.1, \(\partial(C_{S^{\prime}}-\{v_{1}\})\) has at least \((\delta^{1/2}-\varepsilon)t\) vertices.
By the choice of \(v_{k-1}\) and the pigeonhole principle, \(\partial(C_{S^{\prime}}-\{v_{1}\})\) and \(L(\{i,v_{1},\ldots,v_{k-1}\})\) must share a common vertex \(v_{k}\). Since \(v_{k}\in\partial(C_{S^{\prime}}-\{v_{1}\})\), there is another vertex \(v_{k+1}\) such that \(\{v_{k},v_{k+1}\}\in C_{S^{\prime}}-\{v_{1}\}\). Thus, \(\{i,v_{1},\ldots,v_{k+1}\}\) is an arc.
We use the following result of [27].
**Lemma 3.3**.: _[_27_]_ _Let \(1/t\ll\gamma\ll\mu\) and suppose that \(L_{1}\) and \(L_{2}\) are graphs on a common vertex set of size \(t\) such that each of \(L_{1}\), \(L_{2}\) has edge density at least \(5/9+\mu\). For \(i\in[2]\), let \(C_{i}\) be a tight component of \(L_{i}\) with a maximum number of edges. We have_
1. \(C_{1}\) _and_ \(C_{2}\) _have an edge in common,_
2. \(C_{i}\) _has a switcher for_ \(i\in[2]\)_,_
3. \(C_{i}\) _has a fractional matching of density_ \(1/3+\gamma\) _for_ \(i\in[2]\)_,_
4. \(C_{i}\) _has edge density at least_ \(4/9+\gamma\) _for_ \(i\in[2]\)_._
_The proof of Lemma 1.17._ Let \(\alpha,\gamma,\delta,\mu>0\) with
\[1/t\ll\alpha\ll\gamma\ll\delta\ll\mu\ll 5/9.\]
Consider a \((1,k)\)-graph \(R\) on \([t]\cup V\) where \(|V|=t\) and each \(R_{i}:=R[\{i\}\cup V]\) has \(\alpha\)-perturbed minimum relative \((1,k-2)\)-degree at least \(5/9+\mu\). For every \(S\in\partial_{k-2}(R)\), let \(C_{S}\) be a tight component of \(L(S)\) with a maximum number of edges and \(\mathcal{C}_{i}=\{C_{S}:S\in\partial_{k-2}(R)\) and \(i\in S\}\). By the choice of \(C_{S}\), (V1) holds clearly. By Lemma 3.3, \(\mathcal{C}_{i}\) satisfies (V2), (V4), (V5) and (V6); in particular, every \(C_{S}\in\mathcal{C}_{i}\) contains a switcher. By Proposition 3.2, \(\mathcal{C}_{i}\) contains an arc, since \(4/9+(4/9)^{1/2}=1+1/9>1\). Thus \(\mathcal{C}=\{\mathcal{C}_{i}:i\in[t]\}\) satisfies (V3), as desired.
## 4. Tools
### Regular Complexes
A hypergraph \(H=(V,E)\) is a _complex_ if its edge set is down-closed, meaning that whenever \(e\in E\) and \(e^{\prime}\subseteq e\), we have \(e^{\prime}\in E\). A \(k\)-complex is a complex where all edges have size at most \(k\). Given a complex \(H\), we use \(H^{(i)}\) to denote the \(i\)-graph obtained by taking all vertices of \(H\) and edges of size \(i\). Denote the number of edges of size \(i\) in \(H\) by \(e_{i}(H)\).
Let \(\mathcal{P}\) partition a vertex set \(V\) into parts \(V_{1},\ldots,V_{s}\). Then we say that a subset \(S\subseteq V\) is \(\mathcal{P}\)_-partite_ if \(|S\cap V_{i}|\leq 1\) for every \(i\in[s]\). Similarly, we say that a hypergraph \(\mathcal{H}\) is \(\mathcal{P}\)_-partite_ if all of its edges are \(\mathcal{P}\)-partite. In this case we refer to the parts of \(\mathcal{P}\) as the _vertex classes_ of \(\mathcal{H}\). We say that a hypergraph \(\mathcal{H}\) is \(s\)_-partite_ if there is some partition \(\mathcal{P}\) of \(V(\mathcal{H})\) into \(s\) parts for which \(\mathcal{H}\) is \(\mathcal{P}\)-partite.
Let \(\mathcal{H}\) be a \(\mathcal{P}\)-partite complex. Then for any \(A\subseteq[s]\) we write \(V_{A}\) for \(\bigcup_{i\in A}V_{i}\). The _index_ of a \(\mathcal{P}\)-partite set \(S\subseteq V\) is \(i(S):=\{i\in[s]:|S\cap V_{i}|=1\}\). We write \(\mathcal{H}_{A}\) to denote the collection of edges in \(\mathcal{H}\) with index \(A\); that is, \(\mathcal{H}_{A}\) can be regarded as an \(|A|\)-partite \(|A|\)-graph on vertex set \(V_{A}\). Similarly, if \(X\) is a \(j\)-set of indices of vertex classes of \(\mathcal{H}\), we write \(\mathcal{H}_{X}\) for the \(j\)-partite \(j\)-uniform subgraph of \(\mathcal{H}^{(j)}\) induced by \(\bigcup_{i\in X}V_{i}\). We write \(\mathcal{H}_{X<}\) for the \(j\)-partite hypergraph with vertex set \(\bigcup_{i\in X}V_{i}\) and edge set \(\bigcup_{X^{\prime}\subset X}\mathcal{H}_{X^{\prime}}\).
Let \(H_{i}\) be any \(i\)-partite \(i\)-graph and \(H_{i-1}\) be any \(i\)-partite \((i-1)\)-graph on a common vertex set \(V\) partitioned into \(i\) common vertex classes. Denote by \(K_{i}(H_{i-1})\) the \(i\)-partite \(i\)-graph on \(V\) whose edges are all \(i\)-sets which are supported on \(H_{i-1}\) (i.e. induce a copy of the complete \((i-1)\)-graph \(K_{i}^{i-1}\) on \(i\) vertices in \(H_{i-1}\)). The _density of_ \(H_{i}\) _with respect to_ \(H_{i-1}\) is defined to be
\[d(H_{i}|H_{i-1}):=\frac{|K_{i}(H_{i-1})\cap H_{i}|}{|K_{i}(H_{i-1})|}\]
if \(|K_{i}(H_{i-1})|>0\). For convenience, we take \(d(H_{i}|H_{i-1}):=0\) if \(|K_{i}(H_{i-1})|=0\). When \(H_{i-1}\) is clear from the context, we simply refer to \(d(H_{i}|H_{i-1})\) as the _relative density of_ \(H_{i}\). More generally, if \(\mathbf{Q}:=(Q_{1},\ldots,Q_{r})\) is a collection of \(r\) not necessarily disjoint subgraphs of \(H_{i-1}\), we define
\[K_{i}(\mathbf{Q}):=\bigcup_{j=1}^{r}K_{i}(Q_{j})\]
\[d(H_{i}|\mathbf{Q}):=\frac{|K_{i}(\mathbf{Q})\cap H_{i}|}{|K_{i}(\mathbf{Q})|}\]
if \(|K_{i}(\mathbf{Q})|>0\). Similarly, we take \(d(H_{i}|\mathbf{Q}):=0\) if \(|K_{i}(\mathbf{Q})|=0\). We say that \(H_{i}\) is \((d_{i},\varepsilon,r)\)-_regular with respect to \(H_{i-1}\)_ if we have \(d(H_{i}|\mathbf{Q})=d_{i}\pm\varepsilon\) for every \(r\)-set \(\mathbf{Q}\) of subgraphs of \(H_{i-1}\) such that \(|K_{i}(\mathbf{Q})|>\varepsilon|K_{i}(H_{i-1})|\). We refer to \((d_{i},\varepsilon,1)\)-regularity simply as \((d_{i},\varepsilon)\)-_regularity_. We say that \(H_{i}\) is \((\varepsilon,r)\)-regular with respect to \(H_{i-1}\) to mean that there exists some \(d_{i}\) for which \(H_{i}\) is \((d_{i},\varepsilon,r)\)-regular with respect to \(H_{i-1}\). Given an \(i\)-graph \(G\) whose vertex set contains that of \(H_{i-1}\), we say that \(G\) is \((d_{i},\varepsilon,r)\)-_regular with respect to \(H_{i-1}\)_ if the \(i\)-partite subgraph of \(G\) induced by the vertex classes of \(H_{i-1}\) is \((d_{i},\varepsilon,r)\)-regular with respect to \(H_{i-1}\). Similarly, when \(H_{i-1}\) is clear from the context, we refer to the relative density of this \(i\)-partite subgraph of \(G\) with respect to \(H_{i-1}\) as the _relative density of_\(G\).
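As a simple illustration of relative density (not needed for the formal development), take \(i=2\) and let \(H_{1}\) be the complete \(1\)-graph on two vertex classes \(V_{1},V_{2}\) of sizes \(m_{1},m_{2}\). Then \(K_{2}(H_{1})\) consists of all \(m_{1}m_{2}\) crossing pairs, so
\[d(H_{2}|H_{1})=\frac{|H_{2}|}{m_{1}m_{2}}\]
is just the usual bipartite edge density of \(H_{2}\); in general, \(d(H_{i}|H_{i-1})\) measures density relative to the \(i\)-sets supported on \(H_{i-1}\) rather than to all \(i\)-partite \(i\)-sets.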
Now let \(\mathcal{H}\) be an \(s\)-partite \(k\)-complex on vertex classes \(V_{1},\ldots,V_{s}\), where \(s\geq k\geq 3\). Since \(\mathcal{H}\) is a complex, if \(e\in\mathcal{H}^{(i)}\) for some \(i\in[2,k]\), then the vertices of \(e\) induce a copy of \(K_{i}^{i-1}\) in \(\mathcal{H}^{(i-1)}\). This means that for any index \(A\in{[s]\choose i}\), the density \(d(\mathcal{H}^{(i)}[V_{A}]|\mathcal{H}^{(i-1)}[V_{A}])\) can be regarded as the proportion of 'possible edges' of \(\mathcal{H}^{(i)}[V_{A}]\) which are indeed edges. We say that \(\mathcal{H}\) is \((d_{2},\ldots,d_{k},\varepsilon_{k},\varepsilon,r)\)-_regular_ if
1. for \(i\in[2,k-1]\) and \(A\in{[s]\choose i}\), the induced subgraph \(\mathcal{H}^{(i)}[V_{A}]\) is \((d_{i},\varepsilon)\)-regular with respect to \(\mathcal{H}^{(i-1)}[V_{A}]\) and
2. for any \(A\in{[s]\choose k}\), the induced subgraph \(\mathcal{H}^{(k)}[V_{A}]\) is \((d_{k},\varepsilon_{k},r)\)-regular with respect to \(\mathcal{H}^{(k-1)}[V_{A}]\).
### Regular Slices
The Regular Slice Lemma says that any \(k\)-graph \(G\) admits a regular slice. Informally speaking, a regular slice of \(G\) is a partite \((k-1)\)-complex \(\mathcal{J}\) whose vertex classes have equal size, whose subgraphs \(\mathcal{J}^{(2)},\ldots,\mathcal{J}^{(k-1)}\) satisfy certain regularity properties and which moreover has the property that \(G\) is regular with respect to \(\mathcal{J}^{(k-1)}\). The first two of these conditions are formalised in the following definition: we say that a \((k-1)\)-complex \(\mathcal{J}\) is \((t_{0},t_{1},\varepsilon)\)-_equitable_, if it has the following properties.
1. \(\mathcal{J}\) is \(\mathcal{P}\)-partite for a \(\mathcal{P}\) which partitions \(V(\mathcal{J})\) into \(t\) parts of equal size, where \(t_{0}\leq t\leq t_{1}\). We refer to \(\mathcal{P}\) as the _ground partition_ of \(\mathcal{J}\), and to the parts of \(\mathcal{P}\) as the _clusters_ of \(\mathcal{J}\).
2. There exists a _density vector_\(\mathbf{d}=(d_{2},\ldots,d_{k-1})\) such that for \(i\in[2,k-1]\) we have \(d_{i}\geq 1/t_{1}\) and \(1/d_{i}\in\mathbb{N}\) and for each \(A\subseteq\mathcal{P}\) of size \(i\), the \(i\)-graph \(\mathcal{J}^{(i)}[V_{A}]\) induced on \(V_{A}\) is \((d_{i},\varepsilon)\)-regular with respect to \(\mathcal{J}^{(i-1)}[V_{A}]\).
If \(\mathcal{J}\) has density vector \(\mathbf{d}=(d_{2},\ldots,d_{k-1})\), then we will say that \(\mathcal{J}\) is \((d_{2},\ldots,d_{k-1},\varepsilon)\)-regular, or \((\mathbf{d},\varepsilon)\)-_regular_, for short. For any \(k\)-set \(X\) of clusters of \(\mathcal{J}\), we write \(\hat{\mathcal{J}}_{X}\) for the \(k\)-partite \((k-1)\)-graph \(\mathcal{J}_{X<}^{(k-1)}\). Given a \((t_{0},t_{1},\varepsilon)\)-equitable \((k-1)\)-complex \(\mathcal{J}\), a \(k\)-set \(X\) of clusters of \(\mathcal{J}\) and a \(k\)-graph \(G\) on \(V(\mathcal{J})\), we say that \(G\) is \((d,\varepsilon_{k},r)\)-_regular with respect to \(X\)_ if \(G\) is \((d,\varepsilon_{k},r)\)-regular with respect to \(\hat{\mathcal{J}}_{X}\). We will also say that \(G\) is \((\varepsilon_{k},r)\)-_regular with respect to \(X\)_ if there exists a \(d\) such that \(G\) is \((d,\varepsilon_{k},r)\)-regular with respect to \(X\). We write \(d_{\mathcal{J},G}^{*}(X)\) for the relative density of \(G\) with respect to \(\hat{\mathcal{J}}_{X}\), or simply \(d^{*}(X)\) if \(\mathcal{J}\) and \(G\) are clear from the context, which will always be the case in applications.
We now give the key definition of the Regular Slice Lemma.
**Definition 4.1** (Regular Slice).: _Given \(\varepsilon,\varepsilon_{k}>0\), \(r,t_{0},t_{1}\in\mathbb{N}\), a \(k\)-graph \(G\) and a \((k-1)\)-complex \(\mathcal{J}\) on \(V(G)\), we call \(\mathcal{J}\) a \((t_{0},t_{1},\varepsilon,\varepsilon_{k},r)\)-regular slice for \(G\) if \(\mathcal{J}\) is \((t_{0},t_{1},\varepsilon)\)-equitable and \(G\) is \((\varepsilon_{k},r)\)-regular with respect to all but at most \(\varepsilon_{k}{t\choose k}\) of the \(k\)-sets of clusters of \(\mathcal{J}\), where \(t\) is the number of clusters of \(\mathcal{J}\)._
It will sometimes be convenient not to specify all parameters; we may write that \(\mathcal{J}\) is \((\cdot,\cdot,\varepsilon)\)-equitable or is a \((\cdot,\cdot,\varepsilon,\varepsilon_{k},r)\)-regular slice for \(G\) if we do not wish to specify \(t_{0}\) and \(t_{1}\).
Given a regular slice \(\mathcal{J}\) for a \(k\)-graph \(G\), it will be important to know the relative densities \(d^{*}(X)\) for \(k\)-sets \(X\) of clusters of \(\mathcal{J}\). To keep track of these we make the following definition.
**Definition 4.2** (Weighted reduced \(k\)-graph).: _Let \(G\) be a \((1,k)\)-graph and let \(\mathcal{J}\) be a \((t_{0},t_{1},\varepsilon,\varepsilon_{k+1},r)\)-regular slice for \(G\). We define the weighted reduced \((1,k)\)-graph of \(G\), denoted by \(R(G)\), to be the complete weighted \((1,k)\)-graph whose vertices are the clusters of \(\mathcal{J}\) and where each edge \(X\) is given weight \(d^{*}(X)\)._
_Similarly, for \(d_{k+1}>0\), we define the \(d_{k+1}\)-reduced \((1,k)\)-graph \(R_{d_{k+1}}(G)\) to be the (unweighted) \((1,k)\)-graph whose vertices are the clusters of \(\mathcal{J}\) and whose edges are all \((1,k)\)-sets \(X\) of clusters of \(\mathcal{J}\) such that \(G\) is \((\varepsilon_{k+1},r)\)-regular with respect to \(X\) and \(d^{*}(X)\geq d_{k+1}\)._
Given a \((1,k)\)-graph \(G\) on \([n]\cup V\), a vertex \(v\in V\) and a color \(c\in[n]\), recall that \(\deg_{G}(c,v)\) is the number of edges of \(G\) containing \(c\) and \(v\), and \(\overline{\deg}_{G}(c,v)=\deg_{G}(c,v)/{n-1\choose k-1}\) is the relative degree of the pair \((c,v)\) in \(G\). Given a \((t_{0},t_{1},\varepsilon)\)-equitable \((k-1)\)-complex \(\mathcal{J}\) with \(V(\mathcal{J})\subseteq V(G)\), the _rooted degree_ of \((c,v)\) _supported by_ \(\mathcal{J}\), written \(\deg_{G}((c,v);\mathcal{J})\), is defined as the number of \((k-1)\)-sets \(T\) in \(\mathcal{J}^{(k-1)}\) such that \(T\cup\{c,v\}\) forms an edge in \(G\). The relative degree \(\overline{\deg}_{G}((c,v);\mathcal{J})\) of \((c,v)\) in \(G\) supported by \(\mathcal{J}\) is then defined as \(\overline{\deg}_{G}((c,v);\mathcal{J})=\deg_{G}((c,v);\mathcal{J})/e(\mathcal{J}^{(k-1)})\).
**Definition 4.3** (Representative rooted degree).: _Let \(\eta>0\), \(G\) be a \((1,k)\)-graph on \([n]\cup V\) and \(\mathcal{J}\) be a \((t_{0},t_{1},\varepsilon,\varepsilon_{k+1})\)-regular slice for \(G\). We say that \(\mathcal{J}\) is \(\eta\)-rooted-degree-representative if for any vertex \(v\in V\) and any color \(c\in[n]\), we have_
\[|\overline{\deg}_{G}((c,v);\mathcal{J})-\overline{\deg}_{G}(c,v)|<\eta.\]
**Definition 4.4** (Regular Setup).: _Let \(k,m,r,t\in\mathbb{N}\) and \(\varepsilon,\varepsilon_{k+1},d_{2},\ldots,d_{k+1}>0\). We say that \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) is a \((k,m,t,\varepsilon,\varepsilon_{k+1},r,d_{2},\ldots,d_{k+1})\)-regular setup, if_
1. \(G\) _is a_ \((1,k)\)_-graph on_ \([n]\cup V\) _where_ \(|V|=n\) _and_ \(G_{\mathcal{J}}\subseteq G\)_,_
2. \(\mathcal{J}\) _is a_ \((\cdot,\cdot,\varepsilon,\varepsilon_{k+1},r)\)_-regular slice for_ \(G\) _with density vector_ \(\textbf{d}=(d_{2},\ldots,d_{k})\)_,_
3. \(\mathcal{P}\) _is the ground partition of_ \(\mathcal{J}\)_, which partitions_ \([n]\cup V\) _into_ \(2t\) _clusters, each of size_ \(m\)_,_
4. \(R\) _is a subgraph of_ \(R_{d_{k+1}}(G)\)_,_
5. _for each_ \(X\in E(R)\)_,_ \(G_{\mathcal{J}}\) _is_ \((d_{k+1},\varepsilon_{k+1},r)\)_-regular with respect to_ \(X\)_._
_We further say that \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) is representative if_
1. \(\mathcal{J}\) _is_ \(\varepsilon_{k+1}\)_-rooted-degree-representative._
The Regular Slice Lemma of [3] ensures that every sufficiently large \(k\)-graph has a representative regular slice. Given the existence of a regular slice, it is easy to derive the existence of a regular setup. In [27], the result is stated directly in terms of regular setups, and it follows as an easy corollary for any sufficiently large \((1,k)\)-graph.
**Lemma 4.5** (Regular Setup Lemma [3]).: _Let \(k,t_{0}\) be positive integers, \(\delta,\mu,\alpha,\varepsilon_{k+1},d_{k+1}\) be positive and \(r:\mathbb{N}\to\mathbb{N}\) and \(\varepsilon:\mathbb{N}\to(0,1]\) be functions. Suppose that_
\[k\geq 3,\varepsilon_{k+1}\ll\alpha,d_{k+1}\ll\mu.\]
_Then there exist \(t_{1}\) and \(m_{0}\) such that the following holds for all \(n\geq 2t_{1}m_{0}\). Let \(G\) be a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\), and suppose that \(G\) has minimum relative \((1,k-2)\)-degree \(\overline{\delta}_{1,k-2}(G)\geq\delta+\mu\). There exist \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and a representative \((k,m,2t,\varepsilon(t_{1}),\varepsilon_{k+1},r(t_{1}),\textbf{d})\)-regular setup \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R_{d_{k+1}})\) with \(t\in[t_{0},t_{1}]\), \(m_{0}\leq m\) and \(n\leq(1+\alpha)mt\). Moreover, there is a \((1,k)\)-graph \(I\) on \(\mathcal{P}\) of edge density at most \(\varepsilon_{k+1}\) such that \(R=R_{d_{k+1}}\cup I\) has minimum relative \((1,k-2)\)-degree at least \(\delta+\mu/2\)._
### Tools for working with regularity
Let \(\mathcal{G}\) be a \(\mathcal{P}\)-partite \(k\)-complex and \(X_{1},\ldots,X_{s}\in\mathcal{P}\) (possibly with repetition), and let \(\mathcal{H}\) be a \(k\)-complex on vertices \([s]\). We say that an embedding of \(\mathcal{H}\) in \(\mathcal{G}\) is _partition-respecting_ if \(i\) is embedded in \(X_{i}\) for \(i\in[s]\). Note that this notion depends on the labeling of \(V(\mathcal{H})\) and the clusters \(X_{1},\ldots,X_{s}\), but these will always be clear from the context. Denote the set of labelled partition-respecting copies of \(\mathcal{H}\) in \(\mathcal{G}\) by \(\mathcal{H}_{\mathcal{G}}[\bigcup_{i\in[s]}X_{i}]\). When \(X_{1},\ldots,X_{s}\) are clear, we denote it by \(\mathcal{H}_{\mathcal{G}}\) for short. Recall that \(e_{i}(\mathcal{H})\) denotes the number of edges of size \(i\) in \(\mathcal{H}\).
The following lemma states that the number of copies of a given small \(k\)-graph inside a regular slice is roughly what we would expect if the edges inside a regular slice were chosen randomly. There are many different versions in [3, 10, 17, 44]; we use the version from [10].
**Lemma 4.6** (Counting Lemma [10]).: _Let \(k,s,r,m\) be positive integers and let \(\beta,d_{2},\ldots,d_{k},\varepsilon,\varepsilon_{k}\) be positive constants such that \(1/d_{i}\in\mathbb{N}\) for \(i\in[2,k-1]\) and such that_
\[1/m\ll 1/r,\varepsilon\ll\varepsilon_{k},d_{2},\ldots,d_{k-1},\]
\[\varepsilon_{k}\ll\beta,d_{k},1/s.\]
_Let \(H\) be a \(k\)-graph on \([s]\) and let \(\mathcal{H}\) be the \(k\)-complex generated by the down-closure of \(H\). Let \(\textbf{d}=(d_{2},\ldots,d_{k})\), let \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) be a \((k,m,\cdot,\varepsilon,\varepsilon_{k},r,\textbf{d})\)-regular setup and \(\mathcal{G}=\mathcal{J}\cup G_{\mathcal{J}}\). Suppose that \(X_{1},\ldots,X_{s}\) are such that \(i\mapsto X_{i}\) is a homomorphism from \(H\) into \(R\). Then the number of labelled partition-respecting copies of \(\mathcal{H}\) in \(\mathcal{G}\) satisfies_
\[|\mathcal{H}_{\mathcal{G}}|=(1\pm\beta)\left(\prod_{i=2}^{k}d_{i}^{e_{i}( \mathcal{H})}\right)m^{s}.\]
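To make the statement concrete (an illustrative special case), let \(H\) consist of a single edge on \([k]\), so that \(s=k\) and \(e_{i}(\mathcal{H})=\binom{k}{i}\) for \(i\in[2,k]\). Lemma 4.6 then gives
\[|\mathcal{H}_{\mathcal{G}}|=(1\pm\beta)\left(\prod_{i=2}^{k}d_{i}^{\binom{k}{i}}\right)m^{k},\]
matching the heuristic count in which each of the \(\binom{k}{i}\) subsets of size \(i\) must appear at the \(i\)th level of \(\mathcal{G}\), each level behaving like an independent random graph of density \(d_{i}\).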
The following tool allows us to extend small subgraphs into a regular slice. It was given by Cooley, Fountoulakis, Kühn and Osthus [10].
**Lemma 4.7** (Extension Lemma [10]).: _Let \(k,s,s^{\prime},r,m\) be positive integers, where \(s^{\prime}<s\) and let \(\beta,d_{2},\ldots,d_{k},\varepsilon,\varepsilon_{k}\) be positive constants such that \(1/d_{i}\in\mathbb{N}\) for \(i\in[2,k-1]\) and such that_
\[1/m\ll 1/r,\varepsilon\ll\varepsilon_{k},d_{2},\ldots,d_{k-1},\]
\[\varepsilon_{k}\ll\beta,d_{k},1/s.\]
_Suppose \(H\) is a \(k\)-graph on \([s]\). Let \(\mathcal{H}\) be the \(k\)-complex generated by the down-closure of \(H\) and \(\mathcal{H}^{\prime}\) be an induced subcomplex of \(\mathcal{H}\) on \(s^{\prime}\) vertices. Let \(\textbf{d}=(d_{2},\ldots,d_{k})\) and \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) be a \((k,m,\cdot,\varepsilon,\varepsilon_{k},r,\textbf{d})\)-regular setup and \(\mathcal{G}=\mathcal{J}\cup G_{\mathcal{J}}\). Suppose \(X_{1},\ldots,X_{s}\) are such that \(i\mapsto X_{i}\) is a homomorphism from \(H\) into \(R\). Then all but at most \(\beta|\mathcal{H}^{\prime}_{\mathcal{G}}|\) labelled partition-respecting copies of \(\mathcal{H}^{\prime}\) in \(\mathcal{G}\) extend to_
\[(1\pm\beta)\left(\prod_{i=2}^{k}d_{i}^{e_{i}(\mathcal{H})-e_{i}(\mathcal{H}^{ \prime})}\right)m^{s-s^{\prime}}\]
_labelled partition-respecting copies of \(\mathcal{H}\) in \(\mathcal{G}\)._
In certain situations, we look for structures whose edges lie entirely in the \((k-1)\)-complex \(\mathcal{J}\) of a regular setup. The above lemmas no longer apply, since their input is a regular setup rather than an equitable complex. Moreover, the above lemmas require \(r\) to be large enough with respect to \(\varepsilon_{k}\), while the \((k-1)\)th level of \(\mathcal{J}\) only needs to be \((d_{k-1},\varepsilon)\)-regular with respect to the lower levels. Instead, we can use a Dense Counting Lemma as proved by Kohayakawa, Rödl and Skokan [24]. We state the following version given by Cooley, Fountoulakis, Kühn and Osthus [10].
**Lemma 4.8** (Dense Counting Lemma [10]).: _Let \(k,s,m\) be positive integers and \(\varepsilon,d_{2},\ldots,d_{k-1},\beta\) be positive constants such that_
\[1/m\ll\varepsilon\ll\beta\leq d_{2},\ldots,d_{k-1},1/s.\]
_Suppose \(H\) is a \((k-1)\)-graph on \([s]\) and \(\mathcal{H}\) is the \((k-1)\)-complex generated by the down-closure of \(H\). Let \(\textbf{d}=(d_{2},\ldots,d_{k-1})\) and \(\mathcal{J}\) be a \((\textbf{d},\varepsilon)\)-regular \((k-1)\)-complex with ground partition \(\mathcal{P}\), each size of whose vertex class is \(m\). If \(X_{1},\ldots,X_{s}\in\mathcal{P}\), then_
\[|\mathcal{H}_{\mathcal{J}}|=(1\pm\beta)\prod_{i=2}^{k-1}d_{i}^{e_{i}(\mathcal{ H})}m^{s}.\]
The following lemma gives the number of edges in each layer of a regular slice.
**Lemma 4.9**.: [3] _Suppose that \(1/m\ll\varepsilon\ll\beta\ll d_{2},\ldots,d_{k-1},1/k\) and that \(\mathcal{J}\) is a \((\cdot,\cdot,\varepsilon)\)-equitable \((k-1)\)-complex with density vector \((d_{2},\ldots,d_{k-1})\) and clusters of size \(m\). Let \(X\) be a set of at most \(k-1\) clusters of \(\mathcal{J}\). Then_
\[|\mathcal{J}_{X}|=(1\pm\beta)\left(\prod_{i=2}^{|X|}d_{i}^{\binom{|X|}{i}} \right)m^{|X|}.\]
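For example (illustration only), for a pair \(X\) of clusters Lemma 4.9 gives \(|\mathcal{J}_{X}|=(1\pm\beta)d_{2}m^{2}\), and for a triple it gives \(|\mathcal{J}_{X}|=(1\pm\beta)d_{2}^{3}d_{3}m^{3}\): each of the three pairs inside the triple must lie in \(\mathcal{J}^{(2)}\), and the triple itself must lie in \(\mathcal{J}^{(3)}\).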
Analogously, we have a dense version of the Extension Lemma [10].
**Lemma 4.10** (Dense Extension Lemma [10]).: _Let \(k,s,s^{\prime},m\) be positive integers, where \(s^{\prime}<s\) and \(\varepsilon,\beta,d_{2},\ldots,d_{k-1}\) be positive constants such that \(1/m\ll\varepsilon\ll\beta\ll d_{2},\ldots,d_{k-1},1/s\). Let \(H\) be a \((k-1)\)-graph on \([s]\). Let \(\mathcal{H}\) be the \((k-1)\)-complex generated by the down-closure of \(H\) and \(\mathcal{H}^{\prime}\) be
an induced subcomplex of \(\mathcal{H}\) on \(s^{\prime}\) vertices. Let \(\textbf{d}=(d_{2},\ldots,d_{k-1})\) and let \(\mathcal{J}\) be a \((\textbf{d},\varepsilon)\)-regular \((k-1)\)-complex, with ground partition \(\mathcal{P}\) with vertex classes of size \(m\) each. If \(X_{1},\ldots,X_{s}\in\mathcal{P}\), then all but at most \(\beta|\mathcal{H}^{\prime}_{\mathcal{J}}|\) labelled partition-respecting copies of \(\mathcal{H}^{\prime}\) in \(\mathcal{J}\) extend to_
\[(1\pm\beta)\left(\prod_{i=2}^{k-1}d_{i}^{e_{i}(\mathcal{H})-e_{i}(\mathcal{H} ^{\prime})}\right)m^{s-s^{\prime}}\]
_labelled partition-respecting copies of \(\mathcal{H}\) in \(\mathcal{J}\)._
The restriction of a regular complex to a large subset of its vertex set is also a regular complex, with slightly altered constants.
**Lemma 4.11** (Regular Restriction Lemma [3]).: _Let \(k,r,m,s\) be integers and \(\alpha,\varepsilon,\varepsilon_{k},d_{2},\ldots,d_{k}\) be positive constants such that \(1/d_{i}\in\mathbb{N}\) for \(i\in[2,k]\) and_
\[1/m\ll\varepsilon\ll\varepsilon_{k},d_{2},\ldots,d_{k-1},\]
_and_
\[\varepsilon_{k}\ll\alpha.\]
_Let \(\mathcal{G}\) be an \(s\)-partite \(k\)-complex on vertex classes \(V_{1},\ldots,V_{s}\), each of size \(m\) and which is \((\textbf{d},\varepsilon_{k},\varepsilon,r)\)-regular where \(\textbf{d}=(d_{2},\ldots,d_{k})\). Choose any \(V_{i}^{\prime}\subseteq V_{i}\) with \(|V_{i}^{\prime}|\geq\alpha m\) for \(i\in[s]\). Then the induced subcomplex \(\mathcal{G}[V_{1}^{\prime}\cup\cdots\cup V_{s}^{\prime}]\) is \((\textbf{d},\sqrt{\varepsilon_{k}},\sqrt{\varepsilon},r)\)-regular._
## 5. Framework lemma
In this section, we use the following Absorption Lemma and Almost Cover Lemma to prove Theorem 1.9. The proofs of these two lemmas can be found in Sections 8 and 9. Before stating them, we need some definitions.
**Definition 5.1** (Extensible paths).: _Let \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) be a \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup, \(G\) be a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\), and \(c,\nu>0\). A \((k-1)\)-tuple \(A\) in \(V^{k-1}\) is said to be \((c,\nu)\)-extensible rightwards to an ordered edge \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\) in \(R\) if there exist a connection set \(S\subseteq[n]\cup V\) and a target set \(T\subseteq\mathcal{J}_{(Y_{2},\ldots,Y_{k})}\) with the following properties._
* \(|T|\geq\nu|\mathcal{J}_{(Y_{2},\ldots,Y_{k})}|\)_,_
* _for every_ \((v_{2},\ldots,v_{k})\in T\)_, there are at least_ \(cm^{3k+1}\) _many_ \((3k+1)\)_-tuples_ \((c_{1},\ldots,c_{2k},w_{1},\ldots w_{k},v_{1})\) _with_ \(v_{1}\in S\cap Y_{1}\)_,_ \(w_{i}\in S\cap Y_{i}\) _and_ \(c_{j}\in Y_{0}\) _for_ \(i\in[k]\) _and_ \(j\in[2k]\) _such that_ \((c_{1}\ldots c_{2k},Aw_{1}\ldots w_{k}v_{1}\ldots v_{k})\) _is a sequentially path in_ \(G\)_._
Given a sequentially path \(P\) in a \((1,k)\)-graph \(G\) and an ordered edge \(X\) in \(R\), we say that \(P\) is \((c,\nu)\)-_extensible rightwards_ to \(X\) if the \((k-1)\)-tuple corresponding to \(P\)'s last \(k-1\) vertices is \((c,\nu)\)-extensible rightwards to \(X\). We call \(X\) the right extension. We can define leftwards extensions for \((k-1)\)-tuples and for sequentially paths in an analogous way (this time corresponding to the first \(k-1\) vertices of \(P\)). A _connection set_ of a sequentially path is the union of the connection set of the initial \((k-1)\)-tuple and the connection set of the terminal \((k-1)\)-tuple.
Given \(X=(a,b,c)\) and \(Y=(a,c,b)\), there is no guarantee that \(H\) contains a walk from \(X\) to \(Y\). However, if \(Y\) is a cyclic shift of \(X\), that is, \((b,c,a)\) or \((c,a,b)\), then a walk from \(X\) to \(Y\) does exist.
More generally, a _cyclic shift_ of a tuple \((v_{1},\ldots,v_{k})\) is any \(k\)-tuple of the form \((v_{i},\ldots,v_{k},v_{1},\ldots,v_{i-1})\) for \(i\in[k]\).
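For example, the cyclic shifts of \((v_{1},v_{2},v_{3},v_{4})\) are \((v_{1},v_{2},v_{3},v_{4})\), \((v_{2},v_{3},v_{4},v_{1})\), \((v_{3},v_{4},v_{1},v_{2})\) and \((v_{4},v_{1},v_{2},v_{3})\); in particular, a \(k\)-tuple has exactly \(k\) cyclic shifts, and \((v_{1},v_{3},v_{2},v_{4})\) is not one of them.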
An orientation of a \((1,k)\)-graph \(G\) on \([n]\cup V\) is a family of ordered \((1,k)\)-tuples \(\{\overrightarrow{e}\in[n]\times V^{k}:e\in E(G)\}\). We say that a family \(\overrightarrow{G}\) of ordered \((1,k)\)-tuples is an _oriented \((1,k)\)-graph_ if there exists a \((1,k)\)-graph \(G\) such that \(\overrightarrow{G}=\{\overrightarrow{e}\in[n]\times V^{k}:e\in E(G)\}\). Given an oriented \((1,k)\)-graph \(\overrightarrow{R}\), we say that \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{R})\) is an _oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\mathbf{d})\)-regular setup_ if \(\overrightarrow{R}\) is an orientation of \(R\) and \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) is a \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\mathbf{d})\)-regular setup. Consider a \((1,k)\)-graph \(G\) with an orientation \(\overrightarrow{G}\) and vertex set \([n]\cup V\). Given an ordered \(k\)-tuple \(Y\) of distinct vertices in \(V\) and \(c\in[n]\), we say that \(\{c\}\cup Y\) is _consistent with_\(\overrightarrow{G}\) if there exists an oriented edge \(\{c\}\cup\overrightarrow{e}\in\overrightarrow{G}\) such that \(\overrightarrow{e}\) is a cyclic shift of \(Y\). We say that an extensible path is _consistent with_\(\overrightarrow{G}\) if its left and right extensions are consistent with \(\overrightarrow{G}\). Finally, when considering multiple paths, we refer to the union of their connection sets as their _joint connection set_.
Let \(\overrightarrow{G}\) be an orientation of a \((1,k)\)-graph \(G\). A sequentially walk \(W\) in \(G\) is said to be _compatible_ with \(\overrightarrow{G}\) if each oriented edge of \(\overrightarrow{G}\) appears at least once in \(W\) as a sequence of \(k\) consecutive vertices.
Let \(G\) be a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\), let \(S\subseteq V\) and \(O\subseteq[n]\) with \(|O|=|S|\), and let \(P\) be a sequentially path. Recall that \((C(P),I(P))\) is used to denote a sequentially path, where \(C(P)\) is the color set of \(P\) and \(I(P)\) is the point set of \(P\). We say that \(P\) is \((S,O)\)-_absorbing_ in \(G\) if there exists a sequentially path \(P^{\prime}\) in \(G\) with the same initial \((k-1)\)-tuple and the same terminal \((k-1)\)-tuple as \(P\), \(I(P^{\prime})=I(P)\cup S\) and \(C(P^{\prime})=C(P)\cup O\). We say that \(P\) is \(\eta\)-_absorbing_ in \(G\) if it is \((S,O)\)-absorbing in \(G\) for every \(S\) of size at most \(\eta n\) divisible by \(k\) and any \(O\) of size \(|S|\) with \(S\cap I(P)=\emptyset\) and \(O\cap C(P)=\emptyset\).
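To unpack this definition (a remark for orientation, not used formally): if \(P\) is \(\eta\)-absorbing and \(A\) is any sequentially cycle containing \(P\) as a subpath, then for every leftover point set \(S\) and color set \(O\) as above, replacing \(P\) by the path \(P^{\prime}\) inside \(A\) produces a sequentially cycle whose point set gains exactly \(S\) and whose color set gains exactly \(O\); this is how the absorbing path is used at the end of the proof of Theorem 1.9.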
**Lemma 5.2** (Absorption Lemma).: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},\eta,\mu,\delta,\alpha,c, \nu,\lambda\) be such that_
\[1/m\ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\]
\[c\ll d_{2},\ldots,d_{k},\]
\[1/t\ll\varepsilon_{k+1}\ll d_{k+1},\nu\leq 1/k,\]
\[c\ll\varepsilon_{k+1}\ll\alpha\ll\eta\ll\lambda\ll\nu\ll\mu\ll\delta,1/k.\]
_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented representative \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Let \(G\) be a \((1,k)\)-graph on \([n]\cup V\) with minimum relative \((1,1)\)-degree at least \(\delta+\mu\), where \(|V|=n\) and \(n\leq(1+\alpha)mt\). Suppose that there exists a closed sequentially walk which is compatible with the orientation \(\overrightarrow{H}\) of \(H\) and_
1. \(H_{i}\) _is sequentially tightly connected,_
2. _for every color_ \(i\in[t]\)_, there are at least_ \((1-\alpha)t\) _points_ \(v\in V\) _such that_ \(\{i,v\}\) _has relative_ \((1,1)\)_-degree at least_ \(1-\delta+\gamma\)_._
_Then there exists a sequentially path \(P\) in \(G\) such that the following holds._
1. \(P\) _is_ \((c,\nu)\)_-extensible and consistent with_ \(\overrightarrow{H}\)_,_
2. \(V(P)\) _is_ \(\lambda\)_-sparse in_ \(\mathcal{P}\) _and_ \(V(P)\cap S=\emptyset\)_, where_ \(S\) _denotes the connection set of_ \(P\)
3. \(P\) _is_ \(\eta\)_-absorbing in_ \(G\)_._
**Lemma 5.3** (Almost Cover Lemma).: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},\alpha,\gamma,\eta,c,\nu,\lambda\) be such that_
\[1/m\ll 1/r,\varepsilon \ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[c \ll d_{2},\ldots,d_{k},\] \[1/t\ll\varepsilon_{k+1} \ll d_{k+1},\nu,\alpha\leq 1/k,\] \[\alpha \ll\eta \ll\lambda\ll\nu\ll\gamma.\]
_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Suppose that \(G\) is a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\) and \(n\leq(1+\alpha)mt\), \(H\) is a \((1,k)\)-graph on \([t]\cup V^{\prime}\) where \(|V^{\prime}|=t\) and_
1. \(H_{i}\) _is sequentially tightly connected,_
2. \(H_{i}\) _contains a sequentially closed walk_ \(W\) _compatible with_ \(\overrightarrow{H}\) _whose length is 1 mod_ \(k\)_,_
3. \(H_{W_{i}}\) _is_ \(\gamma\)_-robustly matchable for_ \(i\in[k]\)_,_
4. \(L_{H}(\{i\})\) _and_ \(L_{H}(\{j\})\) _intersect in an edge for each_ \(i,j\in[t]\)_._
_Suppose that \(P\) is a sequentially path in \(G\) such that_
1. \(P\) _is_ \((c,\nu)\)_-extensible and consistent with_ \(\overrightarrow{H}\)_,_
2. \(V(P)\) _is_ \(\lambda\)_-sparse in_ \(\mathcal{P}\) _and_ \(V(P)\cap S=\emptyset\) _where_ \(S\) _is the connection set of_ \(P\)_,_
_then there exists a sequentially cycle \(C\) of length at least \((1-\eta)n\) which contains \(P\) as a subpath. Moreover, the number of uncovered points of \(V\) is divisible by \(k\), and the number of uncovered colors of \([n]\) equals the number of uncovered points._
_The proof of Theorem 1.9._ Let \(\delta=rhf_{k-2}(k)\), \(\mu>0\) and
\[\varepsilon_{k+1}\ll\alpha\ll\eta\ll\lambda\ll\nu\ll\gamma\ll\mu,\]
\[1/t_{0}\ll\varepsilon_{k+1}\ll d_{k+1}\ll\mu.\]
We apply Lemma 4.5 with input \(\varepsilon_{k+1},1/t_{0},r,\varepsilon\) to obtain \(t_{1},m_{0}\). Choose \(c\ll 1/t_{1}\) and \(1/n_{0}\ll 1/t_{1},1/m_{0},c,1/r,\varepsilon\). Let \(G\) be a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\) and \(2n\geq n_{0}\), with \(\overline{\delta}_{1,k-2}(G)\geq\delta+\mu\). Our goal is to prove that \(G\) contains a sequentially Hamilton cycle. By Lemma 4.5, there exists a representative \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,d_{2},\ldots,d_{k+1})\)-regular setup \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R_{d_{k+1}})\) with \(t_{0}\leq t\leq t_{1}\) and \(n\leq(1+\alpha)mt\). Moreover, there is a \((1,k)\)-graph \(I\) of edge density at most \(\varepsilon_{k+1}\) such that \(R=R_{d_{k+1}}\cup I\) has minimum relative \((1,k-2)\)-degree at least \(\delta+\mu/2\). By Definition 1.8 and \(\delta=rhf_{k-2}(k)\), we obtain that \(R\) contains an \((\alpha,\gamma,\delta)\)-rainbow Hamilton framework \(H\) that avoids the edges of \(I\). Thus, \(H\subseteq R_{d_{k+1}}\).
Next, we want to fix an orientation \(\overrightarrow{H}\) and a compatible walk \(W\). Since \(H\) is an \((\alpha,\gamma,\delta)\)-rainbow Hamilton framework, each \(H_{i}\) is sequentially tightly connected and has a sequentially closed walk of length 1 mod \(k\), and \(L_{H}(\{i\})\) and \(L_{H}(\{j\})\) intersect in an edge for each \(i,j\in[t]\). We thus obtain a sequentially closed walk \(W\) of length 1 mod \(k\) visiting all edges of \(H\). Define an orientation \(\overrightarrow{H}=\{\overrightarrow{e}\in V(H)^{k}:e\in H\}\) by choosing, for every edge \(e\) of \(H\), a \(k\)-tuple (or subpath) \(\overrightarrow{e}\) in \(W\) which contains the vertices of \(e\). Note that \(W\) is compatible with \(\overrightarrow{H}\).
Firstly, we select a sequentially absorbing path \(P\). Note that \(1/t_{1}\leq d_{2},\ldots,d_{k}\), since \(\mathcal{J}\) is a \((t_{0},t_{1},\varepsilon)\)-equitable complex. Since \(H\) is an \((\alpha,\gamma,\delta)\)-rainbow Hamilton framework, Lemma 5.2 yields a sequentially path \(P\) in \(G\) such that
1. \(P\) is \((c,\nu)\)-extensible and consistent with \(\overrightarrow{H}\),
2. \(V(P)\) is \(\lambda\)-sparse in \(\mathcal{P}\) and \(V(P)\cap T=\emptyset\), where \(T\) denotes the connection set of \(P\),
3. \(P\) is \(\eta\)-absorbing in \(G\).
Next, by Lemma 5.3, there is a sequentially cycle \(A\) of length at least \((1-\eta)n\) which contains \(P\) as a subpath. Moreover, the number of uncovered points \(|V\setminus I(A)|\) is divisible by \(k\) and the number of uncovered colors is of size \(|[n]\setminus C(A)|=|V\setminus I(A)|\).
Finally, we absorb the uncovered points and colors into \(A\). Note that \(|V\setminus I(A)|\leq\eta n\). Since \(P\) is \(\eta\)-absorbing, there is a sequentially path \(P^{\prime}\) with point set \(I(P)\cup(V\setminus I(A))\) and color set \(C(P)\cup([n]\setminus C(A))\) which has the same endpoints as \(P\). Replacing \(P\) by \(P^{\prime}\) in \(A\) yields a sequentially Hamilton cycle of \(G\), as desired.
## 6. Almost Covering
### Embedding sequentially paths
Given sequentially walks \(W\) and \(W^{\prime}\) with the property that the terminal \((k-1)\)-tuple of \(W\) is identical to the initial \((k-1)\)-tuple of \(W^{\prime}\), we may _concatenate_ \(W\) and \(W^{\prime}\) to form a new sequentially walk with color set \(C(W)+C(W^{\prime})\), which we denote by \(W+W^{\prime}\). Note that a rainbow path in a \(k\)-graph system is a sequentially path in the auxiliary \((1,k)\)-graph \(G\).
**Lemma 6.1**.: _Let \(k,r,n_{0},t,B\) be positive integers and \(\psi,d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},\nu\) be positive constants such that \(1/d_{i}\in\mathbb{N}\) for \(i\in[2,k]\) and such that \(1/n_{0}\ll 1/t\),_
\[\frac{1}{n_{0}},\frac{1}{B}\ll\frac{1}{r},\varepsilon\ll\varepsilon_{k+1},d_{ 2},\ldots,d_{k},\]
\[\varepsilon_{k+1}\ll\psi,d_{k+1},\nu,\frac{1}{k}.\]
_Then the following holds for all integers \(n\geq n_{0}\)._
_Let \(G\) be a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\), and let \(\mathcal{J}\) be a \((\cdot,\cdot,\varepsilon,\varepsilon_{k+1},r)\)-regular slice for \(G\) on \([t]\cup V^{\prime}\) where \(|V^{\prime}|=t\) with density vector \(\textbf{d}=(d_{2},\ldots,d_{k})\). Let \(\mathcal{J}_{W_{i}}\) be the induced subcomplex of \(\mathcal{J}\) on \([t(i-1)/k+1,ti/k]\cup V^{\prime}\) for \(i\in[k]\). We call \([t]\) the family of color clusters and \(V^{\prime}\) the family of point clusters. Let \(R_{W_{i}}:=R\big{[}[t(i-1)/k+1,ti/k]\cup V^{\prime}\big{]}\) be the induced subgraph of \(R:=R_{d_{k+1}}(G)\). Suppose that \(R_{W_{i}}\) is sequentially tightly connected for \(i\in[k]\), and let \(\textbf{w}_{i}\) be a fractional matching of size \(\mu_{i}=\sum_{e\in E(R_{W_{i}})}\textbf{w}_{i}(e)\) for \(i\in[k]\) with \(\mu_{i}(Z)=\sum_{Z\in e,e\in E(R_{W_{i}})}\textbf{w}_{i}(e)\leq 1/k\) for each cluster \(Z\). Also, let \(X\) and \(Y\) be \((k-1)\)-tuples of point clusters, and let \(S_{X}\) and \(S_{Y}\) be subsets of \(\mathcal{J}_{X}\) and \(\mathcal{J}_{Y}\) of sizes at least \(\nu|\mathcal{J}_{X}|\) and \(\nu|\mathcal{J}_{Y}|\) respectively. Finally, let \(W\) be a sequentially walk from \(X\) to \(Y\) of length at most \(t^{2k+1}\) in \(R_{W_{i}}\) and denote \(\ell(W)\) by \(p\). For \(i\in[k]\), we have_
1. _for any_ \(\ell\) _divisible by_ \(k\) _with_ \(4k\leq\ell\leq(1-\psi)\mu_{i}kn/t\)_, there is a sequentially path_ \(P\) _in_ \(G\) _of length_ \(\ell-1+\ell(W)(k+1)\) _whose initial_ \((k-1)\)_-tuple belongs to_ \(S_{X}\) _and whose terminal_ \((k-1)\)_-tuple belongs to_ \(S_{Y}\)_,_
2. \(P\) _uses at most_ \(\mu_{i}(Z)n/t+B\) _points from any point cluster_ \(Z\in V^{\prime}\) _and at most_ \(k\mu_{i}(C)n/t+B\) _colors from any color cluster_ \(C\in[t(i-1)/k+1,ti/k]\)_, where_ \(\mu_{i}(Z^{\prime})=\sum_{Z^{\prime}\in e,e\in R_{W_{i}}}\textbf{w}_{i}(e)\) _for any cluster_ \(Z^{\prime}\)_._
Proof.: Let \(\alpha=\psi/5\) and \(\beta=1/200\). When using Lemma 4.7, we require that \(\varepsilon\ll c^{2}\) and choose \(m_{0}\) to be large enough so that \(m\geq\alpha m_{0}\) is acceptable for all these applications. Given \(t\), let
\[n_{0}=t\cdot\max(m_{0},\frac{200k^{2}}{\varepsilon},\frac{8k^{2}}{\alpha\sqrt{ \varepsilon}},\frac{10k(k+1)t^{2k+1}}{\alpha}). \tag{1}\]
We write \(\mathcal{G}\) for the \((k+1)\)-complex obtained from \(\mathcal{J}_{W_{i}}\) by adding all edges of \(G\) supported on \(\mathcal{J}_{W_{i}}^{(k)}\) as the '\((k+1)\)th level' of \(\mathcal{G}\). So for any edge \(X=(X_{0},X_{1},\ldots,X_{k})\in R_{W_{i}}\), \(\mathcal{G}[\bigcup_{i\in[0,k]}X_{i}]\) is a \((d_{2},\ldots,d_{k},d^{*}(X),\varepsilon,\varepsilon_{k+1},r)\)-regular \((k+1)\)-partite \((k+1)\)-complex with \(d^{*}(X)\geq d_{k+1}\).
Since \(\mathcal{J}\) is a regular slice for \(G\), for any \((1,k)\)-set of clusters \(X=\{X_{0},X_{1},\ldots,X_{k}\}\) in \(\mathcal{J}_{W_{i}}\), the \((k+1)\)-partite \(k\)-complex \(\mathcal{J}_{W_{i}}[\bigcup_{j\in[0,k]}X_{j}]\) is \((\mathbf{d},\varepsilon)\)-regular. By adding all \((k+1)\)-sets supported on \(\hat{\mathcal{J}}_{W_{i}X}\) as the '\((k+1)\)th level', we may obtain a \((d_{2},\ldots,d_{k},1,\varepsilon,\varepsilon_{k+1},r)\)-regular \((k+1)\)-partite \((k+1)\)-complex, whose vertex clusters are subsets \(Y_{j}\subseteq X_{j}\) for \(j\in[0,k]\) of size \(|Y_{1}|=\cdots=|Y_{k}|=\alpha m/k\) and \(|Y_{0}|=\alpha m\). Here \(Y_{0}\) can be seen as \(\bigcup_{i\in[k]}Y_{0,i}\) where \(|Y_{0,i}|=\alpha m/k\) for \(i\in[k]\), and we obtain a \((d_{2},\ldots,d_{k},1,\sqrt{\varepsilon},\sqrt{\varepsilon_{k+1}},r)\)-regular complex by Lemma 4.11. We conclude by Lemma 4.9 that for any subset \(Y_{i}\), \(i\in[k-1]\), of distinct clusters of \(\mathcal{J}\), each of size \(\alpha m\), we have
\[|\mathcal{G}(Y_{1},\ldots,Y_{k-1})|\geq\varepsilon m^{k-1}. \tag{2}\]
The following claim plays an important role in Lemma 6.1.
**Claim 6.2**.: _Let \(\{X_{0},X_{1},\ldots,X_{k}\}\) be an edge of \(R\) and choose any \(Y_{j}\subseteq X_{j}\) for each \(j\in[0,k]\) so that \(|Y_{0}|=k|Y_{1}|=\cdots=k|Y_{k}|=\alpha m\). Let \(\mathcal{P}\) be a collection of at least \(\frac{1}{2}|\mathcal{G}(Y_{1},\ldots,Y_{k-1})|\) sequentially paths in \(G\) (not necessarily contained in \(\bigcup_{j\in[k]}Y_{j}\)), each of length at most \(3k\), whose terminal \((k-1)\)-tuples are distinct members of \(\mathcal{G}(Y_{1},\ldots,Y_{k-1})\). Then for each \(\sigma\in\{0,1\}\) there is a path \(P\in\mathcal{P}\) and a collection \(\mathcal{P}^{\prime}\) of \(\frac{9}{10}e(\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1}))\) sequentially paths in \(G\), each of length \(2k-1+\sigma\), all of whose initial \((k-1)\)-tuples are the same (namely, the terminal \((k-1)\)-tuple of \(P\)). Furthermore, the terminal \((k-1)\)-tuples of the paths in \(\mathcal{P}^{\prime}\) are distinct members of \(\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1})\). For \(j\leq k-1\), the \(j\)th vertex \(x\) of each path in \(\mathcal{P}^{\prime}\) lies in \(Y_{j}\); for \(j\geq k\), \(x\) is not contained in \(P\); moreover, the new colors are not contained in \(P\)._
Proof.: Let \(\sigma\in\{0,1\}\) be fixed. We take \(\mathcal{H}\) to be the \((k+1)\)-complex generated by the down-closure of a sequentially path of length \(2k-1+\sigma\) with vertex set \(\{c_{1},\ldots,c_{k+\sigma}\}\cup\{v_{1},\ldots,v_{2k-1+\sigma}\}\), and consider its \((k+1)\)-partition \(V_{0}\cup V_{1}\cup\cdots\cup V_{k}\) where \(\{c_{1},\ldots,c_{k+\sigma}\}\subseteq V_{0}\) and the \(i\)th vertex of the path lies in the vertex class \(V_{j}\) with \(j=i\) mod \(k\). We take \(\mathcal{H}^{\prime}\) to be the subcomplex of \(\mathcal{H}\) induced by \(\{v_{1},\ldots,v_{k-1},v_{k+1+\sigma},\ldots,v_{2k-1+\sigma}\}\). Consider the pairs \((e,f)\), where \(e\) is an ordered \((k-1)\)-tuple of \(\mathcal{G}(Y_{1},\ldots,Y_{k-1})\) and \(f\) is an ordered \((k-1)\)-tuple of \(\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1})\). For any such ordered \((k-1)\)-tuple \(e\), there are at most \(km^{k-2}\) ordered \((k-1)\)-tuples \(f\) which intersect \(e\); thus at most a \(1/200\)-proportion of the pairs \((e,f)\) are not disjoint. On the other hand, if \(e\) and \(f\) are disjoint, then the down-closure of the pair \((e,f)\) forms a labelled copy of \(\mathcal{H}^{\prime}\) in \(\mathcal{G}[\bigcup_{j\in[0,k]}Y_{j}]\), so by Lemma 4.7 with \(s=3k+2\sigma-1\) and \(s^{\prime}=2k-2\), for all but at most a \(1/200\)-proportion of the disjoint pairs \((e,f)\), there are at least \(c(\alpha m/k)^{k+2\sigma+1}\geq\sqrt{\varepsilon}(\alpha m/k)^{k+2\sigma+1}\) extensions to copies of \(\mathcal{H}\) in \(\mathcal{G}[\bigcup_{j\in[0,k]}Y_{j}]\). Each such copy of \(\mathcal{H}\) corresponds to a sequentially path in \(G\) of length \(2k-1+\sigma\) with all vertices in the desired clusters. We conclude that at least a \(99/100\)-proportion of all pairs \((e,f)\) of ordered \((k-1)\)-tuples are disjoint and are linked by at least \(\sqrt{\varepsilon}(\alpha m/k)^{k+2\sigma+1}\) sequentially paths in \(G\) of length \(2k-1+\sigma\), where \(c_{i}\in V_{0}\) for \(i\in[k+\sigma]\) and \(v_{\ell}\in V_{j}\) with \(j=\ell\) mod \(k\). We call these pairs _extensible_.
We call an ordered \((k-1)\)-tuple \(e\in\mathcal{G}(Y_{1},\ldots,Y_{k-1})\) _good_ if at most \(1/20\) of the ordered edges \(f\in\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1})\) do not make an extensible pair with \(e\). Then at most \(1/5\) of the ordered \((k-1)\)-tuples in \(\mathcal{G}(Y_{1},\ldots,Y_{k-1})\) are not good. Thus, there exists a path \(P\in\mathcal{P}\) whose terminal \((k-1)\)-tuple is a good ordered \((k-1)\)-tuple \(e\). Fix such a \(P\) and \(e\), and consider any ordered \((k-1)\)-tuple \(f\) in \(\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1})\) which is disjoint from \(P\) and such that \((e,f)\) is an extensible pair; there are at least \(\sqrt{\varepsilon}(\alpha m/k)^{k+2\sigma+1}\) sequentially paths in \(G\) from \(e\) to \(f\). We claim that at least one of these paths has the further property that for \(j\geq k\) the \(j\)th vertex is not contained in \(P\) and the \(k+\sigma\) new colors are not contained in \(P\); we can therefore put it in \(\mathcal{P}^{\prime}\). Indeed, as \(f\) is disjoint from \(P\), if \(\sigma=0\), then it suffices to show that one of these paths has the property that \(v_{k}\in Y_{k}\setminus V(P)\) and \(c_{i}\in Y_{0}\setminus V(P)\) for \(i\in[k]\). This is true because there are only at most \((2k+1)(\alpha m)^{k}+k(2k+1)(\alpha m/k)^{k}<\sqrt{\varepsilon}(\alpha m/k)^{k+1}\) paths which do not have this property, by (1). If \(\sigma=1\), then we need a path whose \(k\)th and \((k+1)\)st vertices are not in \(V(P)\) and \(c_{i}\in Y_{0}\setminus V(P)\) for \(i\in[k+1]\), which is possible since \(2(2k+1)(\alpha m/k)^{k+2}+(k+1)(2k+1)(\alpha m/k)^{k+2}<\sqrt{\varepsilon}(\alpha m/k)^{k+3}\) by (1).
Finally, considering the ordered \((k-1)\)-tuples \(f\in\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1})\), we have \(20|V(P)|(k-1)(\alpha m/k)^{k-2}\leq\varepsilon m^{k-1}\leq e(\mathcal{G}(Y_{\sigma+1},\ldots,Y_{\sigma+k-1}))\) by (1) and (2); hence at most \(1/20\) of these \((k-1)\)-tuples \(f\) intersect \(P\), and by the choice of \(e\), at most \(1/20\) of these \((k-1)\)-tuples \(f\) are such that \((e,f)\) is not extensible. This leaves at least \(9/10\) of the \((k-1)\)-tuples \(f\) remaining, and choosing a sequentially path for each such \(f\) as described above gives the desired set \(\mathcal{P}^{\prime}\).
Let \(X=(X_{1},\ldots,X_{k-1})\) and \(Y=(Y_{1},\ldots,Y_{k-1})\), let \(X_{k}\) be the cluster following \(X\) in \(W\), and let \(Y_{k}\) be the cluster preceding \(Y\) in \(W\). Without loss of generality, we may assume that \(\{X_{0},X_{1},\ldots,X_{k}\}\) is an edge of \(R_{1}\) and \(\{Y_{0},Y_{1},\ldots,Y_{k}\}\) is an edge of \(R_{k}\). By assumption, \(S_{X}\) constitutes at least a \(\nu\)-proportion of \(\mathcal{G}(X_{1},\ldots,X_{k-1})\) and \(S_{Y}\) constitutes at least a \(\nu\)-proportion of \(\mathcal{G}(Y_{1},\ldots,Y_{k-1})\). Given any subsets \(X^{\prime}_{j}\subseteq X_{j}\) of size \(\alpha m/k\) for \(j\in[k]\) and \(X^{\prime}_{0}\subseteq X_{0}\) of size \(\alpha m\), we say that a \((k-1)\)-tuple \(e\in\mathcal{G}(X_{1},\ldots,X_{k-1})\) is _well-connected to \((X^{\prime}_{1},\ldots,X^{\prime}_{k-1})\)_ via \(X^{\prime}_{k}\) and \(X^{\prime}_{0}\) if for at least \(9/10\) of the \((k-1)\)-tuples \(f\) in \(\mathcal{G}(X^{\prime}_{1},\ldots,X^{\prime}_{k-1})\), there exist distinct \(k\)-subsets \(\{c_{1},\ldots,c_{k}\}\), \(\{f_{1},\ldots,f_{k}\}\) of \(X^{\prime}_{0}\) and distinct \(u,v\in X^{\prime}_{k}\) such that \((c_{1}\cdots c_{k},e(u)f)\) and \((f_{1}\cdots f_{k},e(v)f)\) are sequentially paths in \(G\) of length \(2k-1\).
**Claim 6.3**.: _For any subsets \(X^{\prime}_{j}\subseteq X_{j}\) of size \(\alpha m/k\), \(Z_{j}\subseteq X_{j}\) of size \(\alpha m/k\) for \(j\in[k]\) and \(X^{\prime}_{0}\subseteq X_{0}\), \(Z_{0}\subseteq X_{0}\) of size \(\alpha m\) such that each \(X^{\prime}_{j}\) is disjoint from \(Z_{j}\), the following statements hold._
1. _At least_ \(9/10\) _of the_ \((k-1)\)_-tuples_ \(e\) _in_ \(\mathcal{G}(Z_{1},\ldots,Z_{k-1})\) _are well-connected to_ \((Z_{1},\ldots,Z_{k-1})\) _via_ \(Z_{k}\) _and_ \(Z_{0}\)_._
2. _At least_ \(9/10\) _of the_ \((k-1)\)_-tuples_ \(e\) _in_ \(\mathcal{G}(Z_{1},\ldots,Z_{k-1})\) _are well-connected to_ \((X^{\prime}_{1},\ldots,X^{\prime}_{k-1})\) _via_ \(X^{\prime}_{k}\) _and_ \(X^{\prime}_{0}\)_._
3. _At least_ \(9/10\) _of the_ \((k-1)\)_-tuples_ \(e\) _in_ \(\mathcal{G}(X^{\prime}_{1},\ldots,X^{\prime}_{k-1})\) _are well-connected to_ \((Z_{1},\ldots,Z_{k-1})\) _via_ \(X^{\prime}_{k}\) _and_ \(X^{\prime}_{0}\)
Proof.: From the proof of Claim 6.2, we know that all but at most a \(1/100\)-proportion of the pairs \((e,f)\), where \(e,f\in\mathcal{G}(Z_{1},\ldots,Z_{k-1})\), are disjoint and are linked by at least \(\sqrt{\varepsilon}(\alpha m/k)^{k+1}\) sequentially paths in \(G\) of length \(2k-1\). It follows that at least a \(9/10\)-proportion of the \((k-1)\)-tuples of \(\mathcal{G}(Z_{1},\ldots,Z_{k-1})\) can be extended to at least a \(9/10\)-proportion of the \((k-1)\)-tuples of \(\mathcal{G}(Z_{1},\ldots,Z_{k-1})\) by at least \(\sqrt{\varepsilon}(\alpha m/k)^{k+1}\) sequentially paths; this gives (1). To prove (2), we apply Lemma 4.10 with \(\mathcal{H}\) being the \((k+1)\)-complex generated by the down-closure of a sequentially path of length \(2k-1\) and \(\mathcal{H}^{\prime}\) being the subcomplex induced by its initial and terminal \((k-1)\)-tuples. We regard \(\mathcal{H}\) as a \((2k)\)-partite \((k+1)\)-complex with \(k\) colors in the color cluster and one vertex in each point cluster. The role of \(\mathcal{G}\) in Lemma 4.7 is played by the \((2k)\)-partite subcomplex of \(\mathcal{G}\) with vertex classes \(X^{\prime}_{0},Z_{1},\ldots,Z_{k-1},X^{\prime}_{k},X^{\prime}_{1},\ldots,X^{\prime}_{k-1}\): the colors of \(\mathcal{H}\) are embedded in \(X^{\prime}_{0}\), the first vertex of \(\mathcal{H}\) is to be embedded in \(Z_{1}\), the second one in \(Z_{2}\), and so forth. By Lemmas 4.11 and 4.7, the proportion of pairs \((e,f)\) for which there is no path as in (2) is at most \(1/200\), and the remainder of the argument follows as in (1). (3) can be proved similarly.
We are ready to construct our path. Arbitrarily choose subsets \(X^{(0)}_{0}\subseteq X_{0}\), \(Z_{0}\subseteq Y_{0}\) of size \(\alpha m\) and \(X^{(0)}_{j}\subseteq X_{j}\), \(Z_{j}\subseteq Y_{j}\) of size \(\alpha m/k\) for \(j\in[k]\). By Theorems 4.6, 4.7 and 4.9, at least \(|S_{X}||\mathcal{G}(X^{(0)}_{1},\ldots,X^{(0)}_{k-1})|/2\) pairs \((e,f)\), where \(e\in S_{X}\) and \(f\in\mathcal{G}(X^{(0)}_{1},\ldots,X^{(0)}_{k-1})\), can be extended to \(\sqrt{\varepsilon}(\alpha m/k)^{k+1}\) sequentially paths whose remaining point lies in \(X^{(0)}_{k}\) and whose colors lie in \(X^{(0)}_{0}\). Thus, we may choose a \((k-1)\)-tuple \(P^{(0)}\) of \(S_{X}\) such that the following holds: there is a set \(\mathcal{P}^{(0)}\) of sequentially paths of the form \((c_{1}\cdots c_{k},P^{(0)}(v)f)\) for \(v\in X^{(0)}_{k}\), \(c_{1},\ldots,c_{k}\in X^{(0)}_{0}\) and \(f\in\mathcal{G}(X^{(0)}_{1},\ldots,X^{(0)}_{k-1})\) for which the terminal \((k-1)\)-tuples of paths in \(\mathcal{P}^{(0)}\) are all distinct and constitute at least half of the ordered \((k-1)\)-tuples of \(\mathcal{G}(X^{(0)}_{1},\ldots,X^{(0)}_{k-1})\). Similarly, we can choose \(e\in S_{Y}\) such that for at least half the members \(e^{\prime}\) of \(\mathcal{G}(Z_{1},\ldots,Z_{k-1})\), there is a sequentially path of length \(2k-1\) in \(G\) from \(e^{\prime}\) to \(e\) whose remaining point lies in \(Z_{k}\) and whose colors lie in \(Z_{0}\).
We now construct the desired path. Since \(H_{W_{i}}\) is sequentially tightly connected, we can obtain a sequentially walk \(W=e_{1}\cdots e_{s}\) passing through all edges of \(H_{W_{i}}\). For each \(i\in[s]\), let \(n_{i}\) be any integer with \(0\leq n_{i}\leq(1-3\alpha)\mathbf{w}(e_{i})m\). Setting the initial state to be 'filling the edge \(e_{1}\)', we proceed for \(j\geq 1\) as follows, maintaining the invariant:
\(\bigstar\)**:** The terminal \((k-1)\)-tuples of the paths in the family \(\mathcal{P}^{(j)}\) constitute at least half of the ordered \((k-1)\)-tuples of \(\mathcal{G}(X^{(j)}_{1},\ldots,X^{(j)}_{k-1})\).
Suppose that our current state is 'filling the edge \(e_{i}\)' for some \(i\). If we have previously completed \(n_{i}\) steps in this state, then we do nothing and change the state to 'position \(1\) in traversing the walk \(W\)'. Otherwise, since \(\bigstar\) holds for \(j-1\), we apply Claim 6.2 with \(\sigma=0\) to obtain a path \(P\in\mathcal{P}^{(j-1)}\) and a collection \(\mathcal{P}^{(j)}\) of \(\frac{9}{10}e(\mathcal{G}(X^{(j-1)}_{1},\ldots,X^{(j-1)}_{k-1}))\) sequentially paths of length \(2k-1\), all of whose initial \((k-1)\)-tuples are the same (the terminal \((k-1)\)-tuple of \(P\)) and whose terminal \((k-1)\)-tuples are distinct members of \(\mathcal{G}(X^{(j-1)}_{1},\ldots,X^{(j-1)}_{k-1})\) and are disjoint from \(V(P)\), whose colors lie in \(X^{(j-1)}_{0}\setminus C(P)\), and whose remaining vertex lies in \(X^{(j-1)}_{k}\setminus V(P)\). We define \(P^{(j)}\) to be the concatenation \(P^{(j-1)}+P\) with color classes \(C(P^{(j-1)})\cup C(P)\). For \(p\in[0,k]\), we generate \(X^{(j)}_{p}\) from \(X^{(j-1)}_{p}\) by removing the vertices of \(P^{(j)}\) in \(X^{(j-1)}_{p}\) and replacing them by vertices from the same cluster which do not lie in \(Z\) or in \(P^{(j)}\). We will prove that this is possible in Claim 6.4.
Now suppose that our current state is 'position \(q\) in traversing the walk \(W\)'. Since \(\bigstar\) holds for \(j-1\), we apply Claim 6.2 with \(\sigma=1\) to obtain a path \(P\in\mathcal{P}^{(j-1)}\) and a collection \(\mathcal{P}^{(j)}\) of \(\frac{9}{10}e(\mathcal{G}(X_{1}^{(j-1)},\ldots,X_{k-1}^{(j-1)}))\) sequentially paths of length \(2k\), all of whose initial \((k-1)\)-tuples are the same (the terminal \((k-1)\)-tuple of \(P\)) and whose terminal \((k-1)\)-tuples are distinct members of \(\mathcal{G}(X_{2}^{(j-1)},\ldots,X_{k}^{(j-1)})\) and are disjoint from \(V(P)\), and whose two remaining vertices lie in \(X_{k}^{(j-1)}\setminus V(P)\) and \(X_{1}^{(j-1)}\setminus V(P)\) respectively, with colors in \(X_{0}^{(j-1)}\setminus C(P)\). Exactly as before, we define \(P^{(j)}\) to be the concatenation \(P^{(j-1)}+P\). We generate \(X_{p}^{(j)}\) from \(X_{p+1}^{(j-1)}\) for \(p\in[0,k-1]\) by removing the vertices of \(P^{(j-1)}\) in \(X_{p+1}^{(j-1)}\) and replacing them by vertices from the same cluster that do not lie in \(Z\) or \(P^{(j)}\). If we have not reached the end of \(W\), we choose \(X_{k}^{(j)}\) to be a subset of the cluster at position \(q+k\) in the sequence of \(W\) such that \(X_{k}^{(j)}\) is disjoint from \(P^{(j)}\cup Z\). In this case, we change our state to 'position \(q+1\) in traversing \(W\)'. Alternatively, if we have reached the end of \(W\), meaning that the \((k-1)\)-tuple of clusters containing \(X_{1}^{(j)},\ldots,X_{k-1}^{(j)}\) is \((Y_{1},\ldots,Y_{k-1})\), then we choose \(X_{k}^{(j)}\) to be a subset of \(Y_{k}\) which has size \(\alpha m/k\) and is disjoint from \(P^{(j)}\cup Z\). We may choose a path \(P\in\mathcal{P}^{(j-1)}\) such that the terminal \((k-1)\)-tuple \(f\in\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^{(j)})\) of \(P\) is well-connected to \((Z_{1},\ldots,Z_{k-1})\) via \(Z_{k}\) and \(Z_{0}\). This implies that we may choose a \((k-1)\)-tuple \(e^{\prime}\) in \(\mathcal{G}(Z_{1},\ldots,Z_{k-1})\), vertices \(v,v^{\prime}\in Z_{k}\) and new color sets \(C^{*},C^{**}\subseteq Z_{0}\) with \(|C^{*}|=|C^{**}|=k\) such that \((C^{*},f(v^{\prime})e^{\prime})\) is a sequentially path \(Q^{\prime}\) and \((C^{**},e^{\prime}(v)e)\) is a sequentially path \(Q\). Return \(P^{(j)}+Q^{\prime}+Q\) as the output sequentially path in \(G\). Note that an edge may appear multiple times in \(W\): when it first appears, the process executes 'filling the edge'; when it appears later, 'filling the edge' is no longer needed. Again, Claim 6.4 below shows that these choices are all possible.
**Claim 6.4**.: _The algorithm described above is well-defined (that is, it is always possible to construct the sets \(X_{p}^{(j)}\)), maintains \(\bigstar\) and returns a sequentially path of length_
\[4k-1+\left(\sum_{i\in[s]}n_{i}\right)\cdot k+\ell(W)\cdot(k+1).\]
Proof.: We first prove that \(\bigstar\) is maintained. Recall that \(e(\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^{(j)}))\geq\varepsilon m^{k-1}\) for each \(j\). Fixing some \(j\), for either \(A_{p}:=X_{p}^{(j-1)}\) or \(A_{p}:=X_{p+1}^{(j-1)}\), we obtain sets \(A_{1},\ldots,A_{k-1}\), each of size \(\alpha m/k\), such that the terminal \((k-1)\)-tuples of \(\mathcal{P}^{(j)}\) constitute at least \(9/10\) of the ordered edges of \(\mathcal{G}(A_{1},\ldots,A_{k-1})\) and, for each \(i\in[k-1]\), \(X_{i}^{(j)}\) is formed from \(A_{i}\) by removing at most two vertices and replacing them with the same number of vertices. Since each vertex is in at most \(m^{k-2}\) ordered \((k-1)\)-tuples of either \(\mathcal{G}(A_{1},\ldots,A_{k-1})\) or \(\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^{(j)})\), we conclude that the fraction of ordered \((k-1)\)-tuples of \(\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^{(j)})\) which are the terminal \((k-1)\)-tuples of paths in \(\mathcal{P}^{(j)}\) is at least
\[\begin{split}&\frac{\frac{9}{10}e(\mathcal{G}(A_{1},\ldots,A_{k-1})) -2(k-1)m^{k-2}}{e(\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^{(j)}))}\\ &\geq\frac{\frac{9}{10}(e(\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1}^ {(j)}))-2(k-1)m^{k-2})-2(k-1)m^{k-2}}{e(\mathcal{G}(X_{1}^{(j)},\ldots,X_{k-1} ^{(j)}))}\\ &\geq\frac{9}{10}-\frac{4(k-1)m^{k-2}}{\varepsilon m^{k-1}} \geq\frac{1}{2},\end{split} \tag{3}\]
where the last inequality holds since \(m\geq m_{0}\geq 16(k-1)/\varepsilon\). Thus, we obtain \(\bigstar\).
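For concreteness, the last step of (3) amounts to the following arithmetic check, using \(m\geq m_{0}\geq 16(k-1)/\varepsilon\):

\[\frac{4(k-1)m^{k-2}}{\varepsilon m^{k-1}}=\frac{4(k-1)}{\varepsilon m}\leq\frac{4(k-1)}{16(k-1)}=\frac{1}{4},\qquad\frac{9}{10}-\frac{1}{4}\geq\frac{1}{2}.\]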
To prove that we can always construct the set \(X_{p}^{(j)}\), observe that it is enough to check that at termination every cluster still has at least \(2\alpha m\) vertices not in \(P^{(j)}\), as then there are at least \(\alpha m\) vertices outside \(Z\). In each walk-traversing step, each path in \(\mathcal{P}^{(j)}\) contains precisely \(k+1\) new vertices and \(k+1\) new colors, and the total number of walk-traversing steps is precisely \(\ell(W)\). Recall that this number is at most \(t^{2k+1}\); by (1) we have \((k+1)t^{2k+1}<\frac{\alpha m}{2k}\) and \((k+1)^{2}t^{2k+1}<\frac{\alpha m}{2}\). When we are in the state 'filling the edge \(e_{i}\)', we have \(n_{i}\) steps and in each step, each path in \(\mathcal{P}^{(j)}\) contains \(k\) new vertices, one from each cluster of \(e_{i}\setminus C(e_{i})\), and \(k\) new colors from \(C(e_{i})\). So for any color cluster \(C\), the number of its vertices added to \(P^{(j)}\) is at most \(\sum_{i:C\in e_{i}}kn_{i}\leq\sum_{i:C\in e_{i}}(1-3\alpha)k\mathbf{w}(e_{i})m\leq(1-3\alpha)m\). Similarly, for any point cluster \(X\), the number of its vertices added to \(P^{(j)}\) is at most \(\sum_{i:X\in e_{i}}n_{i}\leq\sum_{i:X\in e_{i}}(1-3\alpha)\mathbf{w}(e_{i})m\leq(1-3\alpha)m/k\). Together with \(e\) and the \(k\) vertices of the chosen path in \(\mathcal{P}^{(0)}\), we conclude that there are at most \((1-2\alpha)m\) vertices of any color cluster and at most \((1-2\alpha)m/k\) vertices of any point cluster contained in \(P^{(j)}\) at termination.
Finally, the length of the path is equal to the number of its vertices. Recall that \(P^{(0)}\) contains \(k-1\) vertices. Next, \(k\) vertices and \(k\) colors are added to \(P^{(0)}\) to form \(P^{(1)}\). Each of the \(\sum_{i\in[s]}n_{i}\) edge-filling steps results in \(k\) new vertices and \(k\) new colors being added to \(P^{(j)}\), and each of the \(\ell(W)\) walk-traversing steps results in \(k+1\) new vertices and \(k+1\) new colors being added to \(P^{(j)}\). When completing the path, we need \(2k\) vertices which are not in the final path \(P^{(j)}\) (namely \(v,v^{\prime},e\) and \(e^{\prime}\)). Thus, the final path has length
\[(k-1)+k+\left(\sum_{i\in[s]}n_{i}\right)\cdot k+\ell(W)\cdot(k+1)+2k.\]
We obtain the shortest sequentially path by never entering the state 'filling an edge', in which case we obtain a sequentially path of length \(4k-1+\ell(W)(k+1)\). On the other hand, by extending \(W\) to include all edges of \(R_{W_{i}}\) and taking \(n_{i}\) to be \((1-\psi)\mathbf{w}(e_{i})m\) for each \(i\in[s]\), we can obtain a sequentially path of length at least \((1-\psi)\mu_{i}kn/t\), using at most \(k\mu_{i}(C)n/t+B\) vertices from any color cluster \(C\) in \(R_{W_{i}}\) and at most \(\mu_{i}(X)n/t+B\) vertices from any point cluster \(X\), where \(\mu_{i}(Z)=\sum_{Z\in e,e\in R_{W_{i}}}\mathbf{w}_{i}(e)\) for \(i\in[k]\) and \(B=B(t,k)\). By choosing the \(n_{i}\) appropriately, we can obtain sequentially paths of any admissible length between these two extremes.
Similarly to Lemma 6.1, we obtain the following lemma.
**Lemma 6.5**.: _Let \(k,r,n_{0},t,B\) be positive integers and \(\psi,d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},\nu\) be positive constants such that \(1/d_{i}\in\mathbb{N}\) for \(i\in[2,k]\) and such that_
\[\frac{1}{n_{0}}\ll\frac{1}{t}\ll\frac{1}{B}\ll\frac{1}{r},\varepsilon\ll \varepsilon_{k+1},d_{2},\ldots,d_{k},\]
\[\varepsilon_{k+1}\ll\psi,d_{k+1},\nu,\frac{1}{k}.\]
_Then the following holds for all integers \(n\geq n_{0}\)._
_Let \(G\) be a \((1,k)\)-graph on \([n]\cup V\) where \(|V|=n\), and let \(\mathcal{J}\) be a \((\cdot,\cdot,\varepsilon,\varepsilon_{k+1},r)\)-regular slice for \(G\) on \([t]\cup V^{\prime}\), where \(|V^{\prime}|=t\), with density vector \(\textbf{d}=(d_{2},\ldots,d_{k})\). Let \(\mathcal{J}_{W_{i}}\) be the induced subcomplex of \(\mathcal{J}\) on \([t(i-1)/k+1,ti/k]\cup V^{\prime}\) for \(i\in[k]\), and let \(R_{W_{i}}:=R\left[[t(i-1)/k+1,ti/k]\cup V^{\prime}\right]\) be the induced subgraph of \(R:=R_{d_{k+1}}(G)\). Suppose that \(R_{W_{i}}\) is sequentially tightly connected for \(i\in[k]\) and that \(\textbf{w}_{i}\) is a fractional matching of size \(\mu_{i}=\sum_{e\in E(R_{W_{i}})}\textbf{w}_{i}(e)\) for \(i\in[k]\) with \(\mu_{i}(Z)\leq 1/k\) for each cluster \(Z\) and \(i\in[k]\). Also, let \(X\) and \(Y\) be \((k-1)\)-tuples of point clusters, and let \(S_{X}\) and \(S_{Y}\) be subsets of \(\mathcal{J}_{X}\) and \(\mathcal{J}_{Y}\) of sizes at least \(\nu|\mathcal{J}_{X}|\) and \(\nu|\mathcal{J}_{Y}|\) respectively. Finally, let \(W\) be a sequentially walk traversing all edges of each \(R_{W_{i}}\) from \(X\) to \(Y\) of length at most \(t^{2k+1}\), and denote \(\ell(W)\) by \(p\). For \(i\in[k]\), we have_
1. _for any_ \(\ell\) _divisible by_ \(k\) _with_ \(4k\leq\ell\leq(1-\psi)\sum_{i\in[k]}\mu_{i}kn/t\)_, there is a sequentially path_ \(P\) _in_ \(G\) _of length_ \(\ell-1+\ell(W)(k+1)\) _whose initial_ \((k-1)\)_-tuple belongs to_ \(S_{X}\) _and whose terminal_ \((k-1)\)_-tuple belongs to_ \(S_{Y}\)_,_
2. \(P\) _uses at most_ \(\sum_{i\in[k]}\mu_{i}(Z)n/t+B\) _vertices from any point cluster_ \(Z\in V^{\prime}\) _and at most_ \(k\mu_{i}(C)n/t+B\) _vertices from any color cluster_ \(C\in[t]\) _where_ \(\mu_{i}(Z^{\prime})=\sum_{Z^{\prime}\in e,e\in R_{W_{i}}}\textbf{w}_{i}(e)\) _for any cluster_ \(Z^{\prime}\)_._
### Connecting
Let us begin with the existence of extensible paths. The following proposition states that most tuples in the complex induced by an edge of the reduced graph of a regular slice also extend to that edge.
**Proposition 6.6**.: _Let \(k,m,t,r\in\mathbb{N}\) and \(\varepsilon,\varepsilon_{k+1},d_{2},\ldots,d_{k+1},\beta,c,\nu\) be such that_
\[1/m\ll 1/r,\varepsilon\ll c \ll\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[\varepsilon_{k+1} \ll\beta \ll d_{k+1},\nu.\]
_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) be a \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Let \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\) be an ordered edge in \(R\), then all but at most \(\beta|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|\) many tuples \((v_{1},\ldots,v_{k-1})\in\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\) are \((c,\nu)\)-extensible both left and rightwards to \(Y\)._
Proof.: Let \(P=(c_{1},\ldots,c_{2k},v_{1},\ldots,v_{3k-1})\) be a sequentially path. Partition its vertex set into \(k+1\) clusters \(X_{0},X_{1},\ldots,X_{k}\) such that \(X_{0}=\{c_{1},\ldots,c_{2k}\}\) and \(X_{i}=\{v_{j}:j\equiv i\text{ mod }k\}\) for \(i\in[k]\). Thus, \(P\) is a \((k+1)\)-partite \((k+1)\)-graph.
Let \(\mathcal{H}\) be the down-closure of the path \(P\), which is a \((k+1)\)-partite \((k+1)\)-complex. Let \(V_{1}=\{v_{1},\ldots,v_{k-1}\}\) and \(V_{2}=\{v_{2k+1},\ldots,v_{3k-1}\}\). Let \(\mathcal{H}^{\prime}\) be the induced subcomplex of \(\mathcal{H}\) on \(V_{1}\cup V_{2}\). Thus, \(\mathcal{H}^{\prime}\) is a \((k-1)\)-partite \((k-1)\)-complex on \(2k-2\) vertices. Let \(\mathcal{G}=\mathcal{J}\cup G_{\mathcal{J}}\).
Let \(\mathcal{H}^{\prime}_{\mathcal{G}}\) be the set of labelled partition-respecting copies of \(\mathcal{H}^{\prime}\) in \(\mathcal{G}\). It follows that
\[|\mathcal{H}^{\prime}_{\mathcal{G}}|=(1\pm\varepsilon_{k+1})|\mathcal{J}_{(Y_{ 1},\ldots,Y_{k-1})}|^{2}, \tag{4}\]
where the error term accounts for the fact that we do not count intersecting pairs of \((k-1)\)-tuples in \(\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\). Since \(Y\) is an edge of \(R\), any function \(\phi:V(P)\to V(R)\) such that \(\phi(X_{i})\subseteq Y_{i}\) is a homomorphism. By Lemma 4.7 with \(\beta^{2}\) playing the role of \(\beta\), we deduce that all but at most \(\beta^{2}|\mathcal{H}^{\prime}_{\mathcal{G}}|\) of the labelled partition-respecting copies of \(\mathcal{H}^{\prime}\) in \(\mathcal{G}\) extend to at least \(cm^{3k+1}\) labelled partition-respecting copies of \(\mathcal{H}\) in \(\mathcal{G}\), since \(c\ll d_{2},\ldots,d_{k-1}\). For each \(e\in\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\), let \(T(e)\) be the number of tuples \(e^{\prime}\) in \(\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\) such that \(e\cup e^{\prime}\) can be extended to at least \(cm^{3k+1}\) copies of \(\mathcal{H}\) in \(\mathcal{G}\). We have
\[\sum_{e\in\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}}T(e)\geq(1-2\beta^{2})|\mathcal{ J}_{(Y_{1},\ldots,Y_{k-1})}|^{2}. \tag{5}\]
Let \(S\subseteq\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\) be the set of \((k-1)\)-tuples \(e\) which are not \((c,\nu)\)-extensible leftwards to \(Y\), that is, with \(T(e)<\nu|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|\). Combining this with (5) and \(\beta\ll\nu\), we have
\[\sum_{e\in\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}}T(e)\leq|S|\cdot\nu|\mathcal{ J}_{(Y_{1},\ldots,Y_{k-1})}|+(|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|-|S|)| \mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|,\]
and therefore
\[|S|\leq\frac{2\beta^{2}}{1-\nu}|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|\leq \frac{\beta}{2}|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|.\]
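To spell out the algebra behind the last two displays, write \(J:=|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|\); then

\[(1-2\beta^{2})J^{2}\leq\nu|S|J+(J-|S|)J\quad\Longrightarrow\quad(1-\nu)|S|\leq 2\beta^{2}J\quad\Longrightarrow\quad|S|\leq\frac{2\beta^{2}}{1-\nu}J,\]

and the final bound \(\frac{2\beta^{2}}{1-\nu}J\leq\frac{\beta}{2}J\) holds since \(\beta\ll\nu\) gives \(4\beta\leq 1-\nu\).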
A symmetric argument shows that at most \(\frac{\beta}{2}|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|\) of the \((k-1)\)-tuples in \(\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\) are not \((c,\nu)\)-extensible rightwards to \(Y\). Thus, all but at most \(\beta|\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}|\) tuples in \(\mathcal{J}_{(Y_{1},\ldots,Y_{k-1})}\) are \((c,\nu)\)-extensible both left- and rightwards to \(Y\).
By Proposition 6.6, most tuples in the complex induced by an edge of the reduced graph of a regular slice extend to that edge. The following lemma allows us to connect up two extensible paths using either very few or quite a lot of vertices.
**Lemma 6.7**.: _Let \(k,r,m,t\in\mathbb{N}\), and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\lambda\) be such that_
\[1/m\ll 1/r,\varepsilon\ll c \ll\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[\lambda \ll\nu \ll 1/k,\] \[\varepsilon_{k+1} \ll d_{k+1}.\]
_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},H)\) be a \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup where \(\mathcal{P}\) has an initial partition of \([n]\cup V\) and \(H\) is a \((1,k)\)-graph on \([t]\cup V^{\prime}\). Suppose that \(H_{W_{i}}=H[[t(i-1)/k+1,ti/k]\cup V^{\prime}]\) is sequentially tightly connected for \(i\in[k]\). Let \(P_{1}\), \(P_{2}\subseteq G\) be \((c,\nu)\)-extensible paths such that \(P_{1}\) extends rightwards to \(X\) and \(P_{2}\) extends leftwards to \(Y\). Suppose that \(P_{1}\) and \(P_{2}\) are either identical or disjoint, and let \(W\) be a sequentially walk traversing each \(H_{W_{i}}\) of length at most \(t^{2k+1}\) that starts from \(X\) and ends with \(Y\). Let \(T\) be the joint connection set of \(P_{1}\) and \(P_{2}\). Suppose that \(T\) and \(S\subseteq V(G)\) are \(\lambda\)-sparse in \(\mathcal{P}\), \(V(P_{1})\cup V(P_{2})\subseteq S\) and \(T\cap S=\emptyset\). Then_
_(1) there is a sequentially path \(Q\) of length \(4k-1+(\ell(W)+2)(k+1)\) in \(G[V(\mathcal{P})]\) such that \(P_{1}QP_{2}\) is a sequentially path, containing no vertices of \(S\) and exactly \(6k+2\) vertices of \(T\),_
_(2) consider \(\psi\) with \(\varepsilon_{k+1}\ll\psi\) and let **w** be a fractional matching of size \(\mu=\sum_{i\in[k]}\sum_{e\in E(H_{W_{i}})}\textbf{w}_{i}(e)\geq 5/m\) such that \(\sum_{Z\in e,e\in H_{W_{i}}}\textbf{w}_{i}(e)\leq(1-2\lambda)/k\) for each \(Z\in\mathcal{P}\). There is a sequentially path \(Q\) of length \(\ell(W)+1\) mod \(k\) in \(G[V(\mathcal{P})]\) such that \(P_{1}QP_{2}\) is a sequentially path, containing no vertices of \(S\) and exactly \(6k+2\) vertices of \(T\). Moreover, there is a set \(U\subseteq V(\mathcal{P})\) of size at most \(\psi mt\) such that \(U\cup V(Q)\) has exactly \(\lceil\sum_{i\in[k]}\sum_{Z\in e,e\in H_{W_{i}}}\textbf{w}_{i}(e)m\rceil+B\) vertices in each point cluster \(Z\)._
Proof.: Let \(X=(X_{0},X_{1},\ldots,X_{k})\). Since \(P_{1}\) extends rightwards to \(X\), there exists a target set \(T_{1}\subseteq\mathcal{J}_{(X_{2},\ldots,X_{k})}\) of size \(|T_{1}|\geq\nu|\mathcal{J}_{(X_{2},\ldots,X_{k})}|\) such that for every \((v_{2},\ldots,v_{k})\in T_{1}\), there are at least
\(cm^{3k+1}\) many \((3k+1)\)-tuples \((c_{1},\ldots,c_{2k},w_{1},\ldots,w_{k},v_{1})\) with \(c_{i}\in T\cap X_{0}\) for \(i\in[2k]\), \(w_{i}\in T\cap X_{i}\) for \(i\in[k]\) and \(v_{1}\in T\cap X_{1}\) such that \(((c_{1},\ldots,c_{2k}),P_{1}(w_{1},\ldots,w_{k},v_{1},\ldots,v_{k}))\) is a sequentially path. Let \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\); similarly, \(P_{2}\) extends leftwards to \(Y\) with target set \(T_{2}\subseteq\mathcal{J}_{(Y_{2},\ldots,Y_{k})}\).
For each \(Z\in\mathcal{P}\), choose \(Z^{\prime}\subseteq Z\setminus(S\cup T)\) of size \(m^{\prime}=(1-2\lambda)m\), which is possible since \(S\) and \(T\) are \(\lambda\)-sparse. Let \(\mathcal{P}^{\prime}=\{Z^{\prime}\}_{Z\in\mathcal{P}}\), \(G^{\prime}=G[V(\mathcal{P}^{\prime})]\) and \(\mathcal{J}^{\prime}=\mathcal{J}[V(\mathcal{P}^{\prime})]\). By Lemma 4.11, \(\mathfrak{S}^{\prime}:=(G^{\prime},G^{\prime}_{\mathcal{J}},\mathcal{J}^{\prime},\mathcal{P}^{\prime},H)\) is a \((k,m^{\prime},2t,\sqrt{\varepsilon},\sqrt{\varepsilon_{k+1}},r,\mathbf{d})\)-regular setup.
For (2), let \(\mu^{\prime}=\mu/(1-2\lambda)\) be the scaled size of \(\mathbf{w}\) and let \(B\in\mathbb{N}\) be such that \(1/B\ll 1/r,\varepsilon\). Let \(\ell\) be the largest integer divisible by \(k\) with \(4k\leq\ell\leq(1-\psi/4)\mu^{\prime}m^{\prime}k\). Note that such an \(\ell\) exists since \((1-\psi/4)\mu^{\prime}m^{\prime}\geq 4\), where the latter inequality follows from \(\mu\geq 5/m\). Applying Lemma 6.5 with \(G^{\prime},\mathcal{J}^{\prime},W,\ell,\mathbf{w},\mu^{\prime}\) and \(T_{1},T_{2}\), we obtain a sequentially path \(Q^{\prime}\) whose initial \((k-1)\)-tuple belongs to \(T_{1}\) and whose terminal \((k-1)\)-tuple belongs to \(T_{2}\). Furthermore, \(Q^{\prime}\) has length \(\ell-1+\ell(W)(k+1)\) and uses at most \(\sum_{i\in[k]}\mu_{i}(Z)m+B\) vertices from any point cluster \(Z\), where \(\mu_{i}(Z)=\sum_{Z\in e,e\in H_{W_{i}}}\mathbf{w}_{i}(e)\) and \(B\ll\psi\mu mk\). Since \(\ell\geq(1-\psi/4)\mu km-k\), it follows that
\[\sum_{Z\in V^{\prime}}\sum_{i\in[k]}\mu_{i}(Z)m-\sum_{Z\in V^{ \prime}}|V(Q^{\prime})\cap Z|\] \[\leq\mu km-(1-\frac{\psi}{4})\mu km+k+1-\ell(W)(k+1)\] \[\leq\frac{\psi}{4}\mu km+k+1\] \[\leq\frac{\psi}{4}(1-2\lambda)tm+k+1\leq\frac{\psi}{2}mt.\]
Hence, there is a set \(U\subseteq V(\mathcal{P})\) of size at most \(\psi mt\) such that \(U\cup V(Q^{\prime})\) has \(\lceil\sum_{i\in[k]}\mu_{i}(Z)m\rceil+B\) vertices from any point cluster \(Z\in V^{\prime}\).
For (1), we can choose a path \(Q^{\prime}\) in the same way. The only difference is that in this case \(\mathbf{w}\) is a single edge of weight \(1\) and \(\ell=4k\). Hence, \(Q^{\prime}\) is a path of length \(4k-1+\ell(W)(k+1)\).
Finally, we use the above extensible paths to choose \(c_{1},\ldots,c_{k+1},w_{1},\ldots,w_{k},v_{1}\) and \(f_{1},\ldots,f_{k+1}\), \(v^{\prime}_{k},w^{\prime}_{1},\ldots,w^{\prime}_{k}\) in \(T\) such that for
\[Q=((c_{1},\ldots,c_{k+1})C(Q^{\prime})(f_{1},\ldots,f_{k+1}),(w_{1},\ldots,w_{k},v_{1})Q^{\prime}(v^{\prime}_{k},w^{\prime}_{1},\ldots,w^{\prime}_{k})),\]
the concatenation \(P_{1}QP_{2}\) is a sequentially path and \(Q\) is disjoint from \(S\), since \(S\cap T=\emptyset\) and \(T\cap V(Q^{\prime})=\emptyset\). It is clear that the length of \(Q\) in (1) is \(4k-1+(\ell(W)+2)(k+1)\) and the length of \(Q\) in (2) is \(\ell(W)+1\) mod \(k\).
**Proposition 6.8**.: _Let \(W\) be a sequentially walk in a \((1,k)\)-graph \(H\) on \([t]\cup V^{\prime}\), where \(|V^{\prime}|=t\), which starts from a \((1,k)\)-tuple \(X\) and ends with a \((1,k)\)-tuple \(Y\). There exists a sequentially walk \(W^{\prime}\) of length at most \(kt^{k+1}\) which starts from \(X\) and ends with \(Y\). Moreover, \(\ell(W^{\prime})=\ell(W)\) mod \(k\)._
Proof.: Suppose that \(\ell(W)=j\) mod \(k\) for some \(j\in[0,k-1]\). Let \(W^{\prime}\) be a vertex-minimal sequentially tight walk from \(X\) to \(Y\) of length \(j\) mod \(k\). Our goal is to show that every \((1,k)\)-tuple repeats at most \(k\) times in \(W^{\prime}\).
Assume that \(W^{\prime}\) contains \(k+1\) copies of the same \((1,k)\)-tuple \(Z\) and denote by \(n_{j}\) the position in \(W^{\prime}\) where the \(j\)th copy of \(Z\) begins. Note that \(n_{j}-n_{1}\not\equiv 0\) mod \(k\) for \(j>1\), since otherwise we could delete the vertices between positions \(n_{1}\) and \(n_{j}-1\), contrary to the minimality of \(W^{\prime}\). By the pigeonhole principle, there exist two indices \(j,j^{\prime}\) with \(2\leq j<j^{\prime}\leq k+1\) such that \(n_{j}-n_{1}\equiv n_{j^{\prime}}-n_{1}\) mod \(k\), that is, \(n_{j^{\prime}}-n_{j}\equiv 0\) mod \(k\). We can then reduce the length of \(W^{\prime}\) by deleting the vertices between positions \(n_{j}\) and \(n_{j^{\prime}}-1\), a contradiction.
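As an illustration of the pigeonhole step (not needed for the proof), take \(k=3\) and suppose some \((1,3)\)-tuple \(Z\) occurs four times, at positions \(n_{1}<n_{2}<n_{3}<n_{4}\). The three residues \(n_{2}-n_{1},n_{3}-n_{1},n_{4}-n_{1}\) are nonzero mod \(3\), hence lie in \(\{1,2\}\), so two of them coincide, say

\[n_{3}-n_{1}\equiv n_{4}-n_{1}\ (\mathrm{mod}\ 3)\quad\Longrightarrow\quad n_{4}-n_{3}\equiv 0\ (\mathrm{mod}\ 3),\]

and deleting the segment between positions \(n_{3}\) and \(n_{4}-1\) shortens \(W^{\prime}\) without changing its endpoints or its length mod \(3\).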
**Proposition 6.9**.: _Let \(j,k,t\in\mathbb{N}\) with \(j\in[k]\). Let \(W\) be a sequentially closed walk that is compatible with respect to an orientation \(\overrightarrow{H}\) of a \((1,k)\)-graph \(H\) on \([t]\cup V^{\prime}\) where \(|V^{\prime}|=t\). Let \(X_{1}\) and \(X_{2}\) be consistent with \(\overrightarrow{H}\). There exists a sequentially walk \(W^{\prime}\) of length at most \(kt^{k+1}\), which starts from \(X_{1}\) and ends with \(X_{2}\). Moreover, if \(W\) has length coprime to \(k\), then \(W^{\prime}\) can be chosen to have length \(j\) \(\mathrm{mod}\ k\)._
Proof.: For the first part, by Proposition 6.8, it suffices to show that there is a sequentially walk starting from \(X_{1}\) and ending with \(X_{2}\). Since \(X_{1}\) is consistent with \(\overrightarrow{H}\), there is a sequentially path \(W_{X_{1}}\) of length at most \(k-1\) from \(X_{1}\) to \(X_{1}^{\prime}\) in \(H\) where \(X_{1}^{\prime}\) is an oriented edge in \(\overrightarrow{H}\) which is a cyclic shift of \(X_{1}\). Similarly, there is a sequentially path \(W_{X_{2}}\) of length at most \(k-1\) from \(X_{2}\) to \(X_{2}^{\prime}\) in \(H\) where \(X_{2}^{\prime}\) is an oriented edge in \(\overrightarrow{H}\) which is a cyclic shift of \(X_{2}\). Since \(W\) is compatible with respect to an orientation \(\overrightarrow{H}\), there is a subwalk \(W_{X_{1}^{\prime}X_{2}^{\prime}}\subseteq W\) starting from \(X_{1}^{\prime}\) and ending with \(X_{2}^{\prime}\), hence \((C(X_{1})C(W_{X_{1}})C(W_{X_{1}^{\prime}X_{2}^{\prime}})C(W_{X_{2}})C(X_{2}), I(X_{1})I(W_{X_{1}})I(W_{X_{1}^{\prime}X_{2}^{\prime}})I(W_{X_{2}})I(X_{2}))\) is the desired \(W^{\prime}\).
Note that we may choose \(W_{X_{1}^{\prime}X_{2}^{\prime}}\) such that \(W^{\prime}\) has length \(j\) mod \(k\), by extending \(W_{X_{1}^{\prime}X_{2}^{\prime}}\) along the same \((1,k)\)-tuple with copies of \(W\), an appropriate number of times. This is possible since any number coprime to \(k\) is a generator of the finite cyclic group \(\mathbb{Z}/k\mathbb{Z}\).
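For example (an illustration only), if \(k=5\) and \(\ell(W)\equiv 3\ (\mathrm{mod}\ 5)\), then appending \(0,1,2,3,4\) copies of \(W\) changes the length of the walk by

\[0,\,3,\,6,\,9,\,12\equiv 0,\,3,\,1,\,4,\,2\ (\mathrm{mod}\ 5),\]

so every residue \(j\) mod \(5\) is attainable, precisely because \(\gcd(3,5)=1\).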
**Lemma 6.10** (Connecting Lemma).: _Let \(k,m,r,t\in\mathbb{N}\), \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},p,\nu,\lambda,\zeta\) be such that_
\[1/m\ll 1/r,\varepsilon\ll 1/t,\zeta,\varepsilon_{k+1},d_{2},\ldots,d_{k},\]
\[\zeta\ll p\ll d_{2},\ldots,d_{k},\]
\[1/t\ll\varepsilon_{k+1}\ll d_{k+1},\nu\leq 1/k,\]
\[\lambda\ll\nu\ll 1/k.\]
_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \((G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},H)\) be a \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup with \(H\) being sequentially tightly connected. Let \(\overrightarrow{H}\) be an orientation of \(H\) with a compatible closed walk \(W\). Suppose that \(\mathcal{C}\) is a collection of pairwise disjoint \((p,\nu)\)-extensible paths consistent with \(\overrightarrow{H}\) and with joint connection set \(T\). Assume that_
1. \(|\mathcal{C}|\leq\zeta m\)_,_
2. \(V(\mathcal{C})\) _is_ \(\lambda\)_-sparse in_ \(\mathcal{P}\)_,_
3. \(V(\mathcal{C})\cap T=\emptyset\)_._
_For any two elements \(P_{1},P_{2}\) of \(\mathcal{C}\), there is a sequentially path \(P\) in \(G\) such that_
1. \(P\) _connects every path of_ \(\mathcal{C}\)_,_
2. \(P\) _starts from_ \(P_{1}\) _and ends with_ \(P_{2}\)_,_
3. \(V(P)\setminus V(\mathcal{C})\subseteq V(\mathcal{P})\)_,_
4. \(V(P)\setminus V(\mathcal{C})\) _intersects in at most_ \(10k^{2}\mathcal{C}_{Z}+t^{2t+3k+2}\) _vertices with each cluster_ \(Z\in\mathcal{P}\)_, where_ \(\mathcal{C}_{Z}\) _denotes the number of paths of_ \(\mathcal{C}\) _intersecting with_ \(Z\)_._
Proof.: Choose a set \(T^{\prime}\) from \(V(G)\) by including each vertex of \(V(\mathcal{P})\) independently at random with probability \(p\). By Proposition 1.18 and the union bound, the set \(T^{\prime}\) is \((2p)\)-sparse with probability \(1-2t\exp(-\Omega(m))\). By Proposition 1.19, the set \(T^{\prime}\) is a connection set of a fixed \((p^{3k+2}/2,\nu)\)-extensible path in \(\mathcal{C}\) with probability \(1-2m^{k-1}\exp(-\Omega(m))\). Since \(|\mathcal{C}|\leq\zeta m\), with positive probability we get a set \(T^{\prime}\) satisfying all these properties.
Initialise \(S=V(\mathcal{C})\). While there are two paths \(Q_{1},Q_{2}\in\mathcal{C}\) such that the extension to the right of \(Q_{1}\) equals the extension to the left of \(Q_{2}\), apply Lemma 6.7 (1) with \(\ell(W)=kp^{k+4}/2\) to obtain a path \(Q\) of length \(10k^{2}\) which avoids \(S\) and has exactly \(6k+2\) vertices in \(T^{\prime}\). Add \(V(Q)\) to \(S\), replace \(Q_{1},Q_{2}\) with the concatenation \(Q_{1}QQ_{2}\) in \(\mathcal{C}\) and delete the \(6k+2\) vertices used by \(Q\) from \(T^{\prime}\). Denote the set of paths after this procedure by \(\mathcal{C}^{\prime}\).
Note that the size of \(S\) grows by at most \(10k^{2}|\mathcal{C}|\leq 10k^{2}\zeta m\leq\lambda m\), and we delete at most \((6k+2)|\mathcal{C}|\leq(6k+2)\zeta m\leq p^{3k+2}m/4\) vertices from \(T^{\prime}\) throughout this process, since \(\zeta\ll p\). This implies that every path of \(\mathcal{C}\) remains \((p^{3k+2}/4,\nu)\)-extensible with connection set \(T^{\prime}\). Hence the conditions of Lemma 6.7 (1) are satisfied in every step and \(\mathcal{C}^{\prime}\) is well-defined.
Note that when the procedure ends, \(\mathcal{C}^{\prime}\) has size at most \(t^{2t}\). Moreover, the paths of \(\mathcal{C}^{\prime}\) inherit the property of being consistent with \(\overrightarrow{H}\). We continue by connecting up the paths of \(\mathcal{C}^{\prime}\) into the desired path \(P\) along the orientation. As the paths of \(\mathcal{C}^{\prime}\) are consistent with \(\overrightarrow{H}\), the left and right extensions of each path in \(\mathcal{C}^{\prime}\) are contained in the walk \(W\). Since \(W\) is compatible with \(\overrightarrow{H}\), we can apply Proposition 6.9 to obtain a sequentially walk in \(H\) of length at most \(t^{2k+1}\) between the left and right end of each path in \(\mathcal{C}^{\prime}\). Using Lemma 4.11 and Lemma 6.7 (1), we can connect up the paths of \(\mathcal{C}^{\prime}\) using at most \(t^{2t+3k+2}\) further vertices of \(V(\mathcal{P})\).
Thus, \(P\) contains every path in \(\mathcal{C}\) as a subpath and \(V(P)\setminus V(\mathcal{C})\subseteq V(\mathcal{P})\). Moreover, note that \(V(\mathcal{C}^{\prime})\setminus V(\mathcal{C})\) intersects each \(Z\in\mathcal{P}\) in at most \(10k^{2}\mathcal{C}_{Z}\) vertices, where \(\mathcal{C}_{Z}\) denotes the number of paths of \(\mathcal{C}\) that intersect \(Z\). It is clear that \(P\) can start and end with any two paths of \(\mathcal{C}\).
Proof of Lemma 5.3.: Let \(P_{1}=P\). Suppose that \(P_{1}\) extends rightwards to \(X\) and leftwards to \(Y\). By Proposition 6.6, there exists a path \(P_{2}\) of length \(k-1\) which \((c,\nu)\)-extends both leftwards and rightwards to \(Y\). Moreover, we can assume that \(V(P_{1})\) is disjoint from \(V(P_{2})\) and \(T_{2}\), where \(T_{2}\) is the connection set of \(P_{2}\). By Propositions 1.18 and 1.19, we can choose a \(\lambda\)-sparse vertex set \(T^{\prime}\) such that \(P_{1}\), \(P_{2}\) are \((c^{3k+2}/2,\nu)\)-extensible paths with connection set \(T^{\prime}\).
Firstly, let \(S_{1}=V(P_{1})\cup V(P_{2})\), and choose \(\kappa\) such that \(\lambda\ll\kappa\ll\gamma\). For each \(Z\in\mathcal{P}\), we can select a subset \(Z^{\prime}\) of \(Z\) of size \(m^{\prime}=\kappa m\) such that \(Z\cap S_{1}\subseteq Z^{\prime}\), since \(S_{1}\) is \(2\lambda\)-sparse, \(1/m\ll 1/t\ll\alpha\ll\lambda\) and \(2\lambda\ll\kappa\). Let \(\mathcal{P}^{\prime}=\{Z^{\prime}\}_{Z\in\mathcal{P}}\), \(V(\mathcal{P}^{\prime})=\bigcup_{Z\in\mathcal{P}}Z^{\prime}\), let \(G^{\prime}=G[V(\mathcal{P}^{\prime})]\), \(G^{\prime}_{\mathcal{J}^{\prime}}=G_{\mathcal{J}}[V(\mathcal{P}^{\prime})]\) be the corresponding induced subgraphs and let \(\mathcal{J}^{\prime}=\mathcal{J}[V(\mathcal{P}^{\prime})]\) be the induced subcomplex. By Lemma 4.11, \(\mathfrak{S}^{\prime}=(G^{\prime},G^{\prime}_{\mathcal{J}^{\prime}},\mathcal{J}^{\prime},\mathcal{P}^{\prime},H)\) is a \((k,m^{\prime},2t,\sqrt{\varepsilon},\sqrt{\varepsilon_{k+1}},r,d_{2},\ldots,d_{k+1})\)-regular setup.
Now we define a fractional matching that complements the discrepancy of \(S_{1}\) in the clusters of \(\mathcal{P}\). Define \(\mathbf{b}_{i}\in\mathbb{R}^{V(H_{W_{i}})}\) by setting \(\mathbf{b}_{i}(Z^{\prime})=|Z^{\prime}\setminus S_{1}|/|Z^{\prime}|\) for every \(Z\in V(H_{W_{i}})\). Recall that
\(|S_{1}\cap Z|\leq 2\lambda m\), \(|Z^{\prime}|=\kappa m\) and \(\lambda\ll\kappa,\gamma\). It follows that

\[1-\gamma\leq 1-\frac{2\lambda}{\kappa}\leq 1-\frac{|S_{1}\cap Z|}{|Z^{\prime}|}\leq\mathbf{b}_{i}(Z^{\prime})\leq 1.\]
Since \(H_{W_{i}}\) is \(\gamma\)-robustly matchable, there is a fractional matching \(\mathbf{w}_{i}\) such that \(\sum_{Z\in e,e\in H_{W_{i}}}\mathbf{w}_{i}(e)=\mathbf{b}_{i}(Z^{\prime})/k\) for every cluster \(Z^{\prime}\in\mathcal{P}^{\prime}\) of \(H_{W_{i}}\), where \(i\in[k]\). Consider \(\psi>0\) with \(\varepsilon_{k+1}\ll\psi\ll\alpha\). By Lemma 6.7, there exists a sequentially path \(Q_{1}\) in \(G^{\prime}\) such that \(P_{2}Q_{1}P_{1}\) is a sequentially path in \(G\) which contains no vertices of \(S_{1}\) and exactly \(4k+2\) vertices of \(T^{\prime}\). Moreover, there is a set \(U\subseteq V(\mathcal{P})\) of size at most \(\psi mt\) such that \(U\cup V(Q_{1})\) has \(\lceil\sum_{i\in[k]}\sum_{Z\in e,e\in H_{W_{i}}}\mathbf{w}_{i}(e)\kappa m\rceil+B\) vertices in each point cluster \(Z\). In other words, \(V(P_{2}Q_{1}P_{1})\cup U\) has \(\kappa m+B\) vertices in each point cluster of \(V(H)\) and uses \((\kappa m+B)(1-\alpha)t\) vertices of \(V\), since \(|V(L_{H}(i))|\geq(1-\alpha)t\) for \(i\in[t]\).
We now choose the second path \(Q_{2}\). Note that \(P_{2}Q_{1}P_{1}\) has right extension \(X\) and left extension \(Y\), which are consistent with \(\overrightarrow{H}\). Since \(W\) is compatible with \(\overrightarrow{H}\), we can apply Proposition 6.9 to obtain a sequentially walk \(W^{\prime}\) in \(H\) of length \(p\leq t^{2k+1}\) starting from \(X\) and ending with \(Y\). Moreover, since \(W\) has length coprime to \(k\), we can choose \(W^{\prime}\) such that
\[p+1=|V(G)\setminus V(P_{2}Q_{1}P_{1})|\text{ mod }k.\]
Let \(S_{2}=V(P_{2}Q_{1}P_{1})\) and \(T^{\prime\prime}=T^{\prime}\setminus S_{2}\). Define \(\mathbf{c}_{i}\in\mathbb{R}^{V(H_{W_{i}})}\) by setting \(\mathbf{c}_{i}(Z)=(m-|Z\cap S_{2}|)/m\) for every \(Z\in V(H_{W_{i}})\). Note that \(1-\gamma\leq 1-\kappa-\psi\leq\mathbf{c}_{i}(Z)\leq 1\). Since \(H_{W_{i}}\) is robustly matchable, there is a fractional matching \(\mathbf{z}_{i}\) such that \(\sum_{Z\in e,e\in H_{W_{i}}}\mathbf{z}_{i}(e)=\mathbf{c}_{i}(Z)/k\) for every \(Z\in\mathcal{P}\) of \(H_{W_{i}}\). By Lemma 6.7, there exists a sequentially path \(Q_{2}\) in \(G\) of length \(p+1\) mod \(k\) which contains no vertices of \(S_{2}\) and exactly \(4k+2\) vertices of \(T^{\prime\prime}\) such that \(P_{2}Q_{1}P_{1}Q_{2}\) is a sequentially cycle. Besides, there is a set \(U^{\prime}\subseteq V(\mathcal{P})\) of size at most \(\psi mt\) such that \(U^{\prime}\cup V(Q_{2})\) has \(\lceil\sum_{i\in[k]}\sum_{Z\in e,e\in H_{W_{i}}}\mathbf{z}_{i}(e)m\rceil+B\) vertices in each point cluster \(Z\). Thus, \(U^{\prime}\cup V(Q_{2})\) uses at least \((1-\kappa)m(1-\alpha)t\) vertices of \(V\). Denote the set of vertices left uncovered in all clusters of \(\mathcal{P}\) by \(M\).
Note that \(P_{2}Q_{1}P_{1}Q_{2}\) contains all vertices of \(V(G)\) but \(M\), \(U\) and \(U^{\prime}\). We know that \(|M|\leq\alpha mt\), \(|U|\leq\psi mt,|U^{\prime}|\leq\psi mt\). Thus \(P_{2}Q_{1}P_{1}Q_{2}\) covers all but at most \(\alpha mt+2\psi mt\leq 3\alpha n\leq\eta n\) vertices. Since the length of \(Q_{2}\) is \(p+1\) mod \(k\), it follows that \(|V\setminus V(P_{2}Q_{1}P_{1}Q_{2})|\) is divisible by \(k\).
## 7. Absorption
We will give the proof of Lemma 5.2 in this section. The method can be sketched as follows. We define an absorbing gadget which absorbs a set \(T\) of \(k\) vertices and a set \(O\) of \(k\) colors. For each pair \((T,O)\), the absorbing gadgets are numerous, which allows us to choose, by a probabilistic argument, a small family of vertex-disjoint gadgets such that every \((T,O)\) has many absorbing gadgets in the family. Connecting all these gadgets yields the desired absorbing path.
This section is organised as follows. In Subsection 7.1, we attach vertices to regular complexes, since the gadgets we need should be well-integrated into regular setups. In Subsection 7.2, we count the number of absorbing gadgets for each \((T,O)\). In Subsection 7.3, we select a well-behaved family of absorbing gadgets, which is used to absorb a small number of arbitrary sets of \(k\) vertices and \(k\) colors.
### Technical Tools
In this part, we will obtain some results to help us attach vertices to regular complexes. Let \(H\) be a \((1,k)\)-graph with vertex set \([n]\cup V\) and let \(\mathcal{J}\) be a regular slice with cluster set \(\mathcal{P}\). Given a \((0,k-1)\)-subset \(X\subseteq\mathcal{P}\), \(\mathcal{J}_{X}\) is an \(|X|\)-partite \(|X|\)-graph containing all edges of the \(|X|\)-th level of \(\mathcal{J}\). For any \(v\in V\), \(\delta>0\) and any color cluster \(C\), let
\[N_{\mathcal{J}}((v,C),\delta)=\{X\subseteq\mathcal{P}:|X|=k-1,\text{ for any }c\in C,\ |N_{H}((v,c);\mathcal{J}_{X})|>\delta|\mathcal{J}_{X}|\}.\]

The set \(N_{\mathcal{J}}((c,Z),\delta)\) for a color \(c\in[n]\) and a point cluster \(Z\) is defined analogously, as is \(N_{\mathcal{J}}((v,c),\delta)\) for a single color \(c\).
**Lemma 7.1**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},\mu,\delta\) be such that_
\[1/m\ll 1/r,\varepsilon\ll\varepsilon_{k+1},d_{2},\ldots,d_{k},\]
\[\varepsilon_{k+1}\ll d_{k+1}\leq 1/k,\]
_and_
\[\varepsilon_{k+1}\ll\mu\ll\delta.\]
_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \((H,H_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) be a representative \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Suppose that \(H\) has minimum relative \((1,1)\)-degree at least \(\delta+\mu\) with vertex set \([n]\cup V\). Then for any \(v\in V\) and any color cluster \(C\), we have_
\[|N_{\mathcal{J}}((v,C),\frac{\mu}{3})|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1 }.\]
_For any \(c\in[n]\) and any point cluster \(Z\), we have_
\[|N_{\mathcal{J}}((c,Z),\frac{\mu}{3})|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1 }.\]
Proof.: Let \(v\in V\) and \(c\in C\) be arbitrary. The minimum relative degree condition implies that \(\overline{\deg}_{H}(v,c)\geq\delta+\mu\). Since the regular setup is representative and \(\varepsilon_{k+1}\ll\mu\), we have \(|\overline{\deg}_{H}(v,c)-\overline{\deg}_{H}((v,c);\mathcal{J})|< \varepsilon_{k+1}\) and
\[\deg_{H}((v,c),\mathcal{J}^{(k-1)})\geq(\delta+\mu-\varepsilon_{k+1})| \mathcal{J}^{(k-1)}|\geq(\delta+\frac{2}{3}\mu)|\mathcal{J}^{(k-1)}|.\]
For any \((0,k-1)\)-subset \(X\) of \(\mathcal{P}\), \(\mathcal{J}_{X}\) corresponds to the \((k-1)\)-edges of \(\mathcal{J}^{(k-1)}\) which are \(X\)-partite. Define \(d_{X}=\prod_{i=2}^{k-1}d_{i}^{\binom{k-1}{i}}\). By Lemma 4.9, we have \(|\mathcal{J}_{X}|=(1\pm\varepsilon_{k+1})d_{X}m^{k-1}\). By summing over all the \((0,k-1)\)-subsets of \(\mathcal{P}\), we have
\[|\mathcal{J}^{(k-1)}|\geq(1-\varepsilon_{k+1})\binom{t}{k-1}d_{X}m^{k-1}.\]
Moreover, letting \(X\) range over all \((0,k-1)\)-subsets of \(\mathcal{P}\), we have
\[\sum_{X}|N_{H}((v,c);\mathcal{J}_{X})|=\deg_{H}((v,c);\mathcal{J}^{(k-1)}) \geq(\delta+\frac{2}{3}\mu)|\mathcal{J}^{(k-1)}|.\]
Finally, we obtain
\[(\delta+\frac{2}{3}\mu)|\mathcal{J}^{(k-1)}|\] \[\leq\sum_{X}|N_{H}((v,c);\mathcal{J}_{X})|\leq\sum_{X\in N_{ \mathcal{J}}((v,c),\mu/3)}|\mathcal{J}_{X}|+\sum_{X\notin N_{\mathcal{J}}((v,c),\mu/3)}\frac{\mu}{3}|\mathcal{J}_{X}|\] \[\leq\left(|N_{\mathcal{J}}((v,c),\mu/3)|+\frac{\mu}{3}\left( \binom{t}{k-1}-|N_{\mathcal{J}}((v,c),\mu/3)|\right)\right)(1+\varepsilon_{k+1 })d_{X}m^{k-1}\] \[\leq\left((1-\frac{\mu}{3})|N_{\mathcal{J}}((v,c),\mu/3)|+\frac{ \mu}{3}\binom{t}{k-1}\right)\frac{1+\varepsilon_{k+1}}{1-\varepsilon_{k+1}} \frac{|\mathcal{J}^{(k-1)}|}{\binom{t}{k-1}}\] \[\leq\left(|N_{\mathcal{J}}((v,c),\mu/3)|+\frac{\mu}{3}\binom{t}{k- 1}\right)(1+2\varepsilon_{k+1})\frac{|\mathcal{J}^{(k-1)}|}{\binom{t}{k-1}}.\]
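To extract the claimed bound from this chain, abbreviate \(N:=|N_{\mathcal{J}}((v,c),\mu/3)|\); dividing by \(|\mathcal{J}^{(k-1)}|/\binom{t}{k-1}\) and rearranging gives

\[N\geq\Big(\frac{\delta+\frac{2}{3}\mu}{1+2\varepsilon_{k+1}}-\frac{\mu}{3}\Big)\binom{t}{k-1}\geq\Big(\delta+\frac{\mu}{3}-2\varepsilon_{k+1}\big(\delta+\tfrac{2}{3}\mu\big)\Big)\binom{t}{k-1}\geq\Big(\delta+\frac{\mu}{4}\Big)\binom{t}{k-1},\]

where the last step uses \(\varepsilon_{k+1}\ll\mu\) together with \(\delta+\frac{2}{3}\mu\leq 1\) (recall that relative degrees are at most \(1\)).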
Thus, for any \(v\in V\) and \(c\in C\), we have
\[|N_{\mathcal{J}}((v,c),\mu/3)|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1},\]
and by definition, the following holds for any \(v\in V\) and color cluster \(C\),
\[|N_{\mathcal{J}}((v,C),\mu/3)|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1}.\]
Similarly, the following result holds for any \(c\in[n]\) and any point cluster \(Z\):
\[|N_{\mathcal{J}}((c,Z),\mu/3)|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1}.\]
**Lemma 7.2**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},\mu,\lambda\) be such that_
\[1/m\ll 1/r,\varepsilon\ll\varepsilon_{k+1},d_{2},\ldots,d_{k},\]
\[\varepsilon_{k+1}\ll d_{k+1}\leq 1/k,\]
_and_
\[\varepsilon_{k+1}\ll\lambda\ll\mu.\]
_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \((H,H_{\mathcal{J}},\mathcal{J},\mathcal{P},R)\) be a \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Let \(T\subseteq V(H)\) such that \(|Z_{1}\cap T|=|Z_{2}\cap T|\leq\lambda m\) for every \(Z_{1},Z_{2}\in\mathcal{P}\). Let \(Z^{\prime}=Z\setminus T\) for each \(Z\in\mathcal{P}\), and let \(\mathcal{J}^{\prime}=\mathcal{J}[\bigcup Z^{\prime}]\) be the induced subcomplex. For every \(v\in V\) and color cluster \(C\), we have_
\[|N_{\mathcal{J}}((v,C),2\mu)|\leq|N_{\mathcal{J}^{\prime}}((v,C),\mu)|,\]
_and for every \(c\in[n]\) and point cluster \(Z\), we have_
\[|N_{\mathcal{J}}((c,Z),2\mu)|\leq|N_{\mathcal{J}^{\prime}}((c,Z),\mu)|.\]
Proof.: Let \(v\in V\), let \(C\) be a color cluster and let \(X\in N_{\mathcal{J}}((v,C),2\mu)\) be a \((0,k-1)\)-set. By definition, we have \(|N_{H}((v,c);\mathcal{J}_{X})|>2\mu|\mathcal{J}_{X}|\) for any \(c\in C\). Let \(X=\{X_{1},\ldots,X_{k-1}\}\) and let \(X^{\prime}=\{X^{\prime}_{1},\ldots,X^{\prime}_{k-1}\}\) be the corresponding clusters in the complex \(\mathcal{J}^{\prime}\). Our goal is to prove that \(X^{\prime}\in N_{\mathcal{J}^{\prime}}((v,C),\mu)\).
Let \(\varepsilon\ll\beta\ll\varepsilon_{k+1}\) and \(d_{X}=\prod_{i=2}^{k-1}d_{i}^{\binom{k-1}{i}}\). By Lemma 4.9, we have
\[|\mathcal{J}_{X}|=(1\pm\beta)d_{X}m^{k-1}\]
and
\[|N_{H}((v,c);\mathcal{J}_{X})|>2\mu|\mathcal{J}_{X}|\geq 2\mu(1-\beta)d_{X}m^{k-1}.\]
Let \(m^{\prime}=|X_{1}\setminus T|\); then \(|Z^{\prime}|=m^{\prime}\) for each \(Z\in\mathcal{P}\), and note that \(m^{\prime}\geq(1-\lambda)m\). By Lemma 4.11, \(\mathcal{J}^{\prime}\) is a \((\cdot,\cdot,\sqrt{\varepsilon},\sqrt{\varepsilon_{k+1}},r)\)-regular slice. By Lemma 4.9, we have
\[(1+\beta)d_{X}(m^{\prime})^{k-1}\geq|\mathcal{J}_{X^{\prime}}^{\prime}|\geq(1 -\beta)d_{X}(m^{\prime})^{k-1}\geq(1-\beta)(1-\lambda)^{k-1}d_{X}m^{k-1}.\]
Since \(\beta\ll\varepsilon_{k+1}\ll\lambda\ll\mu\), we have
\[|N_{H}((v,c);\mathcal{J}_{X^{\prime}}^{\prime})| \geq|N_{H}((v,c);\mathcal{J}_{X})|-(|\mathcal{J}_{X}|-|\mathcal{ J}_{X^{\prime}}^{\prime}|)\] \[\geq(1-\beta)(2\mu-(1-(1-\lambda)^{k-1}))d_{X}m^{k-1}\] \[\geq\mu(1+\beta)d_{X}m^{k-1}\geq\mu|\mathcal{J}_{X^{\prime}}^{ \prime}|.\]
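The middle inequality in this chain is where \(\lambda\ll\mu\) enters: since \(1-(1-\lambda)^{k-1}\leq(k-1)\lambda\leq\mu/2\) (here we use the standard convention that constants on the left of the hierarchy may be chosen small in terms of all constants to their right, including \(k\)), we get

\[(1-\beta)\Big(2\mu-\big(1-(1-\lambda)^{k-1}\big)\Big)\geq(1-\beta)\cdot\frac{3\mu}{2}\geq(1+\beta)\mu,\]

where the last step holds since \(\beta\leq 1/5\).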
Thus, we obtain that \(X^{\prime}\in N_{\mathcal{J}^{\prime}}((v,C),\mu)\).
Similarly, for every \(c\in[n]\) and point cluster \(Z\), we have
\[|N_{\mathcal{J}}((c,Z),2\mu)|\leq|N_{\mathcal{J}^{\prime}}((c,Z),\mu)|.\]
In a \((k+1)\)-uniform sequentially cycle, the link graph of a point corresponds to a \(k\)-uniform sequentially path. Thus, we will look for sequentially paths in the neighborhoods of vertices inside a regular complex. The following lemma states that by looking at a \(\mu\)-fraction of the \((1,k-1)\)-edges of a regular complex, we will find many sequentially paths.
**Lemma 7.3**.: _Let \(1/m\ll\varepsilon\ll d_{2},\ldots,d_{k},1/k,\mu\) and \(k\geq 3\). Suppose that \(\mathcal{J}\) is a \((\cdot,\cdot,\varepsilon)\)-equitable complex with density vector \(\textbf{d}=(d_{2},\ldots,d_{k})\) and ground partition \(\mathcal{P}\), where the size of each vertex class is \(m\). Let \(W=\{W_{0},W_{1},\ldots,W_{k-1}\}\subseteq\mathcal{P}\). Let \(S\subseteq\mathcal{J}_{W}\) have size at least \(\mu|\mathcal{J}_{W}|\) and let \(Q\) be a \(k\)-uniform sequentially path \(((c_{1},\ldots,c_{k}),(v_{1},\ldots,v_{2k-2}))\) with vertex classes \(\{X_{0},X_{1},\ldots,X_{k-1}\}\) such that \(v_{i},v_{i+k-1}\in X_{i}\) for \(i\in[k-1]\) and \(c_{j}\in X_{0}\) for \(j\in[k]\). Let \(\mathcal{Q}\) be the down-closed \(k\)-complex generated by \(Q\) and let \(\mathcal{Q}_{S}\subseteq\mathcal{Q}_{\mathcal{J}}\) be the copies of \(\mathcal{Q}\) whose edges in the \(k\)-th level are in \(S\). We have_
\[|\mathcal{Q}_{S}|\geq\frac{1}{2}\left(\frac{\mu}{8k}\right)^{k+1}|\mathcal{Q} _{\mathcal{J}}|.\]
Proof.: The proof consists of three steps. Firstly, we use the dense version of the counting and extension lemma to count the number of various hypergraphs in \(\mathcal{J}\). Secondly, we remove some \((1,k-1)\)-tuples without good properties. Finally, we use an iterative procedure to return sequentially paths using good \((1,k-1)\)-tuples, as desired.
Firstly, let \(\beta\) be such that \(\varepsilon\ll\beta\ll d_{2},\ldots,d_{k},1/k,\mu\). Define
\[d_{a}=\prod_{i=2}^{k-2}d_{i}^{\binom{k-2}{i}},d_{b}=\prod_{i=2}^{k-2}d_{i}^{ \binom{k}{i}-\binom{k-2}{i}}\cdot\prod_{i=k-1}^{k}d_{i}^{\binom{k}{i}}.\]
Let \(W^{\prime}=W\setminus\{W_{0},W_{k-1}\}\). By Lemmas 4.8 and 4.9, we have
\[|\mathcal{J}_{W}|=(1\pm\beta)d_{a}d_{b}m^{k},\] \[|\mathcal{J}_{W^{\prime}}|=(1\pm\beta)d_{a}m^{k-2},\] \[|\mathcal{Q}_{\mathcal{J}}|=(1\pm\beta)d_{a}d_{b}^{k}m^{3k-2}. \tag{6}\]
Since \(S\subseteq\mathcal{J}_{W}\) with \(|S|\geq\mu|\mathcal{J}_{W}|\), together with (6), we have
\[|S|\geq(1-\beta)\mu d_{a}d_{b}m^{k}.\]
Let \(B_{W^{\prime}}\subseteq\mathcal{J}_{W^{\prime}}\) be the \((k-2)\)-edges which are not extensible to \((1\pm\beta)d_{b}m^{2}\) copies of a \(k\)-edge in \(\mathcal{J}_{W}\). By Lemma 4.10, we have
\[|B_{W^{\prime}}|\leq\beta|\mathcal{J}_{W^{\prime}}|.\]
Secondly, we delete from \(S\) the edges which contain a \((k-2)\)-set from \(B_{W^{\prime}}\) to obtain \(S^{\prime}\); the number of edges deleted is at most
\[|B_{W^{\prime}}|m^{2}\leq\beta|\mathcal{J}_{W^{\prime}}|m^{2}\leq\beta(1+\beta )d_{a}m^{k}\leq|S|/3,\]
since \(\beta\ll\mu,d_{2},\ldots,d_{k}\). Thus, we have \(|S^{\prime}|\geq 2|S|/3\). Furthermore, if there is any partite \((k-2)\)-set \(T\) in \(\mathcal{J}\) which lies in less than \(\mu d_{b}m^{2}/(4k)\) edges of \(S^{\prime}\), then we delete all edges in \(S^{\prime}\) containing \(T\) to obtain \(S^{\prime\prime}\) and iterate this until no further deletions are possible. Note that the number of partite \((k-2)\)-sets supported in the clusters of \(W\setminus\{W_{0}\}\) is \((k-1)(1\pm\beta)d_{a}m^{k-2}\). Thus the number of edges deleted is at most
\[(k-1)(1+\beta)d_{a}m^{k-2}\frac{\mu d_{b}m^{2}}{4k}\leq(1+\beta)\frac{\mu d_{ a}d_{b}m^{k}}{4}\leq\frac{|S|}{3}.\]
Thus, \(|S^{\prime\prime}|\geq|S|/3\). Each partite \((k-2)\)-set in \(W_{1},\ldots,W_{k-1}\) is either contained in zero edges of \(S^{\prime\prime}\) or in at least \(\mu d_{b}m^{2}/(4k)\) edges in \(S^{\prime\prime}\).
Finally, we use the properties of \(S^{\prime\prime}\) to construct many labelled partition-respecting paths in \(\mathcal{Q}_{S}\).
**Step 1.** Select \(T=\{x_{1},\ldots,x_{k-2}\}\in\mathcal{J}_{W^{\prime}}\) which is contained in at least \(\mu d_{b}m^{2}/4\) edges in \(S^{\prime\prime}\).
**Step 2.** Choose \((c_{1},x_{k-1})\) such that \(\{c_{1},x_{1},x_{2},\ldots,x_{k-1}\}\in S^{\prime\prime}\) and \(c_{1},x_{k-1}\) are not in \(T\).
**Step 3.** For \(i\in[k,2k-2]\), choose \((c_{i-k+2},x_{i})\) such that \(\{c_{i-k+2},x_{i-k+2},\ldots,x_{i}\}\in S^{\prime\prime}\) and \(c_{i-k+2},x_{i}\) are not used before.
This constructs a copy of \(\mathcal{Q}\) in \(\mathcal{Q}_{S}\) on \(3k-2\) vertices such that each edge in the \(k\)-th level is in \(S^{\prime\prime}\), and thus in \(S\). Next, we bound the size of \(\mathcal{Q}_{S}\) from below.
In Step 1, let \(G\subseteq\mathcal{J}_{W^{\prime}}\) be the set of \((k-2)\)-sets which are contained in fewer than \(\mu d_{b}m^{2}/4\) edges in \(S^{\prime\prime}\). We have
\[\frac{|S|}{3}\leq|S^{\prime\prime}|=\sum_{T\in\mathcal{J}_{W^{\prime}}}\deg_{ S^{\prime\prime}}(T)\leq|G|\frac{\mu}{4}d_{b}m^{2}+(|\mathcal{J}_{W^{\prime}}|-|G| )d_{b}m^{2}(1+\beta),\]
which gives \(|G|\leq(1-\beta)(1-\mu/12)d_{a}m^{k-2}\); thus, the number of choices for \(T\) is at least \(|\mathcal{J}_{W^{\prime}}|-|G|\geq(\mu/13)d_{a}m^{k-2}\). In Step 2, we have at least \(\mu d_{b}m^{2}/4\) choices for \((c_{1},x_{k-1})\). In Step 3, \(\{x_{i-k+2},\ldots,x_{i-1}\}\) is a \((k-2)\)-set contained in an edge of \(S^{\prime\prime}\); by the construction of \(S^{\prime\prime}\), there are at least \(\mu d_{b}m^{2}/(4k)\) choices for \((c_{i-k+2},x_{i})\), and at least \(\mu d_{b}m^{2}/(8k)\) of them are different from the previous choices.
Thus, the number of paths in \(\mathcal{Q}_{S}\) is at least
\[\left(\frac{\mu}{13}d_{a}m^{k-2}\right)\left(\frac{\mu}{4}d_{b}m^{2}\right)\left( \frac{\mu}{8k}d_{b}m^{2}\right)^{k-1}\geq(\frac{\mu}{8k})^{k+1}d_{a}d_{b}^{k}m^ {3k-2}\geq\frac{1}{2}(\frac{\mu}{8k})^{k+1}|\mathcal{Q}_{\mathcal{J}}|,\]
since \(\beta\ll\mu,1/k\).
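For concreteness, the constant in the first comparison of the last display can be checked directly:

\[\frac{\mu}{13}\cdot\frac{\mu}{4}\cdot\Big(\frac{\mu}{8k}\Big)^{k-1}=\frac{(8k)^{2}}{52}\Big(\frac{\mu}{8k}\Big)^{k+1}\geq\Big(\frac{\mu}{8k}\Big)^{k+1},\]

since \((8k)^{2}\geq 52\) for \(k\geq 1\); the factor \(1/2\) then absorbs the \((1\pm\beta)\)-error in (6) when passing from \(d_{a}d_{b}^{k}m^{3k-2}\) to \(|\mathcal{Q}_{\mathcal{J}}|\).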
**Lemma 7.4**.: _Let \(1/m\ll\varepsilon\ll d_{2},\ldots,d_{k},1/k,\mu\) and \(k\geq 3\). Suppose that \(\mathcal{J}\) is a \((\cdot,\cdot,\varepsilon)\)-equitable complex with density vector \(\textbf{d}=(d_{2},\ldots,d_{k})\) and ground partition \(\mathcal{P}\), where the size of each vertex class is \(m\). Let \(W=\{W_{1},\ldots,W_{k-1},W_{k}\}\subseteq\mathcal{P}\). Let \(S\subseteq\mathcal{J}_{W}\) have size at least \(\mu|\mathcal{J}_{W}|\) and let \(Q\) be a \(k\)-uniform tight path \(v_{1},\ldots,v_{k-1},b,v_{k},\ldots,v_{2k-2}\) with vertex classes \(\{X_{1},\ldots,X_{k-1},X_{k}\}\) such that \(v_{i},v_{i+k-1}\in X_{i}\) for \(i\in[k-1]\) and \(b\in X_{k}\). Let \(\mathcal{Q}\) be the down-closed \(k\)-complex generated by \(Q\) and let \(\mathcal{Q}_{S}\subseteq\mathcal{Q}_{\mathcal{J}}\) be the copies of \(\mathcal{Q}\) whose edges in the \(k\)-th level are in \(S\). We have_
\[|\mathcal{Q}_{S}|\geq\frac{1}{2}\left(\frac{\mu}{8k}\right)^{k+1}|\mathcal{Q} _{\mathcal{J}}|.\]
Proof.: The proof consists of three steps. Firstly, we use the dense version of the counting and extension lemma to count the number of various hypergraphs in \(\mathcal{J}\). Secondly, we remove some \(k\)-tuples without good properties. Finally, we use an iterative procedure to return a tight path using good \(k\)-tuples, as desired.
Firstly, let \(\beta\) be such that \(\varepsilon\ll\beta\ll d_{2},\ldots,d_{k},1/k,\mu\). Define
\[d_{a}=\prod_{i=2}^{k-1}d_{i}^{\binom{k-1}{i}},d_{b}=\prod_{i=2}^{k}d_{i}^{ \binom{k-1}{i-1}}.\]
Let \(W^{\prime}=W\setminus\{W_{k}\}\). By Lemmas 4.8 and 4.9, we have
\[|\mathcal{J}_{W}| =(1\pm\beta)d_{a}d_{b}m^{k},\] \[|\mathcal{J}_{W^{\prime}}| =(1\pm\beta)d_{a}m^{k-1},\] \[|\mathcal{Q}_{\mathcal{J}}| =(1\pm\beta)d_{a}d_{b}^{k}m^{2k-1}. \tag{7}\]
Since \(S\subseteq\mathcal{J}_{W}\) with \(|S|\geq\mu|\mathcal{J}_{W}|\), together with (7), we have
\[|S|\geq(1-\beta)\mu d_{a}d_{b}m^{k}.\]
Let \(B_{W^{\prime}}\subseteq\mathcal{J}_{W^{\prime}}\) be the \((k-1)\)-edges which are not extensible to \((1\pm\beta)d_{b}m\) copies of a \(k\)-edge in \(\mathcal{J}_{W}\). By Lemma 4.10, we have
\[|B_{W^{\prime}}|\leq\beta|\mathcal{J}_{W^{\prime}}|.\]
Secondly, we delete from \(S\) the edges which contain a \((k-1)\)-set from \(B_{W^{\prime}}\) to obtain \(S^{\prime}\); the number of edges deleted is at most
\[|B_{W^{\prime}}|m\leq\beta|\mathcal{J}_{W^{\prime}}|m\leq\beta(1+\beta)d_{a}m^ {k}\leq|S|/3,\]
since \(\beta\ll\mu,d_{2},\ldots,d_{k}\). Thus, we have \(|S^{\prime}|\geq 2|S|/3\). Furthermore, if there is any partite \((k-1)\)-set \(T\) in \(\mathcal{J}\) which lies in less than \(\mu d_{b}m/(4k)\) edges of \(S^{\prime}\), then we delete all edges in \(S^{\prime}\) containing \(T\) to obtain \(S^{\prime\prime}\) and iterate this until no further deletions are possible. Note that the number of partite \((k-1)\)-sets supported in the clusters of \(W\) is \(k(1\pm\beta)d_{a}m^{k-1}\). Thus the number of edges deleted
is at most
\[k(1+\beta)d_{a}m^{k-1}\frac{\mu d_{b}m}{4k}\leq(1+\beta)\frac{\mu d_{a}d_{b}m^{k}} {4}\leq\frac{|S|}{3}.\]
Thus, \(|S^{\prime\prime}|\geq|S|/3\). Each partite \((k-1)\)-set in \(W_{1},\ldots,W_{k}\) is either contained in zero edges of \(S^{\prime\prime}\) or in at least \(\mu d_{b}m/(4k)\) edges in \(S^{\prime\prime}\).
Finally, we use the properties of \(S^{\prime\prime}\) to construct many labelled partition-respecting paths in \(\mathcal{Q}_{S}\).
**Step 1.** Select \(T=\{x_{1},\ldots,x_{k-1}\}\in\mathcal{J}_{W^{\prime}}\) which is contained in at least \(\mu d_{b}m/4\) edges in \(S^{\prime\prime}\).
**Step 2.** Choose \(b\) such that \(\{x_{1},x_{2},\ldots,x_{k-1},b\}\in S^{\prime\prime}\) and \(b\notin T\).
**Step 3.** For \(i\in[k,2k-2]\), choose \(x_{i}\) such that \(\{x_{i-k+2},\ldots,x_{k-1},b,x_{k},\ldots,x_{i}\}\in S^{\prime\prime}\) and \(x_{i}\) is not used before.
This constructs a copy of \(\mathcal{Q}\) in \(\mathcal{Q}_{S}\) on \(2k-1\) vertices such that each edge in the \(k\)-th level is in \(S^{\prime\prime}\), and thus in \(S\). Next, we bound the size of \(\mathcal{Q}_{S}\) from below.
In Step 1, let \(G\subseteq\mathcal{J}_{W^{\prime}}\) be the set of \((k-1)\)-sets which are contained in fewer than \(\mu d_{b}m/4\) edges in \(S^{\prime\prime}\). We have
\[\frac{|S|}{3}\leq|S^{\prime\prime}|=\sum_{T\in\mathcal{J}_{W^{\prime}}}\deg_{S ^{\prime\prime}}(T)\leq|G|\frac{\mu}{4}d_{b}m+(|\mathcal{J}_{W^{\prime}}|-|G| )d_{b}m(1+\beta),\]
which gives \(|G|\leq(1-\beta)(1-\mu/12)d_{a}m^{k-1}\); thus, the number of choices for \(T\) is at least \(|\mathcal{J}_{W^{\prime}}|-|G|\geq(\mu/13)d_{a}m^{k-1}\). In Step 2, we have at least \(\mu d_{b}m/4\) choices for \(b\). In Step 3, \(\{x_{i-k+2},\ldots,x_{k-1},b,x_{k},\ldots,x_{i-1}\}\) is a \((k-1)\)-set contained in an edge of \(S^{\prime\prime}\); by the construction of \(S^{\prime\prime}\), there are at least \(\mu d_{b}m/(4k)\) choices for \(x_{i}\), and at least \(\mu d_{b}m/(8k)\) of them are different from the previous choices.
Thus, the number of paths in \(\mathcal{Q}_{S}\) is at least
\[\left(\frac{\mu}{13}d_{a}m^{k-1}\right)\left(\frac{\mu}{4}d_{b}m\right)\left( \frac{\mu}{8k}d_{b}m\right)^{k-1}\geq(\frac{\mu}{8k})^{k+1}d_{a}d_{b}^{k}m^{2 k-1}\geq\frac{1}{2}(\frac{\mu}{8k})^{k+1}|\mathcal{Q}_{\mathcal{J}}|,\]
since \(\beta\ll\mu,1/k\).
### Absorbing Gadget
Before we build the absorbing path, we need to define the absorbing gadget, which is used to absorb a particular set \(T\) of \(k\) vertices and a particular set \(O\) of \(k\) colors. Next, we will show that for every \((T,O)\), there are numerous absorbing gadgets for \((T,O)\).
**Definition 7.5** (Absorbing gadget).: _Let \(T=\{t_{1},\ldots,t_{k}\}\) be a \(k\)-set of points of \(G\) and \(O=\{o_{1},\ldots,o_{k}\}\) be a \(k\)-set of colors of \(G\). We say that \(F\subseteq G\) is an absorbing gadget for \((T,O)\) if \(F=F_{1}\cup F_{2}\) where \(F_{1}=A\cup B\cup E\cup\bigcup_{i=1}^{k}(P_{i}\cup Q_{i})\cup C\cup\bigcup_{i=1}^{k}C_{i}\) and \(F_{2}=A^{\prime}\cup B^{\prime}\cup E^{\prime}\cup\bigcup_{i=1}^{k}(P^{\prime}_{i}\cup Q^{\prime}_{i})\cup C^{\prime}\cup\bigcup_{i=1}^{k}C^{\prime}_{i}\) such that_
1. \(A,B,E\)_,_\(P_{1},Q_{1},\ldots,P_{k},Q_{k}\)_,_\(A^{\prime},B^{\prime},E^{\prime}\)_,_ \(P^{\prime}_{1},Q^{\prime}_{1},\ldots,P^{\prime}_{k},Q^{\prime}_{k}\) _are pairwise disjoint and also disjoint from_ \(T\)_._ \(C,C_{1},\ldots,C_{k},C^{\prime},C^{\prime}_{1},\ldots,C^{\prime}_{k}\) _are pairwise disjoint and also disjoint from_ \(O\)_,_
2. \(C_{i}=(c_{i,1},\ldots,c_{i,k})\) _and_ \(C^{\prime}_{i}=(c^{\prime}_{i,1},\ldots,c^{\prime}_{i,k})\) _for_ \(i\in[k]\)_,_
3. \(A,B,E,A^{\prime},B^{\prime},E^{\prime}\) _are_ \(k\)_-tuples of points of_ \(G\)_,_ \(C\) _and_ \(C^{\prime}\) _are_ \((k+1)\)_-tuples of colors of_ \(G\)_,_ \((C,AE)\)_,_ \((C^{\prime},A^{\prime}E^{\prime})\) _and_ \((C^{\prime}(c_{1,1},\ldots,c_{k,1}),A^{\prime}B^{\prime}E^{\prime})\) _are sequentially paths,_
4. _for_ \(B=(b_{1},\ldots,b_{k})\)_, each of_ \(P_{i},Q_{i}\) _has_ \(k-1\) _vertices for_ \(i\in[k]\)_, and both_ \((C_{i},P_{i}b_{i}Q_{i})\) _and_ \((\{o_{i}\}\cup C_{i}\setminus\{c_{i,1}\},P_{i}b_{i}Q_{i})\) _are sequentially paths of length_ \(2k-1\) _for_ \(i\in[k]\)_,_
5. _for_ \(B^{\prime}=(b^{\prime}_{1},\ldots,b^{\prime}_{k})\)_, each of_ \(P^{\prime}_{i},Q^{\prime}_{i}\) _has_ \(k-1\) _vertices for_ \(i\in[k]\)_, both_ \((C^{\prime}_{i},P^{\prime}_{i}b^{\prime}_{i}Q^{\prime}_{i})\) _and_ \((C^{\prime}_{i},P^{\prime}_{i}t_{i}Q^{\prime}_{i})\) _are sequentially paths of length_ \(2k-1\) _for_ \(i\in[k]\)_._
Note that an absorbing gadget \(F\) spans \(4k^{2}+2k\) points together with \(2k^{2}+2k+2\) colors.
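Indeed, each of \(F_{1}\) and \(F_{2}\) contributes \(3k+2k(k-1)=2k^{2}+k\) points (from \(A,B,E\) and the \(P_{i},Q_{i}\)) and \((k+1)+k\cdot k=k^{2}+k+1\) colors (from \(C\) and the \(C_{i}\)).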
**Definition 7.6** (\(\mathfrak{S}\)-gadget).: _Suppose \(F=F_{1}\cup F_{2}\) is an absorbing gadget where \(F_{1}=A\cup B\cup E\cup\bigcup_{i=1}^{k}(P_{i}\cup Q_{i})\cup C\cup\bigcup_{i=1}^{k}C_{i}\) and \(F_{2}=A^{\prime}\cup B^{\prime}\cup E^{\prime}\cup\bigcup_{i=1}^{k}(P^{\prime}_{i}\cup Q^{\prime}_{i})\cup C^{\prime}\cup\bigcup_{i=1}^{k}C^{\prime}_{i}\) with \(A=(a_{1},\ldots,a_{k})\), \(B=(b_{1},\ldots,b_{k})\), \(E=(e_{1},\ldots,e_{k})\), \(C=(c_{1},\ldots,c_{k+1})\), \(C_{i}=(c_{i,1},\ldots,c_{i,k})\), \(P_{i}=(p_{i,1},\ldots,p_{i,k-1})\) and \(Q_{i}=(q_{i,1},\ldots,q_{i,k-1})\) for \(i\in[k]\), \(A^{\prime}=(a^{\prime}_{1},\ldots,a^{\prime}_{k})\), \(B^{\prime}=(b^{\prime}_{1},\ldots,b^{\prime}_{k})\), \(E^{\prime}=(e^{\prime}_{1},\ldots,e^{\prime}_{k})\), \(C^{\prime}=(c^{\prime}_{1},\ldots,c^{\prime}_{k+1})\), \(C^{\prime}_{i}=(c^{\prime}_{i,1},\ldots,c^{\prime}_{i,k})\), \(P^{\prime}_{i}=(p^{\prime}_{i,1},\ldots,p^{\prime}_{i,k-1})\) and \(Q^{\prime}_{i}=(q^{\prime}_{i,1},\ldots,q^{\prime}_{i,k-1})\) for \(i\in[k]\). Suppose that \(\varepsilon,\varepsilon_{k+1},d_{2},\ldots,d_{k+1},c,\nu>0\). Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and suppose that \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) is an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. We say that \(F\) is an \(\mathfrak{S}\)-gadget if_
1. _there exists an oriented edge_ \(Y^{\prime}=(Y_{0},Z_{1},\ldots,Z_{k})\in\overrightarrow{H}\) _and a color cluster_ \(Z_{0}\)_, such that_ \(C\cup C^{\prime}\cup\bigcup_{i\in[k]}C_{i}\subseteq Y_{0}\)_,_ \(\bigcup_{i\in[k]}C^{\prime}_{i}\subseteq Z_{0}\)_,_ \(a_{i},b_{i},e_{i}\in Z_{i}\) _for_ \(i\in[k]\)_,_
2. _there exists an oriented edge_ \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\in\overrightarrow{H}\)_, such that_ \(a^{\prime}_{i},b^{\prime}_{i},e^{\prime}_{i}\in Y_{i}\) _for_ \(i\in[k]\)_,_
3. _there exists an ordered_ \(k\)_-tuple of clusters_ \(W_{i}=(W_{i,1},\ldots,W_{i,k-1})\) _such that_ \(W_{i}\cup\{Y_{0},Z_{i}\}\) _is an edge in_ \(H\) _and_ \((Y_{0},W_{i,1},\ldots,W_{i,k-1},Z_{i})\) _is consistent with_ \(\overrightarrow{H}\)_,_ \(p_{i,j},q_{i,j}\in W_{i,j}\) _for_ \(i\in[k],j\in[k-1]\)_,_
4. _there exists an ordered_ \(k\)_-tuple of clusters_ \(W^{\prime}_{i}=(W^{\prime}_{i,1},\ldots,W^{\prime}_{i,k-1})\) _such that_ \(W^{\prime}_{i}\cup\{Z_{0},Y_{i}\}\) _is an edge in_ \(H\) _and_ \((Z_{0},W^{\prime}_{i,1},\ldots,W^{\prime}_{i,k-1},Y_{i})\) _is consistent with_ \(\overrightarrow{H}\)_,_ \(p^{\prime}_{i,j},q^{\prime}_{i,j}\in W^{\prime}_{i,j}\) _for_ \(i\in[k],j\in[k-1]\)_,_
5. \(F\subseteq G_{\mathcal{J}}\)_._
_We will further say that \(F\) is \((c,\nu)\)-extensible if the following also holds:_
1. _The path_ \((C,AE)\) _is_ \((c,\nu)\)_-extensible both left- and rightwards to the ordered tuple_ \(Y^{\prime}=(Y_{0},Z_{1},\ldots,Z_{k})\) _and the path_ \((C_{i},P_{i}b_{i}Q_{i})\) _is_ \((c,\nu)\)_-extensible leftwards to_ \((Y_{0},W_{i,1},\ldots,W_{i,k-1},Z_{i})\) _and rightwards to_ \((Y_{0},Z_{i},W_{i,1},\ldots,W_{i,k-1})\) _for_ \(i\in[k]\)_._
2. _The path_ \((C^{\prime},A^{\prime}E^{\prime})\) _is_ \((c,\nu)\)_-extensible both left- and rightwards to the ordered tuple_ \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\) _and the path_ \((C^{\prime}_{i},P^{\prime}_{i}b^{\prime}_{i}Q^{\prime}_{i})\) _is_ \((c,\nu)\)_-extensible leftwards to_ \((Z_{0},W^{\prime}_{i,1},\ldots,W^{\prime}_{i,k-1},Y_{i})\) _and rightwards to_ \((Z_{0},Y_{i},W^{\prime}_{i,1},\ldots,W^{\prime}_{i,k-1})\) _for_ \(i\in[k]\)_._
**Definition 7.7** (Reduced gadget).: _A reduced gadget is a \((1,k)\)-graph \(L\) consisting of \(Y\cup W_{1}\cup\cdots\cup W_{k}\cup Z_{0}\cup Z_{1}\cup\ldots\cup Z_{k}\cup W ^{\prime}_{1}\cup\cdots\cup W^{\prime}_{k}\) where \(Y=\{Y_{0},Y_{1},\ldots,Y_{k}\}\), \(W_{i}=\{W_{i,1},\ldots,W_{i,k-1}\}\) for \(i\in[k]\), \(W^{\prime}_{i}=\{W^{\prime}_{i,1},\ldots,W^{\prime}_{i,k-1}\}\) for \(i\in[k]\) and \(2(k+1)\) edges given by \(Y,Y^{\prime}=\{Y_{0},Z_{1},\ldots,Z_{k}\}\), \(W_{i}\cup\{Y_{0},Z_{i}\}\) for \(i\in[k]\) and \(W^{\prime}_{i}\cup\{Z_{0},Y_{i}\}\) for \(i\in[k]\). We refer to \(Y\) and \(Y^{\prime}\) as the core edges of \(L\) and \(W_{i},W^{\prime}_{i},i\in[k]\) as the peripheral sets of \(L\)._
Given an oriented \((1,k)\)-graph \(\overrightarrow{H}\), a reduced gadget in \(\overrightarrow{H}\) is a copy of \(L\) such that \(Y\) and \(Y^{\prime}\) coincide with the orientations of those edges in \(\overrightarrow{H}\), and such that \((Y_{0},W_{i,1},\ldots,W_{i,k-1},Z_{i})\) and \((Z_{0},W^{\prime}_{i,1},\ldots,W^{\prime}_{i,k-1},Y_{i})\) are consistent with \(\overrightarrow{H}\) for \(i\in[k]\).
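For instance, when \(k=2\), a reduced gadget consists of \(2k^{2}+2=10\) clusters and \(2(k+1)=6\) edges.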
Let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented regular setup. Let \(c,\nu>0\), \(T=\{t_{1},\ldots,t_{k}\}\) be a \(k\)-set of \(V\) and \(O=\{o_{1},\ldots,o_{k}\}\) be a \(k\)-set of \([n]\), and \(L\) be a reduced gadget in \(\overrightarrow{H}\). We define the following sets:
1. Denote the set of all reduced gadgets in \(\overrightarrow{H}\) by \(\mathfrak{L}_{\overrightarrow{H}}\),
2. Denote the set of \(\mathfrak{S}\)-gadgets which use precisely the clusters of \(L\) as in Definition 7.7 by \(\mathfrak{F}_{L}\),
3. Denote the set of \(\mathfrak{S}\)-gadgets in \(\mathfrak{F}_{L}\) which are \((c,\nu,V(G))\)-extensible by \(\mathfrak{F}_{L}^{\mathrm{ext}}\),
4. Denote the set of all \(\mathfrak{S}\)-gadgets by \(\mathfrak{F}\),
5. Denote the set of all \((c,\nu,V(G))\)-extensible \(\mathfrak{S}\)-gadgets by \(\mathfrak{F}^{\mathrm{ext}}\subseteq\mathfrak{F}\),
6. For any \(k\)-subset \(T\) of \(V\) and any \(k\)-subset \(O\) of \([n]\), let \(\mathfrak{F}_{(T,O)}\subseteq\mathfrak{F}\) be the set of absorbing \(\mathfrak{S}\)-gadgets for \((T,O)\),
7. Denote the set of \(\mathfrak{S}\)-gadgets absorbing \((T,O)\) which are \((c,\nu)\)-extensible by \(\mathfrak{F}_{(T,O)}^{\mathrm{ext}}=\mathfrak{F}_{(T,O)}\cap\mathfrak{F}^{ \mathrm{ext}}\).
**Lemma 7.8**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\beta\) be such that_
\[1/m \ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[c \ll d_{2},\ldots,d_{k},\] \[1/t \ll\varepsilon_{k+1}\ll\beta,d_{k+1}\leq 1/k,\] \[\varepsilon_{k+1} \ll\nu.\]
Figure 3. Reduced Gadget

_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup, and let \(L\in\mathfrak{L}_{\overrightarrow{H}}\) be a reduced gadget in \(\overrightarrow{H}\). Let \(\mathcal{F}\) be the \((k+1)\)-complex corresponding to the down-closure of the \((1,k)\)-graph \(F\) as in Definition 7.6. Then_
\[|\mathfrak{F}_{L}|=(1\pm\beta)\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}( \mathcal{F})}\right)m^{6k^{2}+4k+2},\] \[|\mathfrak{F}_{L}\setminus\mathfrak{F}_{L}^{\mathrm{ext}}|\leq \beta|\mathfrak{F}_{L}|. \tag{8}\]
Proof.: Let \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\) and \(Y^{\prime}=(Y_{0},Z_{1},\ldots,Z_{k})\in\overrightarrow{H}\) be the ordered core edges of \(L\), and let \(W_{i}=\{W_{i,1},\ldots,W_{i,k-1}\}\), \(W_{i}^{\prime}=\{W_{i,1}^{\prime},\ldots,W_{i,k-1}^{\prime}\}\) for \(i\in[k]\) be the peripheral sets, ordered such that \((Y_{0},W_{i,1},\ldots,W_{i,k-1},Z_{i})\) and \((Z_{0},W_{i,1}^{\prime},\ldots,W_{i,k-1}^{\prime},Y_{i})\) are consistent with \(\overrightarrow{H}\). Note that \(|V(F)|=6k^{2}+4k+2\). The bounds on \(|\mathfrak{F}_{L}|\) are given directly by Lemma 4.6.
Let \(Y^{*}=(Y_{1},\ldots,Y_{k-1})\) and denote the ordered tuples in the \((k-1)\)-th level of \(\mathcal{J}\) in the clusters \(\{Y_{1},\ldots,Y_{k-1}\}\) by \(\mathcal{J}_{Y^{*}}\). Let \(d_{Y^{*}}=\prod_{i=2}^{k-1}d_{i}^{\binom{k-1}{i}}\). By Lemma 4.9 we have
\[|\mathcal{J}_{Y^{*}}|=(1\pm\beta)d_{Y^{*}}m^{k-1}.\]
Let \(\beta_{1}\) be such that \(\varepsilon_{k+1}\ll\beta_{1}\ll\beta,d_{k},d_{k+1},1/k\). Let \(B_{1}\subseteq\mathcal{J}_{Y^{*}}\) be the set of \((k-1)\)-tuples which are not \((c,\nu)\)-extensible leftwards to \((Y_{0},Y_{1},\ldots,Y_{k})\). By Proposition 6.6 with \(\beta_{1}\) playing the role of \(\beta\), we deduce that
\[|B_{1}|\leq\beta_{1}|\mathcal{J}_{Y^{*}}|.\]
Let \(\beta_{2}\) be such that \(\varepsilon\ll\beta_{2}\ll\varepsilon_{k+1},d_{2},\ldots,d_{k-1}\). Let \(\phi:V(F)\to V(L)\) be the homomorphism, and let \(Z\subseteq V(F)\) correspond to the first \(k-1\) vertices \(\{a_{1},\ldots,a_{k-1}\}\) of the path \(AE\). Let \(\mathcal{F}^{-}\) be the \((k-1)\)-complex generated by removing the \((k+1)\)-st and \(k\)-th layers from the down-closure \(\mathcal{F}\) of \(F\). Let \(\mathcal{Z}=\mathcal{F}^{-}[Z]\) be the induced subcomplex of \(\mathcal{F}^{-}\) in \(Z\). Note that \(\phi(a_{i})=Y_{i}\) for \(i\in[k-1]\). Thus the labelled partition-respecting copies of \(\mathcal{Z}\) in \(\mathcal{J}\) correspond exactly to \(\mathcal{J}_{Y^{*}}\). Define
\[d_{\mathcal{F}^{-}\setminus\mathcal{Z}}=\prod_{i=2}^{k-1}d_{i}^{e_{i}( \mathcal{F}^{-})-e_{i}(\mathcal{Z})}.\]
Let \(B_{2}\subseteq\mathcal{J}_{Y^{*}}\) be the set of \((k-1)\)-tuples which are not extensible to \((1\pm\beta_{2})d_{\mathcal{F}^{-}\setminus\mathcal{Z}}m^{6k^{2}+3k+3}\) labelled partition-respecting copies of \(\mathcal{F}^{-}\) in \(\mathcal{J}\). By Lemma 4.10 with \(\beta_{2}\) playing the role of \(\beta\), we have
\[|B_{2}|\leq\beta_{2}|\mathcal{J}_{Y^{*}}|.\]
By (8), we have
\[|\mathfrak{F}_{L}|=(1\pm\beta)d_{k+1}^{e_{k+1}(\mathcal{F})}d_{k}^{e_{k}( \mathcal{F})}d_{\mathcal{F}^{-}\setminus\mathcal{Z}}d_{Y^{*}}m^{6k^{2}+4k+2}.\]
Let \(\mathcal{G}=\mathcal{J}\cup G_{\mathcal{J}}\). Say that a labelled partition-respecting copy of \(\mathcal{F}\) in \(\mathcal{G}\) is _nice_ if the vertices of \(\{a_{1},\ldots,a_{k-1}\}\) are not in \(B_{1}\cup B_{2}\). For every \(Z\in\mathcal{J}_{Y^{*}}\), let \(N^{*}(Z)\) be the number of labelled partition-respecting copies of \(\mathcal{F}\) in \(\mathcal{G}\) which extend \(Z\). Note that \(0\leq N^{*}(Z)\leq m^{6k^{2}+3k+3}\) and we
have
\[\sum_{Z\in B_{1}\cup B_{2}}N^{*}(Z) =\sum_{Z\in B_{1}\setminus B_{2}}N^{*}(Z)+\sum_{Z\in B_{2}}N^{*}(Z)\] \[\leq\left[|B_{1}|(1+\beta_{2})d_{\mathcal{F}^{-}\setminus\mathcal{Z}}+|B_{2}|\right]m^{6k^{2}+3k+3}\] \[\leq\left[\beta_{1}(1+\beta_{2})d_{\mathcal{F}^{-}\setminus\mathcal{Z}}+\beta_{2}\right]|\mathcal{J}_{Y^{*}}|m^{6k^{2}+3k+3}\] \[\leq 3\beta_{1}d_{\mathcal{F}^{-}\setminus\mathcal{Z}}|\mathcal{J}_{Y^{*}}|m^{6k^{2}+3k+3}\] \[\leq 3\beta_{1}(1+\beta)d_{\mathcal{F}^{-}\setminus\mathcal{Z}}d_{Y^{*}}m^{6k^{2}+4k+2}\] \[\leq\frac{3\beta_{1}(1+\beta)}{(1-\beta)d_{k+1}^{e_{k+1}(\mathcal{F})}d_{k}^{e_{k}(\mathcal{F})}}|\mathfrak{F}_{L}|\] \[\leq\frac{\beta}{4k+4}|\mathfrak{F}_{L}|,\]
since \(\beta_{1}\ll\beta,d_{k},d_{k+1},1/k\) and \(\beta_{2}\ll d_{2},\ldots,d_{k-1},\varepsilon_{k+1}\).
The same analysis shows that, if we define nice copies with respect to any \((k-1)\)-set of vertices of \(F\), the number of copies of \(F\) which are not nice with respect to that \((k-1)\)-set is at most \(\beta|\mathfrak{F}_{L}|/(4k+4)\). Note that \(F\in\mathfrak{F}_{L}\) is extensible if and only if the paths \((C,AE)\), \((C^{\prime},A^{\prime}E^{\prime})\), \((C_{i},P_{i}b_{i}Q_{i})\) and \((C^{\prime}_{i},P^{\prime}_{i}b^{\prime}_{i}Q^{\prime}_{i})\) for \(i\in[k]\) contained in \(F\) are extensible to certain edges of the reduced graph. This amounts to the extensibility of \(4(k+1)\) many \((k-1)\)-tuples. Thus, \(F\in\mathfrak{F}_{L}\setminus\mathfrak{F}_{L}^{\text{ext}}\) implies that \(F\) is not nice with respect to one of these \(4k+4\) many \((k-1)\)-sets. Thus,
\[|\mathfrak{F}_{L}\setminus\mathfrak{F}_{L}^{\text{ext}}|\leq(4k+4)\frac{\beta}{4k+4}|\mathfrak{F}_{L}|=\beta|\mathfrak{F}_{L}|.\]
**Lemma 7.9**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\beta,\mu,\alpha\) be such that_
\[1/m \ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[c \ll d_{2},\ldots,d_{k},\] \[1/t \ll\varepsilon_{k+1}\ll\beta,d_{k+1}\leq 1/k,\] \[\varepsilon_{k+1} \ll\nu,\mu,\] \[\alpha \ll\mu.\]
_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Suppose that for each color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\) such that \(\{C,Z\}\) has relative \((1,1)\)-degree at least \(\mu\) in \(H\). Then_
\[\frac{\mu^{2k+2}}{8}\binom{t}{k}^{2}\binom{t}{k-1}^{2k}t(t-1)\leq|\mathfrak{L}_{\overrightarrow{H}}|\leq\binom{t}{k}^{2}\binom{t}{k-1}^{2k}t(t-1).\]
_Let \(\mathcal{F}\) be the \((k+1)\)-complex corresponding to the down-closure of the \((1,k)\)-graph \(F\). For each reduced gadget \(L\in\mathfrak{L}_{\overrightarrow{H}}\) in \(\overrightarrow{H}\), we have_
\[|\mathfrak{F}_{L}^{ext}|=(1\pm\beta)\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}( \mathcal{F})}\right)m^{6k^{2}+4k+2}\]
_and_
\[|\mathfrak{F}^{ext}|=(1\pm\beta)\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{ F})}\right)m^{6k^{2}+4k+2}|\mathfrak{L}_{\overrightarrow{H}}|.\]
Proof.: The lower bound on \(|\mathfrak{L}_{\overrightarrow{H}}|\) can be established as follows. Let \(Y=(Y_{0},Y_{1},\ldots,Y_{k})\), \(Y^{\prime}=(Y_{0},Z_{1},\ldots,Z_{k})\)\(\in\overrightarrow{H}\) be the ordered core edges of \(L\) and \(W_{i}=\{W_{i,1},\ldots,W_{i,k-1}\}\), \(W_{i}^{\prime}=\{W_{i,1}^{\prime},\ldots,W_{i,k-1}^{\prime}\}\) for \(i\in[k]\) be the peripheral sets, ordered such that \((Z_{0},W_{i,1}^{\prime},\ldots,W_{i,k-1}^{\prime},Y_{i})\) and \((Y_{0},W_{i,1},\ldots,W_{i,k-1},Z_{i})\) are consistent with \(\overrightarrow{H}\). We first choose \(Y_{0}\) and \(Z_{0}\) arbitrarily; there are at least \(t(t-1)\) choices. For \((Y_{1},\ldots,Y_{k})\), there are at least \(\mu\binom{t}{k}-\alpha t\binom{t}{k-1}\geq\mu\binom{t}{k}/2\) choices. Similarly, for \((Z_{1},\ldots,Z_{k})\), there are at least \(\mu\binom{t}{k}/2\) choices. Furthermore, each of \(W_{i}^{\prime}\) and \(W_{i}\) for \(i\in[k]\) can be chosen in at least \(\mu\binom{t}{k-1}\) ways, but we need to discard the possible choices yielding intersecting reduced gadgets, whose number is at most \(t(t-1)(2k^{2})^{2}t^{2k^{2}-2}\leq(2k^{2})^{2}t^{2k^{2}}\). We have
\[|\mathfrak{L}_{\overrightarrow{H}}|\geq\frac{\mu^{2k+2}}{4}\binom{t}{k}^{2} \binom{t}{k-1}^{2k}t(t-1)-(2k^{2})^{2}t^{2k^{2}}\geq\frac{\mu^{2k+2}}{8}\binom {t}{k}^{2}\binom{t}{k-1}^{2k}t(t-1),\]
since \(1/t\ll\mu,1/k\).
The upper bound is obvious.
We choose \(\beta^{\prime}\) such that \(\varepsilon_{k+1}\ll\beta^{\prime}\ll\beta,d_{k},d_{k+1},1/k\). By Lemma 7.8 (with \(\beta^{\prime}\) in place of \(\beta\)), we obtain that
\[(1-\beta)\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})}\right)m^{6k^{2}+4k+ 2}\leq(1-\beta^{\prime})^{2}\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})} \right)m^{6k^{2}+4k+2}\leq(1-\beta^{\prime})|\mathfrak{F}_{L}|\leq|\mathfrak{F }_{L}^{\text{ext}}|,\]
\[|\mathfrak{F}_{L}^{\text{ext}}|\leq|\mathfrak{F}_{L}|\leq(1+\beta^{\prime}) \left(\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})}\right)m^{6k^{2}+4k+2}\leq(1 +\beta)\left(\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})}\right)m^{6k^{2}+4k+2}.\]
Note that
\[\mathfrak{F}^{\text{ext}}=\bigcup_{L\in\mathfrak{L}_{\overrightarrow{H}}} \mathfrak{F}_{L}^{\text{ext}},\]
and that the union is disjoint; the bounds on \(|\mathfrak{F}^{\text{ext}}|\) follow.
**Lemma 7.10**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\theta,\mu,\alpha\) be such that_
\[1/m \ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[c \ll d_{2},\ldots,d_{k},\] \[1/t \ll\varepsilon_{k+1}\ll d_{k+1}\leq 1/k,\] \[\varepsilon_{k+1} \ll\nu\ll\theta\ll\mu\ll 1/k,\] \[\alpha \ll\mu.\]
_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Suppose that for each color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\) such that
\(\{C,Z\}\) has relative \((1,1)\)-degree at least \(\mu\) in \(H\); that for any point \(v\) of \(G\) and any color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \(|N_{\mathcal{J}}((v,C),\mu)\cap N_{H}(Z,C)|\geq\mu\binom{t}{k-1}\); and that for every \(c\in[n]\) and any color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \(|N_{\mathcal{J}}((c,Z),\mu)\cap N_{H}(C,Z)|\geq\mu\binom{t}{k-1}\). Let \(T\subseteq V\) be a \(k\)-set and \(O\subseteq[n]\) be a \(k\)-set. Then we have_
\[|\mathfrak{F}_{(T,O)}^{\mathrm{ext}}|\geq\theta|\mathfrak{F}^{\mathrm{ext}}|.\]
Given a \(k\)-subset \(T=\{t_{1},\ldots,t_{k}\}\) of \(V\), a \(k\)-subset \(O=\{o_{1},\ldots,o_{k}\}\) of \([n]\), the family \(\mathfrak{L}_{\overrightarrow{H}}\) and \(\mu>0\), we define the family \(\mathfrak{L}_{\overrightarrow{H},(T,O),\mu}\) of _reduced \(((T,O),\mu)\)-absorbers_ (also referred to as reduced \(((T,O),\mu)\)-gadgets) as the set of reduced gadgets \(Y\cup W_{1}\cup\cdots\cup W_{k}\cup Z_{0}\cup Z_{1}\cup\ldots\cup Z_{k}\cup W_{1}^{\prime}\cup\cdots\cup W_{k}^{\prime}\) in \(\mathfrak{L}_{\overrightarrow{H}}\), where \(W_{i}\in N_{\mathcal{J}}((o_{i},Z_{i}),\mu)\) and \(W_{i}^{\prime}\in N_{\mathcal{J}}((t_{i},Z_{0}),\mu)\) for \(i\in[k]\).
**Claim 7.11**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\theta,\mu\) be such that_
\[1/m \ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[c \ll d_{2},\ldots,d_{k+1},\] \[1/t \ll\varepsilon_{k+1}\ll d_{k+1}\leq 1/k,\] \[\varepsilon_{k+1} \ll\nu\ll\theta\ll\mu\ll 1/k,\] \[\alpha \ll\mu.\]
_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Suppose that for each color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\) such that \(\{C,Z\}\) has relative \((1,1)\)-degree at least \(\mu\) in \(H\). For any point \(v\) of \(G\), color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \(|N_{\mathcal{J}}((v,C),\mu)\cap N_{H}(Z,C)|\geq\mu\binom{t}{k-1}\). And for every \(c\in[n]\), color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \(|N_{\mathcal{J}}((c,Z),\mu)\cap N_{H}(C,Z)|\geq\mu\binom{t}{k-1}\). Let \(T\subseteq V\) be a \(k\)-set and \(O\subseteq[n]\) be a \(k\)-set, we have_
\[|\mathfrak{L}_{\overrightarrow{H},(T,O),\mu}|\geq\theta|\mathfrak{L}_{ \overrightarrow{H}}|.\]
Proof.: Let \(T=\{t_{1},\ldots,t_{k}\}\) and \(O=\{o_{1},\ldots,o_{k}\}\). Since each color cluster forms a pair of relative \((1,1)\)-degree at least \(\mu\) in \(H\) with all but at most \(\alpha t\) point clusters, there are at least \(\mu t\binom{t}{k}-t\alpha t\binom{t}{k-1}\geq\mu t\binom{t}{k}/2\) choices for \(Y\). Besides, there are at least \(t-1\) choices for \(Z_{0}\). For \((Z_{1},\ldots,Z_{k})\), there are at least \(\mu\binom{t}{k}/2-k^{2}\binom{t}{k-1}\geq\mu\binom{t}{k}/3\) choices. Each \(W_{i}\) is chosen from \(N_{\mathcal{J}}((o_{i},Z_{i}),\mu)\cap N_{H}(Y_{0},Z_{i})\) for \(i\in[k]\); thus, \(W_{i}\) can be chosen in at least \(\mu\binom{t}{k-1}-(k-1)((i-1)(k-1)+2k)\binom{t}{k-2}\geq\mu\binom{t}{k-1}/2\) ways, since there are at most \((k-1)((i-1)(k-1)+2k)\binom{t}{k-2}\) choices for \(W_{i}\) which intersect \(Y\setminus\{Y_{0}\},Z_{1},\ldots,Z_{k},W_{1},\ldots,W_{i-1}\).
Each \(W_{i}^{\prime}\) is chosen from \(N_{\mathcal{J}}((t_{i},Z_{0}),\mu)\cap N_{H}(Y_{i},Z_{0})\) for \(i\in[k]\); similarly, there are at least \((\mu/2)\binom{t}{k-1}\) possible choices for each \(W_{i}^{\prime}\). Thus, the number of reduced \(((T,O),\mu)\)-absorbers is at least
\[\frac{\mu t}{2}\binom{t}{k}(t-1)\frac{\mu}{3}\binom{t}{k}\left(\frac{\mu}{2} \binom{t}{k-1}\right)^{2k}\geq\theta\binom{t}{k}^{2}\binom{t}{k-1}^{2k}t(t-1) \geq\theta|\mathfrak{L}_{\overrightarrow{H}}|\]
since \(\theta\ll\mu\).
**Claim 7.12**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\theta,\mu\) be such that_
\[1/m \ll 1/r,\varepsilon\ll 1/t,c,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[c \ll d_{2},\ldots,d_{k},\] \[1/t \ll\varepsilon_{k+1}\ll d_{k+1}\leq 1/k,\] \[\varepsilon_{k+1} \ll\nu\ll\theta\ll\mu\ll 1/k.\]
_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Let \(T\subseteq V\) and \(O\subseteq[n]\) be \(k\)-sets and let \(L\in\mathfrak{L}_{\overrightarrow{H}}\) be a reduced \(((T,O),\mu)\)-gadget in \(\overrightarrow{H}\). We have_
\[|\mathfrak{F}_{L}\cap\mathfrak{F}_{(T,O)}|\geq\theta|\mathfrak{F}_{L}|.\]
Proof.: Let \(T=\{t_{1},\ldots,t_{k}\}\) and \(O=\{o_{1},\ldots,o_{k}\}\), and write \(L=Y\cup W_{1}\cup\cdots\cup W_{k}\cup Z_{0}\cup Z_{1}\cup\ldots\cup Z_{k}\cup W_{1}^{\prime}\cup\cdots\cup W_{k}^{\prime}\) where \(W_{i}=\{W_{i,1},\ldots,W_{i,k-1}\}\) and \(W_{i}^{\prime}=\{W_{i,1}^{\prime},\ldots,W_{i,k-1}^{\prime}\}\). Choose \(P_{i},Q_{i}\) in \(W_{i}\) and \(P_{i}^{\prime},Q_{i}^{\prime}\) in \(W_{i}^{\prime}\), and let \(\mathcal{Q}_{Z_{i},W_{i}}\) be the set of \(k\)-uniform tight paths \((b_{i},v_{1},\ldots,v_{2k-2})\) such that \(b_{i}\in Z_{i}\), \(v_{\ell},v_{\ell+k-1}\in W_{i,\ell}\) for \(\ell\in[k-1]\), and whose down-closure is in \(\mathcal{J}\). Let \(\mathcal{Q}_{o_{i},(Z_{i},W_{i})}\subseteq\mathcal{Q}_{Z_{i},W_{i}}\) be the set of those paths whose edges in the \(k\)-th level are in \(N_{G}(o_{i})\). Recall that \(F\) denotes the absorbing gadget for \((T,O)\), and let \(\mathcal{F}\) be the down-closure of \(F\). Since \(L\) is a reduced \(((T,O),\mu)\)-gadget, we have \(W_{i}\in N_{H}(Y_{0},Z_{i})\cap N_{\mathcal{J}}((o_{i},Z_{i}),\mu)\), thus \(|N_{G}((o_{i},Z_{i}),\mathcal{J}_{W_{i}})|\geq\mu|\mathcal{J}_{W_{i}}|\). By Lemma 7.4 with \(S\) being the set of \(k\)-sets where each \(k\)-set consists of \(k-1\) points from \(N_{G}((o_{i},Z_{i}),\mathcal{J}_{W_{i}})\) and one point from \(Z_{i}\), we have
\[|\mathcal{Q}_{o_{i},(Z_{i},W_{i})}|\geq\frac{1}{2}\left(\frac{\mu}{8k}\right) ^{k+1}|\mathcal{Q}_{Z_{i},W_{i}}|.\]
Let \(\mathcal{Q}_{Z_{0},W_{i}^{\prime}}\) be the set of \(k\)-uniform sequentially paths \((c_{1}^{\prime},\ldots,c_{k}^{\prime},v_{1}^{\prime},\ldots,v_{2k-2}^{\prime})\) such that \(c_{j}^{\prime}\in Z_{0}\) for \(j\in[k]\), \(v_{\ell}^{\prime},v_{\ell+k-1}^{\prime}\in W_{i,\ell}^{\prime}\) for \(\ell\in[k-1]\), and whose down-closure is in \(\mathcal{J}\). Let \(\mathcal{Q}_{t_{i},(Z_{0},W_{i}^{\prime})}\subseteq\mathcal{Q}_{Z_{0},W_{i}^{\prime}}\) be the set of those paths whose edges in the \(k\)-th level are in \(N_{G}(t_{i})\). Since \(L\) is a reduced \(((T,O),\mu)\)-gadget, we have \(W_{i}^{\prime}\in N_{H}(Z_{0},Y_{i})\cap N_{\mathcal{J}}((t_{i},Z_{0}),\mu)\), thus \(|N_{G}((t_{i},Z_{0}),\mathcal{J}_{W_{i}^{\prime}})|\geq\mu|\mathcal{J}_{W_{i}^{\prime}}|\). By Lemma 7.3 with \(S\) being the set of \(k\)-sets where each \(k\)-set consists of \(k-1\) points from \(N_{G}((t_{i},Z_{0}),\mathcal{J}_{W_{i}^{\prime}})\) and one color from \(Z_{0}\), we have
\[|\mathcal{Q}_{t_{i},(Z_{0},W_{i}^{\prime})}|\geq\frac{1}{2}\left(\frac{\mu}{8k }\right)^{k+1}|\mathcal{Q}_{Z_{0},W_{i}^{\prime}}|.\]
Let \(\phi:V(F)\to V(L)\) be the homomorphism which labels the copies of \(F\) in \(\mathfrak{F}_{L}\). Set \(Z=\{b_{1},\ldots,b_{k}\}\cup\bigcup_{i=1}^{k}(V(P_{i})\cup V(Q_{i}))\cup \bigcup_{i=1}^{k}(C_{i}^{\prime}\cup V(P_{i}^{\prime})\cup V(Q_{i}^{\prime}))\). Thus, \(|Z|=5k^{2}-3k\). Let \(\mathcal{Z}=\mathcal{F}[Z]\) be the induced subcomplex of \(\mathcal{F}\) in \(Z\). Note that \(\mathcal{Z}\) consists of \(k\) vertex-disjoint \(k\)-uniform tight paths of length \(2k-1\) where the \(i\)-th path lies in \(\mathcal{Q}_{o_{i},(Z_{i},W_{i})}\) and \(k\) vertex-disjoint \(k\)-uniform sequentially paths of length \(2k-2\) where the \(i\)-th path lies in \(\mathcal{Q}_{t_{i},(Z_{0},W_{i}^{\prime})}\). Let \(\mathcal{G}=\mathcal{J}\cup G_{\mathcal{J}}\) and \(\mathcal{Z}_{\mathcal{G}}\) be the set of labelled partition-respecting copies of \(\mathcal{Z}\) in \(\mathcal{G}\). Let \(\beta_{1}\) be such that \(\varepsilon\ll\beta_{1}\ll d_{2},\ldots,d_{k},\varepsilon_{k+1}\) and define \(d_{\mathcal{Z}}=\prod_{i=2}^{k}d_{i}^{e_{i}(\mathcal{Z})}\). By Lemma 4.8, we have
\[|\mathcal{Z}_{\mathcal{G}}|=\prod_{i=1}^{k}|\mathcal{Q}_{Z_{i},W_{i}}||\mathcal{Q}_{Z_{0},W_{i}^{\prime}}|=(1\pm\beta_{1})d_{\mathcal{Z}}m^{5k^{2}-3k}.\]
Let \(\mathcal{Z}_{(T,O),\mathcal{G}}\subseteq\mathcal{Z}_{\mathcal{G}}\) be the labelled partition-respecting copies of \(\mathcal{Z}\) absorbing \((T,O)\), thus we have
\[|\mathcal{Z}_{(T,O),\mathcal{G}}|\geq\prod_{i=1}^{k}|\mathcal{Q}_{o_{i},(Z_{i}, W_{i})}||\mathcal{Q}_{t_{i},(Z_{0},W_{i}^{\prime})}|\geq\left(\frac{1}{2}\left( \frac{\mu}{8k}\right)^{k+1}\right)^{2k}\prod_{i=1}^{k}|\mathcal{Q}_{Z_{i},W_{ i}}||\mathcal{Q}_{Z_{0},W_{i}^{\prime}}|\geq 3\theta|\mathcal{Z}_{\mathcal{G}}|,\]
since \(\theta\ll\mu,1/k\).
Let \(\beta_{2}\) be such that \(\varepsilon_{k+1}\ll\beta_{2}\ll\theta,d_{k+1},1/k\) and define \(d_{\mathcal{F}-\mathcal{Z}}=\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})-e_{i} (\mathcal{Z})}\). Let \(I\subseteq\mathcal{Z}_{\mathcal{G}}\) be the set of labelled partition-respecting copies of \(\mathcal{Z}\) which are not extensible to \((1\pm\beta_{2})d_{\mathcal{F}-\mathcal{Z}}m^{k^{2}+7k+2}\) labelled partition-respecting copies of \(\mathcal{F}\) in \(\mathcal{G}\). By Lemma 4.7, we have
\[|I|\leq\beta_{2}|\mathcal{Z}_{\mathcal{G}}|\leq\theta|\mathcal{Z}_{\mathcal{G }}|,\]
since \(\beta_{2}\ll\theta\). By Lemma 7.8, we have
\[|\mathfrak{F}_{L}|=(1\pm\beta_{2})d_{\mathcal{F}-\mathcal{Z}}d_{\mathcal{Z}}m ^{6k^{2}+4k+2},\]
since \(\varepsilon_{k+1}\ll\beta_{2}\ll\theta,d_{k+1},1/k\).
Note that a labelled partition-respecting copy of \(\mathcal{F}\) in \(\mathcal{G}\) containing some \(Z\in\mathcal{Z}_{(T,O),\mathcal{G}}\) yields exactly one gadget in \(\mathfrak{F}_{L}\cap\mathfrak{F}_{(T,O)}\); hence we have
\[|\mathfrak{F}_{L}\cap\mathfrak{F}_{(T,O)}| \geq|\mathcal{Z}_{(T,O),\mathcal{G}}\setminus I|(1-\beta_{2})d_ {\mathcal{F}-\mathcal{Z}}m^{k^{2}+7k+2}\] \[\geq(|\mathcal{Z}_{(T,O),\mathcal{G}}|-|I|)(1-\beta_{2})d_{ \mathcal{F}-\mathcal{Z}}m^{k^{2}+7k+2}\] \[\geq 2\theta|\mathcal{Z}_{\mathcal{G}}|(1-\beta_{2})d_{\mathcal{F }-\mathcal{Z}}m^{k^{2}+7k+2}\] \[\geq 2\theta(1-\beta_{2})(1-\beta_{1})d_{\mathcal{Z}}m^{5k^{2}-3k }d_{\mathcal{F}-\mathcal{Z}}m^{k^{2}+7k+2}\] \[\geq 2\theta(1-2\beta_{2})d_{\mathcal{Z}}d_{\mathcal{F}-\mathcal{Z} }m^{6k^{2}+4k+2}\] \[\geq 2\theta\frac{1-2\beta_{2}}{1+\beta_{2}}|\mathfrak{F}_{L}|\] \[\geq\theta|\mathfrak{F}_{L}|,\]
since \(\beta_{2}\ll\theta\).
Proof of Lemma 7.10.: Let \(\theta\ll\theta^{\prime}\ll\mu\). By Claim 7.12 with \(\theta^{\prime}\), we have for every reduced \(((T,O),\mu)\)-gadget \(L\in\mathfrak{L}_{\overrightarrow{H}}\),
\[|\mathfrak{F}_{L}\cap\mathfrak{F}_{(T,O)}|\geq\theta^{\prime}|\mathfrak{F}_{ L}|.\]
Let \(\beta\) be such that \(\varepsilon_{k+1}\ll\beta\ll d_{k+1},\theta^{\prime}\). By Lemma 7.8, we have \(|\mathfrak{F}_{L}\setminus\mathfrak{F}_{L}^{\text{ext}}|\leq\beta|\mathfrak{F}_{L}|\leq\theta^{\prime}|\mathfrak{F}_{L}|/2\). Thus,
\[|\mathfrak{F}_{(T,O)}^{\text{ext}}\cap\mathfrak{F}_{L}|\geq|\mathfrak{F}_{L} \cap\mathfrak{F}_{(T,O)}|-|\mathfrak{F}_{L}\setminus\mathfrak{F}_{L}^{\text{ ext}}|\geq\frac{\theta^{\prime}}{2}|\mathfrak{F}_{L}|.\]
By Claim 7.11 with \(\theta^{\prime}\) and Lemma 7.9, we have \(|\mathfrak{L}_{\overrightarrow{H},(T,O),\mu}|\geq\theta^{\prime}|\mathfrak{L}_ {\overrightarrow{H}}|\) and
\[|\mathfrak{F}_{(T,O)}^{\text{ext}}|\geq\sum_{L\in\mathfrak{L}_{\overrightarrow{H},(T,O),\mu}}|\mathfrak{F}_{(T,O)}^{\text{ext}}\cap\mathfrak{F}_{L}|\geq\frac{\theta^{\prime}}{2}\sum_{L\in\mathfrak{L}_{\overrightarrow{H},(T,O),\mu}}|\mathfrak{F}_{L}|\geq\theta|\mathfrak{F}^{\text{ext}}|.\]
### Absorbing Lemma
**Lemma 7.13**.: _Let \(k,r,m,t\in\mathbb{N}\) and \(d_{2},\ldots,d_{k+1},\varepsilon,\varepsilon_{k+1},c,\nu,\theta,\mu,\alpha,\zeta\) be such that_
\[1/m \ll 1/r,\varepsilon\ll 1/t,\zeta,\varepsilon_{k+1},d_{2},\ldots,d_{k},\] \[\zeta \ll c\ll d_{2},\ldots,d_{k},\] \[1/t \ll\varepsilon_{k+1}\ll d_{k+1},\nu\leq 1/k,\] \[c \ll\varepsilon_{k+1}\ll\alpha\ll\theta\ll\mu\ll 1/k.\]
_Let \(\textbf{d}=(d_{2},\ldots,d_{k+1})\) and let \(\mathfrak{S}=(G,G_{\mathcal{J}},\mathcal{J},\mathcal{P},\overrightarrow{H})\) be an oriented \((k,m,2t,\varepsilon,\varepsilon_{k+1},r,\textbf{d})\)-regular setup. Suppose that \(V(G)=[n]\cup V\) where \(|V|=n\leq(1+\alpha)mt\), and \(V(H)=[t]\cup V^{\prime}\) where \(|V^{\prime}|=t\). Suppose that for each color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\) such that \(\{C,Z\}\) has relative \((1,1)\)-degree at least \(\mu\) in \(H\); that for any point \(v\) of \(G\) and any color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \(|N_{\mathcal{J}}((v,C),\mu)\cap N_{H}(Z,C)|\geq\mu\binom{t}{k-1}\); and that for every \(c\in[n]\) and any color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that \(|N_{\mathcal{J}}((c,Z),\mu)\cap N_{H}(C,Z)|\geq\mu\binom{t}{k-1}\). Then there exists a family \(\mathfrak{F}^{\prime\prime}\) of pairwise disjoint \(\mathfrak{S}\)-gadgets which are \((c,\nu)\)-extensible with the following properties._
1. \(|\mathfrak{F}^{\prime\prime}|\leq\zeta m,\)__
2. \(|\mathfrak{F}^{\prime\prime}\cap\mathfrak{F}^{\mathrm{ext}}_{(T,O)}|\geq\zeta \theta m\) _for any_ \(k\)_-subset_ \(T\) _of_ \(V\) _and_ \(k\)_-subset_ \(O\) _of_ \([n]\)_,_
3. \(V(\mathfrak{F}^{\prime\prime})\) _is_ \((2(k^{2}+2k+2)\zeta/t)\)_-sparse in_ \(\mathcal{P}\)_._
Proof.: Let \(\beta>0\) be such that \(\varepsilon_{k+1}\ll\beta\ll d_{k+1}\). Let \(F\) be the \((1,k)\)-graph as in Definition 7.6 and let \(\mathcal{F}\) be the \((k+1)\)-complex generated by its down-closure. Let \(d_{F}=\prod_{i=2}^{k+1}d_{i}^{e_{i}(\mathcal{F})}\). By Lemma 7.9, we have
\[|\mathfrak{F}^{\mathrm{ext}}| \leq(1+\beta)d_{F}m^{6k^{2}+4k+2}\binom{t}{k}^{2}\binom{t}{k-1}^{2k}t(t-1)\leq d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+2},\] \[|\mathfrak{F}^{\mathrm{ext}}| \geq\frac{\mu^{2k+2}}{8}(1-\beta)d_{F}m^{6k^{2}+4k+2}\binom{t}{k}^{2}\binom{t}{k-1}^{2k}t(t-1)\] \[\geq\frac{\mu^{2k+2}}{2^{5}k^{2k}(k-1)^{2k^{2}}}d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+2}\] \[\geq 6\theta^{1/2}d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+2},\]
since \(1/t\ll\varepsilon_{k+1}\ll\beta\ll d_{k+1}\ll 1/k\) and \(\theta\ll\mu,1/k\). By Lemma 7.9, for each reduced gadget \(L\in\mathfrak{L}_{\overrightarrow{H}}\) in \(\overrightarrow{H}\), we have
\[|\mathfrak{F}^{ext}_{L}|\leq 2d_{F}m^{6k^{2}+4k+2}.\]
By Lemma 7.10 with \(\theta^{1/2}\), for any \(k\)-set \(T\subseteq V\) and any \(k\)-set \(O\subseteq[n]\), we have
\[|\mathfrak{F}^{\mathrm{ext}}_{(T,O)}|\geq\theta^{1/2}|\mathfrak{F}^{\mathrm{ ext}}|\geq 6\theta d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+2}.\]
Choose a family \(\mathfrak{F}^{\prime}\) from \(\mathfrak{F}^{\mathrm{ext}}\) by including each \(\mathfrak{S}\)-gadget independently at random with probability
\[p=\frac{\zeta m}{2d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+2}}.\]
Note that \(|\mathfrak{F}^{\prime}|\), \(|\mathfrak{F}^{\prime}\cap\mathfrak{F}^{\text{ext}}_{(T,O)}|\) are binomial random variables, for any \(k\)-set \(T\subseteq V\) and any \(k\)-set \(O\subseteq[n]\), we have
\[\mathbb{E}|\mathfrak{F}^{\prime}|=p|\mathfrak{F}^{\text{ext}}|\leq\frac{\zeta m }{2},\]
\[\mathbb{E}|\mathfrak{F}^{\prime}\cap\mathfrak{F}^{\text{ext}}_{(T,O)}|=p| \mathfrak{F}^{\text{ext}}_{(T,O)}|\geq 3\theta\zeta m.\]
For each \(Z\in\mathcal{P}\), note that \(Z\) lies in at most \(t^{2k^{2}+1}\) reduced gadgets; thus, there are at most \(2d_{F}m^{6k^{2}+4k+2}t^{2k^{2}+1}\)\(\mathfrak{S}\)-gadgets with vertices in \(Z\). Note that each \(\mathfrak{S}\)-gadget contains at most \(k^{2}+2k+2\) vertices in a cluster. Hence, for each cluster \(Z\in\mathcal{P}\), we have
\[\mathbb{E}|V(\mathfrak{F}^{\prime})\cap Z|\leq 2(k^{2}+2k+2)d_{F}m^{6k^{2}+4k+2 }t^{2k^{2}+1}p=\frac{(k^{2}+2k+2)\zeta m}{t}.\]
By Proposition 1.18, with probability \(1-o(1)\), the family \(\mathfrak{F}^{\prime}\) satisfies the following properties.
\[|\mathfrak{F}^{\prime}|\leq 2\mathbb{E}|\mathfrak{F}^{\prime}|\leq\zeta m,\]
\[|\mathfrak{F}^{\prime}\cap\mathfrak{F}^{\text{ext}}_{(T,O)}|\geq 2\theta\zeta m,\]
\[|V(\mathfrak{F}^{\prime})\cap Z|\leq\frac{2(k^{2}+2k+2)\zeta m}{t}\]
for any \(k\)-set \(T\subseteq V\), \(k\)-set \(O\subseteq[n]\) and cluster \(Z\in\mathcal{P}\). We say that two \(\mathfrak{S}\)-gadgets are _intersecting_ if they share at least one vertex. Note that there are at most \((2k^{2}+2)^{2}t^{4k^{2}+3}\) pairs of intersecting reduced gadgets. Hence, there are at most \((6k^{2}+4k+2)^{2}m^{12k^{2}+8k+3}(2k^{2}+2)^{2}t^{4k^{2}+3}\) pairs of intersecting \(\mathfrak{S}\)-gadgets. We can bound the expected number of pairs of intersecting \(\mathfrak{S}\)-gadgets in \(\mathfrak{F}^{\prime}\) by
\[(6k^{2}+4k+2)^{2}m^{12k^{2}+8k+3}(2k^{2}+2)^{2}t^{4k^{2}+3}p^{2}\]
\[=\frac{\zeta^{2}(6k^{2}+4k+2)^{2}(2k^{2}+2)^{2}m}{4d_{F}^{2}t}\leq\frac{ \zeta\theta m}{2},\]
since \(\zeta\ll d_{2},\ldots,d_{k+1},\theta,1/k\). Using Markov's inequality, we derive that, with probability at least \(1/2\), \(\mathfrak{F}^{\prime}\) contains at most \(\zeta\theta m\) pairs of intersecting \(\mathfrak{S}\)-gadgets. Remove one gadget from each intersecting pair in such a family, and remove the gadgets that are not absorbing for any \((T,O)\) with \(T\subseteq V\), \(O\subseteq[n]\) and \(|T|=|O|=k\). We obtain a subfamily \(\mathfrak{F}^{\prime\prime}\) satisfying the following properties.
1. \(|\mathfrak{F}^{\prime\prime}|\leq\zeta m\),
2. \(|\mathfrak{F}^{\prime\prime}\cap\mathfrak{F}^{\text{ext}}_{(T,O)}|\geq\theta \zeta m\),
3. \(V(\mathfrak{F}^{\prime\prime})\) is \((2(k^{2}+2k+2)\zeta/t)\)-sparse in \(\mathcal{P}\),
as desired.
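The sparsification step above is a standard alteration argument. As an illustration (ours, not from the source), the following minimal Python sketch abstracts gadgets to plain vertex sets and carries out the two steps: keep each gadget independently with probability \(p\), then discard gadgets meeting an earlier kept one. The greedy pass is a slight simplification of "remove one gadget from each intersecting pair", and likewise yields a pairwise disjoint family.

```python
import random
from itertools import combinations

def sparse_gadget_family(gadgets, p, seed=0):
    """Alteration method: keep each gadget independently with
    probability p, then greedily drop any kept gadget that meets an
    earlier one, so the surviving family is pairwise disjoint."""
    rng = random.Random(seed)
    chosen = [g for g in gadgets if rng.random() < p]
    surviving, used = [], set()
    for g in chosen:
        if used.isdisjoint(g):  # no intersection with kept gadgets
            surviving.append(g)
            used |= g
    return surviving

# Toy run: "gadgets" are random 4-element vertex sets inside [0, 50).
gadgets = [frozenset(random.Random(i).sample(range(50), 4)) for i in range(200)]
family = sparse_gadget_family(gadgets, p=0.1)
assert all(a.isdisjoint(b) for a, b in combinations(family, 2))
```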
Proof of Lemma 5.2.: Since \(G\) has minimum relative \((1,1)\)-degree at least \(\delta+\mu\) and \(\mathfrak{S}\) is a representative setup, Lemma 7.1 yields the following bounds. For any \(v\in V\) and any color cluster \(C\), we have
\[|N_{\mathcal{J}}((v,C),\frac{\mu}{3})|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1}.\]
For any \(c\in[n]\) and any point cluster \(Z\), we have
\[|N_{\mathcal{J}}((c,Z),\frac{\mu}{3})|\geq(\delta+\frac{\mu}{4})\binom{t}{k-1}.\]
Let \(\zeta>0\) with \(1/r,\varepsilon\ll\zeta\ll c\), let \(\theta>0\) with \(\eta\ll\theta\ll\mu,1/k\), and set \(M:=\lceil\eta t/(\theta\zeta)\rceil\). Firstly, we need the following claim.
**Claim 7.14**.: _For each \(j\in[0,M]\), and any \(S\subseteq V\) of size at most \(j\theta\zeta n/t\) divisible by \(k\) and any \(O\subseteq[n]\) of size \(|S|\), there is a sequentially path \(P_{j}\subseteq G\) such that the following holds._
(i) _\(P_{j}\) is \((S,O)\)-absorbing in \(G\),_
(ii) _\(P_{j}\) is \((c,\nu)\)-extensible and consistent with \(\overrightarrow{H}\),_
(iii) _\(V(P_{j})\) is \((100k^{3}j\zeta/t)\)-sparse in \(\mathcal{P}\) and \(V(P_{j})\cap T_{j}=\emptyset\), where \(T_{j}\) denotes the connection set of \(P_{j}\)._
_Proof of the claim._ Take \(P_{0}\) to be the empty path, and suppose that \(P_{j}\) satisfies the above conditions for some \(j\in[0,M)\); we construct \(P_{j+1}\).
For each cluster \(Z\in\mathcal{P}\), select a subset \(Z^{\prime}\subseteq Z\setminus V(P_{j})\) of size \(m^{\prime}=(1-\lambda)m\); this is possible since \(100k^{3}j\zeta/t\leq(2\eta t/(\zeta\theta))(100k^{3}\zeta/t)\leq\lambda\), which follows from \(\zeta\ll c\ll\eta\ll\lambda,\theta\). Also, since \(n\leq(1+\alpha)mt\), we have \(m^{\prime}\geq n/(2t)\). Let \(\mathcal{P}^{\prime}=\{Z^{\prime}\}_{Z\in\mathcal{P}}\), \(\mathcal{J}^{\prime}=\mathcal{J}[V(\mathcal{P}^{\prime})]\) and \(G^{\prime}_{\mathcal{J}^{\prime}}=G_{\mathcal{J}}[V(\mathcal{P}^{\prime})]\). By Lemma 4.11, \(\mathfrak{S}^{\prime}:=(G^{\prime},G^{\prime}_{\mathcal{J}^{\prime}},\mathcal{J}^{\prime},\mathcal{P}^{\prime},\overrightarrow{H})\) is a \((k,m^{\prime},2t,\sqrt{\varepsilon},\sqrt{\varepsilon_{k+1}},r,\mathbf{d})\)-regular setup.
By Lemma 7.2, for every \(v\in V\) and color cluster \(C\), we have
\[|N_{\mathcal{J}^{\prime}}((v,C),\mu/6)|\geq|N_{\mathcal{J}}((v,C),\mu/3)|\geq (\delta+\mu/4)\binom{t}{k-1},\]
and for every \(o\in[n]\) and point cluster \(Z\), we have
\[|N_{\mathcal{J}^{\prime}}((o,Z),\mu/6)|\geq|N_{\mathcal{J}}((o,Z),\mu/3)|\geq(\delta+\mu/4)\binom{t}{k-1}.\]
Thus, we obtain that for every \(v\in V\), every \(o\in[n]\) and every color cluster \(C\), there are at least \((1-\alpha)t\) point clusters \(Z\in\mathcal{P}\) such that
\[|N_{\mathcal{J}}((v,C),\mu/6)\cap N_{H}(Z,C)|\geq\frac{\mu}{5}\binom{t}{k-1},\]
and
\[|N_{\mathcal{J}}((o,Z),\mu/6)\cap N_{H}(C,Z)|\geq\frac{\mu}{5}\binom{t}{k-1}.\]
By Lemma 7.13 with \(4c\) instead of \(c\), \(2\zeta\) instead of \(\zeta\), we obtain a set \(\mathcal{A}^{\prime}\) of pairwise-disjoint \(\mathfrak{S}^{\prime}\)-gadgets which are \((4c,\nu)\)-extensible and such that
(1) \(|\mathcal{A}^{\prime}|\leq 2\zeta m^{\prime}\),
(2) \(|\mathcal{A}^{\prime}\cap\mathfrak{F}_{(T,O)}|\geq 2\zeta\theta m^{\prime}\) for any \(k\)-subset \(T\) of \(V\) and any \(k\)-subset \(O\) of \([n]\),
(3) \(V(\mathcal{A}^{\prime})\) is \((4(k^{2}+2k+2)\zeta/t)\)-sparse in \(\mathcal{P}^{\prime}\).
Next, we connect all the paths of the absorbing gadgets in \(\mathcal{A}^{\prime}\) and \(P_{j}\) to obtain \(P_{j+1}\). By Definition 7.6, there are \(2(k+1)\) pairwise disjoint sequentially paths in each \(\mathfrak{S}^{\prime}\)-gadget in \(\mathcal{A}^{\prime}\) which are \((4c,\nu)\)-extensible in \(\mathfrak{S}^{\prime}\). Let \(\mathcal{A}\) be the union of all such sequentially paths of all gadgets of \(\mathcal{A}^{\prime}\), together with \(P_{j}\). Set \(T_{j+1}=V(G)\setminus V(\mathcal{A})\); clearly, \(\mathcal{A}\) is a set of pairwise disjoint sequentially paths in \(G\) such that
(1') \(|\mathcal{A}|\leq 4(k+1)\zeta m^{\prime}+1\),
(2') \(V(\mathcal{A})\) is \((100k^{3}j\zeta/t+4(k^{2}+2k+2)\zeta/t)\)-sparse in \(\mathcal{P}\) and \(V(\mathcal{A})\cap T_{j+1}=\emptyset\),
(3') every path in \(\mathcal{A}\setminus\{P_{j}\}\) is \((2c,\nu,T_{j+1})\)-extensible in \(\mathfrak{S}\) and consistent with \(\overrightarrow{H}\), and \(P_{j}\) is \((c,\nu,T_{j+1})\)-extensible in \(\mathfrak{S}\) and consistent with \(\overrightarrow{H}\).
Note that (1') follows from (1) and the addition of \(P_{j}\); (2') follows from (iii), (3) and the definition of \(T_{j+1}\); and (3') follows from (ii) and (3), since \(4(k^{2}+2k+2)\zeta m/t\leq 2cm\). In particular, \(P_{j}\) is \((c,\nu)\)-extensible by (ii), while all other paths go from \((4c,\nu)\)-extensible in \(\mathfrak{S}^{\prime}\) to \((2c,\nu)\)-extensible in \(\mathfrak{S}\). The consistency with \(\overrightarrow{H}\) is given by the consistency of \(P_{j}\) and the definition of \(\mathfrak{S}^{\prime}\)-gadgets.
By Lemma 6.10, we obtain a sequentially path \(P_{j+1}\) with the following properties.
(A) \(P_{j+1}\) contains every path of \(\mathcal{A}\),
(B) \(P_{j+1}\) starts and ends with two paths different from \(P_{j}\),
(C) \(V(P_{j+1})\setminus V(\mathcal{A})\subseteq V(\mathcal{P}^{\prime})\),
(D) \(V(P_{j+1})\setminus V(\mathcal{A})\) intersects in at most \(10k^{2}\mathcal{A}_{Z}+t^{2t+3k+2}\) vertices with each cluster \(Z\in\mathcal{P}\), where \(\mathcal{A}_{Z}\) denotes the number of paths of \(\mathcal{A}\) that intersect with \(Z\).
We claim that \(P_{j+1}\) satisfies (i)-(iii). First, we prove (iii). Note that, for every cluster \(Z\in\mathcal{P}\), the number of paths of \(\mathcal{A}\) that intersect \(Z\) is bounded by \(4(k+1)\zeta m/t+1\). Hence, (D) implies that \(V(P_{j+1})\setminus V(\mathcal{A})\) intersects each cluster \(Z\in\mathcal{P}\) in at most \(100k^{3}\zeta m/t\) vertices. Together with (2'), it follows that \(V(P_{j+1})\) is \((100k^{3}(j+1)\zeta/t)\)-sparse in \(\mathcal{P}\).
Next, we prove (ii). \(V(P_{j+1})\setminus V(\mathcal{A})\) intersects each cluster \(Z\in\mathcal{P}\) in at most \(100k^{3}\zeta m/t\leq cm/4\) vertices, since \(\zeta\ll c\). Also, we have \(V(\mathcal{A})\cap T_{j+1}=\emptyset\). Hence, we obtain (ii) after deleting the vertices of \(P_{j+1}\) from \(T_{j+1}\). After the deletion, we go from \((2c,\nu)\)-extensible in (3') to \((c,\nu)\)-extensible. It is crucial that \(P_{j+1}\) starts and ends with two paths different from \(P_{j}\), by (B).
Finally, we claim that \(P_{j+1}\) is \((S,O)\)-absorbing in \(G\) for any \(S\subseteq V\) of size divisible by \(k\) and at most \((j+1)\zeta\theta n/t\) and any \(O\subseteq[n]\) of size \(|S|\). Partition \(S\) into two sets \(S_{1}\) and \(S_{2}\) such that both \(|S_{1}|,|S_{2}|\) are divisible by \(k\) and \(S_{1}\) is maximal such that \(|S_{1}|\leq j\zeta\theta n/t\). Partition \(O\) into two sets \(O_{1}\) and \(O_{2}\) such that \(|O_{1}|=|S_{1}|\) and \(|O_{2}|=|S_{2}|\). Since \(P_{j}\) is \((S^{\prime},O^{\prime})\)-absorbing in \(G\) for any set \(S^{\prime}\subseteq V\) of size at most \(j\zeta\theta n/t\) and any \(O^{\prime}\) with \(|O^{\prime}|=|S^{\prime}|\), there exists a path \(P^{\prime}_{j}\) with the same endpoints as \(P_{j}\) such that \(I(P^{\prime}_{j})=S_{1}\cup I(P_{j})\) and \(C(P^{\prime}_{j})=O_{1}\cup C(P_{j})\); besides, \(P_{j}\) is a subpath of \(P_{j+1}\). So it remains to absorb \(S_{2}\) and \(O_{2}\). By the choice of \(S_{1}\), we have \(|S_{2}|\leq\zeta\theta n/t+k\leq 2\zeta\theta n/t\leq 2(1+\alpha)\zeta\theta m\leq 5\zeta\theta m/2\). Therefore, we can partition \(S_{2}\) and \(O_{2}\) into \(\ell\leq 5\zeta\theta m/(2k)\leq 2\zeta\theta m^{\prime}\) sets of size \(k\) each; let \(D_{1},\ldots,D_{\ell}\) and \(R_{1},\ldots,R_{\ell}\) be those sets. By (2), we have \(|\mathfrak{F}_{(D_{i},R_{i})}\cap\mathcal{A}^{\prime}|\geq\ell\) for each \(i\in[\ell]\). Thus, we can associate each \((D_{i},R_{i})\) with a different gadget \(F_{i}\in\mathcal{A}^{\prime}\) for each \(i\in[\ell]\). Each \(F_{i}\) yields a collection of \(2(k+1)\) sequentially paths \(P_{i,1},\ldots,P_{i,2(k+1)}\), and we can replace those paths with a collection of different paths with the same endpoints which additionally absorb \((D_{i},R_{i})\). Since \(P_{j}\) and each \(P_{i,u}\), \(i\in[\ell],u\in[2(k+1)]\), are subpaths of \(P_{j+1}\), the resulting sequentially path \(P^{\prime}_{j+1}\) has the same endpoints as \(P_{j+1}\). Also, \(P^{\prime}_{j+1}\) satisfies \(I(P^{\prime}_{j+1})=I(P_{j+1})\cup S\) and \(C(P^{\prime}_{j+1})=C(P_{j+1})\cup O\).
To finish, note that \(P_{M}\) and \(C_{M}\) have the desired properties. By the choice of \(M=\lceil\eta t/(\zeta\theta)\rceil\), we have \(M\zeta\theta/t\geq\eta\), so \(P_{M}\) with \(C_{M}\) is \(\eta\)-absorbing in \(G\). Moreover, since \(M(100k^{3}\zeta/t)\leq 200k^{3}\eta/\theta\leq\lambda\) as \(\eta\ll\lambda\), \(V(P_{M})\) is \(\lambda\)-sparse in \(\mathcal{P}\).
## 8. Concluding Remarks
Inspired by a series of very recent successes on rainbow matchings [29, 28, 30, 31], rainbow Hamilton cycles [8, 9, 21] and rainbow factors [7, 12, 33], we suspect that the threshold for a rainbow spanning subgraph in a (hyper)graph system is asymptotically the same as the threshold for the corresponding spanning subgraph in a (hyper)graph.
Let \(1\leq d,\ell\leq k-1\). For \(n\in(k-\ell)\mathbb{N}\), define \(h_{d}^{\ell}(k,n)\) to be the smallest integer \(h\) such that every \(n\)-vertex \(k\)-graph \(H\) satisfying \(\delta_{d}(H)\geq h\) contains a Hamilton \(\ell\)-cycle. Han and Zhao [19] proved that
\[h_{d}^{k-1}(k,n)\geq\left(1-\binom{t}{\lfloor t/2\rfloor}\frac{\lceil t/2 \rceil^{\lceil t/2\rceil}(\lfloor t/2\rfloor+1)^{\lfloor t/2\rfloor}}{(t+1)^ {t}}+o(1)\right)\binom{n}{t} \tag{9}\]
where \(d\in[k-1]\) and \(t=k-d\). In particular, \(h_{d}^{k-1}(k,n)\geq(5/9+o(1))\binom{n}{2}\) and \(h_{d}^{k-1}(k,n)\geq(5/8+o(1))\binom{n}{3}\) for \(k-d=2\) and \(k-d=3\), respectively. Lang and Sanhueza-Matamala [27] conjectured that the minimum \(d\)-degree threshold for \(k\)-uniform tight Hamilton cycles coincides with the lower bound in (9). This leads to the following conjecture.
**Conjecture 8.1**.: _For every \(k\geq 4,\mu>0\), there exists \(n_{0}\) such that the following holds for \(n\geq n_{0}\). Given a \(k\)-graph system \(\textbf{G}=\{G_{i}\}_{i\in[n]}\), if \(\delta_{k-3}(G_{i})\geq(5/8+\mu)\binom{n}{3}\) for \(i\in[n]\), then **G** admits a rainbow Hamilton cycle._
Furthermore, we believe the following holds.
**Conjecture 8.2**.: _For every \(k,d,\mu>0\), there exists \(n_{0}\) such that the following holds for \(n\geq n_{0}\). Given a \(k\)-graph system \(\textbf{G}=\{G_{i}\}_{i\in[n]}\), if \(\delta_{d}(G_{i})\geq h_{d}^{k-1}(k,n)+\mu\binom{n}{d}\) for \(i\in[n]\), then **G** admits a rainbow Hamilton cycle._
In fact, in view of the proof method of this paper, we believe it would be interesting to study rainbow Hamilton vicinities or rainbow Hamilton frameworks in order to determine the thresholds for rainbow Hamilton cycles.
## 9. Acknowledgements
This work was supported by the Natural Science Foundation of China (12231018,11871311,11901292) and Youth Interdisciplinary Innovation Group of Shandong University.
|
2309.16590 | Vertex-primitive digraphs with large fixity | The relative fixity of a digraph $\Gamma$ is defined as the ratio between the
largest number of vertices fixed by a nontrivial automorphism of $\Gamma$ and
the number of vertices of $\Gamma$. We characterize the vertex-primitive
digraphs whose relative fixity is at least $1/3$, and we show that there are
only finitely many vertex-primitive digraphs of bounded out-valency and
relative fixity exceeding a positive constant. | Marco Barbieri, Primož Potočnik | 2023-09-28T16:50:44Z | http://arxiv.org/abs/2309.16590v1 | # Vertex-primitive digraphs with large fixity
###### Abstract.
The relative fixity of a digraph \(\Gamma\) is defined as the ratio between the largest number of vertices fixed by a nontrivial automorphism of \(\Gamma\) and the number of vertices of \(\Gamma\). We characterize the vertex-primitive digraphs whose relative fixity is at least \(\frac{1}{3}\), and we show that there are only finitely many vertex-primitive digraphs of bounded out-valency and relative fixity exceeding a positive constant.
Key words and phrases: Vertex-primitive, fixity, product action, digraph, graph
2010 Mathematics Subject Classification: 05C25, 20B25
## 1. Introduction
Throughout this paper, we use the word _digraph_ to denote a combinatorial structure \(\Gamma\) determined by a finite nonempty set of _vertices_\(V\Gamma\) and a set of _arcs_\(A\Gamma\subseteq V\Gamma\times V\Gamma\), sometimes also viewed as a binary relation on \(V\Gamma\). If the set \(A\Gamma\) is symmetric (when viewed as a binary relation on \(V\Gamma\)), then the digraph \(\Gamma\) is called a _graph_ and unordered pairs \(\{u,v\}\) such that \((u,v)\) and \((v,u)\) are arcs are called _edges_ of \(\Gamma\).
The _fixity_ of a finite digraph \(\Gamma\), denoted by \(\operatorname{Fix}(\Gamma)\), is defined as the largest number of vertices that are left fixed by a nontrivial automorphism of \(\Gamma\), while the _relative fixity of \(\Gamma\)_ is defined as the ratio
\[\operatorname{RelFix}(\Gamma)=\frac{\operatorname{Fix}(\Gamma)}{|V\Gamma|}\,.\]
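As a concrete illustration (ours, not from the source), the following Python sketch brute-forces the fixity of a small digraph. For the complete graph \(\mathbf{K}_{m}\), a transposition is an automorphism fixing \(m-2\) vertices, so \(\operatorname{RelFix}(\mathbf{K}_{m})=1-2/m\); the assertion checks this for \(m=4\).

```python
from itertools import permutations

def relative_fixity(vertices, arcs):
    """Brute force: largest number of fixed points over all nontrivial
    automorphisms, divided by the number of vertices."""
    arcs = set(arcs)
    best = 0
    for image in permutations(vertices):
        sigma = dict(zip(vertices, image))
        if all(v == sigma[v] for v in vertices):
            continue  # skip the identity permutation
        if all((sigma[u], sigma[v]) in arcs for (u, v) in arcs):
            best = max(best, sum(v == sigma[v] for v in vertices))
    return best / len(vertices)

# K_4: every permutation is an automorphism; a transposition fixes 2.
V = list(range(4))
A = [(u, v) for u in V for v in V if u != v]
assert relative_fixity(V, A) == 1 - 2 / 4
```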
The notion of fixity of (di)graphs was introduced in a 2014 paper of L. Babai [2] (see also [4]), where several deep results regarding the fixity of strongly regular graphs were proved (these results were later used in his work on the graph isomorphism problem [3]). To convey the flavour of his work, let us mention [4, Theorem 1.6], which states that the relative fixity of a strongly regular graph (other then a complete bipartite graph or the line graph of a complete graph) is at most \(\frac{7}{8}\).
The study of the fixity of graphs continued in a series of papers [5, 19, 25] by P. Spiga and coauthors (including the authors of the present paper), where the problem was studied in the context of vertex-transitive graphs of fixed valency.
Let us mention that fixity is a well studied parameter in the slightly more general context of permutation groups, where, instead of fixity, it is more common to consider the dual notion of _minimal degree_ of a permutation group \(G\), defined by
\[\mu(G)=\min_{g\in G\setminus\{1_{G}\}}\left|\operatorname{supp}(g)\right|,\]
where \(\operatorname{supp}(g)\) denotes the set of all non-fixed points of \(g\in G\). Note that the fixity of a digraph \(\Gamma\) and the minimal degree of its automorphism group \(\operatorname{Aut}(\Gamma)\) are related via the equality
\[\operatorname{Fix}(\Gamma)=|V(\Gamma)|-\mu(\operatorname{Aut}(\Gamma))\,.\]
A vast majority of papers on the topic of minimal degree of permutation groups (including the original work of Jordan on primitive permutation groups of minimal degree \(c\) for a fixed constant \(c\)) concentrates on _primitive permutation groups_ (see, for example, [1, 8, 13, 20, 23, 24]). It is thus natural to ask the following question:
**Question 1**.: What can be said about a digraph with large relative fixity whose automorphism group acts primitively on the vertex-set?
In this paper, we answer this question in the setting where the relative fixity is more than \(\frac{1}{3}\). In our analysis, we rely heavily on the recent classification of primitive permutation groups of minimal degree at most \(\frac{2}{3}\) of the degree of the permutation group from [8]. The essence of our work thus consists of determining the digraphs upon which the permutation groups from this classification act.
Before stating our main result, let us first introduce a few graph theoretical concepts and constructions. First, recall that the _direct product of the family of digraphs_\(\Gamma_{1},\ldots,\Gamma_{r}\) (sometimes also called the _tensor product_ or the _categorical product_) is the digraph \(\Gamma_{1}\times\ldots\times\Gamma_{r}\) whose vertex-set is the cartesian product \(V\Gamma_{1}\times\ldots\times V\Gamma_{r}\) and whose arc-set is
\[A(\Gamma_{1}\times\ldots\times\Gamma_{r})=\left\{\left((u_{1},\ldots,u_{r}),\, (v_{1},\ldots,v_{r})\right)\big{|}\,(u_{i},v_{i})\in A\Gamma_{i}\text{ for all }i\in\{1,\ldots,r\}\right\}\,.\]
Recall also that a _union of digraphs_\(\Gamma_{1}\) and \(\Gamma_{2}\) is the digraph whose vertex-set and arc-set are the sets \(V\Gamma_{1}\cup V\Gamma_{2}\) and \(A\Gamma_{1}\cup A\Gamma_{2}\), respectively. Note that when \(\Gamma_{1}\) and \(\Gamma_{2}\) share the same vertex-set, their union is then obtained simply by taking the union of their arc-sets. Further, for a positive integer \(m\), let \(\mathbf{L}_{m}\) and \(\mathbf{K}_{m}\) denote the _loop graph_ and the _complete graph_ on a vertex-set \(V\) of cardinality \(m\) and with arc-sets \(\{(v,v):v\in V\}\) and \(\{(u,v):u,v\in V,u\neq v\}\), respectively.
We now have all the ingredients needed to present a construction yielding the digraph appearing in our main result.
**Construction 2**.: Let \(\mathcal{G}=\{\Gamma_{0},\Gamma_{1},\ldots,\Gamma_{k}\}\) be a list of \(k+1\) pairwise distinct digraphs sharing the same vertex-set \(\Delta\). Without loss of generality, we shall always assume that \(\Gamma_{0}=\mathbf{L}_{m}\) with \(m=|\Delta|\). Further, let \(r\) be a positive integer, and let \(\mathcal{J}\) be a subset of the \(r\)-fold cartesian power \(X^{r}\), where \(X=\{0,1,\ldots,k\}\). Given this input, construct the digraph
\[\mathcal{P}(r,\mathcal{G},\mathcal{J})=\bigcup_{(j_{1},j_{2},\ldots,j_{r}) \in\mathcal{J}}\Gamma_{j_{1}}\times\Gamma_{j_{2}}\times\ldots\times\Gamma_{j _{r}}\]
and call it the _merged product action digraph_.
**Remark 3**.: We give some examples to convey a flavour of what can be obtained using Construction 2.
If \(r=1\), then \(\mathcal{P}(1,\mathcal{G},\mathcal{J})\) is simply the union of some digraphs from the set \(\mathcal{G}\).
If \(r=2\) and \(\mathcal{J}=\{(1,0),(0,1)\}\), then \(\mathcal{P}(2,\mathcal{G},\mathcal{J})=\mathbf{L}_{m}\times\Gamma_{1}\cup\Gamma_{1}\times\mathbf{L}_{m}\), which is, in fact, the _Cartesian product_\(\Gamma_{1}\,\square\,\Gamma_{1}\). (This product is sometimes called the _box product_, and we refer to [14] for the definition of the Cartesian product.)
More generally, if \(\mathcal{J}=\{e_{i}\mid i\in\{1,\ldots,r\}\}\), where \(e_{i}=(0,\ldots,0,1,0,\ldots,0)\) is the \(r\)-tuple with \(1\) in the \(i\)-th component and zeroes elsewhere, then \(\mathcal{P}(r,\mathcal{G},\mathcal{J})=(\Gamma_{1})^{\square r}\), the \(r\)-th Cartesian power of the graph \(\Gamma_{1}\in\mathcal{G}\). More specifically, if \(\Gamma_{1}=\mathbf{K}_{m}\) and \(\mathcal{J}\) is as above, then \(\mathcal{P}(r,\mathcal{G},\mathcal{J})\) is the _Hamming graph_\(\mathbf{H}(r,m)=\mathbf{K}_{m}^{\square r}\).
While \(\mathcal{J}\) can be an arbitrary set of \(r\)-tuples in \(X^{r}\), we will be mostly interested in the case where \(\mathcal{J}\subseteq X^{r}\) is invariant under the induced action of some permutation group \(H\leq\operatorname{Sym}(r)\) on the set \(X^{r}\) given by the rule
\[(j_{1},j_{2},\ldots,j_{r})^{h}=(j_{1h^{-1}},j_{2h^{-1}},\ldots,j_{rh^{-1}})\,.\]
(Throughout this paper, in the indices, we choose to write \(ih^{-1}\) instead of \(i^{h^{-1}}\) for improved legibility.) We shall say that \(\mathcal{J}\) is an \(H\)_-invariant subset of \(X^{r}\)_ in this case. A subset \(\mathcal{J}\subseteq X^{r}\) which is \(H\)-invariant for some _transitive_ subgroup of \(\operatorname{Sym}(r)\) will be called _homogeneous_.
The last example of Remark 3 justifies the introduction of the following new family of graphs.
**Definition 4**.: Let \(r,m\) be two positive integers, and let \(\mathcal{J}\subseteq\{0,1\}^{r}\) be a homogeneous set. The graph \(\mathcal{P}\left(r,\{\mathbf{L}_{m},\mathbf{K}_{m}\},\mathcal{J}\right)\) is called _generalised Hamming graph_ and is denoted by \(\mathbf{H}(r,m,\mathcal{J})\).
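For example, \(\mathcal{J}=\{e_{1},\ldots,e_{r}\}\) recovers the Hamming graph \(\mathbf{H}(r,m)\) of Remark 3, while \(\mathcal{J}=\{(1,1,\ldots,1)\}\) yields the \(r\)-fold direct power \(\mathbf{K}_{m}\times\cdots\times\mathbf{K}_{m}\); both sets are invariant under the whole of \(\operatorname{Sym}(r)\), hence homogeneous.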
**Remark 5**.: The generalised Hamming graphs \(\mathbf{H}(r,m,\mathcal{J})\), where \(\mathcal{J}\) is \(H\)-invariant, are precisely the unions of orbital graphs for the group \(\operatorname{Sym}(m)\operatorname{wr}H\) endowed with the product action (see Lemma 18 for further details).
Furthermore, a homogeneous set \(\mathcal{J}\) is said to be _Hamming_ if
\[\mathcal{J}=\bigcup_{h\in H}\left((X\backslash\{0\})^{a}\times X^{b}\times\{0 \}^{r-a-b}\right)^{h}\,,\]
for some nonnegative integers \(a,b\) such that \(a+b\leq r\) and a transitive group \(H\leq\operatorname{Sym}(r)\). It is said to be _non-Hamming_ otherwise.
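For instance, for \(r=2\), \(a=1\), \(b=0\) and \(H=\operatorname{Sym}(2)\), we obtain \(\mathcal{J}=\{(j,0),(0,j)\mid j\in X\setminus\{0\}\}\), which corresponds to the Cartesian products of Remark 3.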
**Remark 6**.: Let \(\mathcal{P}(r,\mathcal{G},\mathcal{J})\) be a merged product action digraph, where the digraphs in \(\mathcal{G}\) have \(m\) vertices and where \(\mathcal{J}\) is a Hamming set. Build \(\mathcal{J}^{\prime}\subseteq\{0,1\}^{r}\) from \(\mathcal{J}\) by replacing every nonzero entry of each sequence in \(\mathcal{J}\) with \(1\). Then
\[\mathcal{P}\left(r,\mathcal{G},\mathcal{J}\right)=\mathcal{P}\left(r,\{ \mathbf{L}_{m},\mathbf{K}_{m}\},\mathcal{J}^{\prime}\right)\,.\]
In particular, a generalised Hamming graph arises from Construction 2 if and only if \(\mathcal{J}\) is a Hamming set.
**Remark 7**.: The ordering of the Cartesian components in the definition of a Hamming set does not matter: indeed, a permutation of the components corresponds to a conjugation of the group \(H\) in \(\operatorname{Sym}(r)\), thus defining isomorphic digraphs in Construction 2.
We are ready to state our main result.
**Theorem 8**.: _Let \(\Gamma\) be a finite vertex-primitive digraph with at least one arc. Then_
\[\operatorname{RelFix}(\Gamma)>\frac{1}{3}\]
_if and only if one of the following occurs:_
1. \(\Gamma\) _is a generalised Hamming graph_ \(\mathbf{H}(r,m,\mathcal{J})\)_, with_ \(m\geq 4\)_, and_ \[\operatorname{RelFix}(\Gamma)=1-\frac{2}{m}\,;\]
2. \(\Gamma\) _is a merged product action graph_ \(\mathcal{P}(r,\mathcal{G},\mathcal{J})\)_, where_ \(r\geq 1\)_, where_ \(\mathcal{J}\) _is a non-Hamming subset of_ \(X^{r}\) _with_ \(X=\{0,1,\ldots,|\mathcal{G}|-1\}\)_, and where_ \(\mathcal{G}\) _is as in one of the following:_ 1. \(\mathcal{G}=\{\mathbf{J}(m,k,i)\mid i\in\{0,1,\ldots,k\}\}\) _is the family of distance-_\(i\) _Johnson graphs, where_ \(k,m\) _are fixed integers such that_ \(k\geq 2\) _and_ \(m\geq 2k+2\) _(see Section_ 4.2 _for details), and_ \[\operatorname{RelFix}(\Gamma)=1-\frac{2k(m-k)}{m(m-1)}\,;\] 2. \(\mathcal{G}=\{\mathbf{Q}\mathbf{J}(2m,m,i)\mid i\in\{0,1,\ldots,\lfloor m/2\rfloor\}\}\) _is the family of squashed distance-_\(i\) _Johnson graphs, where_ \(m\) _is a fixed integer with_ \(m\geq 4\) _(see Section_ 4.3 _for details), and_ \[\operatorname{RelFix}(\Gamma)=\frac{1}{2}\left(1-\frac{1}{2m-1}\right)\,;\] 3. \(\mathcal{G}=\{\mathbf{L}_{m},\Gamma_{1},\Gamma_{2}\}\)_, where_ \(\Gamma_{1}\) _is a strongly regular graph listed in Section_ 4.4_,_ \(\Gamma_{2}\) _is its complement, and_ \[\operatorname{RelFix}(\Gamma)=\operatorname{RelFix}(\Gamma_{1})\] _(the relative fixities are collected in Table_ 1_)._
**Remark 9**.: Although we do not assume that a vertex-primitive digraph \(\Gamma\) in Theorem 8 is a graph, the assumption of large relative fixity forces it to be such. In other words, every vertex-primitive digraph of relative fixity larger than \(\frac{1}{3}\) is a graph.
**Remark 10**.: The relative fixity can be arbitrarily close to \(1\). Indeed, this can be achieved by choosing a generalised Hamming graph \(\mathbf{H}(r,m,\mathcal{J})\) with \(m\) arbitrarily large.
By analysing the vertex-primitive graphs of relative fixity more than \(\frac{1}{3}\), one can notice that the out-valency of these graphs must grow as the number of vertices grows. More explicitly, a careful inspection of the families in Theorem 8 leads to the following result, the proof of which we leave out.
**Remark 11**.: There exists a constant \(C\) such that every finite connected vertex-primitive digraph \(\Gamma\) with
\[\operatorname{RelFix}(\Gamma)>\frac{1}{3}\]
satisfies
\[\operatorname{val}(\Gamma)\geq C\log\left(|V\Gamma|\right)\,.\]
Observe that, for the Hamming graphs \(\mathbf{H}(r,m)\) with \(m\geq 4\), we have that
\[\operatorname{val}\left(\mathbf{H}(r,m)\right)=r(m-1)\geq r\log(m)=\log\left(|V \mathbf{H}(r,m)|\right)\,.\]
In particular, as both expressions are linear in \(r\), a logarithmic bound in Remark 11 is the best that can be achieved.
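This inequality reduces to \(m-1\geq\log m\), which holds for every \(m\geq 1\); the small script below (an illustration of ours) confirms it numerically on a few parameters.

```python
# Numerical sanity check (ours): for the Hamming graph H(r, m) with m >= 4,
# the valency r(m-1) dominates log|V| = r*log(m) (natural logarithm),
# since m - 1 >= log(m) for every m >= 1.
import math

for r in range(1, 6):
    for m in range(4, 12):
        assert r * (m - 1) >= r * math.log(m)
print("val(H(r, m)) >= log(|V H(r, m)|) on all tested pairs")
```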
One of the consequences of Remark 11 is that for every positive integer \(d\) there exist only finitely many connected vertex-primitive digraphs of out-valency at most \(d\) and relative fixity exceeding \(\frac{1}{3}\).
As Theorem 12 and Corollary 13 show, this remains true if \(\frac{1}{3}\) is replaced by an arbitrary positive constant. We thank P. Spiga for providing us with the main ideas used in the proof.
**Theorem 12**.: _Let \(\alpha\) and \(\beta\) be two positive constants, and let \(\mathcal{F}\) be a family of quasiprimitive permutation groups \(G\) on \(\Omega\) satisfying:_
1. \(\mu(G)\leq(1-\alpha)|\Omega|\)_; and_
2. \(|G_{\omega}|\leq\beta\) _for every_ \(\omega\in\Omega\)_._
_Then \(\mathcal{F}\) is a finite family._
**Corollary 13**.: _Let \(\alpha\) be a positive constant, and let \(d\) be a positive integer. There are only finitely many vertex-primitive digraphs of out-valency at most \(d\) and relative fixity exceeding \(\alpha\)._
The proof of Theorem 8 can be found in Section 5, while Theorem 12 and Corollary 13 are proved in Section 6.
## 2. Basic concepts and notations
### Product action
We start by recalling the definition of a wreath product and its product action. By doing so, we also settle the notation for the rest of the paper. We refer to [12, Section 2.6 and 2.7] for further details.
Let \(H\) be a permutation group on a finite set \(\Omega\). Suppose that \(r=|\Omega|\), and, without loss of generality, identify \(\Omega\) with the set \(\{1,2,\ldots,r\}\). For an arbitrary set \(X\), we may define a _permutation action of \(H\) of rank \(r\) over \(X\)_ as the action of \(H\) on the set \(X^{r}\) given by the rule
\[(x_{1},x_{2},\ldots,x_{r})^{h}=(x_{1h^{-1}},x_{2h^{-1}},\ldots,x_{rh^{-1}})\.\]
Let \(K\) be a permutation group on a set \(\Delta\). We can consider the permutation action of \(H\) of rank \(r\) over \(K\) by letting
\[(k_{1},k_{2},\ldots,k_{r})^{h}=(k_{1h^{-1}},k_{2h^{-1}},\ldots,k_{rh^{-1}}) \quad\text{for all $(k_{1},k_{2},\ldots,k_{r})\in K^{r}$, $h\in H$}\,.\]
If we denote by \(\vartheta\) the homomorphism \(H\to\operatorname{Aut}(K^{r})\) corresponding to this action, then the _wreath product of \(K\) by \(H\)_, in symbols \(K\operatorname{wr}H\), is the semidirect product \(K^{r}\rtimes_{\vartheta}H\). We call \(K^{r}\) the _base group_, and \(H\) the _top group_ of this wreath product.
Note that the base and the top group are both embedded into \(K\operatorname{wr}H\) via the monomorphisms
\[(k_{1},k_{2},\ldots,k_{r})\mapsto((k_{1},k_{2},\ldots,k_{r}),1_{H})\]
and
\[h\mapsto((1_{K},1_{K},\ldots,1_{K}),h)\.\]
In this way, we may view the base and the top group as subgroups of the wreath product and identify an element \(((k_{1},k_{2},\ldots,k_{r}),h)\in K\operatorname{wr}H\) with the product \((k_{1},k_{2},\ldots,k_{r})h\) of \((k_{1},k_{2},\ldots,k_{r})\in K^{r}\) and \(h\in H\) (both viewed as elements of the group \(K\operatorname{wr}H\)).
The wreath product \(K\operatorname{wr}H\) can be endowed with an action on \(\Delta^{r}\) by letting
\[(\delta_{1},\delta_{2},\ldots,\delta_{r})^{(k_{1},k_{2},\ldots,k_{r})h}=\left( \delta_{1}^{k_{1}},\delta_{2}^{k_{2}},\ldots,\delta_{r}^{k_{r}}\right)^{h}= \left(\delta_{1h^{-1}}^{k_{1h-1}},\delta_{2h^{-1}}^{k_{2h-1}},\ldots,\delta_{rh ^{-1}}^{k_{rh-1}}\right)\,,\]
for all \((\delta_{1},\delta_{2},\ldots,\delta_{r})\in\Delta^{r},(k_{1},k_{2},\ldots,k_{ r})\in K^{r}\), and \(h\in H\). We call this action the _product action of the wreath product \(K\operatorname{wr}H\) on \(\Delta^{r}\)_.
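As an illustration (ours, not part of the source material), the product action can be implemented directly; here permutations of \(\{0,\ldots,r-1\}\) are encoded as index tuples and elements of \(K\) as functions, which is an encoding choice of the sketch.

```python
# Illustrative sketch (ours): the product action of K wr H on Delta^r.
# A permutation h of {0, ..., r-1} is encoded as a tuple with h[i] = i^h,
# and an element of K as a function Delta -> Delta.

def inverse(h):
    inv = [0] * len(h)
    for i, hi in enumerate(h):
        inv[hi] = i
    return tuple(inv)

def product_action(delta, ks, h):
    """Image of delta under (k_1, ..., k_r)h: the i-th entry of the
    image is k_{i^{h^{-1}}} applied to delta_{i^{h^{-1}}}."""
    hinv = inverse(h)
    return tuple(ks[hinv[i]](delta[hinv[i]]) for i in range(len(delta)))

# Example with Delta = {0, 1, 2} and r = 2: apply the 3-cycle (0 1 2)
# in the first base coordinate, then swap the two components.
def cycle(d):
    return (d + 1) % 3

def identity(d):
    return d

print(product_action((0, 2), (cycle, identity), (1, 0)))  # prints (2, 1)
```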
We recall the condition for a wreath product endowed with product action to be primitive.
**Lemma 14** ([12, Lemma 2.7A]).: _Let \(K\) be a permutation group on \(\Delta\) and let \(H\) be a permutation group on \(\Omega\). The wreath product \(K\operatorname{wr}H\) endowed with the product action on \(\Delta^{r}\) is primitive if and only if \(H\) is transitive and \(K\) is primitive but not regular._
We now introduce some notation to deal with any subgroup \(G\) of \(\operatorname{Sym}(\Delta)\operatorname{wr}\operatorname{Sym}(\Omega)\) endowed with product action on \(\Delta^{r}\).
By abuse of notation, we identify the set \(\Delta\) with
\[\left\{\left\{\delta\right\}\times\Delta^{r-1}\,\big{|}\,\delta\in\Delta\right\}\]
via the mapping \(\delta\mapsto\left\{\delta\right\}\times\Delta^{r-1}\). We denote by \(G_{\Delta}^{\Delta}\) the permutation group that \(G_{\Delta}\) induces on \(\Delta\), that is,
\[G_{\Delta}^{\Delta}\cong G_{\Delta}/G_{(\Delta)}\,.\]
(Recall that \(G_{(\Delta)}\) denotes the pointwise stabilizer of \(\Delta\).)
Moreover, recalling that every element of \(G\) can be written uniquely as \(gh\), for some \(g\in\operatorname{Sym}(\Delta)^{r}\) and some \(h\in\operatorname{Sym}(\Omega)\), we can define the group homomorphism
\[\psi:G\to\operatorname{Sym}(\Omega),\quad gh\mapsto h\,.\]
This map defines a new permutational representation of \(G\) acting on \(\Omega\). We denote by \(G^{\Omega}\) the permutation group corresponding to the faithful action that \(G\) defines on \(\Omega\), that is,
\[G^{\Omega}\cong G/\ker(\psi)\,.\]
Recall that a primitive group \(G\), according to the O'Nan-Scott classification (see, for instance, [22, III\((b)(i)\)]), is said to be of _product action type_ if there exists a transitive group \(H\leqslant\operatorname{Sym}(\Omega)\) and a primitive almost simple group \(K\leqslant\operatorname{Sym}(\Delta)\) with socle \(T\) such that, for some integer \(r\geqslant 2\),
\[T^{r}\leqslant G\leqslant K\operatorname{wr}H\,,\]
where \(T^{r}\) is the socle of \(G\), thus contained in the base group \(K^{r}\). A detailed description of primitive groups of product action type was given by L. G. Kovacs in [18].
**Remark 15**.: By [26, Theorem 1.1 \((b)\)], a group \(G\) of product action type is permutationally isomorphic to a subgroup of \(G_{\Delta}^{\Delta}\operatorname{wr}G^{\Omega}\). Therefore, up to a conjugation in \(\operatorname{Sym}(\Delta^{r})\), the group \(K\) can always be chosen as \(G_{\Delta}^{\Delta}\), and \(H\) as \(G^{\Omega}\).
### Groups acting on digraphs
We give a short summary of standard notations for digraphs and graphs.
A _digraph_ \(\Gamma\) is a pair \((V\Gamma,A\Gamma)\), where \(V\Gamma\) is a finite set of _vertices_ and \(A\Gamma\subseteq V\Gamma\times V\Gamma\) is a set of _arcs_; a _graph_ is a digraph whose arc-set is a symmetric relation. If a subgroup \(G\leqslant\operatorname{Aut}(\Gamma)\) is primitive on \(V\Gamma\), we say that \(\Gamma\) is \(G\)_-vertex-primitive_. In a similar way, if \(G\) is transitive on \(A\Gamma\), we say that \(\Gamma\) is \(G\)_-arc-transitive_. The analogous notions can be defined for graphs, and when \(G=\operatorname{Aut}(\Gamma)\) we drop the prefix \(G\).
For any vertex \(v\in V\Gamma\), we denote by \(\Gamma(v)\) its _out-neighbourhood_, that is, the set of vertices \(u\in V\Gamma\) such that \((v,u)\in A\Gamma\). The size of the out-neighbourhood of a vertex \(v\), \(|\Gamma(v)|\), is called the _out-valency of \(v\)_. If \(\Gamma\) is \(G\)-vertex-primitive, for some group \(G\), then the out-valency is independent of the choice of the vertex \(v\), thus we will refer to it as the _out-valency of \(\Gamma\)_, in symbols \(\operatorname{val}(\Gamma)\). Whenever \(\Gamma\) is a graph, _neighbourhood_ and _valency_ can be defined in the same way.
An _orbital for \(G\)_ is an orbit of \(G\) in its induced action on \(\Omega\times\Omega\). An _orbital digraph for \(G\)_ is a digraph whose vertex-set is \(\Omega\) and whose arc-set is an orbital for \(G\). An example of an orbital for \(G\) is the _diagonal orbital_ \((\omega,\omega)^{G}\); the corresponding orbital digraph, which is disconnected, is called the _diagonal orbital digraph_. We refer to [12, Section 3.2] for further details.
Note that an orbital digraph for \(G\) is always \(G\)-arc-transitive, and, conversely, every \(G\)-arc-transitive digraph is an orbital digraph for \(G\). Furthermore, if \(G\leq\operatorname{Aut}(\Gamma)\) is a group of automorphisms of a given digraph \(\Gamma\), then \(\Gamma\) is a union of orbital digraphs for \(G\) acting on \(\operatorname{\mathrm{V}\Gamma}\).
The number of distinct orbital digraphs for \(G\) is called the _permutational rank of \(G\)_. In particular, \(2\)-transitive permutation groups are precisely those of permutational rank \(2\).
If \(A\subseteq\Omega\times\Omega\) is an orbital for \(G\), then so is the set \(A^{\ast}=\{(\beta,\alpha)\mid(\alpha,\beta)\in A\}\). If \(A=A^{\ast}\), then the orbital \(A\) is called _self-paired_. Similarly, an orbital digraph is _self-paired_ if its arc-set is a self-paired orbital. Note that any \(G\)-arc-transitive graph is obtained from a self-paired orbital digraph for \(G\).
## 3. Orbital digraphs for wreath products in product action
We are interested in reconstructing the orbital digraphs of a wreath product \(K\operatorname{wr}H\) endowed with product action once the orbital digraphs of \(K\) are known.
**Lemma 16**.: _Let \(K\operatorname{wr}H\) be a wreath product endowed with the product action on \(\Delta^{r}\), and let_
\[\mathcal{G}=\{\Gamma_{0},\Gamma_{1},\ldots,\Gamma_{k}\}\]
_be the complete list of the orbital digraphs for \(K\). Then any orbital digraph for \(K\operatorname{wr}H\) is a merged product action digraph of the form_
\[\mathcal{P}\left(r,\mathcal{G},(j_{1},j_{2},\ldots,j_{r})^{H}\right)\,,\]
_for a sequence of indices \((j_{1},j_{2},\ldots,j_{r})\in X^{r}\), where \(X=\{0,1,\ldots,k\}\)._
Proof.: Let \(\Gamma\) be an orbital digraph for \(K\operatorname{wr}H\). Suppose that \((u,v)\in A\Gamma\), where \(u=(u_{1},u_{2},\ldots,u_{r})\) and \(v=(v_{1},v_{2},\ldots,v_{r})\). We aim to compute the \(K\operatorname{wr}H\)-orbit of \((u,v)\) and, in doing so, to prove that there is a sequence of indices \((j_{1},j_{2},\ldots,j_{r})\in X^{r}\) such that
\[A\Gamma=A\mathcal{P}\left(r,\mathcal{G},(j_{1},j_{2},\ldots,j_{r})^{H}\right)\,.\]
We start by computing the \(K^{r}\)-orbit of \((u,v)\) (where by \(K^{r}\) we refer to the base group of \(K\operatorname{wr}H\)). Since this action is componentwise, we obtain that
\[(u,v)^{K^{r}}=A\left(\Gamma_{j_{1}}\times\Gamma_{j_{2}}\times\ldots\times \Gamma_{j_{r}}\right)\,,\]
where \((u_{i},v_{i})\) is an arc of \(\Gamma_{j_{i}}\) for all \(i=1,2,\ldots,r\).
The top group \(H\) acts by permuting the components, so that
\[(u,v)^{K\operatorname{wr}H}=\bigcup_{(j_{1}^{\prime},j_{2}^{\prime},\ldots,j _{r}^{\prime})\in(j_{1},j_{2},\ldots,j_{r})^{H}}A\left(\Gamma_{j_{1}^{\prime} }\times\Gamma_{j_{2}^{\prime}}\times\ldots\times\Gamma_{j_{r}^{\prime}}\right)\]
Therefore, the arc-sets of \(\Gamma\) and \(\mathcal{P}\left(r,\mathcal{G},(j_{1},j_{2},\ldots,j_{r})^{H}\right)\) coincide.
As their vertex-sets are both \(\Delta^{r}\), the proof is complete.
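The orbit computation above is easy to mechanise for small parameters. The sketch below (ours, purely illustrative) assembles the arc-set of \(\mathcal{P}\left(r,\mathcal{G},(j_{1},\ldots,j_{r})^{H}\right)\) from the component arc-sets; with \(\Delta=\{0,1\}\), \(\mathcal{G}=\{\mathbf{L}_{2},\mathbf{K}_{2}\}\) and \(H=\operatorname{Sym}(2)\) it returns the \(8\) arcs of \(\mathbf{H}(2,2,\{(0,1),(1,0)\})\), a \(4\)-cycle with both arc directions.

```python
# Illustrative computation (ours): arcs of the merged product action
# digraph P(r, G, (j_1, ..., j_r)^H) from the component arc-sets,
# following the orbit computation in the proof of Lemma 16.
from itertools import product

def h_orbit(seq, H):
    """Orbit of an index sequence under H <= Sym(r); h[i] = image of i,
    and seq^h has i-th entry seq[h^{-1}(i)], i.e. seq^h[h[i]] = seq[i]."""
    orbit = set()
    for h in H:
        image = [None] * len(seq)
        for i, s in enumerate(seq):
            image[h[i]] = s
        orbit.add(tuple(image))
    return orbit

def merged_arcs(arc_sets, seq, H):
    """arc_sets[j] is the arc-set of Gamma_j on a common vertex set."""
    arcs = set()
    for js in h_orbit(seq, H):
        for pairs in product(*(arc_sets[j] for j in js)):
            u = tuple(p[0] for p in pairs)
            v = tuple(p[1] for p in pairs)
            arcs.add((u, v))
    return arcs

# Delta = {0, 1}: Gamma_0 the loop graph L_2, Gamma_1 the complete graph K_2.
L2 = {(0, 0), (1, 1)}
K2 = {(0, 1), (1, 0)}
H = [(0, 1), (1, 0)]                          # Sym(2) permuting the components
print(len(merged_arcs([L2, K2], (0, 1), H)))  # prints 8
```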
Now that we know how to build the orbital digraphs for a permutation group in product action, we ask what can be said about the orbital digraphs of its subgroups.
**Theorem 17**.: _Let \(G\leq\operatorname{Sym}(\Delta)\operatorname{wr}\operatorname{Sym}(\Omega)\) be a primitive group of product action type, and let \(T\) be the socle of \(G_{\Delta}^{\Delta}\). Suppose that \(T\) and \(G_{\Delta}^{\Delta}\) share the same orbital digraphs. Then the orbital digraphs for \(G\) coincide with the orbital digraphs for \(G_{\Delta}^{\Delta}\operatorname{wr}G^{\Omega}\), or, equivalently, for \(T\operatorname{wr}G^{\Omega}\)._
Proof.: Since \(G\) is a primitive group of product action type, we can suppose that \(G\) is a subgroup of \(G_{\Delta}^{\Delta}\operatorname{wr}G^{\Omega}\) with socle \(T^{r}\), where \(r=|\Omega|\). Further, we set \(K=G_{\Delta}^{\Delta}\), \(H=G^{\Omega}\).
As \(G\leq K\operatorname{wr}H\), the partition of \(\Delta^{r}\times\Delta^{r}\) in arc-sets of orbital digraphs for \(K\operatorname{wr}H\) is coarser than the one for \(G\). Hence, our aim is to show that a generic orbital digraph for \(K\operatorname{wr}H\) is also an orbital digraph for \(G\).
Let
\[\mathcal{G}=\{\Gamma_{0},\Gamma_{1},\ldots,\Gamma_{k}\}\]
be the complete list of orbital digraphs for \(T\) acting on \(\Delta\), and let \(X=\{0,1,\ldots,k\}\). Observe that the set of orbital digraphs for \(T^{r}\) can be identified with the Cartesian product of \(r\) copies of \(\mathcal{G}\): indeed, we can bijectively map the generic orbital digraph for \(T^{r}\), say \(\Gamma_{j_{1}}\times\Gamma_{j_{2}}\times\ldots\times\Gamma_{j_{r}}\), for some \((j_{1},j_{2},\ldots,j_{r})\in X^{r}\), to the generic \(r\)-tuple of the Cartesian product \(\mathcal{G}^{r}\) of the form \((\Gamma_{j_{1}},\Gamma_{j_{2}},\ldots,\Gamma_{j_{r}})\). This point of view explains why \(H\) can act on the set of orbital digraphs for \(T^{r}\) with its action of rank \(r\).
Observe that the set of orbital digraphs for \(T^{r}\) is equal to the set of orbital digraphs for \(K^{r}\). Moreover, \(T^{r}\) is a subgroup of \(G\), and \(K^{r}\) is a subgroup of \(K\operatorname{wr}H\). Thus the orbital digraphs for \(G\) and for \(K\operatorname{wr}H\) are obtained as suitable unions of the elements of \(\mathcal{G}^{r}\).
By Lemma 16, the orbital digraphs for \(K\operatorname{wr}H\) are of the form
\[\bigcup_{(j_{1}^{\prime},j_{2}^{\prime},\ldots,j_{r}^{\prime})\in(j_{1},j_{2},\ldots,j_{r})^{H}}\Gamma_{j_{1}^{\prime}}\times\Gamma_{j_{2}^{\prime}}\times \ldots\times\Gamma_{j_{r}^{\prime}}\,,\]
for some \((j_{1},j_{2},\ldots,j_{r})\in X^{r}\). Aiming for a contradiction, suppose that
\[\Gamma_{j_{1}}\times\Gamma_{j_{2}}\times\ldots\times\Gamma_{j_{r}}\quad\text{ and}\quad\Gamma_{i_{1}}\times\Gamma_{i_{2}}\times\ldots\times\Gamma_{i_{r}}\]
are two distinct orbital digraphs for \(T^{r}\) that are merged under the action of the top group \(H\), but are not merged under the action of \(G\). The first portion of the assumption yields that there is an element \(h\in H\) such that
\[(\Gamma_{j_{1}}\times\Gamma_{j_{2}}\times\ldots\times\Gamma_{j_{r}})^{h}= \Gamma_{i_{1}}\times\Gamma_{i_{2}}\times\ldots\times\Gamma_{i_{r}}\,.\]
By definition of \(H=G^{\Omega}\), there is an element in \(G\) of the form
\[(g_{1},g_{2},\ldots,g_{r})h\in G.\]
Recalling that, for any \(i=1,2,\ldots,r\), \(g_{i}\in K\), we get
\[(\Gamma_{j_{1}}\times\Gamma_{j_{2}}\times\ldots\times\Gamma_{j_{r}})^{(g_{1}, g_{2},\ldots,g_{r})h}=\Gamma_{i_{1}}\times\Gamma_{i_{2}}\times\ldots \times\Gamma_{i_{r}}\,.\]
Therefore, the merging among these orbital graphs is also realised under the action of \(G\), a contradiction.
By the initial remark, the proof is complete.
## 4. Daily specials
The aim of this section is to give a description of the digraphs appearing in Theorem 8.
### Generalised Hamming graphs
In this section, we clarify Remark 5 and we compute the relative fixity of the generalised Hamming graphs.
**Lemma 18**.: _Let \(H\leqslant\operatorname{Sym}(r)\) be a transitive permutation group, let \(G=\operatorname{Alt}(\Delta)\operatorname{wr}H\) endowed with the product action on \(\Delta^{r}\), and let \(\Gamma\) be a digraph with vertex-set \(V\Gamma=\Delta^{r}\). Then \(G\leqslant\operatorname{Aut}(\Gamma)\) if and only if \(\Gamma\) is a generalised Hamming graph \(\mathbf{H}(r,m,\mathcal{J})\), where \(|\Delta|=m\) and \(\mathcal{J}\subseteq\{0,1\}^{r}\) is \(H\)-invariant._
Proof.: By applying Lemma 16 and taking the union of the resulting orbital digraphs, we obtain the left-to-right direction of the equivalence. Let us now deal with the converse implication. Let \(\Gamma=\mathbf{H}(r,m,\mathcal{J})\), where \(|\Delta|=m\) and \(\mathcal{J}\subseteq\{0,1\}^{r}\) is \(H\)-invariant. By Construction 2 and Definition 4,
\[\mathbf{H}(r,m,\mathcal{J})=\bigcup_{h\in H}\left(\bigcup_{i=0}^{b}\mathbf{K}_{m}^{a+i}\times\mathbf{L}_{m}^{b+c-i}\right)^{h}\,,\]
for some nonnegative integers \(a,b\) such that \(a+b\leq r\), where we set \(c=r-a-b\). As each component of the graphs in parentheses is either \(\mathbf{K}_{m},\mathbf{L}_{m}\) or \(\mathbf{K}_{m}\cup\mathbf{L}_{m}\), we have that
\[\operatorname{Alt}(m)^{r}\leq\operatorname{Aut}\left(\bigcup_{i=0}^{b}\mathbf{K }_{m}^{a+i}\times\mathbf{L}_{m}^{b+c-i}\right)\,.\]
Moreover, as \(\mathcal{J}\) is \(H\)-invariant, the action of rank \(r\) that \(H\) induces on \(\Delta^{r}\) preserves the arc-set of \(\mathbf{H}(r,m,\mathcal{J})\). As \(G\) is generated by \(\operatorname{Alt}(m)^{r}\) and this \(H\) in their actions on \(\Delta^{r}\), this implies that \(G\leq\operatorname{Aut}(\Gamma)\), as claimed.
Instead of directly computing the relative fixity of \(\mathbf{H}(r,m,\mathcal{J})\), we prove the following stronger result.
**Lemma 19**.: _Let \(K\operatorname{wr}H\) be a wreath product endowed with the product action on \(\Delta^{r}\), and let \(\Gamma\) be a digraph with vertex set \(\Delta^{r}\). Suppose that \(K\operatorname{wr}H\leq\operatorname{Aut}(\Gamma)\). Then_
\[\operatorname{RelFix}(\Gamma)=1-\frac{\mu\left(\operatorname{Aut}(\Gamma) \cap\operatorname{Sym}(\Delta)^{r}\right)}{|V\Gamma|}\,.\]
_In particular, the relative fixity of a generalised Hamming graph is_
\[\operatorname{RelFix}\left(\mathbf{H}(r,m,\mathcal{J})\right)=1-\frac{2}{m}\,.\]
Proof.: Suppose that \(|\Delta|=m\), then, by hypothesis,
\[K\operatorname{wr}H\leq\operatorname{Aut}(\Gamma)\leq\operatorname{Sym}(m) \operatorname{wr}\operatorname{Sym}(r)\,.\]
We claim that the automorphism that realizes the minimal degree must be contained in \(\operatorname{Aut}(\Gamma)\cap\operatorname{Sym}(m)^{r}\) (where \(\operatorname{Sym}(m)^{r}\) is the base group of \(\operatorname{Sym}(m)\operatorname{wr}\operatorname{Sym}(r)\)). Indeed, upon choosing an element of minimal degree in \(K\times\{\operatorname{id}\}\times\ldots\times\{\operatorname{id}\}\) and a transposition from the top group in \(\operatorname{Sym}(m)\operatorname{wr}\operatorname{Sym}(r)\), we obtain the inequalities
\[\mu\left(\operatorname{Aut}(\Gamma)\cap\operatorname{Sym}(m)^{r}\right)\leq\mu(K)\,m^{r-1}\leq(m-1)\,m^{r-1}\leq\min\left\{|\operatorname{supp}(g)|\,\middle|\,g\in\operatorname{Aut}(\Gamma)\backslash\operatorname{Sym}(m)^{r}\right\}\,.\]
This is enough to prove the first portion of the statement.
In particular, to compute the relative fixity of \(\mathbf{H}(r,m,\mathcal{J})\), it is enough to look at the action of \(\operatorname{Sym}(m)\) on a single component. Thus, upon choosing a transposition in \(\operatorname{Sym}(m)\times\{\operatorname{id}\}\times\ldots\times\{\operatorname{id}\}\), we obtain
\[\operatorname{RelFix}\left(\mathbf{H}(r,m,\mathcal{J})\right)=1-\frac{2m^{r-1 }}{m^{r}}=1-\frac{2}{m}\,.\qed\]
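The last equality can be checked by brute force: a transposition acting in a single component of \(\Delta^{r}\) moves exactly \(2m^{r-1}\) of the \(m^{r}\) vertices. The following verification is ours.

```python
# Brute-force check (ours) of the fixity computation in Lemma 19:
# a transposition acting in one component of Delta^r moves exactly
# 2 m^{r-1} vertices, giving RelFix(H(r, m, J)) = 1 - 2/m.
from itertools import product

def support_of_component_transposition(m, r):
    moved = 0
    for x in product(range(m), repeat=r):
        # transposition (0 1) acting on the first component
        image = ((1 if x[0] == 0 else 0 if x[0] == 1 else x[0]),) + x[1:]
        if image != x:
            moved += 1
    return moved

for m, r in [(4, 2), (5, 2), (4, 3)]:
    assert support_of_component_transposition(m, r) == 2 * m ** (r - 1)
print("support = 2 m^{r-1}, hence RelFix = 1 - 2/m")
```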
### Distance-\(i\) Johnson graphs
The nomenclature dealing with possible generalizations of the Johnson graph is as lush as it is confusing. In this paper, we are adopting the one from [16].
Let \(m,k,i\) be integers such that \(m\geq 1\), \(1\leq k\leq m\) and \(0\leq i\leq k\). A _distance-\(i\) Johnson graph_, denoted by \(\mathbf{J}(m,k,i)\), is the graph whose vertex-set is the family of \(k\)-subsets of \(\{1,2,\ldots,m\}\), and such that two \(k\)-subsets, say \(X\) and \(Y\), are adjacent whenever \(|X\cap Y|=k-i\). The usual Johnson graph is then \(\mathbf{J}(m,k,1)\), and two subsets \(X\) and \(Y\) are adjacent in \(\mathbf{J}(m,k,i)\) if and only if they are at distance \(i\) in \(\mathbf{J}(m,k,1)\).
**Lemma 20**.: _Let \(m,k\) be two positive integers such that \(m\geq 2k+2\). The orbital digraphs of \(\operatorname{Alt}(m)\) and of \(\operatorname{Sym}(m)\) in their action on \(k\)-subsets are the distance-\(i\) Johnson graphs \(\mathbf{J}(m,k,i)\), one for each choice of \(i\in\{0,1,\ldots,k\}\)._
Proof.: Suppose that two \(k\)-subsets \(X\) and \(Y\) are such that \((X,Y)\) is an arc of the considered orbital digraph and \(|X\cap Y|=k-i\), for a nonnegative integer \(i\leq k\). Since \(\operatorname{Alt}(m)\) is \((m-2)\)-transitive and \(2k\leq m-2\), the \(\operatorname{Alt}(m)\)-orbit of the arc \((X,Y)\) contains all the pairs \((Z,W)\), where \(Z\) and \(W\) are \(k\)-subsets with \(|Z\cap W|=k-i\). Therefore, the statement is true for the alternating group. The same proof can be repeated _verbatim_ for \(\operatorname{Sym}(m)\).
**Lemma 21**.: _Let \(m,k,i\) be three positive integers such that \(m\geq 2k+2\) and \(i\neq k\). Then the relative fixity of the distance-\(i\) Johnson graphs \(\mathbf{J}(m,k,i)\) is_
\[\operatorname{RelFix}(\mathbf{J}(m,k,i))=1-\frac{2k(m-k)}{m(m-1)}\,.\]
Proof.: Under our assumption, by [15, Theorem 2 (\(a\))], the automorphism group of \(\mathbf{J}(m,k,i)\) is \(\operatorname{Sym}(m)\) in its action on \(k\)-subsets. Its minimal degree is achieved by any transposition (see [13, Section 1]), and equals
\[\mu\left(\operatorname{Sym}(m)\right)=2\binom{m-2}{k-1}\,.\]
Hence, since the graph has \(\binom{m}{k}\) vertices, we find that
\[\operatorname{RelFix}(\mathbf{J}(m,k,i))=1-\frac{2k(m-k)}{m(m-1)}\,.\qed\]
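The final simplification \(2\binom{m-2}{k-1}\big/\binom{m}{k}=\frac{2k(m-k)}{m(m-1)}\) can be confirmed in exact arithmetic; the check below is ours.

```python
# Verification (ours) of the identity used in Lemma 21:
# a transposition moves exactly 2*C(m-2, k-1) of the C(m, k) k-subsets,
# and 2*C(m-2, k-1)/C(m, k) = 2k(m-k)/(m(m-1)).
from fractions import Fraction
from math import comb

for m in range(6, 15):
    for k in range(2, m // 2):
        moved = Fraction(2 * comb(m - 2, k - 1), comb(m, k))
        formula = Fraction(2 * k * (m - k), m * (m - 1))
        assert moved == formula
print("2*C(m-2,k-1)/C(m,k) = 2k(m-k)/(m(m-1)) on all tested pairs")
```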
### Squashed distance-\(i\) Johnson graphs
A usual construction in the realm of distance-transitive graphs consists in obtaining smaller examples starting from a distance-transitive graph and identifying vertices at maximal distance. We need to apply this idea to a family of generalised Johnson graphs.
Consider the distance-\(i\) Johnson graph \(\mathbf{J}(2m,m,i)\), for some integers \(m\) and \(i\), with \(m\) positive and \(0\leq i\leq m\). We say that two vertices of \(\mathbf{J}(2m,m,i)\) are _disjoint_ if they have empty intersection as \(m\)-subsets. Observe that being disjoint is an equivalence relation, and our definition coincides with the usual notion of antipodal for \(\mathbf{J}(2m,m,1)\) seen as a metric space. We can build a new graph \(\mathbf{Q}\mathbf{J}(2m,m,i)\) whose vertex-set is the set of equivalence classes of the disjoint relation, and such that, if \([X]\) and \([Y]\) are two generic vertices, then \(([X],[Y])\) is an arc in \(\mathbf{Q}\mathbf{J}(2m,m,i)\) whenever there is a pair of representatives, say \(X^{\prime}\in[X]\) and \(Y^{\prime}\in[Y]\), such that \((X^{\prime},Y^{\prime})\) is an arc in \(\mathbf{J}(2m,m,i)\). We call \(\mathbf{Q}\mathbf{J}(2m,m,i)\) a _squashed distance-\(i\) Johnson graph_.
Observe that \(\mathbf{J}(2m,m,i)\) is a regular double cover of \(\mathbf{Q}\mathbf{J}(2m,m,i)\). Furthermore, \(\mathbf{Q}\mathbf{J}(2m,m,i)\) and \(\mathbf{Q}\mathbf{J}(2m,m,m-i)\) are isomorphic graphs, thus we can restrict the range of \(i\) to \(\{0,1,\ldots,\lfloor m/2\rfloor\}\).
**Lemma 22**.: _Let \(m\geq 3\) be an integer. The orbital digraphs of \(\operatorname{Alt}(2m)\) and of \(\operatorname{Sym}(2m)\) in their primitive actions with stabilizer \((\operatorname{Sym}(m)\operatorname{wr}C_{2})\cap\operatorname{Alt}(2m)\) and \(\operatorname{Sym}(m)\operatorname{wr}C_{2}\) respectively are the squashed distance-\(i\) Johnson graphs \(\mathbf{QJ}(2m,m,i)\), one for each choice of \(i\in\{0,1,\ldots,\lfloor m/2\rfloor\}\)._
Proof.: To start, we note that the set \(\Omega\) on which the groups are acting can be identified with the set of partitions of the set \(\{1,2,\ldots,2m\}\) with two parts of equal size \(m\). Suppose that \(\{X_{1},X_{2}\}\) and \(\{Y_{1},Y_{2}\}\) are two such partitions and that \((\{X_{1},X_{2}\},\{Y_{1},Y_{2}\})\) is an arc of the orbital digraph we are building, with
\[\min\{|X_{1}\cap Y_{1}|,\,|X_{1}\cap Y_{2}|\}=m-i\,,\]
for a nonnegative integer \(i\leq\lfloor m/2\rfloor\). To determine the image of \((\{X_{1},X_{2}\},\{Y_{1},Y_{2}\})\) under the group action, it is enough to know the images of \(X_{1}\) and \(Y_{2}\), that is, of at most \(2m-\lceil m/2\rceil\leq 2m-2\) distinct points. By the \((2m-2)\)-transitivity of \(\operatorname{Alt}(2m)\), the \(\operatorname{Alt}(2m)\)-orbit of \((\{X_{1},X_{2}\},\{Y_{1},Y_{2}\})\) contains all the arc of the form \((\{Z_{1},Z_{2}\},\{W_{1},W_{2}\})\), where \(\{Z_{1},Z_{2}\},\{W_{1},W_{2}\}\in\Omega\) and
\[\min\{|Z_{1}\cap W_{1}|,\,|Z_{1}\cap W_{2}|\}=m-i\,.\]
To conclude, observe that \(\Omega\) is the set of \(m\)-subsets of \(\{1,2,\ldots,2m\}\) in which two elements are identified if they are disjoint, and that
\[\min\{|X_{1}\cap Y_{1}|,\,|X_{1}\cap Y_{2}|\}=m-i\,,\]
is the adjacency condition in a squashed distance-\(i\) Johnson graph. As in Lemma 20, the same reasoning can be extended to \(\operatorname{Sym}(2m)\). Therefore, the orbital digraphs of \(\operatorname{Alt}(2m)\) and of \(\operatorname{Sym}(2m)\) in these primitive actions are the squashed distance-\(i\) Johnson graphs \(\mathbf{QJ}(2m,m,i)\), for some \(i\in\{0,1,\ldots,\lfloor m/2\rfloor\}\).
**Lemma 23**.: _Let \(m,i\) be two positive integers such that \(m\geq 3\) and \(i\neq\lfloor m/2\rfloor\). Then the relative fixity of the squashed distance-\(i\) Johnson graph \(\mathbf{QJ}(2m,m,i)\) is_
\[\operatorname{RelFix}(\mathbf{QJ}(2m,m,i))=\frac{1}{2}\left(1-\frac{1}{2m-1}\right)\,.\]
Proof.: Consider \(\mathbf{J}(2m,m,i)\), the regular double covering of \(\mathbf{QJ}(2m,m,i)\). In view of [15, Theorem 2 (\(e\))], the automorphism group of \(\mathbf{J}(2m,m,i)\) is \(\operatorname{Sym}(2m)\times\operatorname{Sym}(2)\), where the central involution swaps pairs of disjoint vertices. It follows that the automorphism group of \(\mathbf{QJ}(2m,m,i)\) is \(\operatorname{Sym}(2m)\). Now, we can immediately verify that the stabilizer of the vertex \(\{\{1,2,\ldots,m\},\{m+1,m+2,\ldots,2m\}\}\) is \(\operatorname{Sym}(m)\operatorname{wr}C_{2}\). The minimal degree of the primitive action of \(\operatorname{Sym}(2m)\) with stabilizer \(\operatorname{Sym}(m)\operatorname{wr}C_{2}\) is
\[\mu\left(\operatorname{Sym}(2m)\right)=\frac{1}{4}\left(1+\frac{1}{2m-1} \right)\frac{(2m)!}{m!^{2}}\]
(see [8, Theorem 4]). Since the number of vertices is \(v=\frac{1}{2}\frac{(2m)!}{m!^{2}}\), this gives \(\mu\left(\operatorname{Sym}(2m)\right)/v=\frac{1}{2}\left(1+\frac{1}{2m-1}\right)\). Thus, we find that
\[\operatorname{RelFix}(\mathbf{QJ}(2m,m,i))=\frac{1}{2}\left(1-\frac{1}{2m-1} \right)\,.\qed\]
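The rational arithmetic in the last step is easily confirmed; the verification below is ours.

```python
# Verification (ours) of the arithmetic in Lemma 23: with
# v = C(2m, m)/2 vertices and mu(Sym(2m)) as in [8, Theorem 4],
# 1 - mu/v = (1/2)(1 - 1/(2m-1)).
from fractions import Fraction
from math import comb

for m in range(3, 12):
    v = Fraction(comb(2 * m, m), 2)
    mu = Fraction(1, 4) * (1 + Fraction(1, 2 * m - 1)) * comb(2 * m, m)
    assert 1 - mu / v == Fraction(1, 2) * (1 - Fraction(1, 2 * m - 1))
print("RelFix(QJ(2m, m, i)) = (1/2)(1 - 1/(2m-1)) verified")
```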
### Strongly regular graphs
We list all the strongly regular graphs appearing as \(\Gamma_{1}\) in Theorem 8\((ii)(c)\). We divide them according to the socle \(L\) of the almost simple group that acts on them. Furthermore, the present enumeration corresponds to that of the groups acting on these graphs, as listed in Theorem 24\((e)\) below.
1. \(L=U_{4}(q)\), \(q\in\{2,3\}\), acting on totally singular \(2\)-dimensional subspaces of the natural module, two vertices of \(\Gamma\) are adjacent if there is a third \(2\)-dimensional subspace that intersects both vertices in a \(1\)-dimensional subspace (see [7, Section 2.2.12]);
2. \(L=\Omega_{2m+1}(3),m\geq 2\), acting on the singular points of the natural module, two vertices of \(\Gamma\) are adjacent if they are orthogonal (see [7, Theorem 2.2.12]);
3. \(L=\Omega_{2m+1}(3),m\geq 2\), acting on the nonsingular points of the natural module, two vertices of \(\Gamma\) are adjacent if the line that connects them is tangent to the quadric where the quadratic form vanishes (see [7, Section 3.1.4]);
4. \(L=\operatorname{P\Omega}_{2m}^{\varepsilon}(2),\varepsilon\in\{+,-\},m\geq 3\), acting on the singular points of the natural module, two vertices of \(\Gamma\) are adjacent if they are orthogonal (see [7, Theorem 2.2.12]);
5. \(L=\operatorname{P\Omega}_{2m}^{\varepsilon}(2),\varepsilon\in\{+,-\},m\geq 2\), acting on the nonsingular points of the natural module, two vertices of \(\Gamma\) are adjacent if they are orthogonal (see [7, Section 3.1.2]);
6. \(L=\operatorname{P\Omega}_{2m}^{+}(3),m\geq 2\) acting on the nonsingular points of the natural module, two vertices of \(\Gamma\) are adjacent if they are orthogonal (see [7, Section 3.1.3]);
7. \(L=\operatorname{P\Omega}_{2m}^{-}(3),m\geq 3\) acting on the singular points of the natural module, two vertices of \(\Gamma\) are adjacent if they are orthogonal (see [7, Theorem 2.2.12]);
8. \(L=\operatorname{P\Omega}_{2m}^{-}(3),m\geq 2\) acting on the nonsingular points of the natural module, two vertices of \(\Gamma\) are adjacent if they are orthogonal (see [7, Section 3.1.3]).
Table 1 collects the usual parameters of a strongly regular graph, \((v,d,\lambda,\mu)\), and their relative fixity. Recall that \(v\) is the number of vertices, \(d\) is the valency of the graph, \(\lambda\) is the number of common neighbours between two adjacent vertices, and \(\mu\) is the number of common neighbours between two nonadjacent vertices. As \(\mu(G)\) can be found in [8, Theorem 4], the relative fixity is computed as
\[\operatorname{RelFix}(\Gamma)=1-\frac{\mu(G)}{v}\,.\]
## 5. Proof of Theorem 8
The primitive permutation groups we are concerned with were classified by T. Burness and R. Guralnick in [8]. We report their result here. For the sake of our proof, we explicitly write the permutational rank of the almost simple groups of Lie type. This information can be easily obtained by combining the complete list of \(2\)-transitive finite permutation groups, first described by P. J. Cameron in [9, Section 5], and the complete list of classical finite permutation groups of permutational rank \(3\), compiled by W. M. Kantor and R. A. Liebler in [17, Theorem 1.1].
**Theorem 24** ([8], Theorem 4).: _Let \(G\) be a finite primitive permutation group of degree \(n\) with_
\[\mu(G)<\frac{2n}{3}\,.\]
_Then one of the following holds:_
* \((a)\) \(\operatorname{Alt}(m)\leq G\leq\operatorname{Sym}(m)\)_, for some_ \(m\geq 3\)_, in its action on_ \(k\)_-subsets, for some_ \(k<m/2\)_;_
* \((b)\) \(G=\operatorname{Sym}(2m)\)_, for some_ \(m\geq 2\)_, in its primitive action with stabilizer_ \(G_{\alpha}=\operatorname{Sym}(m)\operatorname{wr}C_{2}\)_;_
* \((c)\) \(G=M_{22}:2\) _in its primitive action of degree_ \(22\) _with stabilizer_ \(G_{\alpha}=\operatorname{L}_{3}(4).2_{2}\)_;_
* \((d)\) \(G\) _is an almost simple group of socle_ \(L\) _and permutational rank_ \(2\)_, and one of the following occurs:_
    1. \(L=\operatorname{L}_{m}(2)\)_,_ \(m\geq 3\)_, in its natural action;_
    2. \(L=\operatorname{L}_{m}(3)\)_,_ \(m\geq 3\)_, in its natural action, and_ \(G\) _contains an element of the form_ \((-I_{m-1},I_{1})\)_;_
    3. \(L=\operatorname{Sp}_{2m}(2)\)_,_ \(m\geq 3\)_, in its action on the singular points of the natural module;_
    4. \(L=\operatorname{Sp}_{2m}(2)\)_,_ \(m\geq 3\)_, in its action on the right cosets of_ \(\operatorname{SO}_{2m}^{-}(2)\)_;_
    5. \(L=\operatorname{Sp}_{2m}(2)\)_,_ \(m\geq 3\)_, in its action on the right cosets of_ \(\operatorname{SO}_{2m}^{+}(2)\)_;_
* \((e)\) \(G\) _is an almost simple group of socle_ \(L\) _and permutational rank_ \(3\)_, and one of the following occurs:_
    1. \(L=U_{4}(q)\)_,_ \(q\in\{2,3\}\)_, in its primitive action on totally singular_ \(2\)_-dimensional subspaces, and_ \(G\) _contains the graph automorphism_ \(\tau\)_;_
    2. \(L=\Omega_{2m+1}(3)\) _in its action on the singular points of the natural module, and_ \(G\) _contains an element of the form_ \((-I_{2m},I_{1})\) _with a_ \(+\)_-type_ \((-1)\)_-eigenspace;_
\begin{table}
\begin{tabular}{|c|c|c c c c|c|c|} \hline
 & Socle & \(v\) & \(d\) & \(\lambda\) & \(\mu\) & \(\operatorname{RelFix}\) & Comments \\ \hline
\((i)\) & \(U_{4}(2)\) & \(27\) & \(10\) & \(1\) & \(5\) & \(\frac{7}{27}\) & \\ \hline
 & \(U_{4}(3)\) & \(112\) & \(30\) & \(2\) & \(10\) & \(\frac{11}{56}\) & \\ \hline
\((ii)\) & \(\Omega_{2m+1}(3)\) & \(\frac{1}{2}(9a-1)\) & \(\frac{3}{2}(a^{2}-1)\) & \(\frac{1}{2}(a^{2}-9)+2\) & \(\frac{1}{2}(a^{2}-1)\) & \(\frac{a+1}{3a+1}\) & \(a=3^{m-1}\) \\
\((iii)\) & \(\Omega_{2m+1}(3)\) & \(\frac{3a}{2}(3a-1)\) & \((a-1)(3a+1)\) & \(2(a^{2}-a-1)\) & \(2a(a-1)\) & \(\frac{3a^{2}+a+1}{3a(3a-1)}\) & \\ \hline
\((iv)\) & \(\operatorname{P\Omega}_{2m}^{+}(2)\) & \((4b-1)(2b-1)\) & \(2(2b-1)(b+1)\) & \((2b-2)(b-2)+1\) & \((2b-1)(b+1)\) & & \(b=2^{m-2}\) \\ \hline
 & \(\operatorname{P\Omega}_{2m}^{-}(2)\) & \(4b^{2}-1\) & \(2(b^{2}-1)\) & \(b^{2}-3\) & \(b^{2}-1\) & \(\frac{2b+1}{4b+1}\) & \\ \hline
\((v)\) & \(\operatorname{P\Omega}_{2m}^{\varepsilon}(2)\) & \(2b(4b-\varepsilon)\) & \(4b^{2}-1\) & \(2(b^{2}-1)\) & \(b(2b+\varepsilon)\) & \(\frac{2b}{4b-\varepsilon}\) & \(\varepsilon=\pm 1\) \\
\((vi)\) & \(\operatorname{P\Omega}_{2m}^{+}(3)\) & \(\frac{3c}{2}(9c-1)\) & \(\frac{3c}{2}(3c-1)\) & \(\frac{c}{2}(3c-1)\) & \(\frac{3c}{2}(c-1)\) & \(3(c+1)\) & \\ \hline
\((vii)\) & \(\operatorname{P\Omega}_{2m}^{\alpha}(3)\) & \(\frac{1}{2}(9c^{2}-1)\) & \(\frac{3}{2}(c^{2}-1)\) & \(\frac{1}{2}(c^{2}-9)+2\) & \(\frac{1}{2}(c^{2}-1)\) & \(\frac{3c+1}{9c+1}\) & \\ \hline
\((viii)\) & \(\operatorname{P\Omega}_{2m}^{-}(3)\) & \(\frac{3c}{2}(9c+1)\) & \(\frac{3c}{2}(3c+1)\) & \(\frac{c}{2}(3c-1)\) & \(\frac{3c}{2}(c+1)\) & \(\frac{9c^{2}+3c-2}{3c(9c+1)}\) & \\ \hline
\end{tabular}
\end{table}
Table 1. Parameters of strongly regular graphs with large fixity.
    3. \(L=\Omega_{2m+1}(3)\) _in its action on the nonsingular points of the natural module whose orthogonal complement is an orthogonal space of_ \(-\)_-type, and_ \(G\) _contains an element of the form_ \((-I_{2m},I_{1})\) _with a_ \(-\)_-type_ \((-1)\)_-eigenspace;_
    4. \(L=\operatorname{P\Omega}_{2m}^{\varepsilon}(2)\)_,_ \(\varepsilon\in\{+,-\}\)_, in its action on the singular points of the natural module, and_ \(G=\operatorname{SO}_{2m}^{\varepsilon}(2)\)_;_
    5. \(L=\operatorname{P\Omega}_{2m}^{\varepsilon}(2)\)_,_ \(\varepsilon\in\{+,-\}\)_, in its action on the nonsingular points of the natural module, and_ \(G=\operatorname{SO}_{2m}^{\varepsilon}(2)\)_;_
    6. \(L=\operatorname{P\Omega}_{2m}^{+}(3)\) _in its action on the nonsingular points of the natural module, and_ \(G\) _contains an element of the form_ \((-I_{2m-1},I_{1})\) _such that the discriminant of the_ \(1\)_-dimensional_ \(1\)_-eigenspace is a nonsquare;_
    7. \(L=\operatorname{P\Omega}_{2m}^{-}(3)\) _in its action on the singular points of the natural module, and_ \(G\) _contains an element of the form_ \((-I_{2m-1},I_{1})\)_;_
    8. \(L=\operatorname{P\Omega}_{2m}^{-}(3)\) _in its action on the nonsingular points of the natural module, and_ \(G\) _contains an element of the form_ \((-I_{2m-1},I_{1})\) _such that the discriminant of the_ \(1\)_-dimensional_ \(1\)_-eigenspace is a square;_
* \((f)\) \(G\leq K\operatorname{wr}\operatorname{Sym}(r)\) _is a primitive group of product action type, where_ \(K\) _is a permutation group appearing in parts_ \((a)-(e)\)_, the wreath product is endowed with the product action, and_ \(r\geq 2\)_;_
* \((g)\) \(G\) _is an affine group with a regular normal socle_ \(N\)_, which is an elementary abelian_ \(2\)_-subgroup._
Proof of Theorem 8.: The proof is split into two independent chunks. First, we prove that every vertex-primitive digraph of relative fixity exceeding \(\frac{1}{3}\) belongs to one of the families appearing in Theorem 8. Then, we tackle the problem of computing the relative fixities of the graphs appearing in Theorem 8, thus showing that they indeed all have relative fixity larger than \(\frac{1}{3}\).
Assume that \(\Gamma\) is a digraph on \(n\) vertices with at least one arc and with \(\operatorname{RelFix}(\Gamma)>\frac{1}{3}\) such that \(G=\operatorname{Aut}(\Gamma)\) is primitive. If \(\Gamma\) is disconnected, then the primitivity of \(G\) implies that \(\Gamma\cong\mathbf{L}_{n}\). Hence we may assume that \(\Gamma\) is connected. Moreover, \(\operatorname{RelFix}(\Gamma)>\frac{1}{3}\) implies that \(\mu(G)<\frac{2n}{3}\). Hence \(G\) is one of the groups determined in [8] and described in Theorem 24.
Suppose that \(G\) is an almost simple group. Then \(G\) is one of the groups appearing in parts \((a)-(e)\) of Theorem 24. Since any \(G\)-vertex-primitive digraph is a union of orbital digraphs for \(G\), the digraphs arising from these cases will be merged product action digraphs \(\mathcal{P}(1,\mathcal{G},\mathcal{J})\) (see Remark 3). Hence, our goal is to consider these almost simple groups in turn and compile their list of orbital digraphs \(\mathcal{G}\).
Let \(G\) be a group as described in Theorem 24\((a)\). Lemma 20 states that the orbital digraphs for \(G\) are the distance-\(i\) Johnson graphs \(\mathbf{J}(m,k,i)\).
Assume that \(k=1\), that is, consider the natural action of either \(\operatorname{Alt}(m)\) or \(\operatorname{Sym}(m)\) of degree \(m\). Since this action is \(2\)-transitive, their set of orbital digraphs is \(\mathcal{G}=\{\mathbf{L}_{m},\mathbf{K}_{m}\}\). In particular, \(\mathcal{P}(1,\mathcal{G},\mathcal{J})=\mathbf{H}(1,m,\mathcal{J})\). This case exhausts the generalised Hamming graphs with \(r=1\), which appear in Theorem 8\((i)\). Therefore, in view of Remark 6, for as long as we suppose \(r=1\), we can also assume that \(\mathcal{J}\) is a non-Hamming homogeneous set. Observe that \(m\geq 4\); otherwise, we would contradict our assumption on the relative fixity.
Going back to distance-\(i\) Johnson graphs, to guarantee that \(\mathcal{J}\) is non-Hamming, we have to take \(k\geq 2\). Thus,
\[\mathcal{G}=\{\mathbf{J}(m,k,i)\mid i\in\{0,1,\ldots,k\}\}\,\]
which corresponds to Theorem 8\((ii)(a)\).
Let \(G=\operatorname{Sym}(2m)\) be a permutation group from Theorem 24\((b)\). If \(m=2\), the degree of \(G\) is \(3\), and the relative fixity of any action of degree \(3\) can either be \(0\) or \(\frac{1}{3}\). Hence, we must suppose that \(m\geq 3\): by Lemma 22, the orbital digraphs for \(G\) are the squashed distance-\(i\) Johnson graph \(\mathbf{QJ}(2m,m,i)\). We obtain that
\[\mathcal{G}=\{\mathbf{QJ}(2m,m,i)\mid i\in\{0,1,\ldots,\lfloor m/2\rfloor\}\}\,\]
as described in Theorem 8\((ii)(b)\).
Let \(G=M_{22}:2\) in the action described in Theorem 24\((c)\). Consulting the list of all the primitive groups of degree \(22\) in Magma[6] (which is based on the list compiled in [11]), we realize that they are all \(2\)-transitive. Hence, the set of orbital digraphs is \(\mathcal{G}=\{\mathbf{K}_{22},\mathbf{L}_{22}\}\). In particular, all the graphs are generalised Hamming graphs.
Let \(G\) be an almost simple group of Lie type appearing in Theorem 24\((d)\). Since all these groups are \(2\)-transitive with a \(2\)-transitive socle \(L\), their orbital digraphs are either \(\mathbf{K}_{m}\) or \(\mathbf{L}_{m}\), where \(m\geq 7\) is the degree of \(G\). Once again, we obtain only generalised Hamming graphs.
Let \(G\) be an almost simple group of Lie type described in Theorem 24\((e)\). Any group of permutational rank \(3\) defines two nondiagonal orbital digraphs, and, as such digraphs are arc-transitive and one is the complement of the other, they are strongly regular graphs (see, for instance, [7, Section 1.1.5]). The set of orbital digraphs is of the form \(\mathcal{G}=\{\mathbf{L}_{m},\Gamma_{1},\Gamma_{2}\}\), where we listed the possible \(\Gamma_{1}\) in Section 4.4, and where \(m=|V\Gamma_{1}|\). The graphs described in this paragraph appear in Theorem 8\((ii)(c)\).
We have exhausted the almost simple groups from Theorem 24. Hence, we pass to Theorem 24\((f)\). Suppose that \(G\leq K\operatorname{wr}\operatorname{Sym}(r)\) is a primitive group of product action type. We want to apply Theorem 17 to \(G\). The only hypothesis we miss is that \(T\) and \(G^{\Delta}_{\Delta}\) share the same set of orbital digraphs.
We claim that \(T\) and \(K\) induce the same set of orbital digraphs. If \(K\) is either alternating or symmetric, the claim follows from Lemmas 20 and 22. If \(K\) is \(2\)-transitive, then we can observe that its socle \(T\) is also \(2\)-transitive: the socle of \(M_{22}:2\) is \(T=M_{22}\) in its natural \(3\)-transitive action, while the socle \(T\) of the almost simple groups of Lie type of permutational rank \(2\) is \(2\)-transitive by [9, Section 5]. In particular, \(K\) and \(T\) both have \(\mathcal{G}=\{\mathbf{L}_{m},\mathbf{K}_{m}\}\) as their set of orbital graphs. Finally, suppose that \(K\) is an almost simple group of permutational rank \(3\). We have that its socle \(T\) is also of permutational rank \(3\) by [17, Theorem 1.1]. Observe that, since any orbital digraph for \(T\) is a subgraph of an orbital digraph for \(K\), the fact that \(K\) and \(T\) both have permutational rank \(3\) implies that they share the same set of orbital digraphs. Therefore, the claim is true.
By our claim together with the double inclusion
\[T\leq G^{\Delta}_{\Delta}\leq K\,,\]
we obtain that \(T,G^{\Delta}_{\Delta}\) and \(K\) all induce the same set of orbital digraphs. Therefore, we can apply Theorem 17 to \(G\): we obtain that \(G\) shares its orbital graphs with \(T\operatorname{wr}G^{\Omega}\).
Therefore, all the \(G\)-vertex-primitive digraphs are unions of orbital digraphs for \(T\operatorname{wr}H\), where \(T\) is the socle type of \(G\) and \(H\) is a transitive permutation group isomorphic to \(G^{\Omega}\). In other words, we have found all the graphs \(\mathcal{P}(r,\mathcal{G},\mathcal{J})\) with \(r\geq 2\) described in Theorem 8. (Recall that, by Definition 4, among the graphs \(\mathcal{P}(r,\mathcal{G},\mathcal{J})\), we find all the generalised Hamming graphs.)
Suppose that \(G\) is an affine group with a regular normal socle \(N\), which is an elementary abelian \(2\)-subgroup. We have that \(G\) can be written as the split extension \(N:H\), where \(H\) is a group of matrices that acts irreducibly on \(N\). It follows that \(G\) is \(2\)-transitive on \(N\), hence, as \(|N|\geq 4\), the graphs arising in this scenario are \(\mathbf{L}_{|N|},\mathbf{K}_{|N|}\) and \(\mathbf{L}_{|N|}\cup\mathbf{K}_{|N|}\), which are generalised Hamming graphs.
We have completed the first part of the proof, showing that the list of vertex-primitive digraphs appearing in Theorem 8 is exhaustive. As all the orbital digraphs in \(\mathcal{G}\) are actually graphs, the same property is true for the graphs in the list, as we have underlined in Remark 9.
We can now pass to the second part of the proof, that is, we can now tackle the computation of relative fixities. We already took care of the generalised Hamming graphs in Lemma 18. Thus, we can suppose that \(\Gamma\) is a merged product action graph \(\mathcal{P}(r,\mathcal{G},\mathcal{J})\) appearing in Theorem 8\((ii)\).
Suppose that \(r=1\), that is, \(\Gamma\) is a union of orbital graphs for some primitive almost simple group \(K\). (We are tacitly assuming that \(K\) is maximal among the groups appearing in a given part of Theorem 24.) In view of [21, Theorem], we have that \(K\) is a maximal subgroup of either \(\operatorname{Alt}(|V\Gamma|)\) or \(\operatorname{Sym}(|V\Gamma|)\). Therefore, there are just two options for \(\operatorname{Aut}(\Gamma)\): either it is isomorphic to \(K\) or it contains \(\operatorname{Alt}(|V\Gamma|)\). In the latter scenario, as \(\operatorname{Alt}(|V\Gamma|)\) is \(2\)-transitive on the vertices, we obtain
that \(\Gamma\in\{\mathbf{L}_{m},\mathbf{K}_{m},\mathbf{L}_{m}\cup\mathbf{K}_{m}\}\). All those graphs are generalised Hamming graphs, against our assumption on \(\Gamma\). Therefore, we have \(K=\operatorname{Aut}(\Gamma)\). In particular, the relative fixity of \(\Gamma\) is computed in Lemma 21, Lemma 23 or Table 1, according to whether \(\mathcal{G}\) is as described in Theorem 8\((ii)(a)\), \((ii)(b)\) or \((ii)(c)\), respectively.
Suppose now that \(r\geqslant 2\). The automorphism group of \(\Gamma\) either embeds into \(\operatorname{Sym}(m)\operatorname{wr}\operatorname{Sym}(r)\), where \(m=|V\Gamma_{i}|\) for any \(\Gamma_{i}\in\mathcal{G}\), or, by maximality of \(\operatorname{Sym}(m)\operatorname{wr}\operatorname{Sym}(r)\), \(\operatorname{Aut}(\Gamma)=\operatorname{Sym}(m^{r})\). In the latter scenario, \(\Gamma\in\{\mathbf{L}_{m^{r}},\mathbf{K}_{m^{r}},\mathbf{L}_{m^{r}}\cup\mathbf{K}_{m^{r}}\}\). All these graphs can be written as a merged product action graph where \(r=1\) and \(\mathcal{J}\) is a Hamming set. This goes against our assumption on \(\Gamma\), thus we must suppose \(\operatorname{Aut}(\Gamma)\neq\operatorname{Sym}(m^{r})\).
As a consequence, we obtain that, for some almost simple group \(K\) listed in Theorem 24\((a)-(e)\), and for some transitive group \(H\leqslant\operatorname{Sym}(r)\), \(K\operatorname{wr}H\leqslant\operatorname{Aut}(\Gamma)\). Note that, as \(K\leqslant\operatorname{Aut}(\Gamma)_{\Delta}^{\Delta}\), by [21, Theorem], \(\operatorname{Aut}(\Gamma)_{\Delta}^{\Delta}\) is either \(K\) or it contains \(\operatorname{Alt}(m)\). If the latter case occurs, then \(\operatorname{Alt}(m)\operatorname{wr}H\leqslant\operatorname{Aut}(\Gamma)\). By Lemma 18, \(\Gamma\) is a generalised Hamming graph, which contradicts our choice of \(\Gamma\). Therefore, \(\operatorname{Aut}(\Gamma)\leqslant K\operatorname{wr}\operatorname{Sym}(r)\).
Observe that we can apply Lemma 19. We obtain that
\[\operatorname{RelFix}(\Gamma)=1-\frac{\mu(K)m^{r-1}}{m^{r}}=1-\frac{\mu(K)}{m} =\operatorname{RelFix}\left(\mathcal{P}(1,\mathcal{G},\mathcal{J}^{\prime}) \right)\,,\]
for some non-Hamming homogeneous set \(\mathcal{J}^{\prime}\). In particular, the relative fixities for \(r\geqslant 2\) coincide with those we have already computed for \(r=1\). This completes the proof.
## 6. Proof of Theorem 12
Recall that a permutation group \(G\) on \(\Omega\) is _quasiprimitive_ if all its nontrivial normal subgroups are transitive on \(\Omega\). Clearly, any primitive group is quasiprimitive. Moreover, recall that, by repeating the proof of the Cauchy-Frobenius Lemma (see [12, Theorem 1.7A]) on the conjugacy class of a permutation \(x\in G\), for a transitive group \(G\) we get
\[\operatorname{fix}(x)\,|x^{G}|=|\Omega|\,|x^{G}\cap G_{\omega}|\,,\]
where \(\operatorname{fix}(x)=|\Omega|-|\operatorname{supp}(x)|\) is the number of fixed points of \(x\), and where \(\omega\in\Omega\) is arbitrary (by transitivity, the right-hand side does not depend on the choice of \(\omega\)).
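As a concrete sanity check (ours), the identity can be tested directly on a small example, say \(G=\operatorname{Sym}(4)\) in its natural action with \(x\) a transposition.

```python
# Numerical check (ours) of fix(x)|x^G| = |Omega| |x^G meet G_omega|
# for G = Sym(4) in its natural action on Omega = {0, 1, 2, 3}.
from itertools import permutations

Omega = range(4)
G = list(permutations(Omega))                     # Sym(4) as tuples g, g[i] = i^g

def conj(g, h):                                   # the conjugate h^{-1} g h
    hinv = [0] * len(h)
    for i, hi in enumerate(h):
        hinv[hi] = i
    return tuple(h[g[hinv[i]]] for i in Omega)

x = (1, 0, 2, 3)                                  # the transposition (0 1)
cls = {conj(x, h) for h in G}                     # conjugacy class x^G (6 elements)
fix = sum(1 for i in Omega if x[i] == i)          # fix(x) = 2
stab_meet = sum(1 for g in cls if g[0] == 0)      # |x^G meet G_0| = 3
assert fix * len(cls) == len(Omega) * stab_meet   # 2 * 6 == 4 * 3
print("identity verified for a transposition in Sym(4)")
```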
Proof of Theorem 12.: (We would like to thank P. Spiga again for pointing out the key ingredients for this proof.) Let \(G\) be a quasiprimitive permutation group on a set \(\Omega\), and let \(x\in G\backslash\{1\}\) be an element achieving \(|\operatorname{supp}(x)|\leqslant(1-\alpha)|\Omega|\). For any point \(\omega\in\Omega\), we obtain
\[\alpha\leqslant\frac{\operatorname{fix}(x)}{|\Omega|}=\frac{|x^{G}\cap G_{\omega}|}{|x^{G}|}\leqslant\frac{|G_{\omega}|}{|x^{G}|}\leqslant\frac{\beta}{|x^{G}|}\,.\]
It follows that \(|x^{G}|\leqslant\alpha^{-1}\beta\). Now consider the normal subgroup of \(G\) defined by
\[N:=\bigcap_{g\in G}\mathbf{C}_{G}(x^{g})\,.\]
Recall that \(|G:\mathbf{C}_{G}(x)|=|x^{G}|\). Observe that \(G\) acts by conjugation on the set
\[\{\mathbf{C}_{G}(x^{g})\mid g\in G\}\,,\]
that this action has a single orbit, of size at most \(|x^{G}|\), and that \(N\) is the kernel of this action. Therefore
\[|G:N|\leqslant|x^{G}|!\leqslant\left\lceil\frac{\beta}{\alpha}\right\rceil!\,,\]
that is, \(N\) is a bounded index subgroup of \(G\). Since \(G\) is quasiprimitive, either \(N\) is trivial or \(N\) is transitive. Aiming for a contradiction, we suppose that \(N\) is transitive. Since \([N,x]=1\) and \(\operatorname{fix}(x)\geq\alpha|\Omega|>0\), we may choose \(\omega\in\Omega\) with \(\omega^{x}=\omega\); then, for any \(n\in N\),
\[\omega^{nx}=\omega^{xn}=\omega^{n}\,,\]
that is, \(x\) fixes every point of the \(N\)-orbit of \(\omega\). The transitivity of \(N\) now implies that \(x=1\), against our choice of \(x\). Therefore, \(N\) is trivial. It follows that
\[|G|=|G:N|\leq\left\lceil\frac{\beta}{\alpha}\right\rceil!\,.\]
Since there are finitely many abstract groups of bounded size, the proof is complete.
An equivalent formulation of Sims' Conjecture states that if \(G\) is a primitive permutation group and the minimal out-valency among its nondiagonal orbital digraphs is at most \(d\), then the size of a point stabilizer is bounded from above by a function \(\mathbf{f}(d)\) depending only on the positive integer \(d\). An answer in the positive to this conjecture was given in [10].
Proof of Corollary 13.: Let \(\Gamma\) be a vertex-primitive digraph of out-valency at most \(d\) and relative fixity exceeding \(\alpha\), and let \(G=\operatorname{Aut}(\Gamma)\). The hypothesis on the out-valency implies that, for any \(v\in V\Gamma\), \(|G_{v}|\leq\mathbf{f}(d)\), where \(\mathbf{f}(d)\) is the function that solves Sims' Conjecture. The result thus follows by choosing \(\beta=\mathbf{f}(d)\) in Theorem 12.
We conclude the paper by observing that, as \(\mathbf{f}(d)\geq(d-1)!\), from Corollary 13 we cannot obtain a bound as sharp as that in Remark 11.
|
2305.19576 | Periodic Vlasov-Stokes' system: Existence and Uniqueness of strong
solutions | This paper deals with the Vlasov-Stokes' system in three dimensions with
periodic boundary conditions in the spatial variable. We prove the existence of
a unique strong solution to this two-phase model under the assumption that
initial velocity moments of certain order are bounded. We use a fixed point
argument to arrive at a global-in-time solution. | Harsha Hutridurga, Krishan Kumar, Amiya K. Pani | 2023-05-31T05:53:35Z | http://arxiv.org/abs/2305.19576v1 | # Periodic Vlasov-Stokes' system: existence
###### Abstract.
This paper deals with the Vlasov-Stokes' system in three dimensions with periodic boundary conditions in the spatial variable. We prove the existence of a unique strong solution to this two-phase model under the assumption that initial velocity moments of certain order are bounded. We use a fixed point argument to arrive at a global-in-time solution.
## 1. Introduction
This paper deals with a coupled system of partial differential equations arising in the study of thin sprays. From a modeling perspective, it is assumed that the spray particles (droplets) are a dispersed phase in a gas medium. Studying two-phase models comprising a kinetic equation for the dispersed phase and a fluid equation for the gas dates back to the works of O'Rourke [11] and Williams [12] (see also [13]).
We choose to model the three dimensional background fluid by the linear unsteady Stokes' equation and the droplet distribution by the Vlasov equation while the coupling is via a drag term:
\[\begin{cases}\partial_{t}f+v\cdot\nabla_{x}f+\nabla_{v}\cdot\Big{(}\left( \boldsymbol{u}-v\right)f\Big{)}=0&\text{in }(0,T)\times\Omega_{x}\times\mathbb{R}^{3},\\ f(0,x,v)=f_{0}(x,v)&\text{in }\Omega_{x}\times\mathbb{R}^{3}.\end{cases} \tag{1.1}\]
\[\begin{cases}\partial_{t}\boldsymbol{u}-\Delta_{x}\boldsymbol{u}+\nabla_{x}p =\int_{\mathbb{R}^{3}}(v-\boldsymbol{u})\,f\,\mathrm{d}v&\text{in } (0,T)\times\Omega_{x},\\ \nabla_{x}\cdot\boldsymbol{u}=0&\text{in }\Omega_{x},\\ \boldsymbol{u}(0,x)=\boldsymbol{u}_{0}(x)&\text{in }\Omega_{x}.\end{cases} \tag{1.2}\]
Here \(\Omega_{x}\) denotes the three dimensional torus \(\mathbb{T}^{3}\). The unknowns in the above coupled system are the following: the fluid velocity \(\boldsymbol{u}(t,x)\), the fluid pressure \(p(t,x)\), the droplet distribution function \(f(t,x,v)\). We impose periodic boundary conditions in the \(x\) variable. The above model with homogeneous Dirichlet boundary condition for the fluid velocity and with specular reflection boundary condition for the droplet distribution was studied by Hamdache in [1], wherein he proved the existence of global-in-time weak solutions. Hofer studied the Vlasov-steady Stokes' system in [10] with compactly supported initial data in the phase space. Various other kinetic-fluid equations have been studied in the literature: Vlasov-Burgers' equations [11, 12, 13]; Vlasov-Euler equations [11, 12, 14, 15], to name a few.
In this paper, we make precise the notion of strong solutions to our system (1.1)-(1.2). Using (i) certain a priori bounds coming from the energy identity, (ii) the regularity theory for the Stokes' equation, (iii) the DiPerna-Lions' theory for the well-posedness of the transport equation with Sobolev vector fields and (iv) a fixed point argument, we prove the global-in-time well-posedness result for the fluid-kinetic system (1.1)-(1.2). The aforementioned a priori bounds have been known since the work of Hamdache [1]. In most of the works on existence and uniqueness of solutions mentioned above, a standard assumption on the initial droplet distribution is that its velocity moments up to certain order are bounded. More precisely, one assumes
\[\int_{\Omega_{x}}\int_{\mathbb{R}^{d}}\left|v\right|^{k}f_{0}(x,v)\,\mathrm{ d}v\,\mathrm{d}x\leq C,\]
where the order \(k\) typically depends on the dimension \(d\) that one is considering. A conventional result is then to show that similar bounds hold for velocity moments of the droplet distribution at later times as well. In this work, we also assume that the velocity moments associated with the first-order derivatives of the initial droplet distribution are also bounded and that this property is propagated in time. More precisely, we assume that
\[\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{p}|\nabla_{x}f_{0}|^{2}\,\mathrm{d}v \,\mathrm{d}x+\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{p}|\nabla_{v}f_{0}|^{ 2}\,\mathrm{d}v\,\mathrm{d}x\leq C.\]
This particular assumption is inspired by the work of M. Chae et al. [1].
Our arguments leading to the application of the Banach fixed point theorem go in parallel to the arguments that can be found in the work of Yu [23] addressing the well-posedness of the Vlasov-Navier-Stokes' system in two dimensions. We would like to point out that there is a minor gap in one of the arguments of [23] which we highlight and fix in this article. We thank Cheng Yu for discussing this minor issue with us and for suggesting a way to fix that error as well (Cheng Yu, personal communication, August 17, 2021). It should, however, be noted that our proof requires the aforementioned velocity moment bounds associated with the first-order derivatives, which were not used in [23]. We believe that, with only the assumptions made in [23], it may not be possible to close this line of argument via the contraction map (see Remark 2.12).
## 2. Well-posedness result
We set the local density \(\rho\) and the local macroscopic velocity \(V\) as
\[\rho(t,x)=\int_{\mathbb{R}^{3}}f(t,x,v)\,\mathrm{d}v\quad\text{and}\quad V(t,x )=\frac{1}{\rho}\int_{\mathbb{R}^{3}}f(t,x,v)v\,\mathrm{d}v.\]
In what follows, we denote the \(k^{th}\) order velocity moments by
\[m_{k}f(t,x)=\int_{\mathbb{R}^{3}}|v|^{k}f(t,x,v)\,\mathrm{d}v,\quad\text{for} \quad k\in\mathbb{N}\cup\{0\}.\]
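In numerical illustrations, such moments are often approximated on a truncated velocity grid. The sketch below is ours; the Maxwellian is chosen only because its moments are known in closed form, namely \(m_{0}f=(2\pi)^{3/2}\) and \(m_{2}f=3(2\pi)^{3/2}\).

```python
# Illustrative sketch (ours): velocity moments of a Maxwellian
# f(v) = exp(-|v|^2 / 2) computed on a truncated grid; analytically,
# m_0 f = (2*pi)**1.5 ~ 15.7496 and m_2 f = 3*(2*pi)**1.5 ~ 47.2489.
import numpy as np

v1 = np.linspace(-8.0, 8.0, 121)
dv = v1[1] - v1[0]
vx, vy, vz = np.meshgrid(v1, v1, v1, indexing="ij")
speed2 = vx**2 + vy**2 + vz**2
f = np.exp(-speed2 / 2.0)

m0 = f.sum() * dv**3                # zeroth moment, ~ (2*pi)**1.5
m2 = (speed2 * f).sum() * dv**3     # second moment, ~ 3*(2*pi)**1.5
print(m0, (2.0 * np.pi) ** 1.5)
print(m2, 3.0 * (2.0 * np.pi) ** 1.5)
```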
Through out this paper, we use standard notation for Sobolev spaces. We denote by \(W^{m,p}\) the \(L^{p}\)-Sobolev space of order \(m\geq 0\). We take \(\boldsymbol{W^{m,p}}=\left(W^{m,p}(\Omega_{x})\right)^{3},\;\forall\;m\geq 0,\;1\leq p\leq\infty\). We also use the standard notations \(H^{s}=W^{s,2}\) and \(\boldsymbol{H^{s}}=\boldsymbol{W^{s,2}}\). We further denote a special class of divergence-free (in the sense of distribution) vector fields by
\[\boldsymbol{J_{1}}=\left\{\boldsymbol{z}\in\boldsymbol{H^{1}}:\nabla_{x} \cdot\boldsymbol{z}=0,\boldsymbol{z}\text{ is periodic}\right\}.\]
Throughout this manuscript, any function defined on \(\Omega_{x}\) is assumed to be periodic in the \(x\)-variable.
### Notion of solution and main result
We say that \((f,\boldsymbol{u},p)\) is a **strong solution** to the Vlasov-Stokes' system (1.1)-(1.2) if
* \(f\in W^{1,1}(0,T;W^{1,1}(\Omega_{x}\times\mathbb{R}^{3}))\cap L^{\infty}(0,T; L^{1}(\Omega_{x}\times\mathbb{R}^{3})\cap L^{\infty}(\Omega_{x}\times\mathbb{R}^{3}))\)
* \(\boldsymbol{u}\in L^{\infty}(0,T;\boldsymbol{J_{1}})\cap L^{2}(0,T;\boldsymbol {H^{2}})\cap H^{1}(0,T;\boldsymbol{L^{2}})\)
* \(p\in L^{2}(0,T;H^{1}(\Omega_{x})/\mathbb{R})\)
* \((f,\boldsymbol{u},p)\) satisfies the equations (1.1) and (1.2) in almost everywhere sense (in the phase space) for almost all time \(t\in(0,T]\).
**Theorem 2.1**.: _(Existence and Uniqueness of strong solution) Let the initial datum \(f_{0}\) be such that_
\[f_{0}\geq 0, \tag{2.1}\]
\[f_{0}\in L^{1}(\Omega_{x}\times\mathbb{R}^{3})\cap L^{\infty}(\Omega_{x}\times\mathbb{R}^{3})\cap H^{1}(\Omega_{x}\times\mathbb{R}^{3}), \tag{2.2}\]
\[\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{p}\left\{f_{0}+|\nabla_{x}f_{0}|^{2}+|\nabla_{v}f_{0}|^{2}\right\}\,\mathrm{d}v\,\mathrm{d}x\leq C, \tag{2.3}\]
_for \(0\leq p\leq 9+\delta\) with \(\delta>0\) and let the initial datum \(\boldsymbol{u_{0}}\in\boldsymbol{H^{2}}\cap\boldsymbol{J_{1}}\). Then, there exists a unique global-in-time strong solution \((f,\boldsymbol{u},p)\) to the Vlasov-Stokes' system (1.1)-(1.2). Furthermore,_
\[\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{p}\left\{f+|\nabla_{x}f|^{2}+| \nabla_{v}f|^{2}\right\}\,\mathrm{d}v\,\mathrm{d}x\leq C, \tag{2.4}\]
_for \(0\leq p\leq 9+\delta\) with \(\delta>0\) and for all \(t>0\)._
The proof of the above result goes via the following steps:
* A bound on the \((9+\delta)^{\text{th}}\) order velocity moment, with \(\delta>0\), of \(f\) helps to deduce \(\mathbf{u}\in L^{\infty}(0,T;\mathbf{W^{1,\infty}})\), thanks to Stokes' regularity [1, 1].
* Using \(\mathbf{u}\in L^{\infty}(0,T;\mathbf{W^{1,\infty}})\), we prove that the velocity moments of \(|\nabla_{x}f|^{2}\) and \(|\nabla_{v}f|^{2}\) stay bounded for all time if they are bounded initially. This is essentially the assertion (2.4) in the statement of Theorem 2.1. The essential ideas of this step are an adaptation of the calculations in [1, Theorem 5, p.2462] and [1, Lemma 3.2, p.11].
* Using the DiPerna-Lions theory [10] for the well-posedness of transport equations and using a certain recurrence relation involving velocity moments, we conclude the existence and uniqueness of the solution to the Vlasov-Stokes' system by employing the Banach fixed-point theorem in the Banach space \(L^{\infty}(0,T;\mathbf{J_{1}})\cap L^{2}(0,T;\mathbf{H^{2}})\). This step is inspired by the work of Goudon [1] on the Vlasov-Burgers' equations and by the work of Yu [23] on the Vlasov-Navier-Stokes' equations.
**Remark 2.2**.: _Hofer in [11] proves the existence and uniqueness of the solution to the \(3D\) Vlasov-Stokes' equation while considering the steady Stokes' equation for the background fluid medium. He proves the existence of a unique solution \((f,\mathbf{u})\in W^{1,\infty}((0,T)\times\mathbb{R}^{3}\times\mathbb{R}^{3})\times\big{(}L^{\infty}(0,T;\mathbf{W^{2,\infty}})\cap\mathbf{W^{1,\infty}}((0,T)\times\mathbb{R}^{3})\big{)}\) for initial data \(f_{0}(x,v)\in W^{1,\infty}(\mathbb{R}^{3}\times\mathbb{R}^{3})\) with compact support. The proof in [11] goes via a fixed point argument in the Banach space \(W^{1,\infty}((0,T)\times\mathbb{R}^{3}\times\mathbb{R}^{3})\). The assumption of the \(W^{1,\infty}\) data having compact support implies that the velocity moments of any arbitrary order are bounded. Hence it is more restrictive compared to the present setting of this article._
### Qualitative and quantitative aspects of the model problem
Next, we recall a result that yields a bound on the \(L^{\infty}\)-norm of the local density. This estimate is important while addressing the well-posedness of the Stokes' system.
**Lemma 2.3**.: _Let \(\mathbf{u}\in L^{1}(0,T;\mathbf{L^{\infty}})\). Let \(f_{0}\) be such that \(\sup_{C^{r}_{t,v}}f_{0}\in L^{\infty}_{loc}\left(\mathbb{R}_{+};L^{1}(\mathbb{ R}^{3})\right)\), where \(C^{r}_{t,v}:=\Omega_{x}\times B(e^{t}v,r),\,\forall\,r>0\). Here \(B(e^{t}v,r)\) denotes the ball of radius \(r\) with center at \(e^{t}v\). Then, the following estimate holds:_
\[\|\rho(t,x)\|_{L^{\infty}((0,T]\times\Omega_{x})}\leq e^{3T}\sup_{t\in[0,T]} \|\sup_{C^{r}_{t,v}}f_{0}\|_{L^{1}(\mathbb{R}^{3})}. \tag{2.5}\]
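The estimate (2.5) can be read off from the characteristics of the Vlasov equation; what follows is only a heuristic sketch under the assumptions of the lemma. Along the characteristics \(\dot{X}=V\), \(\dot{V}=\boldsymbol{u}(t,X)-V\), the Vlasov equation gives \(\frac{\mathrm{d}}{\mathrm{d}t}f(t,X(t),V(t))=3f(t,X(t),V(t))\), so that

\[f(t,x,v)=e^{3t}f_{0}\big(X(0;t,x,v),V(0;t,x,v)\big),\qquad V(0;t,x,v)=e^{t}v-\int_{0}^{t}e^{s}\boldsymbol{u}(s,X(s))\,\mathrm{d}s.\]

Hence \(V(0;t,x,v)\in B(e^{t}v,r)\) as soon as \(r\geq e^{T}\|\boldsymbol{u}\|_{L^{1}(0,T;\boldsymbol{L^{\infty}})}\), and bounding \(f_{0}\) by its supremum over \(C^{r}_{t,v}\) before integrating in \(v\) yields (2.5).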
The complete proof of the above result can be found in [11, Proposition 4.6, p.44]. The following result gathers certain properties of solutions to the two-phase model (1.1)-(1.2); its proof can be found in [10], so we skip it here.
**Lemma 2.4**.: _Any strong solution \((f,\mathbf{u},p)\) to the Vlasov-Stokes' system (1.1)-(1.2) has the following properties:_
1. _Positivity preserving:_ _For any non-negative initial data_ \(f_{0}\)_, the solution_ \(f\) _is also non-negative._
2. _Mass conservation:_ _The distribution function_ \(f\) _conserves the total mass in the following sense:_ \[\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,f(t,x,v)\,\mathrm{d}x\,\mathrm{d}v= \int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,f_{0}(x,v)\,\mathrm{d}x\,\mathrm{d}v, \quad t\in[0,T].\]
3. _Total momentum conservation:_ _The distribution function_ \(f\) _and the fluid velocity_ \(\mathbf{u}\) _together conserve total momentum in the following sense: for all_ \(t\in[0,T]\)_,_ \[\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}vf(t,x,v)\,\mathrm{d}x\,\mathrm{d}v+2 \int_{\Omega_{x}}\mathbf{u}(t,x)\,\mathrm{d}x=\int_{\mathbb{R}^{3}}\int_{\Omega_{ x}}vf_{0}(x,v)\,\mathrm{d}x\,\mathrm{d}v+2\int_{\Omega_{x}}\mathbf{u_{0}}(x)\, \mathrm{d}x.\]
4. _Energy dissipation:_ _For any non-negative initial data_ \(f_{0}\)_, the total energy of the Vlasov-Stokes' system (_1.1_)-(_1.2_) dissipates in time, i.e._ \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^ {2}f(t,x,v)\,\mathrm{d}x\,\mathrm{d}v+\int_{\Omega_{x}}\mathbf{u}^{2}\,\mathrm{d}x \right)\leq 0.\]
While proving the aforementioned energy dissipation property in [1], Hamdache derives the following identity:
\[\begin{split}&\frac{1}{2}\left(\int_{\mathbb{R}^{3}}\int_{\Omega_{x}} |v|^{2}f(t,x,v)\,\mathrm{d}x\,\mathrm{d}v+\int_{\Omega_{x}}\mathbf{u}^{2}\,\mathrm{ d}x\right)+\int_{0}^{t}\int_{\Omega_{x}}|\nabla_{x}\mathbf{u}|^{2}\,\mathrm{d}x\, \mathrm{d}t\\ &\quad+\int_{0}^{t}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|\mathbf{u}- v|^{2}f\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t=\frac{1}{2}\int_{\mathbb{R}^{3}} \int_{\Omega_{x}}|v|^{2}\,f_{0}\,\mathrm{d}x\,\mathrm{d}v+\frac{1}{2}\int_{ \Omega_{x}}\mathbf{u}_{0}^{2}\,\mathrm{d}x.\end{split} \tag{2.6}\]
This helps us to deduce that
\[\mathbf{u}\in L^{\infty}(0,T;\mathbf{L^{2}})\quad\text{and}\quad\mathbf{u}\in L^{2}(0,T; \mathbf{J_{1}}) \tag{2.7}\]
provided \(|v|^{2}f_{0}\in L^{1}(\Omega_{x}\times\mathbb{R}^{3})\) and \(\mathbf{u_{0}}\in\mathbf{L^{2}}\).
Now, an application of the Sobolev imbedding yields \(H^{1}(\Omega_{x})\subset L^{p}(\Omega_{x}),2\leq p\leq 6\). Therefore,
\[\mathbf{u}\in L^{2}(0,T;\mathbf{L^{p}})\quad\text{for}\quad 2\leq p\leq 6. \tag{2.8}\]
The following result shows integrability estimates for the local density and the local momentum. Since these quantities appear as source terms in the Stokes' equation, the estimates are crucial for deducing the regularity of solutions to the Stokes' problem. The proof of the following result can be found in [1, Lemma 2.2, p.56].
**Lemma 2.5**.: _Let \(p\geq 1\). Let \(\mathbf{u}\in L^{2}(0,T;\mathbf{L^{p+3}}),f_{0}\in L^{\infty}(\Omega_{x}\times\mathbb{ R}^{3})\cap L^{1}(\Omega_{x}\times\mathbb{R}^{3})\) and let_
\[\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{p}f_{0}\,\mathrm{d}x\mathrm{d}v<\infty.\]
_Then the local density \(\rho\) and the local momentum \(\rho V\) satisfy the following:_
\[\rho\in L^{\infty}\left(0,T;L^{\frac{p+3}{3}}(\Omega_{x})\right)\quad\text{and }\quad\rho V\in L^{\infty}\left(0,T;L^{\frac{p+3}{4}}(\Omega_{x})\right). \tag{2.9}\]
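The exponents in (2.9) follow from a standard moment-interpolation argument, which we record here as a sketch for the reader's convenience. Splitting the \(v\)-integral at a radius \(R>0\),

\[\rho(t,x)\leq\|f\|_{L^{\infty}}R^{3}+R^{-p}\,m_{p}f(t,x),\]

and choosing \(R=\left(m_{p}f\right)^{\frac{1}{p+3}}\) gives \(\rho\leq\left(\|f\|_{L^{\infty}}+1\right)\left(m_{p}f\right)^{\frac{3}{p+3}}\); raising this to the power \(\frac{p+3}{3}\) and integrating in \(x\) shows that \(\|\rho(t)\|_{L^{\frac{p+3}{3}}(\Omega_{x})}\) is controlled by \(\int_{\Omega_{x}}m_{p}f\,\mathrm{d}x\), which is propagated in time under the stated hypotheses. Similarly, \(\rho|V|\leq\|f\|_{L^{\infty}}R^{4}+R^{-(p-1)}\,m_{p}f\) with the same choice of \(R\) gives \(\rho|V|\lesssim\left(m_{p}f\right)^{\frac{4}{p+3}}\), whence the second assertion.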
**Remark 2.6**.: _Setting \(p=3\) in the Lemma 2.5 shows_
\[\rho\in L^{\infty}\left(0,T;L^{2}(\Omega_{x})\right)\quad\text{and}\quad\rho V\in L^{\infty}\left(0,T;L^{\frac{3}{2}}(\Omega_{x})\right). \tag{2.10}\]
_A use of the Stokes' regularity result yields_
\[\mathbf{u}\in L^{2}(0,T;\mathbf{W}^{2,\frac{3}{2}}). \tag{2.11}\]
_An application of the Sobolev inequality shows_
\[\mathbf{u}\in L^{2}(0,T;\mathbf{L^{p}})\quad\text{for}\quad\frac{3}{2}\leq p<\infty. \tag{2.12}\]
**Remark 2.7**.: _Choosing \(p=5\) in Lemma 2.5, we arrive at_
\[\rho\in L^{\infty}\left(0,T;L^{\frac{8}{3}}(\Omega_{x})\right)\quad\text{and }\quad\rho V\in L^{\infty}\left(0,T;L^{2}(\Omega_{x})\right). \tag{2.13}\]
_A use of the Stokes' regularity result shows_
\[\mathbf{u}\in H^{1}(0,T;\mathbf{L^{2}})\cap L^{2}(0,T;\mathbf{H^{2}})\cap L^{\infty}(0,T; \mathbf{H^{1}}). \tag{2.14}\]
**Remark 2.8**.: _Set \(p=9+\delta\) with \(\delta>0\) in the Lemma 2.5 to obtain_
\[\rho\in L^{\infty}\left(0,T;L^{\frac{12+\delta}{3}}(\Omega_{x})\right)\quad\text{and}\quad\rho V\in L^{\infty}\left(0,T;L^{\frac{12+\delta}{4}}(\Omega_{x})\right). \tag{2.15}\]
_A use of the Stokes' regularity result yields_
\[\mathbf{u}\in L^{\infty}(0,T;\mathbf{W^{1,\infty}}). \tag{2.16}\]
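Let us record why the exponent \(9+\delta\) is precisely the threshold needed here (a brief sketch): with \(p=9+\delta\) the momentum satisfies \(\rho V\in L^{\infty}\left(0,T;L^{q}(\Omega_{x})\right)\) with \(q=\frac{12+\delta}{4}=3+\frac{\delta}{4}>3\), so the Stokes' regularity yields \(\boldsymbol{u}\in L^{\infty}(0,T;\boldsymbol{W^{2,q}})\), and in three dimensions the Sobolev imbedding \(W^{2,q}(\Omega_{x})\subset W^{1,\infty}(\Omega_{x})\) holds for \(q>3\) but fails at the borderline \(q=3\), i.e., for \(p\leq 9\).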
The following lemma shows the propagation of velocity moments, which is crucial for the proof of Theorem 2.1. In particular, the proof of the assertion (2.4) made in Theorem 2.1 is contained in the following lemma.
**Lemma 2.9**.: _Let \(\mathbf{u}\in L^{\infty}(0,T;\mathbf{W}^{\mathbf{1},\mathbf{\infty}})\) and let \(f_{0}\geq 0\) be such that_
\[\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k}\{f_{0}+|\nabla_{x}f_{0}|^{2}+| \nabla_{v}f_{0}|^{2}\}\,\mathrm{d}v\,\mathrm{d}x\leq C,\]
_for \(0\leq k\leq 9+\delta\) with \(\delta>0\). Then, the solution \(f\) of the Vlasov equation satisfies_
\[\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k}\{f+|\nabla_{x}f|^{2}+|\nabla_{v} f|^{2}\}\,\mathrm{d}v\,\mathrm{d}x\leq C\]
_for \(0\leq k\leq 9+\delta\) with \(\delta>0\) and for all \(t>0\). Furthermore, there hold for \(k\geq 1\),_
\[\sup_{t\in[0,T]}\int_{\Omega_{x}}m_{k}f\,\mathrm{d}x+k\int_{0}^{T}\int_{\Omega_{x}}m_{k}f\,\mathrm{d}x\,\mathrm{d}t\leq k\|\mathbf{u}\|_{L^{1}(0,T;\mathbf{L}^{\mathbf{\infty}})}\sup_{t\in[0,T]}\int_{\Omega_{x}}m_{k-1}f\,\mathrm{d}x+\int_{\Omega_{x}}m_{k}f_{0}\,\mathrm{d}x, \tag{2.17}\]
\[\|m_{0}f\|_{L^{3}(\Omega_{x})}^{3}\leq C\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{6}f\mathrm{d}v\,\mathrm{d}x, \tag{2.18}\]
\[\|m_{1}f\|_{L^{2}(\Omega_{x})}^{2}\leq C\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{5}f\mathrm{d}v\,\mathrm{d}x. \tag{2.19}\]
Proof.: Consider the equation for \(\frac{\partial f}{\partial x_{i}}\):
\[\partial_{t}\frac{\partial f}{\partial x_{i}}+v\cdot\nabla_{x}\frac{\partial f }{\partial x_{i}}+\nabla_{v}\cdot\left(\frac{\partial\mathbf{u}}{\partial x_{i}}f \right)+\nabla_{v}\cdot\left(\left(\mathbf{u}-v\right)\frac{\partial f}{\partial x _{i}}\right)=0,\]
for \(i=1,2,3\). Multiplying the above vector equation by \(\left(1+|v|^{k}\right)\nabla_{x}f\) and integrating with respect to \(x,v\) yields
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega_{x}}\int_{\mathbb{R}^{3 }}\left(1+|v|^{k}\right)|\nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x=I_{1}+I_{ 2}+I_{3}\]
where
\[I_{1} =-\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right) \nabla_{x}\mathbf{u}\nabla_{x}f\cdot\nabla_{v}f\,\mathrm{d}v\,\mathrm{d}x,\] \[I_{2} =3\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)| \nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x,\] \[I_{3} =-\frac{1}{2}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k }\right)(\mathbf{u}-v)\cdot\nabla_{v}\left(|\nabla_{x}f|^{2}\right)\,\mathrm{d}v \,\mathrm{d}x.\]
After using Young's inequality in \(I_{1}\), we obtain
\[I_{1}\leq\|\nabla_{x}\mathbf{u}\|_{\mathbf{L}^{\mathbf{\infty}}}\int_{\Omega_{x}}\int_{ \mathbb{R}^{3}}\left(1+|v|^{k}\right)\left(|\nabla_{x}f|^{2}+|\nabla_{v}f|^{2 }\right)\,\mathrm{d}v\,\mathrm{d}x.\]
An integration by parts yields
\[I_{3}=-\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)|\nabla_{x} f|^{2}\,\mathrm{d}v\,\mathrm{d}x+I_{4}\]
with
\[I_{4}=\frac{k}{2}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k-2}v\cdot(\mathbf{u}- v)\,|\nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x.\]
A use of Young's inequality shows
\[I_{4} \leq\frac{k}{2}\|\mathbf{u}\|_{\mathbf{L}^{\mathbf{\infty}}}\int_{\Omega_{x} }\int_{\mathbb{R}^{3}}|v|^{k-1}|\nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x+ \frac{k}{2}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k}|\nabla_{x}f|^{2}\, \mathrm{d}v\,\mathrm{d}x\] \[\leq\frac{k}{2}\|\mathbf{u}\|_{\mathbf{L}^{\mathbf{\infty}}}\int_{\Omega_{x}} \int_{\mathbb{R}^{3}}\left(\frac{k-1}{k}|v|^{k}+\frac{1}{k}\right)|\nabla_{x} f|^{2}\,\mathrm{d}v\,\mathrm{d}x+\frac{k}{2}\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k}| \nabla_{x}f|^{2}\,\mathrm{d}v\,\mathrm{d}x\] \[\leq C\left(1+\|\mathbf{u}\|_{\mathbf{L}^{\mathbf{\infty}}}\right)\int_{\Omega_ {x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)|\nabla_{x}f|^{2}\,\mathrm{d}v\, \mathrm{d}x.\]
A similar computation involving the equation for \(\nabla_{v}f\) yields
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega_{x}}\int_{ \mathbb{R}^{3}}\left(1+|v|^{k}\right)|\nabla_{v}f|^{2}\,\mathrm{d}v\,\mathrm{d}x \leq\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right) \left(|\nabla_{x}f|^{2}+|\nabla_{v}f|^{2}\right)\,\mathrm{d}v\,\mathrm{d}x\] \[+C\left(1+\|\mathbf{u}\|_{\mathbf{L^{\infty}}}\right)\int_{\Omega_{x}} \int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right)|\nabla_{v}f|^{2}\,\mathrm{d}v\, \mathrm{d}x.\]
Altogether, we obtain
\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\int_{\Omega_{x}}\int_{ \mathbb{R}^{3}}\left(1+|v|^{k}\right)\left(|\nabla_{x}f|^{2}+|\nabla_{v}f|^{2} \right)\,\mathrm{d}v\,\mathrm{d}x\right)\leq C\left(1+\|\mathbf{u}\|_{\mathbf{W^{1, \infty}}}\right)\] \[\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}\left(1+|v|^{k}\right) \left(|\nabla_{x}f|^{2}+|\nabla_{v}f|^{2}\right)\,\mathrm{d}v\,\mathrm{d}x.\]
A use of Gronwall's inequality yields the desired result.
Our next task is to derive (2.17)-(2.19). Multiplying equation (1.1) by \(|v|^{k}\), for \(k\geq 1\), and integrating in the \(x,v\) variables yields
\[\partial_{t}\int_{\Omega_{x}}m_{k}f\,\mathrm{d}x+k\int_{\Omega_{x}}m_{k}f\, \mathrm{d}x=k\int_{\Omega_{x}}\int_{\mathbb{R}^{3}}|v|^{k-2}\mathbf{u}\cdot vf\, \mathrm{d}v\,\mathrm{d}x.\]
An integration of the above equation in time yields (2.17). Note that
\[m_{0}f=\int_{|v|<R}f\mathrm{d}v+\int_{|v|\geq R}f\mathrm{d}v\leq\|f\|_{L^{ \infty}(\mathbb{R}^{3})}R^{3}+\frac{1}{R^{6}}\int_{|v|\geq R}|v|^{6}f\mathrm{d}v.\]
After choosing \(R=\left(\int_{\mathbb{R}^{3}}|v|^{6}f\mathrm{d}v\right)^{\frac{1}{9}}\), we find that
\[|m_{0}f|\leq\left(\|f\|_{L^{\infty}(\mathbb{R}^{3})}+1\right)\left(\int_{ \mathbb{R}^{3}}|v|^{6}f\mathrm{d}v\right)^{\frac{1}{3}}.\]
Now, for \(k=1\):
\[m_{1}f=\int_{|v|<R}vf\mathrm{d}v+\int_{|v|\geq R}vf\mathrm{d}v\leq\|f\|_{L^{ \infty}(\mathbb{R}^{3})}R^{4}+\frac{1}{R^{4}}\int_{|v|\geq R}|v|^{5}f\mathrm{d}v.\]
Then, choosing \(R=\left(\int_{\mathbb{R}^{3}}|v|^{5}f\mathrm{d}v\right)^{\frac{1}{8}}\), we obtain
\[|m_{1}f|\leq\left(\|f\|_{L^{\infty}(\mathbb{R}^{3})}+1\right)\left(\int_{ \mathbb{R}^{3}}|v|^{5}f\mathrm{d}v\right)^{\frac{1}{2}}.\]
Thus, we arrive at (2.18) and (2.19), and this concludes the proof.
### Proof of the main theorem
We shall now prove Theorem 2.1.
Proof of Theorem 2.1.: Let \(0<T<\infty\) and set \(X:=L^{\infty}(0,T;\mathbf{J_{1}})\cap L^{2}(0,T;\mathbf{H^{2}})\), with the norm
\[\|\mathbf{u}\|_{X}=\|\mathbf{u}\|_{L^{\infty}(0,T;\mathbf{J_{1}})}+\|\mathbf{u}\|_{L^{2}(0,T; \mathbf{H^{2}})}.\]
Let us arbitrarily fix an \(f_{0}\) satisfying (2.1)-(2.3) and let us fix a \(\mathbf{u_{0}}\in\mathbf{H^{2}}\cap\mathbf{J_{1}}\). We now consider the map
\[\mathcal{T}:X \to X\] \[\mathbf{u}^{*}\longmapsto\mathbf{u}=\mathcal{T}(\mathbf{u}^{*})\]
defined by the following scheme:
* Solve the Vlasov equation: (2.20) \[\partial_{t}f+v\cdot\nabla_{x}f+\nabla_{v}\cdot\left((\mathbf{u}^{*}-v)\,f\right)=0,\] with initial data \(f_{0}\) and with periodic boundary conditions in the \(x\) variable.
* Solve the Stokes' equation: (2.21) \[\partial_{t}\mathbf{u}-\Delta_{x}\mathbf{u}+\nabla_{x}p=\rho V-\rho\mathbf{u},\] with initial data \(\mathbf{u_{0}}\) and with periodic boundary conditions in the \(x\) variable. Here \(\rho\) and \(\rho V\) are the local density and the local momentum associated with the solution \(f\) of (2.20), respectively.
To begin with, we show that the above map \(\mathcal{T}\) is well-defined. For a given \(\mathbf{u}^{*}\in X\) and a given initial datum \(f_{0}\), the Vlasov equation (2.20) is uniquely solvable (see Lemma 2.10 below for details). Having solved (2.20) for \(f(\mathbf{u}^{*})\), one gathers that the corresponding local density \(\rho\in L^{\infty}\) (see Lemma 2.3) and the corresponding momentum \(\rho V\in L^{2}\) (see Lemma 2.5). Hence, classical theory for the Stokes' problem [1] yields a unique solution \(\mathbf{u}\in X\) for the problem (2.21). Thus, the map \(\mathcal{T}:X\to X\) that takes \(\mathbf{u}^{*}\) to \(\mathcal{T}(\mathbf{u}^{*})=\mathbf{u}\) is well-defined.
Our next step in the proof is to show that \(\mathcal{T}\) is a contraction map, which is demonstrated in Lemma 2.11 below. Therefore, an application of the Banach fixed-point theorem ensures the existence of a unique solution \((f,\mathbf{u})\) in a short time interval \((0,T^{0})\). As the solution \((f,\mathbf{u})\) stays bounded at \(t=T^{0}\), thanks to a priori estimates, we can employ a continuation argument to extend the interval of existence up to \((0,T]\). As \(T\) is arbitrary, we get global-in-time well-posedness of our system.
Next we deal with Lemmata 2.10 and 2.11, which play a crucial role in the above proof.
**Lemma 2.10**.: _Let \(\mathbf{u}^{*}\in X\) and let \(f_{0}\in L^{1}(\Omega_{x}\times\mathbb{R}^{3})\cap L^{\infty}(\Omega_{x}\times \mathbb{R}^{3})\). Then, there exists a unique solution \(f\in L^{\infty}(0,T;L^{1}(\Omega_{x}\times\mathbb{R}^{3})\cap L^{\infty}( \Omega_{x}\times\mathbb{R}^{3}))\) to (2.20)._
Proof.: Note that (2.20) can be rewritten as
\[\partial_{t}f+b\cdot\nabla_{x,v}f-3f=0,\]
where \(b=(v,\mathbf{u}^{*}-v)\), which lies in
\[L^{1}(0,T;H^{1}(\Omega_{x}\times(-K,K)^{3})),\quad 0<K<\infty.\]
Note that \(\operatorname{div}_{x,v}b=-3\in L^{\infty}((0,T)\times\Omega_{x}\times\mathbb{R}^{3})\). Furthermore, \(|b|/(1+|v|)\) is bounded. This setting falls within the scope of the general results in [10]. In particular, we can apply [10, Corollaries II-1 and II-2, p.518] to arrive at the existence of the unique solution.
**Lemma 2.11**.: _The map \(\mathcal{T}\) defined by (2.20) and (2.21) is a contraction map._
Proof.: Take \(\mathbf{u}^{*}_{1},\mathbf{u}^{*}_{2}\in X\). Let \(f_{i}\) be the unique solution to (2.20) for a given \(\mathbf{u}^{*}_{i}\in X\), and set \(\mathbf{u}_{i}=\mathcal{T}(\mathbf{u}^{*}_{i})\). Define \(\bar{\mathbf{u}}=\mathbf{u}_{1}-\mathbf{u}_{2}\), \(\bar{\mathbf{u}}^{*}=\mathbf{u}^{*}_{1}-\mathbf{u}^{*}_{2}\) and \(\bar{f}=f_{1}-f_{2}\); then from (2.20)-(2.21) we find that
\[\bar{f}_{t}+v\cdot\nabla_{x}\bar{f}+\nabla_{v}\cdot\left(\bar{\mathbf{u}}^{*}f_{1 }+\mathbf{u}^{*}_{2}\bar{f}-v\bar{f}\right)=0, \tag{2.22}\]
and
\[\begin{cases}\partial_{t}\bar{\mathbf{u}}-\Delta_{x}\bar{\mathbf{u}}+\nabla_{x}\bar{ p}=\int_{\mathbb{R}^{3}}\left(v\bar{f}-\mathbf{u}_{2}\bar{f}-\bar{\mathbf{u}}f_{1} \right)\,\mathrm{d}v,\\ \nabla_{x}\cdot\bar{\mathbf{u}}=0\end{cases} \tag{2.23}\]
with initial data
\[\bar{f}(0,x,v)=0,\qquad\bar{\mathbf{u}}(0,x)=0.\]
Stokes' regularity [11, 1] yields
\[\|\bar{\mathbf{u}}\|_{X}^{2}\leq C\,\left\|\int_{\mathbb{R}^{3}}\left(v\bar{f}-\mathbf{u}_{2}\bar{f}-\bar{\mathbf{u}}f_{1}\right)\,\mathrm{d}v\right\|_{L^{2}((0,T)\times\Omega_{x})}^{2}. \tag{2.24}\]
Now, the Hölder inequality followed by the Sobolev imbedding shows
\[\begin{split}\left\|\int_{\mathbb{R}^{3}}\left(v\bar{f}-\mathbf{u}_{ 2}\bar{f}-\bar{\mathbf{u}}f_{1}\right)\,\mathrm{d}v\right\|_{L^{2}([0,T]\times \Omega_{x})}\leq\left\|\int_{\mathbb{R}^{3}}v\bar{f}\,\mathrm{d}v\right\|_{L^{2 }([0,T]\times\Omega_{x})}\\ +T^{\frac{1}{6}}\|\mathbf{u}_{2}\|_{X}\left\|\int_{\mathbb{R}^{3}} \bar{f}\,\mathrm{d}v\right\|_{L^{3}([0,T]\times\Omega_{x})}+T^{\frac{1}{2}}\| \bar{\mathbf{u}}\|_{X}\|m_{0}f_{1}\|_{L^{\infty}(0,T;L^{3}(\Omega_{x}))}.\end{split} \tag{2.25}\]
For a sufficiently small \(T>0\), there holds
\[C\,T\|m_{0}f_{1}\|_{L^{\infty}(0,T;L^{3}(\Omega_{x}))}^{2}\leq\frac{1}{2}. \tag{2.26}\]
Hence for such a choice of \(T\), we obtain
\[\|\bar{\mathbf{u}}\|_{X}^{2}\leq C\left\|\int_{\mathbb{R}^{3}}v\bar{f}\,\mathrm{d}v \right\|_{L^{2}([0,T]\times\Omega_{x})}^{2}+C\|\mathbf{u}_{2}\|_{X}^{2}\left\|\int_{ \mathbb{R}^{3}}\bar{f}\,\mathrm{d}v\right\|_{L^{3}([0,T]\times\Omega_{x})}^{2}. \tag{2.27}\]
Now, a similar calculation as in the proof of Lemma 2.9 implies
\[\left\|\int_{\mathbb{R}^{3}}v\bar{f}\,\mathrm{d}v\right\|_{L^{2}([0,T]\times \Omega_{x})}^{2}\leq C\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{ 5}|\bar{f}|\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t, \tag{2.28}\]
and
\[\left\|\int_{\mathbb{R}^{3}}\bar{f}\,\mathrm{d}v\right\|_{L^{3}([0,T]\times \Omega_{x})}^{2}\leq C\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{ 6}|\bar{f}|\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t. \tag{2.29}\]
Multiply equation (2.22) by \(|v|^{k}\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\) with \(k\geq 1\) and \(\delta>0\), to obtain
\[|v|^{k}\partial_{t}\left(\sqrt{\bar{f}^{2}+\delta}\right) +|v|^{k}\,v\cdot\nabla_{x}\left(\sqrt{\bar{f}^{2}+\delta}\right) +|v|^{k}\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\bar{\boldsymbol{u}}^{*} \cdot\nabla_{v}f_{1}\] \[+|v|^{k}\boldsymbol{u}_{2}^{*}\cdot\nabla_{v}\left(\sqrt{\bar{f} ^{2}+\delta}\right)-|v|^{k}\,\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\, \nabla_{v}\cdot\left(v\bar{f}\right)=0.\]
An integration with respect to \(x,v\) shows
\[\partial_{t}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}} |v|^{k}\left(\sqrt{\bar{f}^{2}+\delta}\right)\,\mathrm{d}x\, \mathrm{d}v-k\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k-2}\,\frac{\bar{f} }{\sqrt{\bar{f}^{2}+\delta}}\,f_{1}\,\bar{\boldsymbol{u}}^{*}\cdot v\, \mathrm{d}x\,\mathrm{d}v\] \[-\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k}\,f_{1}\,\bar{ \boldsymbol{u}}^{*}\cdot\nabla_{v}\left(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+ \delta}}\right)\,\mathrm{d}x\,\mathrm{d}v+k\int_{\mathbb{R}^{3}}\int_{\Omega_{ x}}\,|v|^{k}\,\frac{\bar{f}^{2}}{\sqrt{\bar{f}^{2}+\delta}}\,\mathrm{d}x\, \mathrm{d}v\] \[-k\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k-2}\,\left(\sqrt{ \bar{f}^{2}+\delta}\right)\,\boldsymbol{u}_{2}^{*}\cdot v\,\mathrm{d}x\, \mathrm{d}v\] \[+\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k}\,\bar{f}\,\nabla _{v}\left(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\right)\cdot v\,\mathrm{d}x \,\mathrm{d}v=0.\]
A use of the Sobolev inequality with integration in time yields
\[\begin{split}\sup_{t\in[0,T]}\int_{\mathbb{R}^{3}}\int_{\Omega_{ x}}\,|v|^{k}&\left(\sqrt{\bar{f}^{2}+\delta}\right)\,\mathrm{d}x\, \mathrm{d}v+k\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{k}\frac{ \bar{f}^{2}}{\sqrt{\bar{f}^{2}+\delta}}\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d} t\\ &\leq\int_{0}^{T}k\|\bar{\boldsymbol{u}}^{*}\|_{\boldsymbol{H}^{ 2}}\|m_{k-1}f_{1}\|_{L^{1}(\Omega_{x})}\,\mathrm{d}t+|T_{k}^{1}|+|T_{k}^{2}|\\ &\quad+\int_{0}^{T}k\|\boldsymbol{u}_{2}^{*}\|_{\boldsymbol{H}^{ 2}}\left\|\int_{\mathbb{R}^{3}}|v|^{k-1}\left(\sqrt{\bar{f}^{2}+\delta}\right) \,\mathrm{d}v\right\|_{L^{1}(\Omega_{x})}\,\mathrm{d}t\\ &\leq T^{\frac{1}{2}}\|\bar{\boldsymbol{u}}^{*}\|_{X}\|m_{k-1}f_{1} \|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}+|T_{k}^{1}|+|T_{k}^{2}|\\ &+T^{\frac{1}{2}}\|\boldsymbol{u}_{2}^{*}\|_{X}\left\|\int_{ \mathbb{R}^{3}}|v|^{k-1}\left(\sqrt{\bar{f}^{2}+\delta}\right)\,\mathrm{d}v \right\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}.\end{split} \tag{2.30}\]
Here
\[T_{k}^{1}=\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{k}\,f_{1}\, \bar{\boldsymbol{u}}^{*}\cdot\nabla_{v}\left(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+ \delta}}\right)\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t=\int_{0}^{T}\int_{ \mathbb{R}^{3}}\int_{\Omega_{x}}\,|v|^{k}\,f_{1}\,\bar{\boldsymbol{u}}^{*} \cdot\frac{\delta\,\nabla_{v}\bar{f}}{\left(\bar{f}^{2}+\delta\right)^{\frac{ 3}{2}}}\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t\]
and
\[T_{k}^{2}=\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{k}\nabla_{v} \left(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\right)\cdot v\bar{f}\,\mathrm{d }x\,\mathrm{d}v\,\mathrm{d}t=\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x} }\,|v|^{k}\frac{\delta\,\nabla_{v}\bar{f}}{\left(\bar{f}^{2}+\delta\right)^{ \frac{3}{2}}}\cdot v\bar{f}\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t.\]
As \(\bar{f}\in L^{1}(0,T;L^{\infty}(\Omega_{x}\times\mathbb{R}^{3}))\) and as fifth order velocity moments of \(|\nabla_{v}\bar{f}|^{2}\) and \(\bar{f}\) are bounded (see Lemma 2.9), \(|T_{k}^{1}|\to 0\) and \(|T_{k}^{2}|\to 0\) as \(\delta\to 0\) for \(k=1,2,3,4,5,6\). Next, we multiply equation (2.22) by \(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\) and
integrate with respect to \(x,v\) and \(t\) variables to obtain
\[\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,\sqrt{\bar{f}^{2}+\delta}\, \mathrm{d}x\,\mathrm{d}v-\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\, \nabla_{v}\left(\frac{\bar{f}}{\sqrt{f^{2}+\delta}}\right)\cdot\left(\bar{\mathbf{u }}^{*}f_{1}+\mathbf{u}_{2}^{*}\bar{f}-v\bar{f}\right)\,\mathrm{d}x\,\mathrm{d}v\] \[=\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}\,\sqrt{\bar{f}^{2}(0,x,v)+ \delta}\,\mathrm{d}x\,\mathrm{d}v.\]
Note that \(\bar{f}(0,x,v)=0\) and \(\nabla_{v}\left(\frac{\bar{f}}{\sqrt{\bar{f}^{2}+\delta}}\right)=\frac{\delta\,\nabla_{v}\bar{f}}{\left(\bar{f}^{2}+\delta\right)^{\frac{3}{2}}}\). Hence, arguing as we did with the \(T_{k}^{1}\) and \(T_{k}^{2}\) terms, in the \(\delta\to 0\) limit, the above equation yields
\[\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|\bar{f}|\,\mathrm{d}x\,\mathrm{d}v=0. \tag{2.31}\]
Using the recurrence relation in (2.30), we arrive at
\[\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{5}\,|\bar{f }|\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t\lesssim T^{\frac{1}{2}}\|\bar{\mathbf{u }}^{*}\|_{X}\left(\|m_{4}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}\right.\] \[\left.+\|\mathbf{u}_{2}^{*}\|_{X}\|m_{3}f_{1}\|_{L^{\infty}(0,T;L^{1}( \Omega_{x}))}+\|\mathbf{u}_{2}^{*}\|_{X}^{2}\|m_{2}f_{1}\|_{L^{\infty}(0,T;L^{1}( \Omega_{x}))}\right.\] \[\left.+\|\mathbf{u}_{2}^{*}\|_{X}^{3}\|m_{1}f_{1}\|_{L^{\infty}(0,T;L^ {1}(\Omega_{x}))}+\|\mathbf{u}_{2}^{*}\|_{X}^{4}\|m_{0}f_{1}\|_{L^{\infty}(0,T;L^{ 1}(\Omega_{x}))}\right), \tag{2.32}\]
and
\[\int_{0}^{T}\int_{\mathbb{R}^{3}}\int_{\Omega_{x}}|v|^{6}\,|\bar{f }|\,\mathrm{d}x\,\mathrm{d}v\,\mathrm{d}t\lesssim T^{\frac{1}{2}}\|\bar{\mathbf{u }}^{*}\|_{X}\left(\|m_{5}f_{1}\|_{L^{\infty}(0,T;L^{1}(\Omega_{x}))}\right.\] \[\left.+\|\mathbf{u}_{2}^{*}\|_{X}\|m_{4}f_{1}\|_{L^{\infty}(0,T;L^{1}( \Omega_{x}))}+\|\mathbf{u}_{2}^{*}\|_{X}^{2}\|m_{3}f_{1}\|_{L^{\infty}(0,T;L^{1}( \Omega_{x}))}\right.\] \[\left.+\|\mathbf{u}_{2}^{*}\|_{X}^{3}\|m_{2}f_{1}\|_{L^{\infty}(0,T;L^ {1}(\Omega_{x}))}+\|\mathbf{u}_{2}^{*}\|_{X}^{4}\|m_{1}f_{1}\|_{L^{\infty}(0,T;L^{ 1}(\Omega_{x}))}\right.\] \[\left.+\|\mathbf{u}_{2}^{*}\|_{X}^{5}\|m_{0}f_{1}\|_{L^{\infty}(0,T;L^ {1}(\Omega_{x}))}\right). \tag{2.33}\]
Using (2.28), (2.29), (2.32) and (2.33) in (2.27) while employing (2.17) for handling \(m_{k}f_{1}\) terms, for sufficiently small \(T>0\), we obtain
\[\|\bar{\mathbf{u}}\|_{X}\leq\alpha\|\bar{\mathbf{u}}^{*}\|_{X},\quad\text{ for some }\quad\alpha\in(0,1).\]
This shows that \(\mathcal{T}\) is a contraction map.
**Remark 2.12**.: _In [13], the author treats the difference \(\overline{f}:=f_{1}-f_{2}\) as non-negative (see, in particular, the two inequalities at the end of page 290 in [13]). This is a misstep, and the above proof fixes it. The correct versions of those two inequalities in three dimensions are given above (see (2.28) and (2.29)). Furthermore, in our above analysis, we encountered the terms \(T_{k}^{1}\) and \(T_{k}^{2}\). Understanding their behaviour in the \(\delta\to 0\) limit requires the boundedness of the velocity moments associated with the first-order derivatives of the distribution function. Such a bound was established in Lemma 2.9 above. It is not clear whether one can prove that the map \(\mathcal{T}\) is a contraction under assumptions on the initial datum analogous only to those in [13]._
**Remark 2.13**.: _Hofer in [14] sets up the proof of well-posedness in a fashion similar to the above proof of Theorem 2.1. In our scheme, we solve the Vlasov equation for a fixed \(\mathbf{u}^{*}\) and then solve the unsteady Stokes' equation with the local density and local momentum associated with the solution \(f(\mathbf{u}^{*})\). In [14], however, the author's scheme is to solve the steady Stokes' equation for a fixed \(f^{*}\in W^{1,\infty}((0,T)\times\mathbb{R}^{3}\times\mathbb{R}^{3})\) and then solve the Vlasov equation with the fluid velocity \(\mathbf{u}(f^{*})\). The contraction property is demonstrated by analysing the Vlasov equation. Hence, it goes via the analysis of the characteristics, and the Banach space where the contraction property is established turns out to be \(W^{1,\infty}((0,T)\times\mathbb{R}^{3}\times\mathbb{R}^{3})\)._
|
2307.00138 | Substrate suppression of oxidation process in pnictogen monolayers | 2D materials present an interesting platform for device designs. However,
oxidation can drastically change the system's properties, which need to be
accounted for. Through {\it ab initio} calculations, we investigated
freestanding and SiC-supported As, Sb, and Bi mono-elemental layers. The
oxidation process occurs through an O$_2$ spin-state transition, accounted for
within the Landau-Zener transition. Additionally, we have investigated the
oxidation barriers and the role of spin-orbit coupling. Our calculations
pointed out that the presence of the SiC substrate slows down the oxidation
compared to a freestanding monolayer. We have extracted the energy barrier
transition, compatible with our spin-transition analysis. Besides, spin-orbit
coupling is relevant to the oxidation mechanisms and alters time scales. The
energy barriers decrease as the pnictogen changes from As to Sb to Bi for the
freestanding systems, while for SiC-supported, they increase across the
pnictogen family. Our computed energy barriers confirm the enhanced robustness
against oxidation for the SiC-supported systems. | R. L. H. Freire, F. Crasto de Lima, A. Fazzio | 2023-06-30T21:07:37Z | http://arxiv.org/abs/2307.00138v1 | # Substrate suppression of oxidation process in pnictogen monolayers
###### Abstract
2D materials present an interesting platform for device designs. However, oxidation can drastically change the system's properties, which need to be accounted for. Through _ab initio_ calculations, we investigated freestanding and SiC-supported As, Sb, and Bi mono-elemental layers. The oxidation process occurs through an O\({}_{2}\) spin-state transition, accounted for within the Landau-Zener transition. Additionally, we have investigated the oxidation barriers and the role of spin-orbit coupling. Our calculations pointed out that the presence of SiC substrate reduces the oxidation time scale compared to a freestanding monolayer. We have extracted the energy barrier transition, compatible with our spin-transition analysis. Besides, spin-orbit coupling is relevant to the oxidation mechanisms and alters time scales. The energy barriers decrease as the pnictogen changes from As to Sb to Bi for the freestanding systems, while for SiC-supported, they increase across the pnictogen family. Our computed energy barriers confirm the enhanced robustness against oxidation for the SiC-supported systems.
The realization of two-dimensional (2D) materials through diverse experimental techniques have increased interest in their technological applications on electronic devices. Particularly, the arising topological insulating phase in bisunthene [1], antimonene [2], strained arsenene [3; 4], with the former robust against disorder [5; 6], leading to low-power spintronics [7]. However, the experimental conditions towards scalable production of these materials pose great challenges due to their relatively low stability [8], mainly at room temperature and in the presence of air (oxygen). Freestanding monoelemental materials, like phosphorene, were shown to be very unstable upon O\({}_{2}\)-exposure being degraded within a few hours [9]. Indeed, freestanding monolayer pnictogens (P and As) are more prone to oxidation than other 2D materials presenting the same atomic structure [10], while the presence of a substrate can alter the oxidation process [8].
The O\({}_{2}\) molecule occurs naturally in a triplet (\({}^{3}\Sigma_{g}^{-}\)) ground state. On the other hand, under experimental conditions (e.g., photoexcitation [11]), O\({}_{2}\) molecule can be found in excited singlet states, namely \({}^{1}\Delta_{g}\) and \({}^{1}\Sigma_{g}^{+}\). The singlet states are more reactive than the ground state triplet, being of great importance in oxidation process [12]. Experimental results over oxidation of 3D-stacked pnictogen systems (down to a few layers), show the robustness of oxidation for the internal layers, while the surface presents oxygen groups [13; 14]. Ruled by the higher interlayer bond of heavier pnictogens (compared with the phosphorene), the formation of surface oxide-layer protects the internal layers from oxidation [15; 16; 17]. There are studies about oxidation on 2D pnictogen materials, however focusing on the freestanding configuration [18; 19; 20; 21; 22; 23], while not taking into account fundamental aspects, such as the role of triplet-singlet transitions, and spin-orbit effects.
At the same time, the realization of supported materials through molecular beam epitaxy (MBE) has attracted attention, for example, Sb/Ag(111) [24], Bi/SiC(0001) [25] and As/SiC(0001) [26] with a planar structure [27]. Particularly, the topological insulating phase of bisunthene and other pnictogens was predicted when supported on SiC substrate [1; 2]. While the presence of a substrate can alter the oxidation kinetics of 2D systems [28]. In this sense, understanding the mechanisms behind oxygen interaction with those substrate-supported materials is a key point for future experimental investigations upon applications and routes to improve their stability.
In this paper, we show that the oxidation process of pnictogen monolayers is considerably lower (slower) when deposited on top of SiC substrate. Taking an _ab initio_ approach based on the density functional theory (DFT) we investigated the rate of formation of reactive oxygen species, i.e. O\({}_{2}\) triplet-singlet transition, close to the materials' surface in the buckled free-standing (FS) form and in the flat structure when on top of SiC substrate (SiC). We connected such rate of formation with the reaction barrier calculated within the nudge elastic band (NEB) method. The FS case reacts barrierless with the singlet O\({}_{2}\) molecule, while the supported one presents a non-negligible barrier. Additionally, the barriers found for the triplet O\({}_{2}\) molecule are considerably larger for the heavier pnictogen Bi. Our results draw attention to the possible atmospheric stability of supported pnictogens monolayer.
Group-5A elemental monolayers were investigated through spin-polarized calculations based on density functional theory (DFT) [29; 30], performed within the semi-local exchange-correlation functional proposed by Perdew-Burke-Ernzerhof [31]. For the total energies, the electron-ion interactions were considered within the projector augmented wave (PAW) method [32; 33], as implemented in the vienna _ab-initio_ simulation package (VASP) [34; 35]. For all calculations, the cutoff energy for the plane-wave expansion of the Kohn-Sham orbitals was set to 400 eV, under an energy convergence parameter of \(10^{-6}\) eV, with all atoms relaxed until the atomic forces on every atom were smaller than \(10^{-2}\) eV A\({}^{-1}\). We considered \(3\times 3\) unit cells with 13 A and 16 A distance between periodic images for FS and SiC-supported sys
systems, respectively. A uniform \(4\times 4\times 1\) k-point mesh was considered for the Brillouin zone (BZ) integration.
The oxidation process of pnictogen 2D allotropes is known in the literature to be an exothermic process. We calculate the adsorption energy (\(E_{a}\)) of a single oxygen atom on the pnictogen surface in its buckled freestanding geometry (FS) and in the flat geometry presented when supported on silicon carbide (SiC-supported) [Fig. 1(a) and (b)]. It is worth pointing out that bismuthene and antimonene on top of SiC form honeycomb lattices, while arsenene has a lower-energy triangular lattice [26], which is considered here. In Table 1, we present our calculations for the adsorption energy
\[E_{a}=E_{\rm X+O}-E_{\rm X}-\frac{1}{2}E_{\rm O_{2}}, \tag{1}\]
where \(E_{\rm X}\) is the total energy of the pristine pnictogen configuration, \(E_{\rm X+O}\) that of the pnictogen with a single oxygen atom adsorbed on its surface, and \(E_{\rm O_{2}}\) the total energy of the isolated O\({}_{2}\) molecule. Indeed, the adsorption process is still exothermic even for the substrate-supported case. To obtain those adsorption energies we have considered different adsorption sites according to the surface geometry: in the FS case, we probed on-top, bridge, valley, and hollow sites, while for SiC they were on-top, bridge, and hollow sites. In all cases, in the lowest-energy configuration the oxygen atom forms a bridge between adjacent pnictogen atoms. Comparing the FS with the supported SiC system, we see higher adsorption energies for Sb and Bi, while a decrease is observed for As. Here, the supported As system has a larger tensile strain than the Sb and Bi ones, when compared to their freestanding structures [26]. The oxygen adsorption, bridging two adjacent As atoms, contributes to lowering the tensile strain, therefore leading to a lower adsorption energy.
Although there is an indication of a more exothermic process for As, oxidation can have different reaction time scales for each system. Here we will (i) explore the Landau-Zener probability of transition between the oxygen molecule's triplet state and its most reactive form, the singlet state, close to the pnictogen surfaces, and (ii) explore energy barriers for the oxidation process, considering the role of spin-orbit coupling, through the nudged elastic band (NEB) method.
Analyzing the total energy of an O\({}_{2}\) molecule close to a material's interface, we see a dependence between the singlet and triplet spin configurations' total energies and the molecule's distance from the pnictogen surface, as shown in Fig. 1(c). Away from the surface the singlet and triplet states are separated in energy by \(\Delta E_{\rm vac}\sim 1\) eV, while close to the pnictogen surfaces they present an energy crossing. This crossing implies a transition probability between the two spin states of the O\({}_{2}\) molecule. Based on the slopes of the triplet and singlet curves we have obtained the triplet-singlet transition probabilities (\(P_{ts}\)) by employing the Landau-Zener relation (\(P_{LZ}\)) [36; 37; 8]
\[P_{ts}=(1-P_{LZ})(1+P_{LZ}), \tag{2}\]
where
\[P_{LZ}=\exp\left(-\frac{V^{2}}{hv|F_{t}-F_{s}|}\right). \tag{3}\]
Here, \(V\) is the spin-orbit matrix element of the O\({}_{2}\) molecule (122 cm\({}^{-1}\)), \(v\) the velocity of the O\({}_{2}\) molecule at room temperature (483.59 m s\({}^{-1}\)), and \(F_{i}\) the forces acting on the O\({}_{2}\) molecule for each spin state (triplet and singlet) [8]. It is worth noting that \(F_{i}\) will depend on the material's local adsorption site and on the arriving geometry of the O\({}_{2}\) molecule. That is, a single adsorption site cannot capture the variations in the triplet-singlet transition, as under experimental conditions this should run over a large distribution of possible sites and molecule geometries (orientation with respect to the surface). Our analysis includes different adsorption sites for both FS and SiC-supported structures and different molecule geometries. This will generate one-dimensional curves such as the example presented in Fig. 1(c), in which the singlet and triplet potential energy surfaces cross at some point (\(d_{cross}\)) [37]. We extracted information about (i) the triplet-singlet crossing distance (\(d_{cross}\)); (ii) the crossing-point relative energy (\(\Delta E_{cross}\)); (iii) the singlet minimum relative energy
\begin{table}
\begin{tabular}{c c c c} phases & As & Sb & Bi \\ \hline FS & \(-1.01\) & \(-1.32\) & \(-1.06\) \\ SiC & \(-2.69\) & \(-1.15\) & \(-0.45\) \\ \end{tabular}
\end{table}
Table 1: Oxygen adsorption energy, \(E_{a}\) (eV/O-atom), on pnictogen surfaces in their freestanding (FS) configuration and in the silicon carbide-supported (SiC) configuration. The most stable configuration is an epoxy-like bridge bond inclined towards the hexagonal center.
Figure 1: O\({}_{2}\) adsorption model for (a) freestanding and (b) SiC-supported structures, and (c) an example for evaluating the Landau-Zener probabilities, including a few definitions like the distance of the molecule center-of-mass from the 2D material surface at the triplet-singlet crossing (\(d_{\rm cross}\)), the singlet-triplet energy difference far from the surface (\(\Delta E_{vac}\)), at the energy minimum (\(\Delta E_{min}\)).
(\(\Delta E_{min}\)), and (iv) the triplet-singlet transition probability (\(P_{ts}\)).
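To make the evaluation of Eqs. (2) and (3) concrete, the short script below implements them with the constants quoted above; the slope difference \(|F_{t}-F_{s}|\) used in the example is a hypothetical illustrative value (of the order of 1 eV/Å), not one extracted from our potential-energy curves.

```python
import numpy as np

# Physical constants (SI units)
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # 1 eV in J

V = 122.0 * 100.0 * h * c  # O2 spin-orbit matrix element: 122 cm^-1 -> J
v = 483.59                 # O2 velocity at room temperature, m/s

def p_lz(dF):
    """Single-passage Landau-Zener probability, Eq. (3).
    dF: slope difference |F_t - F_s| of the triplet and singlet
    potential-energy curves at the crossing, in N (i.e., J/m)."""
    return np.exp(-V**2 / (h * v * dF))

def p_ts(dF):
    """Triplet-singlet transition probability over a double passage, Eq. (2)."""
    p = p_lz(dF)
    return (1.0 - p) * (1.0 + p)

# Hypothetical slope difference of 1 eV/Angstrom (illustrative only)
dF = 1.0 * eV / 1.0e-10
print(f"P_LZ = {p_lz(dF):.3f}, P_ts = {100.0 * p_ts(dF):.1f}%")
```

For slope differences of this order, the double-passage probability comes out at the few-percent level, which is the scale of the averages reported in Table 2.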
In Fig. 2, we present the triplet-singlet transition probability (\(P_{ts}\), in the color bar), mapping it with respect to the crossing relative energies (\(\Delta E_{cross}\)) and the distance from the surface at the crossing point (\(d_{cross}\)). In the right panel, close to the color bar, we represent the \(P_{ts}\) statistical distribution. Here, 50% of the FS configurations presented \(P_{ts}<5\%\), compared with 60% for SiC-supported. Additionally, the SiC-supported transition probabilities are more concentrated around 2%, while the FS configurations present values spreading to higher probabilities. That is, we have a statistical indication that the triplet-singlet transition is more probable in FS than in SiC-supported pnictogens.
In Table 2, we summarize the average values and mean deviations for the different configurations probed. Despite the significant mean deviation values, we can see that the \(P_{ts}\) average for FS is larger than that for SiC-supported, indicating FS as more prone to the O\({}_{2}\) triplet-singlet transition than SiC-supported, thus facilitating the oxidation process. The crossing distance between the triplet-singlet curves is higher for the SiC-supported than in FS, given the buckled nature of the latter. We see a monotonic growth of \(P_{ts}\) when going from As\(\rightarrow\)Sb\(\rightarrow\)Bi in the FS case, which is not observed for the SiC system. Furthermore, we see a correlation between \(d_{cross}\) and \(P_{ts}\): the closer to the surface, the larger \(P_{ts}\), that is, the interaction of the surface orbitals with the molecule is ruling the transition. In fact, because of the different bonding nature of the two structures, their orbitals will have different spreading into the vacuum region. In the FS structure, there is a hybridization between in-plane and out-of-plane orbitals forming \(sp^{3}\) (\(s,p_{x},p_{y},p_{z}\)) bonds, while in the flat SiC-supported structure, the absence of hybridization between in-plane and out-of-plane orbitals leads to the formation of \(sp^{2}\) bonds and a remaining out-of-plane orbital (\(p_{z}\)) [27]. Because the \(p_{z}\) orbital is not hybridized in the latter, it can spread over larger distances into the vacuum region compared to the FS structure. Thus, the molecule will feel the presence of the SiC-supported structure at larger distances as a result of the interaction with this out-of-plane orbital, depending on the surface site and geometry it approaches. The singlet configuration presents a minimum energy when close to the system surface, with the singlet minimum relative energy (\(\Delta E_{min}\)) being lower for the SiC system. This singlet minimum energy is due to unstable physisorbed configurations of the O\({}_{2}\) that arise only when constraining the system to the singlet state. As we will show below, such a configuration presents a barrierless transition to oxidation and cannot be stabilized on FS systems.
Given the scenario for the triplet-singlet transition, the reaction rate also depends on the energy barrier for the adsorption of both spin configurations on the pnictogen surface.
\begin{table}
\begin{tabular}{c c c c c} & phases & \(\Delta E_{min}\) & \(d_{cross}\) & \(P_{ts}\) \\ \hline As & FS & 1.04 (0.02) & 1.51 (0.46) & 2.46 (1.78) \\ & SiC & 0.94 (0.11) & 1.87 (0.28) & 2.15 (1.25) \\ Sb & FS & 0.93 (0.07) & 1.41 (0.54) & 3.33 (2.03) \\ & SiC & 0.72 (0.15) & 1.74 (0.64) & 2.85 (1.67) \\ Bi & FS & 0.77 (0.09) & 1.30 (0.55) & 4.25 (2.31) \\ & SiC & 0.70 (0.10) & 1.74 (0.61) & 2.77 (1.34) \\ \end{tabular}
\end{table}
Table 2: Average values of \(\Delta E_{min}\) (eV), \(d_{cross}\) (Å) [shown in Fig. 1], and of the Landau-Zener triplet-singlet transition probability \(P_{ts}\) (%) for all configurations tested [Fig. 2]. Numbers in parentheses are the standard deviation of the respective quantity.
Figure 2: Triplet-singlet Landau-Zener transition probabilities (\(P_{ts}\)) for free-standing (top) and SiC-supported (bottom) systems for a few different adsorption sites, depicted with the triplet-singlet crossing distance (\(d_{cross}\)) and crossing energy (\(\Delta E_{cross}\)). \(P_{ts}\) is indicated by the color bar, and the histogram indicates the \(P_{ts}\) value distribution.
Here we have calculated the energy barrier through the nudged elastic band (NEB) method, considering three scenarios: (i) O\({}_{2}\) in an enforced singlet configuration, (ii) O\({}_{2}\) with a free spin degree of freedom without spin-orbit coupling, and (iii) a fully relativistic case taking spin-orbit coupling into account. Our results are presented in Fig. 3 and summarized in Table 3. First, analyzing the enforced singlet case, the O\({}_{2}\) molecule finds no energy barrier to dissociate over the FS material surface, while for SiC-supported systems there always exists an energy barrier. The singlet energy barrier for the latter is lower for the As and Sb systems (0.36 and 0.47 eV, respectively), while a higher value of 1.52 eV was found for Bi. We see a different scenario when considering a free spin degree of freedom: here, far from the surface, the O\({}_{2}\) is in a triplet state, while along the barrier it changes to a singlet state before dissociation (see the magnetization in the lower panels of Fig. 3). Such behavior is present with or without the spin-orbit effect. This spin transition before the dissociation is dictated by a spin selection rule, given the non-magnetic character of oxidized pnictogens [38; 39]. The spin-orbit effect is negligible for the As and Sb systems, while it presents different effects for Bi. For Bi-FS the spin-orbit coupling lowers both the barrier maximum and the initial-state energies, while for Bi-SiC it lowers the initial state while keeping the barrier maximum energy. In the singlet state (\(s=0\)), the spin-orbit contribution (\(\vec{L}\cdot\vec{s}\)) vanishes, while in the triplet state it presents a non-vanishing contribution. For Bi-FS the triplet state persists higher up the barrier, which gives this barrier lowering, while for Bi-SiC the \(s=0\) state is already established at the barrier maximum.
We see different behaviors of the barrier for the FS and SiC configurations across the pnictogen group. While for FS heavier pnictogens present a lower barrier, for the supported systems the opposite is observed. The decrease in the barrier towards heavier pnictogens in the FS configuration was also observed previously [21]. For the FS systems, Bi presents the lowest energy barrier. Indeed, our Landau-Zener transition probability analysis has shown that the triplet-singlet transition for Bi is more favorable than for Sb and As. As indicated by the magnetization panels of Fig. 3, the barrier height in the non-strained FS system is ruled by the triplet-singlet transition. On the other hand, the SiC-supported pnictogens are under strain, which can change their interaction energy with O\({}_{2}\). Bismuth has the largest atomic radius among the pnictogens studied, being under the lowest strain, followed by Sb and As, for the SiC-supported structure [26]. Such a lower strain energy makes the initial configuration (before the O\({}_{2}\) reaction) lower in energy compared with the other pnictogens, leading to a higher barrier for the reaction.
The rate of oxidation for the pristine pnictogen systems can be estimated as
\[f_{0}=\nu e^{(-E_{b}/kT)} \tag{4}\]
with \(\nu\) the attempt frequency and \(E_{b}\) the calculated barrier energy. In the kinetic theory of gases, at one atmosphere of pressure and at 300 K (\(kT=0.026\) eV), the number of \(O_{2}\) molecules arriving at a surface per unit area, per unit time, is
\[\frac{n}{4}\sqrt{\frac{8kT}{\pi m}}\sim\frac{1.87\cdot 10^{24}}{\rm s\cdot cm^{ 2}}, \tag{5}\]
with \(n=5.1\cdot 10^{24}\) m\({}^{-3}\) the number of \(O_{2}\) molecules in air per unit volume at atmospheric pressure and temperature, and \(m=4.9\cdot 10^{-26}\) kg the \(O_{2}\) mass, at \(kT=4.16\cdot 10^{-21}\) kg m\({}^{2}\)s\({}^{-2}\).
Such rate of oxidation \(f_{0}\) is valid for the pristine non-oxidized surface. When the system approaches its most stable oxide phase X\({}_{2}\)O\({}_{3}\) (with X=As, Sb, Bi), such rate should vanish. Therefore the rate of oxidation should
Figure 3: O\({}_{2}\) reaction barriers (upper panels) and magnetization along the barrier (lower panels) calculated by the nudged elastic band method, for (a1)-(d1) the FS configuration and (a2)-(d2) the SiC configuration. The atoms’ trajectory shown is for the Bi systems; similar geometries are observed for the other systems.
decay with the surface oxygen concentration \(\eta\), from \(f_{0}\) down to zero at the critical concentration \(\eta_{c}\), equivalent to the oxygen density in the X\({}_{2}\)O\({}_{3}\) phase.
\[f(\eta)=f_{0}e^{-\frac{\eta}{\eta_{c}-\eta}}. \tag{6}\]
Given such oxidation rate, the reaction time needed for the system to oxidize one cm\({}^{2}\) from oxygen concentration zero up to \(\eta\) is
\[T=2\int_{0}^{\eta}[f(x)]^{-1}dx. \tag{7}\]
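A minimal numerical sketch of Eqs. (4), (6), and (7) is given below. The attempt frequency \(\nu=10^{13}\) s\({}^{-1}\) and the use of a dimensionless coverage \(\eta/\eta_{c}\) are assumptions made purely for illustration, and do not reproduce the absolute scales of Fig. 4.

```python
import numpy as np

kB = 8.617333262e-5  # Boltzmann constant, eV/K

def reaction_time(eta, E_b, T, nu=1.0e13, eta_c=1.0, npts=20001):
    """Time to oxidize the surface from zero coverage up to eta < eta_c,
    following Eqs. (4), (6), and (7).
    E_b: energy barrier (eV); T: temperature (K);
    nu: attempt frequency (1/s, assumed value);
    eta is measured in units of the critical concentration eta_c."""
    f0 = nu * np.exp(-E_b / (kB * T))      # Eq. (4): pristine-surface rate
    x = np.linspace(0.0, eta, npts)
    inv_f = np.exp(x / (eta_c - x)) / f0   # 1/f(eta), from Eq. (6)
    return 2.0 * np.trapz(inv_f, x)        # Eq. (7)

# Triplet barriers with SOC from Table 3: freestanding vs SiC-supported Bi
for label, E_b in (("Bi (FS) ", 0.40), ("Bi (SiC)", 2.38)):
    for T in (300, 390):
        print(f"{label} T = {T} K: {reaction_time(0.9, E_b, T):.2e} s")
```

Even in these arbitrary units the qualitative picture of Fig. 4 emerges: the \(\sim\)2 eV difference between the freestanding and SiC-supported barriers of Bi translates into a reaction time longer by many orders of magnitude, and raising the temperature sharply shortens it.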
In Fig. 4, we display the reaction time as a function of the relative O concentration \(\eta/\eta_{c}\), for different temperatures. Here we can see a fast oxidation process for the FS systems. Indeed, experimental results on multilayer pnictogen systems have shown a fast oxidation process on the exposed surface layer [13; 14; 15]. However, for supported SiC systems, the time scale increases by several orders of magnitude. For the As and Sb systems, despite the increased time scale, the oxidation process still hinders the experimental realization of arsenene/antimonene under atmospheric conditions. On the other hand, supported Bi presents an oxidation process slow enough to allow exposure of its surface to atmospheric conditions. Increasing the temperature can drastically reduce the oxidation reaction time. For instance, temperatures of about 390 K should be enough for the supported Bi system to lose its oxidation robustness.
In summary, we have shown that the triplet-singlet spin transition of the O\({}_{2}\) molecule rules the oxidation process in monolayer pnictogens. Through our Landau-Zener statistical analysis, we have shown that the FS systems present higher spin-transition probabilities than the SiC-supported ones. By exploring the minimum energy path through the O\({}_{2}\) dissociation, we have extracted the barrier transition energy, compatible with our spin-transition analysis. Besides, spin-orbit coupling plays an important role in the oxidation mechanisms and time scales. Particularly, it has a significant effect on SiC-supported systems. The energy barrier presents an inverse dependence on the pnictogen mass for the FS systems (lower for Bi), while a direct dependence is observed for the SiC-supported systems (higher for Bi). The computed barriers confirm the enhanced robustness against oxidation of the SiC-supported systems. Based on that, we have established that, according to the reaction time scale for complete oxidation (at 300 K), SiC-supported Bi is robust against atmospheric conditions. Our results open a path to explore the optimal 2D system/substrate interplay, aiming at their experimental manipulation for further applications at atmospheric conditions.
Figure 4: Reaction time for oxidation from zero up to the oxygen concentration of the X\({}_{2}\)O\({}_{3}\) phase (X=As, Sb, Bi), namely the critical concentration \(\eta_{c}\). Upper panels are for the FS configuration and lower panels for SiC-supported.
\begin{table}
\begin{tabular}{c c c c c c c} system & FS\({}_{soc}^{s=1}\) & FS\({}_{no-soc}^{s=1}\) & FS\({}_{soc}^{s=0}\) & SiC\({}_{soc}^{s=1}\) & SiC\({}_{no-soc}^{s=1}\) & SiC\({}_{soc}^{s=0}\) \\ \hline As & 0.90 & 0.91 & 0.00 & 1.17 & 1.17 & 0.36 \\ Sb & 0.59 & 0.59 & 0.00 & 1.20 & 0.91 & 0.47 \\ Bi & 0.40 & 0.40 & 0.00 & 2.38 & 2.11 & 1.16 \\ \end{tabular}
\end{table}
Table 3: Barrier energies \(E_{bar}\) (eV) for the O\({}_{2}\) reaction on pnictogen surfaces, with the initial state in an enforced singlet (\(s=0\)) or a triplet (\(s=1\)) configuration, without/with SOC (no-soc/soc).
###### Acknowledgements.
The authors acknowledge financial support from the Brazilian agencies FAPESP (grants 20/14067-3, 19/20857-0, and 17/02317-2), CNPq, INCT-Nanocarbono, and INCT-Materials Informatics, and the Laboratório Nacional de Computação Científica for computer time (projects ScafMat2 and emt2D).
|
2302.14793 | TREXIO: A File Format and Library for Quantum Chemistry | TREXIO is an open-source file format and library developed for the storage
and manipulation of data produced by quantum chemistry calculations. It is
designed with the goal of providing a reliable and efficient method of storing
and exchanging wave function parameters and matrix elements, making it an
important tool for researchers in the field of quantum chemistry. In this work,
we present an overview of the TREXIO file format and library. The library
consists of a front-end implemented in the C programming language and two
different back-ends: a text back-end and a binary back-end utilizing the HDF5
library which enables fast read and write operations. It is compatible with a
variety of platforms and has interfaces for the Fortran, Python, and OCaml
programming languages. In addition, a suite of tools has been developed to
facilitate the use of the TREXIO format and library, including converters for
popular quantum chemistry codes and utilities for validating and manipulating
data stored in TREXIO files. The simplicity, versatility, and ease of use of
TREXIO make it a valuable resource for researchers working with quantum
chemistry data. | Evgeny Posenitskiy, Vijay Gopal Chilkuri, Abdallah Ammar, Michał Hapka, Katarzyna Pernal, Ravindra Shinde, Edgar Josué Landinez Borda, Claudia Filippi, Kosuke Nakano, Otto Kohulák, Sandro Sorella, Pablo de Oliveira Castro, William Jalby, Pablo López Rıós, Ali Alavi, Anthony Scemama | 2023-02-28T17:44:54Z | http://arxiv.org/abs/2302.14793v2 | # TREXIO: A File Format and Library for Quantum Chemistry
###### Abstract
TREXIO is an open-source file format and library developed for the storage and manipulation of data produced by quantum chemistry calculations. It is designed with the goal of providing a reliable and efficient method of storing and exchanging wave function parameters and matrix elements, making it an important tool for researchers in the field of quantum chemistry. In this work, we present an overview of the TREXIO file format and library. The library consists of a front-end implemented in the C programming language and two different back-ends: a text back-end and a binary back-end utilizing the HDF5 library which enables fast read and write operations. It is compatible with a variety of platforms and has interfaces for the Fortran, Python, and OCaml programming languages. In addition, a suite of tools has been developed to facilitate the use of the TREXIO format and library, including converters for popular quantum chemistry codes and utilities for validating and manipulating data stored in TREXIO files. The simplicity, versatility, and ease of use of TREXIO make it a valuable resource for researchers working with quantum chemistry data.
quantum chemistry, data, interoperability
## I Introduction
Quantum chemistry relies on quantum mechanics to explain and predict the properties and behaviors of atoms, molecules, and materials. Although density functional theory (DFT) is one of the most widely used approaches thanks to its excellent ratio between computational cost and accuracy, another important tool is wave function theory (WFT), which describes the behavior of a quantum system in terms of its wave function. In order to perform WFT calculations, it is necessary to manipulate a large number of parameters, such as the expansion coefficients of the wave function and the matrix elements of the Hamiltonian operator. These parameters are typically numerous and difficult to handle, making it important to have a robust and efficient method for storing and accessing them.
Reproducible research remains a challenging topic, despite recent advances such as the introduction of the FAIR (findable, accessible, interoperable, reusable) data principles.[1] A key aspect of reproducibility is software interoperability, which refers to the ability of different programs to work together and exchange information, allowing different systems to communicate and exchange data in order to function as a cohesive whole. Interoperable software is prevalent nowadays and is a key component of the Unix philosophy.[2] In Unix shells, the most straightforward application of software interoperability is made through the use of the _pipe_ operator, where the output of a program is the input of another program. Similarly, shell scripts are created through the composition of smaller programs, exchanging data through files or pipes.
A major challenge of reproducible research is the uniformity of input/output (I/O) data within a particular research domain. The Unix philosophy recommends the use of text files because they are architecture-independent, readable in any language, and can be read as a stream, which is useful for making programs communicate over a network. However, storing data in a text format can result in large file sizes and conversion from ASCII to binary format can be computationally expen
sive for large data sets. To address this concern, domain-specific binary formats have been developed, such as the Joint Photographic Experts Group (JPEG) format[3] for digital images and the Moving Picture Experts Group (MPEG) format[4] for videos. These binary formats are utilized through standardized application programming interfaces (API).
In the field of wave function theory such a standard format and API is still lacking, and the purpose of the TREXIO library presented in this article is to fill this gap. This paper is organized as follows: firstly, a brief overview of the related work is presented. Secondly, the TREXIO format for the electronic wave functions is introduced together with some details concerning the internal representation and the associated API. Finally, some applications are demonstrated with a major focus on the interoperability achieved within the TREX Center of Excellence in Exascale Computing[5] due to the use of the TREXIO format.
## II Related Work
It is worth mentioning that there have been several efforts to unify the data formats within different subdomains of quantum chemistry. Probably one of the earliest works in this direction was the definition of the Crystallographic Information File (CIF) for establishing databases of crystal structures.[6] A few years later, the Chemical Markup Language (CML)[7; 8] was introduced. It is a format based on the Extensible Markup Language (XML) which is used to describe chemical data: molecules, chemical properties, reactions, spectra, materials, _etc_. With formats like CIF or CML, the burden of following a standard is placed on the code _writing_ the data. As a consequence, any tool that can read the format will be able to interpret the data without needing to understand the specific code that was used to produce it. This means that data can be easily shared and reused across different programs, and new tools can be developed to work with the format without needing to know anything about the code used to produce the data.
Recently, the cclib Python package[9], originally developed for performing computational chemistry calculations, has accumulated several internal converters capable of parsing and transforming the output of different programs into the internal representation called ccData. A similar approach has been taken by the developers of IOData[10], who have implemented converters and parsers for commonly used programs and their output files. However, there is currently no unified data representation or API that can be integrated into quantum chemistry codes to improve interoperability. Consequently, each time a given program modifies its input/output formatting, the IOData package must be adapted accordingly and promptly, which poses an additional challenge for maintainers. More recently, consolidated efforts have given rise to QCSchema[11], which provides an API-like access to data generated by existing quantum chemistry codes, thereby addressing the issue of dependence on the output file's formatting style. In this case, the responsibility for adhering to conventions falls on the code _reading_ the data, as it must be aware of the conventions chosen by the code that generated the data. With the Electronic Structure Common Data Format (ESCDF)[12] and its associated library, codes that write data can supply metadata to assist codes that read data in comprehending the organization of the data in the files. Hence, ESCDF aims to provide low-level tools and flexibility to facilitate the exchange of large datasets between codes with high-performance I/O. While this greatly reduces the difficulty of understanding conventions for developers reading the data, they may still need to apply different conversions depending on the code that generated the data. Consequently, implementing support for ESCDF may require more effort on the part of code developers compared to using a standardized format such as CML.
Another popular format for storing quantum chemistry data is the Gaussian[13] fchk format. While it is a proprietary format specific to the Gaussian software package, its compatibility with several other software programs has contributed to its extensive utilization. However, the format's proprietary and closed-source nature prevents external developers from improving the format, leaving enhancements and compatibility updates solely in the hands of Gaussian developers.
Recently, the mwfn[14] format was introduced with the primary goal of enhancing the existing solutions such as wfn,[13] wfx,[15] and Molden[16] formats, which were designed to store parameters of molecular orbitals and atomic basis sets in view of reconstructing the one-particle density matrix. Although mwfn is an improvement on these other formats, it does not allow the user to store enough information for a wave function coming from a configuration interaction (CI) or coupled cluster (CC) calculation.
For post-Hartree-Fock calculations, the FCIDUMP format[17] has become a _de facto_ standard because of its simplicity. It is a text-based format that only contains minimal information for building the second-quantized Hamiltonian, namely the one- and two-electron integrals in the basis of molecular orbitals (MO), the number of electrons and information about the spin state and orbital symmetries. The nuclear coordinates and basis set are not saved in FCIDUMP files. The text format makes its adoption extremely simple, but it has a very high impact on the performance since FCIDUMP files are usually large. Although very practical, the FCIDUMP format has important limitations beyond efficiency. Once a program has computed a post-Hartree-Fock wave function using an FCIDUMP file as an input, the parameters of the basis set and the molecular orbitals may have been lost unless they were stored in a separate file in another format. Although configuration interaction or coupled cluster calculations can be performed using FCIDUMP files, this format is too limited to be used for quantum Monte Carlo (QMC) calculations, which require _all_ the wave function parameters.
The Q5Cost[18; 19; 20] initiative was one of the first attempts aiming at standardizing the WFT data by introducing both a format and the API to interact with it. With Q5Cost, it was possible to store all the wave function parameters of CI expansions together with the basis set, molecular orbitals, and even electron repulsion integrals. The Q5Cost library relied on the Hierarchical Data Format version 5 (HDF5)[21] to provide efficient I/O and keep the data well organized in the file. Nevertheless, Q5Cost had some severe drawbacks. First, Q5Cost was written in Fortran, which made its use tedious in other programming languages such as C++ or Python. In addition, to be able to interpret a Q5Cost file, it was often necessary to know which code had generated it. Indeed, most WFT codes have different conventions in terms of normalization of the basis functions, ordering of the atomic orbitals, _etc_, and no conversion into a unique internal representation was imposed by the library. So the burden of understanding conventions was still on the shoulders of the readers of the files. Finally, Q5Cost had important technical limitations: the Q5Cost library was intended to be used as a compiled Fortran module (a so-called .mod file), which depended on the compiled Fortran modules provided by the HDF5 library. As the format of compiled Fortran modules is specific to the compiler vendor and even to the version of the compiler, the Q5Cost library could not simply be linked as an external library to any code. Using the Q5Cost library in a Fortran code required that the user's code be compiled with the same Fortran compiler as the one used to compile both the HDF5 Fortran modules and the Q5Cost library. This contamination of dependencies could significantly impact the performance of the user's code, and the only solution to this problem was to compile many different versions of the HDF5 Fortran interface and Q5Cost library with multiple compilers and compiler versions.
The TREXIO initiative, heavily influenced by the Q5Cost project, aims to propose a standard format and library for wave function calculations. This initiative seeks to leverage the strengths of the Q5Cost project and learn from its design flaws that hindered its widespread adoption. One of the key improvements we aim to achieve is to shift the effort of adopting a format and conventions to the side of the code writing the data. This way, the files will be easily readable without any prior knowledge by any code, similar to CML or JPEG.
## III The TREXIO format
The TREXIO format (version 2.3.0) is designed to store all the necessary information to represent a wave function, including: the number of up- and down-spin electrons, nuclear coordinates and charges, basis set and effective core potential (ECP) parameters, atomic and molecular orbital parameters, Slater determinants and CI coefficients, configuration state function (CSF) definitions, and metadata related to the description of excited states. It is also capable of storing data required for the computation of the wave function, such as one- and two-electron integrals, numerical integration grids used in DFT calculations, and one- and two-particle reduced density matrices.
One notable feature of TREXIO is that it is self-contained, meaning that all the parameters needed to recreate the wave function are explicitly stored within the file, eliminating the need for external databases. For example, instead of storing the name of a basis set (such as cc-pVDZ), the actual basis set parameters used in the calculation are stored. All data are stored in atomic units for simplicity.
The data in TREXIO are organized into _groups_, each containing multiple _attributes_ defined by their _type_ and _dimensions_. Each attribute within a group corresponds to a single scalar or array variable in a code. In what follows, the notation <group>.<attribute> will be used to identify an attribute within a group. For example, nucleus.charge refers to the charge attribute in the nucleus group. It is an array of type float with dimension nucleus.num, the attribute describing the number of nuclei. For simplicity, the singular form is always used for the names of groups and attributes.
### Data types
So that TREXIO can be used in any language, we use a limited number of data types. It is important to keep in mind that these types are abstract in the sense that they are defined independently of their implementation, and are not tied to any specific representation on a computer. The main data types are int for integers, float for floating-point values, and str for character strings. The real and imaginary parts of complex numbers are stored separately as floats. To minimize the risk of integer overflow and accuracy loss, numerical data types are stored using 64-bit representations by default. However, in specific cases where integers are bounded (such as orbital indices in four-index integrals), the smallest possible representation is used to reduce the file size. The API presented in the next section handles any necessary type conversions.
There are also two types derived from int: dim and index. dim is used for dimensioning variables, which are positive integers used to specify the dimensions of an array. In the previous example, nucleus.num is a dimensioning variable that specifies the dimensions of the nucleus.charge array. index is used for integers that correspond to array indices, because some languages (such as C or Python) use zero-based indexing, while others (such as Fortran) use one-based indexing by default. For convenience, values of the index type are shifted by one when TREXIO is used in one-based languages to be consistent with the semantics of the language.
Arrays can be stored in either dense or sparse formats. If the sparse format is selected, the data is stored in coordinate format. For example, the element A(i,j,k,l) is stored as a quadruplet of integers \((i,j,k,l)\) along with the corresponding value. Typically, one- and two-dimensional arrays are stored as dense arrays, while arrays with higher dimensions are stored in sparse format.
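As a plain NumPy illustration of this coordinate layout (independent of the library API, with illustrative variable names), the non-zero elements of a four-index array can be flattened into an index array and a value array:

```python
import numpy as np

# Coordinate (COO) layout: each non-zero element A(i,j,k,l) is kept as a
# quadruplet of indices together with its value.
A = np.zeros((4, 4, 4, 4))
A[0, 1, 0, 1] = 0.625
A[2, 3, 2, 3] = 0.121

nz = np.argwhere(A != 0.0)        # one row (i, j, k, l) per non-zero element
indices = nz.astype(np.int32)     # smallest sufficient integer type
values = A[tuple(nz.T)]           # the corresponding non-zero values
```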
### Stored data
In this section, we provide a comprehensive overview of the data that can be stored in TREXIO files. A complete list of the groups and attributes is available as supplementary information or in the documentation of the library. In both resources, multi-dimensional arrays are expressed in column-major order, meaning that elements of the same column are stored contiguously.
#### ii.2.1 Metadata
In order to facilitate the archiving of TREXIO files in open-data repositories, users have the option to store metadata in the metadata group. This includes the names of the codes that were used to create the file, a list of authors, and a textual description. This allows for more information about the file to be easily accessible and transparent.
#### ii.2.2 System information
The chemical system consists of nuclei and electrons, where the nuclei are considered as fixed point charges with Cartesian coordinates. The wave function is stored in the spin-free formalism,[22] and therefore, it is necessary to explicitly store the number of spin-up (\(N_{\uparrow}\)) and spin-down (\(N_{\downarrow}\)) electrons. These numbers correspond to the normalization of the spin-up and spin-down single-particle reduced density matrices.
Certain calculations, such as DFT calculations, require the use of a numerical integration grid. The grid group provides information for storing grids, inspired by the data required by the numgrid software.[23; 24]
To keep things simple, TREXIO can only store a single wave function per file. When working with excited states, it is often the case that multiple states only differ in their CI coefficients, while other parameters (such as geometry, basis set, molecular orbitals, etc.) are the same. To facilitate the storage of multiple states, TREXIO provides the option to store all the data needed to describe one state in a main file, along with the names of additional TREXIO files that contain only the state-specific parameters.
#### ii.2.3 Basis set
In the basis group, the atomic basis set is defined as a list of shells. Each shell \(i\) is centered at a center \(A_{i}\), has a specified angular momentum \(l_{i}\), and a radial function \(R_{i}\). The radial function is a linear combination of \(N_{\text{prim}\,i}\)_primitive_ functions, which can be Slater type orbitals (STO, \(p=1\)) or Gaussian type orbitals (GTO, \(p=2\)). These primitive functions are parameterized by exponents \(\gamma_{ki}\) and coefficients \(a_{ki}\):
\[R_{i}(\mathbf{r})=\mathcal{N}_{i}|\mathbf{r}-\mathbf{R}_{A_{i}}|^{n_{i}}\sum_ {k=1}^{N_{\text{prim}\,i}}a_{ki}\,f_{ki}(\gamma_{ki},p)\,e^{-\gamma_{ki}| \mathbf{r}-\mathbf{R}_{A_{i}}|^{p}}. \tag{1}\]
Different codes have different normalization practices, so it is necessary to store normalization factors in the TREXIO file to ensure that it is self-contained and does not rely on the client program having the ability to compute overlap integrals. Some codes assume that the contraction coefficients are applied to _normalized_ linear combinations of primitives, so a normalization constant \(f_{ki}\) for each primitive must also be stored. Some codes assume that the functions \(R_{i}\) are normalized, requiring the computation of an additional normalization factor, \(\mathcal{N}_{i}\).
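To make Eq. (1) concrete, the sketch below evaluates the radial function of one shell from the quantities stored in the basis group; the function and variable names are illustrative, not library identifiers:

```python
import numpy as np

def radial_function(r, R_A, n, coefs, f_norms, gammas, p=2, N_shell=1.0):
    """R_i(r) of Eq. (1) for one shell centered at R_A.

    coefs, f_norms and gammas hold the per-primitive a_ki, f_ki and
    gamma_ki; p=2 selects Gaussian primitives, p=1 Slater primitives.
    """
    d = np.linalg.norm(np.asarray(r) - np.asarray(R_A))
    s = sum(a * f * np.exp(-g * d**p)
            for a, f, g in zip(coefs, f_norms, gammas))
    return N_shell * d**n * s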
#### ii.2.4 Atomic orbitals
The ao group in TREXIO contains information related to the expansion of the shells in the basis set into atomic orbitals (AOs). For example, a \(p\)-shell is expanded into three AOs: \(p_{x}\), \(p_{y}\), and \(p_{z}\). AOs are defined as follows:
\[\chi_{i}(\mathbf{r})=\mathcal{N}_{i}^{\prime}\,P_{\eta(i)}(\mathbf{r})\,R_{s(i)}(\mathbf{r}) \tag{2}\]
where \(i\) is the atomic orbital index, \(P\) refers to either polynomials or spherical harmonics, and \(s(i)\) specifies the shell on which the AO is expanded.
\(\eta(i)\) denotes the chosen angular function. The AOs can be expressed using real spherical harmonics or polynomials in Cartesian coordinates. In the case of real spherical harmonics, the AOs are ordered as \(0,+1,-1,+2,-2,\ldots,+m,-m\). In the case of polynomials, the canonical (or alphabetical) ordering is used,
\[p: p_{x},p_{y},p_{z}\] \[d: d_{xx},d_{xy},d_{xz},d_{yy},d_{yz},d_{zz}\] \[f: f_{xxx},f_{xxy},f_{xxz},f_{xyy},f_{xyz},f_{xzz},f_{yyy},f_{yyz},f_{yzz},f_{zzz}\] \[\vdots\]
Note that for \(p\) orbitals in real spherical harmonics, the ordering is \(0,+1,-1\) which corresponds to \(p_{z},p_{x},p_{y}\).
\(\mathcal{N}_{i}^{\prime}\) is a normalization factor that allows for different normalization coefficients within a single shell, as in the GAMESS[25] convention where each individual function is unit-normalized. Using the GAMESS convention, the normalization factor of the shell \(\mathcal{N}_{d}\) (Eq. 1) in the basis group is appropriate for instance for the \(d_{z^{2}}\) function (i.e. \(\mathcal{N}_{d}\equiv\mathcal{N}_{z^{2}}\)) but not for the \(d_{xy}\) AO, so the correction factor \(\mathcal{N}_{i}^{\prime}\) for \(d_{xy}\) in the ao group is the ratio \(\frac{\mathcal{N}_{xy}}{\mathcal{N}_{z^{2}}}\).
#### ii.2.5 Effective core potentials
An effective core potential (ECP) \(V_{A}^{\rm{ECP}}\) can be used to replace the core electrons of atom A. It can be expressed as: [26]
\[V_{A}^{\rm{ECP}}=V_{A\ell_{\rm{max}}+1}+\sum_{\ell=0}^{\ell_{\rm{max}}}\delta V _{A\ell}\sum_{m=-\ell}^{\ell}|Y_{\ell m}\rangle\langle Y_{\ell m}| \tag{3}\]
The first term in this equation is attributed to the local channel, while the remaining terms correspond to non-local channel projections. \(\ell_{\rm{max}}\) refers to the maximum angular momentum in the non-local component of the ECP. The functions \(\delta V_{A\ell}\) and \(V_{A\ell_{\rm{max}}+1}\) are parameterized as:
\[\delta V_{A\ell}(\mathbf{r}) =\sum_{q=1}^{N_{\rm{eff}}}\beta_{Aq\ell}\left|\mathbf{r}-\mathbf{ R}_{A}\right|^{n_{Aq\ell}}e^{-\alpha_{Aq\ell}\left|\mathbf{r}-\mathbf{R}_{A} \right|^{2}}\] \[V_{A\ell_{\rm{max}}+1}(\mathbf{r}) =-\frac{Z_{\rm{eff}}}{\left|\mathbf{r}-\mathbf{R}_{A}\right|}+ \delta V_{A\ell_{\rm{max}}+1}(\mathbf{r}) \tag{4}\]
where \(Z_{\rm{eff}}\) is the effective nuclear charge of the center. All the parameters can be stored in the ecp group.
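The parameterization of Eq. (4) translates directly into code; the following sketch uses illustrative array names for the coefficients, powers, and exponents stored in the ecp group:

```python
import numpy as np

def delta_V(r, R_A, beta, n_pow, alpha):
    """delta V_{A,l}(r) of Eq. (4) for one center A and one channel l."""
    d = np.linalg.norm(np.asarray(r) - np.asarray(R_A))
    return sum(b * d**n * np.exp(-a * d**2)
               for b, n, a in zip(beta, n_pow, alpha))

def V_local(r, R_A, Z_eff, beta, n_pow, alpha):
    """Local channel V_{A, lmax+1}(r) of Eq. (4)."""
    d = np.linalg.norm(np.asarray(r) - np.asarray(R_A))
    return -Z_eff / d + delta_V(r, R_A, beta, n_pow, alpha)
```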
#### ii.2.6 Molecular orbitals
The \(\mathsf{mo}\) group is devoted to the storage of the molecular orbitals (MOs). MO coefficients are stored in a two-dimensional array, with additional information such as symmetries or occupation numbers stored in separate arrays. It is also possible to store the spin to enable the description of unrestricted Hartree-Fock or unrestricted Kohn-Sham determinants.
#### ii.2.7 Hamiltonian matrix elements
One-electron integrals can be stored in the AO and MO bases in the groups ao_1e_int and mo_1e_int, respectively. Similarly, two-electron integrals can be stored in the AO and MO bases in the groups ao_2e_int and mo_2e_int, respectively. One-electron integrals are stored as two-dimensional arrays, while two-electron integrals are stored in a sparse format, with a quadruplet of indices and the corresponding value stored for each non-zero integral. The order of the indices follows Dirac's bra-ket notation.
It is also possible to store a low-rank representation of the two-electron integrals, obtained via a Cholesky decomposition.
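As an example, writing a buffer of MO two-electron integrals from Python could look as follows. This is a sketch: the eri attribute name and the buffered write signature (a file offset plus the number of elements in the buffer) are assumptions based on the naming pattern described in Sec. IV, and the dimensioning attributes (e.g. the number of MOs) must have been written beforehand:

```python
import trexio

f = trexio.File("h2o.trexio", mode='w', back_end=trexio.TREXIO_HDF5)
# Two quadruplets (i, j, k, l) in Dirac ordering, flattened, plus values.
indices = [0, 0, 0, 0,
           0, 1, 0, 1]
values = [0.625, 0.121]
# Assumed signature: (file, file offset, buffer size, indices, values).
trexio.write_mo_2e_int_eri(f, 0, len(values), indices, values)
f.close()
```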
#### ii.2.8 CI expansion
The wave function \(\Psi\) can be represented as a combination of Slater determinants \(D_{I}\):
\[\left|\Psi\right\rangle=\sum_{I}C_{I}\left|D_{I}\right\rangle \tag{5}\]
In the determinant group of a TREXIO file, the definition of these Slater determinants, as well as the configuration interaction (CI) expansion coefficients, can be stored. Each Slater determinant is represented as a Waller-Hartree double determinant,[27] i.e. the product of a determinant with \(\uparrow\)-spin electrons and a determinant with \(\downarrow\)-spin electrons. To enable the storage of arbitrary CI expansions and to reduce the storage size, the determinants are stored as pairs of binary strings: one for the \(\uparrow\) spin sector and one for the \(\downarrow\) spin. Each binary string has a length equal to the number of MOs, with the \(i\)-th bit set to one if and only if the \(i\)-th MO is included in the determinant. As the creation of these binary strings may be tedious, we provide helper functions to transform lists of orbital indices into binary strings. If the orbital indices are not in increasing order, a reordering is made and the user is informed if a change of sign is needed in the corresponding CI coefficient.
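The encoding itself is easy to illustrate: the sketch below packs zero-based occupied-orbital indices into 64-bit words, which is conceptually what the helper functions do (this is an illustration, not the library's own code):

```python
def orbitals_to_bitfield(occupied, n_mo):
    """Pack occupied MO indices (zero-based) into 64-bit integer words."""
    n_int = (n_mo + 63) // 64          # number of 64-bit words needed
    words = [0] * n_int
    for i in sorted(occupied):          # indices are reordered if needed
        words[i // 64] |= 1 << (i % 64)
    return words

# One Slater determinant is a pair of strings, one per spin sector.
up = orbitals_to_bitfield([0, 1, 2, 4], n_mo=150)
dn = orbitals_to_bitfield([0, 1, 3], n_mo=150)
```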
Alternatively, the wave function may be expanded in a basis of configuration state functions (CSFs),
\[\left|\Psi\right\rangle=\sum_{I}\tilde{C}_{I}\left|\psi_{I}\right\rangle. \tag{6}\]
where each CSF \(\psi_{I}\) is a linear combination of Slater determinants. The csf group allows for the storage of the CSF expansion coefficients, as well as the matrix \(\langle D_{I}|\psi_{J}\rangle\) in a sparse format. This enables the projection of the CSFs onto the basis of Slater determinants.
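Given the sparse \(\langle D_{I}|\psi_{J}\rangle\) matrix, this projection reduces to a single sparse matrix-vector product, as the following toy sketch illustrates:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Toy <D_I|psi_J> matrix with 3 determinants and 2 CSFs, in COO format.
rows = [0, 1, 1, 2]
cols = [0, 0, 1, 1]
vals = [1.0, 0.5, 0.5, -0.7]
overlap = coo_matrix((vals, (rows, cols)), shape=(3, 2))

c_csf = np.array([0.9, 0.1])   # CSF expansion coefficients of Eq. (6)
c_det = overlap @ c_csf        # coefficients in the determinant basis
```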
#### ii.2.9 Amplitudes
The wave function may also be expressed in terms of the action of the cluster operator \(\hat{T}\):
\[\hat{T}=\hat{T}_{1}+\hat{T}_{2}+\hat{T}_{3}+\ldots \tag{7}\]
on a reference wave function \(\Psi\), where \(\hat{T}_{1}\) is the single excitation operator,
\[\hat{T}_{1}=\sum_{ia}t_{i}^{a}\,\hat{a}_{a}^{\dagger}\hat{a}_{i}, \tag{8}\]
\(\hat{T}_{2}\) is the double excitation operator,
\[\hat{T}_{2}=\frac{1}{4}\sum_{ijab}t_{ij}^{ab}\,\hat{a}_{a}^{\dagger}\hat{a}_{b} ^{\dagger}\hat{a}_{j}\hat{a}_{i}, \tag{9}\]
_etc_. Indices \(i\), \(j\), \(a\) and \(b\) denote molecular orbital indices.
Wave functions obtained with perturbation theory or configuration interaction are of the form:
\[|\Phi\rangle=\hat{T}|\Psi\rangle \tag{10}\]
and coupled-cluster wave functions are of the form:
\[|\Phi\rangle=e^{\hat{T}}|\Psi\rangle \tag{11}\]
The reference wave function \(\Psi\) is stored using the determinant and/or csf groups, and the amplitudes are stored using the amplitude group. The attributes with the exp suffix correspond to exponentialized operators.
### Reduced density matrices
The reduced density matrices, stored in the rdm group, are defined in the basis of molecular orbitals.
The \(\uparrow\)-spin and \(\downarrow\)-spin components of the one-body density matrix are given by
\[\gamma^{\uparrow}_{ij} =\langle\Psi|\hat{a}^{\dagger}_{j\alpha}\,\hat{a}_{i\alpha}|\Psi\rangle \tag{12}\] \[\gamma^{\downarrow}_{ij} =\langle\Psi|\hat{a}^{\dagger}_{j\beta}\,\hat{a}_{i\beta}|\Psi\rangle \tag{13}\]
and the spin-summed one-body density matrix is
\[\gamma_{ij}=\gamma^{\uparrow}_{ij}+\gamma^{\downarrow}_{ij} \tag{14}\]
The \(\uparrow\uparrow\), \(\downarrow\downarrow\), and \(\uparrow\downarrow\) components of the two-body density matrix are given by
\[\Gamma^{\uparrow\uparrow}_{ijkl} =\langle\Psi|\hat{a}^{\dagger}_{k\alpha}\,\hat{a}^{\dagger}_{l \alpha}\hat{a}_{j\alpha}\,\hat{a}_{i\alpha}|\Psi\rangle \tag{15}\] \[\Gamma^{\downarrow\downarrow}_{ijkl} =\langle\Psi|\hat{a}^{\dagger}_{k\beta}\,\hat{a}^{\dagger}_{l \beta}\hat{a}_{j\beta}\,\hat{a}_{i\beta}|\Psi\rangle\] (16) \[\Gamma^{\uparrow\downarrow}_{ijkl} =\langle\Psi|\hat{a}^{\dagger}_{k\alpha}\,\hat{a}^{\dagger}_{l \beta}\hat{a}_{j\beta}\,\hat{a}_{i\alpha}|\Psi\rangle+\] \[\langle\Psi|\hat{a}^{\dagger}_{l\alpha}\,\hat{a}^{\dagger}_{k \beta}\hat{a}_{i\beta}\,\hat{a}_{j\alpha}|\Psi\rangle, \tag{17}\]
and the spin-summed two-body density matrix is
\[\Gamma_{ijkl}=\Gamma^{\uparrow\uparrow}_{ijkl}+\Gamma^{\downarrow\downarrow} _{ijkl}+\Gamma^{\uparrow\downarrow}_{ijkl}. \tag{18}\]
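A toy check of these definitions, using the fact (noted in the system-information section) that the spin-resolved one-body matrices are normalized to the electron counts:

```python
import numpy as np

# Toy 3-orbital example with two up-spin and one down-spin electron.
gamma_up = np.diag([1.0, 1.0, 0.0])
gamma_dn = np.diag([1.0, 0.0, 0.0])

gamma = gamma_up + gamma_dn                 # spin-summed one-body DM, Eq. (14)
assert np.isclose(np.trace(gamma_up), 2.0)  # tr(gamma_up) = N_up
assert np.isclose(np.trace(gamma_dn), 1.0)  # tr(gamma_dn) = N_down
```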
### Correlation factors
Explicit correlation factors can be introduced in the wave function, such as in QMC, \(F_{12}\), or transcorrelated methods.
In the current version of the library, it is possible to store two different types of Jastrow factors. The Jastrow factor is an \(N\)-electron function which multiplies the reference wave function expansion: \(\Psi=\Phi\times\exp(J)\), where
\[J(\mathbf{r},\mathbf{R})=J_{\mathrm{eN}}(\mathbf{r},\mathbf{R})+J_{\mathrm{ ee}}(\mathbf{r})+J_{\mathrm{eeN}}(\mathbf{r},\mathbf{R}). \tag{19}\]
In the following, we use the notations \(r_{ij}=|\mathbf{r}_{i}-\mathbf{r}_{j}|\) and \(R_{i\alpha}=|\mathbf{r}_{i}-\mathbf{R}_{\alpha}|\), where indices \(i\) and \(j\) correspond to electrons and \(\alpha\) to nuclei.
The first form of Jastrow factor is the one used in the CHAMP[28] program.[29] \(J_{\mathrm{eN}}\) contains electron-nucleus terms:
\[J_{\mathrm{eN}}(\mathbf{r},\mathbf{R})=\sum_{i=1}^{N_{\mathrm{elec}}}\sum_{\alpha=1}^{N_{\mathrm{nucl}}}\left[\frac{a_{1,\alpha}\,f_{\alpha}(R_{i\alpha})}{1+a_{2,\alpha}\,f_{\alpha}(R_{i\alpha})}+\sum_{p=2}^{N_{\mathrm{ord}}^{a}}a_{p+1,\alpha}\left[f_{\alpha}(R_{i\alpha})\right]^{p}-J_{\mathrm{eN}}^{\infty}\right] \tag{20}\]
\(J_{\mathrm{ee}}\) contains electron-electron terms:
\[J_{\mathrm{ee}}(\mathbf{r})=\sum_{i=1}^{N_{\mathrm{elec}}}\sum_{j=1}^{i-1}\left[\frac{\frac{1}{2}\left(1+\delta_{ij}^{\uparrow\downarrow}\right)b_{1}\,f_{\mathrm{ee}}(r_{ij})}{1+b_{2}\,f_{\mathrm{ee}}(r_{ij})}+\sum_{p=2}^{N_{\mathrm{ord}}^{b}}b_{p+1}\left[f_{\mathrm{ee}}(r_{ij})\right]^{p}-J_{\mathrm{ee},ij}^{\infty}\right] \tag{21}\]
where \(\delta^{\uparrow\downarrow}_{ij}\) is zero when the electrons \(i\) and \(j\) have the same spin, and one otherwise. \(J_{\mathrm{eeN}}\) contains electron-electron-nucleus terms:
\[J_{\mathrm{eeN}}(\mathbf{r},\mathbf{R})=\sum_{\alpha=1}^{N_{\mathrm{nucl}}}\sum_{i=1}^{N_{\mathrm{elec}}}\sum_{j=1}^{i-1}\sum_{p=2}^{N_{\mathrm{ord}}}\sum_{k=0}^{p-1}\sum_{l=0}^{p-k-2\delta_{k,0}}c_{lkp\alpha}\left[g_{\mathrm{ee}}(r_{ij})\right]^{k}\left[\left[g_{\alpha}(R_{i\alpha})\right]^{l}+\left[g_{\alpha}(R_{j\alpha})\right]^{l}\right]\left[g_{\alpha}(R_{i\alpha})\,g_{\alpha}(R_{j\alpha})\right]^{(p-k-l)/2}, \tag{22}\]

\(c_{lkp\alpha}\) being non-zero only when \(p-k-l\) is even. The terms \(J_{\mathrm{ee},ij}^{\infty}\) and \(J_{\mathrm{eN}}^{\infty}\) are shifts to ensure that \(J_{\mathrm{ee}}\) and \(J_{\mathrm{eN}}\) have an asymptotic value of zero. \(f\) and \(g\) are scaling functions defined as
\[f_{\alpha}(r)=\frac{1-e^{-\kappa_{\alpha}\,r}}{\kappa_{\alpha}}\text{ and }g_{\alpha}(r)=e^{-\kappa_{\alpha}\,r}, \tag{24}\]
and the possible presence of an index \(\alpha\) indicates that the scaling coefficient \(\kappa\) depends on the atom \(\alpha\).
The second form of Jastrow factor is the \(\mu\) Jastrow factor [30]
\[J_{\mathrm{ee}}(\mathbf{r})=\sum_{i=1}^{N_{\mathrm{elec}}}\sum_{j=1}^{i-1}\left[r_{ij}\left(1-\mathrm{erf}(\mu\,r_{ij})\right)-\frac{1}{\mu\sqrt{\pi}}\,e^{-(\mu\,r_{ij})^{2}}\right]. \tag{25}\]
It is a single parameter correlation factor that has been recently introduced in the context of transcorrelated methods. It imposes the electron-electron cusp and is built such that the leading order in \(1/r_{12}\) of the effective two-electron potential reproduces the long-range interaction of the range-separated density functional theory. An envelope function has then been introduced to cancel out the Jastrow effects between two electrons when at least one electron is close to a nucleus, and standard one-body terms were also introduced to avoid the expansion of the one-body density.
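A direct evaluation of Eq. (25) for a set of electron positions is sketched below; the envelope and one-body terms mentioned above are omitted:

```python
import numpy as np
from scipy.special import erf

def mu_jastrow(r, mu):
    """J_ee of Eq. (25) for electron positions r of shape (N_elec, 3)."""
    J = 0.0
    for i in range(len(r)):
        for j in range(i):
            rij = np.linalg.norm(r[i] - r[j])
            J += rij * (1.0 - erf(mu * rij)) \
                 - np.exp(-(mu * rij) ** 2) / (mu * np.sqrt(np.pi))
    return J

J = mu_jastrow(np.random.rand(4, 3), mu=0.87)
```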
As there exist multiple forms of Jastrow factors in the literature, contributions to extend this section are welcome.
### QMC data
We also provide in the qmc group some information specific to QMC calculations. In QMC methods, the wave function is evaluated at points in the \(3N\)-dimensional space, where \(N\) is the number of electrons. It might be convenient to store the coordinates of points together with the wave function, and to store the value of the wave function and the local energy \(\hat{H}\Psi(\mathbf{r})/\Psi(\mathbf{r})\) evaluated at these points, for example, to check that different codes give the same values.
## IV The TREXIO library
The TREXIO library is written in the C language, and is licensed under the open-source 3-clause BSD license to allow for use in all types of quantum chemistry software, whether commercial or not.
The design of the library is divided into two main sections: the front-end and the back-end. The front-end serves as the interface between users and the library, while the back-end acts as the interface between the library and the physical storage.
### The front-end
By using the TREXIO library, users can store and extract data in a consistent and organized manner. The library provides a user-friendly API, including functions for reading, writing, and checking for the existence of data. The functions follow the pattern trexio_[has|read|write]_<group>_<attribute>, where the group and attribute specify the particular data being accessed. It also includes an error handling mechanism, in which each function call returns an exit code of type trexio_exit_code, explaining the type of error. This can be used to catch exceptions and improve debugging in the upstream user application. Figures 1 and 2 show examples of usage of the TREXIO library in C and Python, respectively.
To ensure the consistency of the data, the attributes can only be written if all the other attributes on which they explicitly depend have been written. For example, as the nucleus.coord array is dimensioned by the number of nuclei nucleus.num, the nucleus.coord attribute can only be written after nucleus.num. However, the library is not aware of non-explicit dependencies, such as the relation between the electron repulsion integrals (ERIs) and MO coefficients. Complete control of the consistency of the data is therefore impossible, so the attributes were chosen to be _immutable_ by default. By allowing data to be written only once, the risk of modifying data in a way that creates inconsistencies is reduced. For example, if the ERIs have already been written, it would be inconsistent to later modify the MO coefficients. To allow for flexibility, the library also provides an _unsafe_ mode, in which data can be overwritten. However, this mode carries the risk of producing inconsistent files, and the metadata group's unsafe attribute is set to 1 to indicate that the file has potentially been modified in a dangerous way. This attribute can be manually reset to 0 if the user is confident that the modifications made are safe.

Figure 1: C code writing the nuclear coordinates of a water molecule in a TREXIO file, with error handling.

Figure 2: Python code writing the nuclear coordinates of a water molecule in a TREXIO file.
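Since the figures themselves are not reproduced here, the following minimal sketch conveys the spirit of Figure 2. The write_nucleus_* names follow the trexio_[has|read|write]_<group>_<attribute> pattern described above, but the geometry values are illustrative and error handling (Python exceptions) is omitted:

```python
import trexio

f = trexio.File("h2o.trexio", mode='w', back_end=trexio.TREXIO_HDF5)
# Dimensioning attributes must be written before the arrays they dimension.
trexio.write_nucleus_num(f, 3)
trexio.write_nucleus_charge(f, [8.0, 1.0, 1.0])
trexio.write_nucleus_coord(f, [[0.0, 0.0, 0.0],      # O
                               [0.0, 0.0, 1.81],     # H1
                               [1.75, 0.0, -0.45]])  # H2 (atomic units)
f.close()
```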
### The back-end
At present, TREXIO supports two back-ends: one relying only on the C standard library to produce plain text files (the so-called text back-end), and one relying on the HDF5 library.
With the text back-end, the TREXIO "file" is a directory containing multiple text files, one for each group. This back-end is intended for development environments, as it gives the user access to standard tools such as diff and grep. In addition, text files are better adapted than binary files to version control systems such as Git, so this format can also be used for storing reference data for unit tests.
HDF5 is a binary file format and library for storing and managing large amounts of data in a hierarchical structure. It allows users to manipulate data in a way similar to how files and directories are manipulated within the file system. The HDF5 library provides optimal performance through its memory mapping mechanism and supports advanced features such as serial and parallel I/O, chunking, and compression filters. However, HDF5 files are in binary format, which requires additional tools such as h5dump to view them in a human-readable format. HDF5 is widely used in scientific and engineering applications, and is known for its high performance and ability to handle large data sets efficiently.
The TREXIO HDF5 back-end is the recommended choice for production environments, as it provides high I/O performance. Furthermore, all data is stored in a single file, making it especially suitable for parallel file systems like Lustre. These file systems are optimized for large, sequential I/O operations and are not well-suited for small, random I/O operations. When multiple small files are used, the file system may become overwhelmed with metadata operations like creating, deleting, or modifying files, which can adversely affect performance.
In a benchmarking program designed to compare the two back-ends of the library, the HDF5 back-end was found to be significantly faster than the text back-end. The program wrote a wave function made up of 100 million Slater determinants and measured the time taken to write the Slater determinants and CI coefficients. The HDF5 back-end achieved a speed of \(10.4\times 10^{6}\) Slater determinants per second and a data transfer rate of 406 MB/s, while the text back-end had a speed of \(1.1\times 10^{6}\) determinants per second and a transfer rate of 69 MB/s. These results were obtained on a DELL 960 GB mixed-use solid-state drive (SSD). The HDF5 back-end was able to achieve a performance level close to the peak performance of the SSD, while the text back-end's performance was limited by the speed of the CPU for performing binary to ASCII conversions.
In addition to the HDF5 and text back-ends, it is also possible to introduce new back-ends to the library. For example, a back-end could be created to support object storage systems, such as those used in cloud-based applications [31] or for archiving in open data repositories. To use a new back-end, only a minor modification is required in the code using TREXIO: the correct back-end argument needs to be passed to the trexio_open function (see Figures 1 and 2).
### Supported languages
One of the main benefits of using C as the interface for a library is that it is easy to use from other programming languages. Many programming languages, such as Python or Julia, provide built-in support for calling C functions, which means that it is relatively straightforward to write a wrapper that allows a library written in C to be called from another language. In general, libraries with a C interface are the easiest to use from other programming languages, because C is widely supported and has a simple, stable application binary interface (ABI). Other languages, such as Fortran and C++, may have more complex ABIs and may require more work to interface with them.
TREXIO has been employed in codes developed in various programming languages, including C, C++, Fortran, Python, OCaml, and Julia. While Julia is designed to enable the use of C functions without the need for additional manual interfacing, the TREXIO C header file was automatically integrated into Julia programs using the CBindings.jl package.[32] In contrast, specific bindings have been provided for Fortran, Python, and OCaml to simplify the user experience.
In particular, the binding for Fortran is not distributed as multiple compiled Fortran module files (.mod), but instead as a single Fortran source file (.F90). The distribution of the source file instead of the compiled module has multiple benefits. It ensures that the TREXIO module is always compiled with the same compiler as the client code, avoiding the compatibility problem of.mod files between different compiler versions and vendors. The single-file model requires very few changes in the build system of the user's codes, and it facilitates the search for the interface of a particular function. In addition, advanced text editors can parse the TREXIO interface to propose interactive auto-completion of the TREXIO function names to the developers.
Finally, the Python module, partly generated with SWIG [33] and fully compatible with NumPy,[34] allows Python users to interact with the library in a more intuitive and user-friendly way. Using the Python interface is likely the easiest way to begin using TREXIO and understanding its features. In order to help users get started with TREXIO and understand its functionality, tutorials in Jupyter notebooks are available on GitHub ([https://github.com/TREX-CoE/trexio-tutorials](https://github.com/TREX-CoE/trexio-tutorials)), and can be executed via the Binder platform.
### Source code generation and documentation
Source code generation is a valuable technique that can significantly improve the efficiency and consistency of software development. By using templates to generate code automatically, developers can avoid manual coding and reduce the risk of errors or inconsistencies. This approach is particularly useful when a large number of functions follow similar patterns, as in the case of the TREXIO library, where functions are named according to the pattern trexio_[has|read|write]_<group>_<attribute>. By generating these functions from the format specification using templates, the developers can ensure that the resulting code follows a consistent structure and is free from errors or inconsistencies.
The description of the format is written in a text file in the Org format.[35] Org is a structured plain text format, containing information expressed in a lightweight markup language similar to the popular Markdown language.[36] While Org was introduced as a mode of the GNU Emacs text editor, its basic functionalities have been implemented in most text editors such as Vim, Atom or VS Code.
There are multiple benefits in using the Org format. The first benefit is that the Org syntax is easy to learn and allows for the insertion of equations in LaTeX syntax. Additionally, Org files can be easily converted to HyperText Markup Language (HTML) or Portable Document Format (PDF) for generating documentation. The second benefit is that GNU Emacs is a programmable text editor and code blocks in Org files can be executed interactively, similar to Jupyter notebooks. These code blocks can also manipulate data defined in tables and this feature is used to automatically transform tables describing groups and attributes in the documentation into a JavaScript Object Notation (JSON) file.[37; 38] This JSON file is then used by a Python script to generate the needed functions in C language, as well as header files and some files required for the Fortran, Python, and OCaml interfaces.
With this approach, contributions to the development of the TREXIO library can be made simply by adding new tables to the Org file, which can be submitted as _pull requests_ on the project's GitHub repository ([https://github.com/trex-coe/trexio](https://github.com/trex-coe/trexio)). Overall, this process allows for a more efficient and consistent development process and enables contributions from a wider range of individuals, regardless of their programming skills.
### Availability and reliability
The TREXIO library is designed to be portable and easy to install on a wide range of systems. It follows the C99 standard to ensure compatibility with older systems, and can be configured with either the GNU Autotools or the CMake build systems. The only external dependency is the HDF5 library, which is widely available on HPC platforms and as packages on major Linux distributions. Note that it is possible to disable the HDF5 back-end at configuration time, allowing TREXIO to operate only with the text back-end and have zero external dependencies. This can be useful for users who may not be able to install HDF5 on certain systems.
TREXIO is distributed as a tarball containing the source code, generated code, documentation, and Fortran interface. It is also available as a binary .deb package for Debian-based Linux distributions and as packages for Guix[39], Spack[40] and Conda.[41] The Python module can be found in the PyPI repository, the OCaml binding is available in the official OPAM repository, and the .deb packages are already available in Ubuntu 23.04.
To ensure the reliability and quality of the TREXIO library, we have adopted standard continuous integration and deployment practices. For example, we use unit tests that are executed automatically using GitHub actions whenever modifications are made to the codebase. These tests cover a wide range of functionalities and help to identify any potential issues or bugs in the code. Additionally, the TREXIO library is regularly used by the authors of the present paper, and as such, it is continuously tested and validated in the context of ongoing research activities.
TREXIO was built, tested and installed successfully on 20 different architectures supported by the Debian build farm. Furthermore, we ensure that the quality of our code meets the requirements of the CERT coding standards,[42] and we use the cppcheck[43] tool to validate the quality of our code. These measures demonstrate our commitment to ensuring that the TREXIO library is a reliable and trustworthy tool.
### Open-Source Governance and Sustainability Strategies
Our approach to the development and governance of the TREXIO library follows the standard design of open-source projects, which typically involve a collaborative effort from a community of contributors. The TREX European Center of Excellence initiated the project and proposed the first functional version of the software. However, we consider this to be just the starting point for a larger community effort.
As an open-source project, we encourage contributions from anyone interested in the development of the library. This includes not only contributions to the codebase but also contributions to the documentation, testing, and other aspects of the project. We believe that this collaborative approach is the key to the success of any open-source project.
Regarding governance, we have a small group of maintainers who oversee the development of the project, review and merge contributions, and ensure the quality of the code. However, we strive to make the development process as transparent and open as possible, and
we encourage contributions from anyone interested in the project.
Overall, our strategy for the governance and development of the TREXIO library follows the standard design of open-source projects, which emphasizes collaboration and transparency. We believe that this approach, combined with our commitment to seeking and securing funding for the continued development and maintenance of TREXIO, will ensure the long-term success and usefulness of the library to the quantum chemistry community.
## V Examples of applications
The open-source Python package trexio_tools[44] has been created to enhance the use of the TREXIO library and corresponding data format. It includes converters for transforming output files from codes such as Gaussian, GAMESS,[25] or PySCF[45] into TREXIO files. However, in the future, it would be preferable if the developers of these codes were to offer the option to export data in TREXIO format in order to maintain numerical precision and ensure consistency in the stored data. In addition, the package includes utilities to convert certain data blocks from TREXIO files into FCIDUMP or Molden formats. It also has a feature that validates the consistency of a wave function by numerically calculating overlap integrals on a grid and comparing them to the overlap matrix stored in the file. This helps to confirm that all basis set parameters are consistent with the conventions of the original program.
TREXIO is currently used to exchange wave function parameters between the selected CI code Quantum Package[46] and the QMC code CHAMP.[28] The QMC codes QMC=Chem[47] and TurboRVB[48] are also able to read TREXIO files, allowing for comparison of the three QMC codes using the same wave function. TREXIO is also used to transfer integrals between Quantum Package and the FCIQMC code NECI,[49] and to read density matrices produced by Quantum Package in GammCor[50] for symmetry-adapted perturbation theory (SAPT)[51] molecular interaction calculations with near-full CI density matrices.[52] In addition, the recent development of a code for calculating electron repulsion integrals using Slater-type orbitals[53] now produces TREXIO files, enabling FCIQMC calculations using Slater-type orbitals with NECI and similar selected CI calculations with Quantum Package, which can then be used as trial wave functions for QMC calculations.
## VI Conclusion
The TREXIO format and library offer a convenient and flexible way to store and exchange quantum chemistry data. Its open-source nature allows for easy integration into various software applications and its compatibility with multiple programming languages makes it accessible to a wide range of users. The use of the HDF5 library as the default back-end ensures efficient storage and retrieval of data, while the option to disable HDF5 and use the text back-end allows for zero external dependencies. The development of TREXIO has been driven by the need to facilitate collaboration and reproducibility in quantum chemistry research, and its adoption in various codes and projects is a testament to its usefulness in achieving these goals. We would like to emphasize that the TREXIO library is a work in progress, and we are committed to expanding its scope and functionality in future releases. Our immediate priorities include supporting periodic boundary conditions and other basis sets such as grids, and plane waves. Overall, the TREXIO format and library is a valuable resource for the quantum chemistry community and its continued development and adoption will surely benefit the field.
###### Acknowledgements.
The authors would like to thank Susi Lehtola for providing valuable feedback on an earlier version of this manuscript. This work was supported by the European Centre of Excellence in Exascale Computing TREX -- Targeting Real Chemical Accuracy at the Exascale. Hence, the name of the software is _TREX Input/Output_ (TREXIO). This project has received funding from the European Union's Horizon 2020 -- Research and Innovation program -- under grant agreement no. 952165. A CC-BY 4.0 ([https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)) public copyright license has been applied by the authors to the present document and will be applied to all subsequent versions up to the Author Accepted Manuscript arising from this submission, in accordance with the grant's open access conditions.
|
2309.12090 | Multi-Task Cooperative Learning via Searching for Flat Minima | Multi-task learning (MTL) has shown great potential in medical image
analysis, improving the generalizability of the learned features and the
performance in individual tasks. However, most of the work on MTL focuses on
either architecture design or gradient manipulation, while in both scenarios,
features are learned in a competitive manner. In this work, we propose to
formulate MTL as a multi/bi-level optimization problem, and therefore force
features to learn from each task in a cooperative approach. Specifically, we
update the sub-model for each task alternatively taking advantage of the
learned sub-models of the other tasks. To alleviate the negative transfer
problem during the optimization, we search for flat minima for the current
objective function with regard to features from other tasks. To demonstrate the
effectiveness of the proposed approach, we validate our method on three
publicly available datasets. The proposed method shows the advantage of
cooperative learning, and yields promising results when compared with the
state-of-the-art MTL approaches. The code will be available online. | Fuping Wu, Le Zhang, Yang Sun, Yuanhan Mo, Thomas Nichols, Bartlomiej W. Papiez | 2023-09-21T14:00:11Z | http://arxiv.org/abs/2309.12090v1 | # Multi-Task Cooperative Learning via Searching for Flat Minima
###### Abstract
Multi-task learning (MTL) has shown great potential in medical image analysis, improving the generalizability of the learned features and the performance in individual tasks. However, most of the work on MTL focuses on either architecture design or gradient manipulation, while in both scenarios, features are learned in a competitive manner. In this work, we propose to formulate MTL as a multi/bi-level optimization problem, and therefore force features to learn from each task in a cooperative approach. Specifically, we update the sub-model for each task alternatively taking advantage of the learned sub-models of the other tasks. To alleviate the negative transfer problem during the optimization, we search for flat minima for the current objective function with regard to features from other tasks. To demonstrate the effectiveness of the proposed approach, we validate our method on three publicly available datasets. The proposed method shows the advantage of cooperative learning, and yields promising results when compared with the state-of-the-art MTL approaches. _The code will be available online._
Keywords: Multi-Task Cooperative Learning, Optimization.
## 1 Introduction
With the development of deep learning, multi-task learning (MTL) has shown great potential to improve performance for individual tasks and to learn more transferable features (better generalizability), whilst reducing the number of network parameters [16]. MTL has been widely studied in many domains, including image classification [14] and image segmentation [9]. The core assumption behind MTL is that tasks could be correlated and thus provide complementary features for each other [4]. MTL is also applied in medical image analysis tasks [11, 6, 20, 5], where strong associations between multiple tasks commonly exist. For example, the diagnosis of cancer may indicate the extent of disease severity, which can be correlated with the patient's survival; thus, diagnosis and prognosis of cancer could be learned simultaneously [18]. In clinical diagnosis, annotations of organs or tissues can support radiologists in grading disease; to mimic this process, Zhou _et al._[24] studied simultaneously segmenting and classifying (grading) tumors into benign or malignant classes using 3D breast ultrasound images. Similarly, to improve the prediction of lymph node (LN) metastasis [21], Zhang _et al._ proposed a 3D multi-attention guided multi-task learning network for joint gastric tumor segmentation and LN classification [23].
Typically, MTL methods can be broadly categorized into hard and soft parameter-sharing paradigms [16]. The former adopts one backbone as the encoder to extract common features for all tasks, while the latter designs encoders for each task while constraining their associated parameters. To exploit the correlation between tasks, a large amount of work focuses on the architecture design of the network to enable cross-task interaction [23]. For example, Misra _et al._ designed a cross-stitch model to combine features from multiple networks [12]. Besides network design, many researchers pay attention to the optimization process of the neural network to counter the _negative transfer_ issue [16]. As tasks could compete with each other for shared resources, the overall performance might be even poorer than when solving each task individually. To address this issue, previous works either change the weights of each task objective adaptively using heuristics [2], or manipulate the gradient to be a descent direction for each task [10]. However, as those methods formulate MTL in a competitive manner, it is difficult to guarantee that the complementary information is fully utilized by each task. Moreover, most of them are designed for or evaluated on a simple scenario, where only one domain is involved and the tasks are homogeneous, namely all tasks are either dense prediction or image-level classification.
In this work, we propose a novel cooperative MTL framework (MT-COOL), which updates the features of one task while taking into account the current state of the other features. Specifically, we adopt the soft parameter-sharing strategy and update each sub-model conditioned on the information learned by the other tasks in an alternating manner. To avoid the _negative transfer_ problem during training, we further propose to search for flat minima of the current task with regard to the features of other tasks at each iteration. As a proof of concept, we first validate this method on the simple MNIST dataset for classification tasks. To show the advantage of the proposed approach in the medical domain, we use the REFUGE2018 dataset for optic cup/disc segmentation and glaucoma classification, and the HRF-AV dataset for artery and vein segmentation tasks. The results show the promise of the proposed multi-task cooperative approach compared to state-of-the-art methods.
The main contributions of this work are as follows:
* We propose a novel MTL framework, which learns features for each task in a cooperative manner.
* We propose an effective optimization strategy to alleviate convergence issues.
* We validate the proposed method on three MTL scenarios with different task settings. The proposed method delivers promising results in all settings, compared with the state-of-the-art MTL approaches.
## 2 Method
For ease of explanation, we take two-task learning as an example; the formulation can be readily generalized to \(n\)-task problems.
### Bi-Level Optimization for Cooperative Two-Task Learning
Formally, let \(x_{i}\in\mathbb{R}^{W\times H\times C}\) denote an image with width \(W\), height \(H\) and \(C\) channels, and let \(y_{i}\in\mathbb{R}^{C_{0}}\) be a label for classification (or \(y_{i}\in\mathbb{R}^{W\times H\times C_{0}}\) for segmentation), where \(C_{0}\) is the number of classes. Let \(F_{i}(\cdot;\theta_{i})\) be a feature extractor and \(G_{i}(\cdot;\phi_{i})\) a prediction function for task \(i=1,\ldots,T\), where \(T\) is the number of tasks (here \(T=2\)), and let \(\theta_{i}\) and \(\phi_{i}\) be the corresponding parameters to be learned. The goal is to predict the label \(\widehat{y}_{i}=G_{i}(F_{i}(x_{i}))\).
For MTL, instead of using a shared backbone, _i.e._, \(F_{1}=F_{2}\), and updating it simultaneously with a single loss \(\ell\), we propose to optimize the sub-models in a cooperative manner, that is, learning \((F_{1},G_{1})\) conditioned on a fixed and informative \(F_{2}\), and vice versa. Generally, this can be formulated as a bi-level optimization problem:
\[(U)\min_{\theta_{1},\phi_{1}}\mathcal{L}_{1}(\theta_{1},\phi_{1},\theta_{2})=\ell_{1}(G_{1}(\mathcal{M}(F_{1}(x_{1};\theta_{1}),F_{2}(x_{1}; \theta_{2}));\phi_{1}),\widehat{y}_{1}), \tag{1}\] \[(L)\min_{\theta_{2},\phi_{2}}\mathcal{L}_{2}(\theta_{2},\phi_{2},\theta_{1})=\ell_{2}(G_{2}(\mathcal{M}(F_{1}(x_{2};\theta_{1}),F_{2}(x_{2}; \theta_{2}));\phi_{2}),\widehat{y}_{2}), \tag{2}\]
where \(\ell_{i}\) is the loss function, e.g. the cross-entropy loss for classification. \(\mathcal{M}\) denotes a feature fusion operation designed to facilitate the current task by incorporating useful information from the other task. A common choice for \(\mathcal{M}\) is a linear combination of features, also known as _cross-stitch_[12], or a concatenation operation at multiple layers (the latter is used in this work due to its simplicity).
To solve the problem of Eqs. (1)-(2), we propose to update \((\theta_{1},\phi_{1})\) and \((\theta_{2},\phi_{2})\) alternately, as traditional methods for bi-level optimization problems could be inefficient [1] due to the complexity of deep neural networks. However, without any constraint, this alternating optimization strategy could fail to converge to an optimal solution. For example, at the \(t\)-th iteration, we first optimize \(\mathcal{L}_{1}(\theta_{1},\phi_{1},\theta_{2}^{(t-1)})\) to obtain an optimum \((\theta_{1}^{(t)},\phi_{1}^{(t)})\). It is possible that for the second task, \(\mathcal{L}_{2}(\theta_{2}^{(t-1)},\phi_{2}^{(t-1)},\theta_{1}^{(t-1)})<\mathcal{L}_{2}(\theta_{2}^{(t-1)},\phi_{2}^{(t-1)},\theta_{1}^{(t)})\), which means that the update for the first task could increase the prediction risk of the second one and cancel the gain from optimizing \(\mathcal{L}_{2}\). We also term this issue _negative transfer_. To alleviate this effect, at each iteration we propose to search for flat minima of one task with regard to the features from the other task.
### 2.2 Finding Flat Minima via Injecting Noise
As mentioned above, the network optimized for one task could be sensitive to changes in the parameters of the other tasks, which may prevent convergence. Hence, at each iteration, for each task, we search for an optimum that is insensitive to updates of the other parameters within a fixed neighborhood. We term such optima _flat minima_.
To state this idea formally, assume noise \(\epsilon_{i}\sim\{\mathcal{U}(-b,b)\}^{d_{\epsilon_{i}}}\) with \(b>0\) and \(d_{\epsilon_{i}}=d_{\theta_{i}}\), where \(d_{\theta_{i}}\) is the dimension of \(\theta_{i}\). Then for _task 1_, at the \(t\)-th iteration our target is to minimize the expected loss with regard to the parameters \((\theta_{1},\phi_{1})\) and the noise \(\epsilon_{2}\), _i.e.,_
\[(U)\ \mathcal{R}_{1}^{[t]}(\theta_{1},\phi_{1})=\int_{\mathbb{R}^{d \epsilon_{2}}}\mathcal{L}_{1}(\theta_{1},\phi_{1},\theta_{2}^{[t-1]}+\epsilon_ {2})dP(\epsilon_{2})=\mathbb{E}[\mathcal{L}_{1}(\theta_{1},\phi_{1},\theta_{2} ^{[t-1]}+\epsilon_{2})], \tag{3}\]
\[s.t.\ |\theta_{1}-\theta_{1}^{[t-1]}|<b,\]
where \(P(\epsilon_{2})\) is the noise distribution, and the solution is denoted as \((\theta_{1}^{[t]},\phi_{1}^{[t]})\). Similarly, for _task 2_, the loss function is as follows,
\[(L)\ \mathcal{R}_{2}^{[t]}(\theta_{2},\phi_{2})=\int_{\mathbb{R}^{d \epsilon_{1}}}\mathcal{L}_{2}(\theta_{2},\phi_{2},\theta_{1}^{[t]}+\epsilon_ {1})dP(\epsilon_{1})=\mathbb{E}[\mathcal{L}_{2}(\theta_{2},\phi_{2},\theta_{1 }^{[t]}+\epsilon_{1})], \tag{4}\]
\[s.t.\ |\theta_{2}-\theta_{2}^{[t-1]}|<b.\]
Note that it is hard to find an ideal flat minimum \((\theta_{1}^{[t]},\phi_{1}^{[t]})\) for Eq. (3), i.e., one such that \(\mathcal{L}_{1}(\theta_{1}^{[t]},\phi_{1}^{[t]},\theta_{2}^{[t-1]}+\epsilon_{2}^{(j_{1})})=\mathcal{L}_{1}(\theta_{1}^{[t]},\phi_{1}^{[t]},\theta_{2}^{[t-1]}+\epsilon_{2}^{(j_{2})})\), \(\forall\epsilon_{2}^{(j_{1})},\epsilon_{2}^{(j_{2})}\sim P(\epsilon_{2})\), and \(\mathcal{L}_{1}(\theta_{1}^{[t]},\phi_{1}^{[t]},\theta_{2}^{[t-1]})<\mathcal{L}_{1}(\theta_{1}^{[t-1]},\phi_{1}^{[t-1]},\theta_{2}^{[t-1]})\), which would satisfy the requirement to avoid the optimization issue (see Sect. 2.1). Hence, our goal is to find an approximately flat minimum instead. A similar idea has been proposed for continual learning [19]. However, our method differs as follows: (1) the flat minimum in [19] is sought for the current task, whereas ours is sought with regard to the other tasks; (2) once the flat minimum is found for the first task in continual learning, the search region for the remaining tasks is fixed, whereas in our work the parameters for each task are constrained only within a single iteration, and the search region can change during optimization.
In practice, it is difficult to minimize the expected loss directly; we instead minimize the empirical counterparts of Eq. (3) and Eq. (4) as follows,
\[(U)\ L_{1}^{[t]}(\theta_{1},\phi_{1})=\frac{1}{M}\sum_{j=1}^{M} \mathcal{L}_{1}(\theta_{1},\phi_{1},\theta_{2}^{[t-1]}+\epsilon_{2}^{(j)})+ \lambda\cdot KL(\widehat{y}_{1}^{(j)},\frac{1}{M}\sum_{n=1}^{M}\widehat{y}_{1 }^{(n)}), \tag{5}\]
\[(L)\ L_{2}^{[t]}(\theta_{2},\phi_{2})=\frac{1}{M}\sum_{j=1}^{M} \mathcal{L}_{2}(\theta_{2},\phi_{2},\theta_{1}^{[t]}+\epsilon_{1}^{(j)})+ \lambda\cdot KL(\widehat{y}_{2}^{(j)},\frac{1}{M}\sum_{n=1}^{M}\widehat{y}_{2 }^{(n)}), \tag{6}\]
where \(\epsilon_{i}^{(j)}\) is a noise vector sampled from \(P(\epsilon_{i})\), \(M\) is the number of noise samples, and \(KL\) is the Kullback-Leibler divergence. The first term in Eq. (5) or Eq. (6) is designed to find a satisfactory minimum for the current task, and the second term enforces this minimum to be flat, as desired.
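As a concrete illustration of Eq. (5), the sketch below evaluates the noise-averaged classification loss plus the KL flatness term. The interface `forward_fn(x, other)`, which is assumed to compute \(G_{1}(\mathcal{M}(F_{1}(x;\theta_{1}),F_{2}(x;\theta_{2}+\epsilon_{2})))\), and all names are hypothetical; only the loss structure follows the equations.

```python
import copy

import torch
import torch.nn.functional as F


def flat_task_loss(forward_fn, other_backbone, batch, b=0.05, M=2, lam=1.0):
    """Empirical objective of Eq. (5)/(6), sketched for classification."""
    x, y = batch
    probs, ce_losses = [], []
    for _ in range(M):
        noisy = copy.deepcopy(other_backbone)  # theta_other + eps (copy per sample)
        with torch.no_grad():
            for p in noisy.parameters():
                p.add_(torch.empty_like(p).uniform_(-b, b))  # eps ~ U(-b, b)^d
        logits = forward_fn(x, noisy)
        ce_losses.append(F.cross_entropy(logits, y))
        probs.append(F.softmax(logits, dim=1))
    mean_p = torch.stack(probs).mean(dim=0)  # (1/M) sum_n yhat^(n)
    # average of KL(yhat^(j) || mean prediction) over the M noise draws
    kl = sum((p * (p.clamp_min(1e-8).log() - mean_p.clamp_min(1e-8).log()))
             .sum(dim=1).mean() for p in probs) / M
    return sum(ce_losses) / M + lam * kl
```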
**Warm Up the Network.** To initialize the parameters for Eq. (3) and Eq. (4) with a non-sensitive \((\theta_{1}^{[0]},\theta_{2}^{[0]})\), we minimize the following loss function,
\[\mathcal{L}_{total}=\frac{1}{M}\sum_{j=1}^{M}(\mathcal{L}_{1}( \theta_{1}+\epsilon_{1}^{(j)},\phi_{1},\theta_{2}+\epsilon_{2}^{(j)})+ \mathcal{L}_{2}(\theta_{2}+\epsilon_{2}^{(j)},\phi_{2},\theta_{1}+\epsilon_{1 }^{(j)})). \tag{7}\]
**Algorithm.** We term the proposed **m**ulti-**t**ask **co**operative **l**earning method MT-COOL. The procedure is described in Algorithm 1. Note that, to alleviate the optimization issue discussed in Section 2.1, after the update for each task we clamp the parameters to ensure that they fall within the flat region, as described in Line 17 of Algorithm 1.
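A compressed sketch of one outer iteration of this procedure is given below (Algorithm 1 itself is not reproduced in this extraction): each task is updated in turn on the noise-averaged loss, then its backbone parameters are clamped back into the ±b box around their values at the start of the iteration, mirroring Line 17. `flat_task_loss` is the sketch above; the warm-up phase of Eq. (7) is omitted, the decomposition into per-task backbones and heads with concatenation fusion is our assumption, and `opts[i]` is assumed to be an SGD optimizer over task i's parameters.

```python
import torch


def make_forward(head, own_backbone):
    # Mirrors G_i(M(F_i(x), F_j(x))) with concatenation fusion; the real
    # network wires the fusion layer-wise, which this sketch simplifies.
    def forward(x, other_backbone):
        fused = torch.cat([own_backbone(x), other_backbone(x)], dim=1)
        return head(fused)
    return forward


def mt_cool_iteration(backbones, heads, opts, batches, b=0.05):
    """One outer iteration: update task 1 with task 2 frozen, then swap."""
    for i, j in ((0, 1), (1, 0)):
        anchor = [p.detach().clone() for p in backbones[i].parameters()]
        loss = flat_task_loss(make_forward(heads[i], backbones[i]),
                              backbones[j], batches[i], b=b, M=1)
        opts[i].zero_grad()
        loss.backward()
        opts[i].step()
        with torch.no_grad():  # Line 17: keep theta_i inside the +/- b box
            for p, a in zip(backbones[i].parameters(), anchor):
                p.copy_(torch.maximum(torch.minimum(p, a + b), a - b))
```

Only \(\theta_{i}\) is clamped, matching the constraints in Eqs. (3)-(4), which leave the head parameters \(\phi_{i}\) unconstrained.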
### Network Configuration
Fig. 1 illustrates the framework for two-task cooperative learning. Our framework consists of an encoder and task-specific decoders. The parameters at each layer of the encoder are evenly allocated between the tasks, and the learned features are then concatenated as the input of the next layer.
Figure 1: A general framework for our MTL method. (a) is the conventional convolution block, (b) illustrates the structure of a convolution block for cooperative two-task learning, and (c) shows the general framework for MTL, which contains an encoder and task-specific decoders.
## 3 Experiments
We validate our MTL framework in three scenarios: (1) classification of disjoint class subsets on the MNIST dataset [8], (2) simultaneous segmentation and classification on the REFUGE2018 dataset [13], and (3) two segmentation tasks on the HRF-AV dataset [7]. For our method, we adopt the stochastic gradient descent (SGD) optimizer and empirically set the bound value \(b=0.05\) and the learning rates \(\alpha=\beta=0.1\). To reduce training time and memory, we simply set the sampling number \(M=1\). All experiments are run on a single GTX 1080 Ti GPU.
### Dataset
(1) **MNIST.** This dataset contains 50,000 training and 10,000 testing images. To simulate a multi-task learning setting, we divide both the training and test images into two subsets, containing either the even digits \(\{0,2,4,6,8\}\) (denoted _Task 1_) or the odd digits \(\{1,3,5,7,9\}\) (denoted _Task 2_). For the network, we adopt the LeNet architecture widely used for MNIST [8], whose last layer contains 50 hidden units followed by the final prediction output. (2) **REFUGE2018.** The REFUGE2018 challenge [13] provides 1200 retinal color fundus photographs. The targets of this challenge are glaucoma detection and optic disc/cup segmentation. We divide this dataset into 800 training and 400 test samples, where the ratio of glaucoma to non-glaucoma images is 1:9 in both subsets. As discussed in [13], glaucoma is mostly characterized by the optic nerve head area; hence, we crop all images around the optic disc to \(512\times 512\). We use the U-Net [15] for the segmentation task, with its four down-sampling modules as the shared encoder. The segmentation output and the features from the bottom layers are taken as the input of the classification decoder. (3) **HRF-AV.** This dataset [7] contains 45 fundus images with a high resolution of \(3504\times 2336\). The tasks for this dataset are binary vessel segmentation and artery/vein (A/V) segmentation. We randomly split the dataset into 15 training and 30 test samples. We adopt the U-Net as the backbone, with 256 feature channels at the bottom layer. During training, we randomly crop patches of size \(2048\times 2048\) as input.
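For the MNIST setting, the even/odd task split can be reproduced in a few lines; a minimal sketch, with transform and data-loader details omitted:

```python
import torch
from torchvision import datasets, transforms

mnist = datasets.MNIST("data", train=True, download=True,
                       transform=transforms.ToTensor())
targets = mnist.targets  # tensor of digit labels
even_idx = torch.where(targets % 2 == 0)[0].tolist()  # Task 1: {0,2,4,6,8}
odd_idx = torch.where(targets % 2 == 1)[0].tolist()   # Task 2: {1,3,5,7,9}
task1 = torch.utils.data.Subset(mnist, even_idx)
task2 = torch.utils.data.Subset(mnist, odd_idx)
```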
### Results on MNIST Dataset
#### 3.2.1 Ablation Study
To validate the effectiveness of the two terms in Eq. (5) and Eq. (6), we conduct two experiments: (1) **Vanilla.** We simply optimize the objective of each task alternately, without any constraints or sampling operations. (2) **Ours (_w/o_ Reg).** We sample noise during training and optimize only the first term in Eq. (5) and Eq. (6), _i.e.,_ without the similarity regularization. We run each method five times and report the mean and standard deviation.
As shown in the top four rows of Table 1, compared to the **Independent** approach, the proposed **Vanilla** bi-level optimization method can utilize the
features from other tasks and boost the performance of the current one. By introducing noise to find flat minima during training, **Ours (_w/o_ Reg)** further achieves higher accuracy, particularly on _Task 2_. Finally, by adding the similarity regularization, our full method obtains the best results.
#### 3.2.2 Comparison Study
We compare the proposed method with four state-of-the-art (SOTA) MTL approaches: MGDA [17], PCGrad [22], GradDrop [3], and CAGrad [10]. We also implement a **Joint** baseline, which simply sums the losses of all tasks as the total training loss.
As shown in Table 1, all MTL methods improve the performance on each task, compared to **Independent**. Among all the compared methods, our technique performs the best on both tasks.
### Comparison on REFUGE2018 Dataset
For the REFUGE2018 dataset, we compare our method with CAGrad, GradDrop, MGDA, PCGrad, and Joint. We run each method three times and report the mean ± std values of the Dice score on the optic cup and disc for the segmentation task, and accuracy (Acc), area under the receiver operating characteristic curve (AUROC), sensitivity (Sen), and specificity (Spe) for the classification task.

| Methods | Params | _Task 1_ | _Task 2_ |
|---|---|---|---|
| Independent | ≈ 2 | 99.41 ± 0.03492 | 98.77 ± 0.06029 |
| Ours (Vanilla) | 1 | 99.61 ± 0.06210 | 99.37 ± 0.04494 |
| Ours (_w/o_ Reg) | 1 | 99.66 ± 0.03765 | 99.56 ± 0.07203 |
| MT-COOL (Ours) | 1 | **99.72 ± 0.03978** | **99.62 ± 0.01576** |
| Joint | 1 | 99.60 ± 0.03765 | 99.51 ± 0.06281 |
| CAGrad [10] | 1 | 99.67 ± 0.05293 | 99.51 ± 0.05229 |
| GradDrop [3] | 1 | 99.65 ± 0.03492 | 99.53 ± 0.04245 |
| MGDA [17] | 1 | 99.63 ± 0.05883 | 99.47 ± 0.05078 |
| PCGrad [22] | 1 | 99.66 ± 0.04180 | 99.51 ± 0.09108 |

Table 1: Performance of SOTA MTL methods on the MNIST dataset. We set the number of parameters of the **Joint** method as the base 1; values in the 'Params' column are the ratio of each method's parameter count to that of **Joint**.

Figure 2: Visualization results from MTL methods on the REFUGE2018 dataset. The selected samples rank at the 1st quartile, median, and 3rd quartile in terms of the segmentation performance of **Independent**.
As shown in Table 2, our method achieves results comparable to **Independent** on the segmentation task, while the other MTL methods degrade significantly, particularly on the disc. For the classification task, our method achieves the best performance on all metrics. Fig. 2 provides visualization results for qualitative comparison. One can see that the proposed method obtains the most accurate prediction shapes among all MTL methods.
### Comparison on HRF-AV Dataset
We also conduct a comparison study on the HRF-AV dataset. Each method is repeated three times, and the mean results are presented in Table 3. One can see that, compared to **Independent**, all the other MTL methods perform poorly, especially on the A/V segmentation task. For example, the best F1 scores on A/V segmentation among the five MTL methods are 0.5127 for arteries and 0.5736 for veins, obtained by GradDrop, which are much lower than those of **Independent**. In contrast, our method performs comparably with **Independent** on A/V segmentation, and even slightly better on binary segmentation. For a qualitative comparison, please refer to Fig. 1 in the Supplementary material.
| Methods | Params | Cup (Dice %) | Disc (Dice %) | Acc | AUROC | Sen | Spe |
|---|---|---|---|---|---|---|---|
| Independent | ≈ 2 | 95.14 ± 0.05110 | 86.87 ± 0.5644 | 0.900 ± 0.00235 | 0.902 ± 0.0106 | 0.658 ± 0.0117 | 0.927 ± 0.00392 |
| Joint | 1 | 91.19 ± 0.7600 | 77.36 ± 0.5236 | 0.907 ± 0.0183 | 0.895 ± 0.0221 | 0.658 ± 0.0656 | 0.935 ± 0.0264 |
| CAGrad [10] | 1 | 92.67 ± 0.7702 | 81.71 ± 0.2874 | 0.914 ± 0.00513 | 0.904 ± 0.00562 | 0.658 ± 0.0235 | 0.942 ± 0.00796 |
| GradDrop [3] | 1 | 91.70 ± 0.6376 | 78.91 ± 1.439 | 0.909 ± 0.00424 | 0.922 ± 0.0115 | 0.716 ± 0.0471 | 0.930 ± 0.00988 |
| MGDA [17] | 1 | 93.87 ± 0.5017 | 83.87 ± 0.9732 | 0.895 ± 0.0154 | 0.914 ± 0.00610 | 0.633 ± 0.0824 | 0.924 ± 0.0260 |
| PCGrad [22] | 1 | 91.74 ± 0.5569 | 79.80 ± 0.8748 | 0.911 ± 0.00849 | 0.898 ± 0.0136 | 0.675 ± 0.0204 | 0.937 ± 0.00796 |
| MT-COOL (Ours) | 1 | **94.37 ± 0.1706** | **86.18 ± 0.3046** | **0.937 ± 0.0113** | **0.942 ± 0.0149** | **0.750 ± 0.000** | **0.958 ± 0.0126** |

Table 2: Performance of SOTA MTL methods on the REFUGE2018 dataset. Cup and Disc columns report the segmentation Dice score (%); Acc, AUROC, Sen, and Spe report classification performance.
| Methods | Params | Acc (A) | F1 (A) | Acc (V) | F1 (V) | Acc (A/V) | F1 (A/V) | Acc | F1 |
|---|---|---|---|---|---|---|---|---|---|
| Independent | ≈ 2 | 0.9814 | 0.6999 | 0.9821 | 0.7492 | 0.9692 | 0.7698 | 0.9691 | 0.7831 |
| Joint | 1 | 0.9622 | 0.3537 | 0.9661 | 0.5171 | 0.9664 | 0.7360 | 0.9691 | 0.7835 |
| CAGrad [10] | 1 | 0.9687 | 0.4754 | 0.9696 | 0.5520 | 0.9668 | 0.7364 | 0.9690 | 0.7790 |
| GradDrop [3] | 1 | 0.9708 | 0.5127 | 0.9716 | 0.5736 | 0.9666 | 0.7343 | 0.9686 | 0.7742 |
| MGDA [17] | 1 | 0.9636 | 0.2343 | 0.9632 | 0.5315 | 0.9660 | 0.7263 | 0.9691 | 0.7793 |
| PCGrad [22] | 1 | 0.9671 | 0.4262 | 0.9681 | 0.5387 | 0.9667 | 0.7357 | 0.9687 | 0.7763 |
| MT-COOL (Ours) | 1 | **0.9801** | **0.6671** | **0.9811** | **0.7135** | **0.9674** | **0.7424** | **0.9701** | **0.7912** |

Table 3: Performance of SOTA MTL methods on the HRF-AV dataset. The first six metric columns report artery (A), vein (V), and combined (A/V) segmentation; the last two report binary vessel segmentation.
## 4 Conclusion
In this work, we propose a novel MTL framework based on bi-level optimization. Our method learns features for each task in a cooperative manner, instead of having tasks compete with each other for resources. We validate our model on three datasets, and the results demonstrate its potential for MTL. However, some issues remain to be studied in the future. For example, we need to validate our method on large-scale task sets and find a more efficient learning strategy, such as distributed learning. Moreover, how to allocate parameters to each task automatically and effectively is important for model generalization. For better interpretability, learning features specific to each task should also be studied.
|
2309.04619 | High-entropy effect at rare-earth site in DyNi | We report the structural and magnetic properties of RNi (R=Dy,
Tb$_{1/3}$Dy$_{1/3}$Ho$_{1/3}$, and
Gd$_{1/5}$Tb$_{1/5}$Dy$_{1/5}$Ho$_{1/5}$Er$_{1/5}$) to investigate the
high-entropy effect at the rare-earth site. The lattice parameters are almost
unchanged by the increase of configurational entropy, which is due to the
successive partial substitution of Dy by pair of rare earth elements located on
both sides of Dy in the periodic table. All compounds exhibit ferromagnetic
ground states. The replacement of Dy with Tb+Ho, which does not have magnetic
interactions in competition with Dy, does not affect the magnetic ordering
temperature. Although (Gd$_{1/5}$Tb$_{1/5}$Dy$_{1/5}$Ho$_{1/5}$Er$_{1/5}$)Ni
shows the Curie temperature close to that of DyNi, an additional magnetic
anomaly, which would be a spin reorientation, is observed probably due to the
introduction of competing magnetic interactions between R=Gd and Er compounds
and R=Tb, Dy, and Ho ones. We have also assessed the magnetocaloric effect, and
the configurational entropy dependence of the magnetic entropy change reflects
that of the temperature derivative of the magnetic susceptibility. Our analysis
suggests the possibility of enhancing magnetocaloric properties by designing
the anisotropy of rare-earth magnetic moments in the high-entropy state. | Yuito Nakamura, Koshin Takeshita, Terukazu Nishizaki, Jiro Kitagawa | 2023-09-08T22:13:50Z | http://arxiv.org/abs/2309.04619v1 | # High-entropy effect at rare-earth site in DyNi
###### Abstract
We report the structural and magnetic properties of RNi (R=Dy, Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\), and Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\)) to investigate the high-entropy effect at the rare-earth site. The lattice parameters are almost unchanged by the increase of configurational entropy, which is due to the successive partial substitution of Dy by a pair of rare-earth elements located on both sides of Dy in the periodic table. All compounds exhibit ferromagnetic ground states. The replacement of Dy with Tb+Ho, which does not introduce magnetic interactions in competition with Dy, does not affect the magnetic ordering temperature. Although (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni shows a Curie temperature close to that of DyNi, an additional magnetic anomaly, which would be a spin reorientation, is observed, probably due to the introduction of competing magnetic interactions between the R=Gd and Er compounds and the R=Tb, Dy, and Ho ones. We have also assessed the magnetocaloric effect, and the configurational entropy dependence of the magnetic entropy change reflects that of the temperature derivative of the magnetic susceptibility. Our analysis suggests the possibility of enhancing magnetocaloric properties by designing the anisotropy of rare-earth magnetic moments in the high-entropy state.
## I Introduction
High-entropy alloys (HEAs) are unique systems composed of multiple elements in near-equimolar ratios. They offer a vast compositional space and are a promising platform for studying novel phenomena [1; 2; 3]. Additionally, they have attracted considerable attention due to their rich functionalities, such as high strength, energy storage, radiation protection, magnetism, superconductivity, and biocompatibility [4; 5; 6; 7; 8; 9; 10; 11; 12]. The HEA concept has now been extended to intermetallic compounds (high-entropy intermetallic compounds).
Numerous rare-earth intermetallic compounds exhibit magnetic moments solely attributed to the rare-earth elements. However, the influence of a high-entropy state at the rare-earth site on the magnetic ordering temperatures of such systems remains insufficiently explored. We are primarily concerned with the robustness of the magnetic ordering of rare-earth atoms in the presence of the high-entropy state. In this study, we focus on the well-defined RNi (R:rare-earth) system, wherein magnetic ordering temperatures and magnetic structures are elucidated. The highest magnetic ordering temperature [13] is 71 K in GdNi. The ordering temperature is moderately lower compared to R\({}_{2}\)In or R\({}_{6}\)CoTe\({}_{2}\) series, where Gd\({}_{2}\)In and Gd\({}_{6}\)CoTe\({}_{2}\) show the highest magnetic ordering temperatures of 190 K and 220 K, respectively [14]. Hence, we anticipate the possible destruction of magnetic orderings within all RNi compounds by increasing atomic disorder.
Additionally, we are concerned with the potential modulation of magnetocaloric effects by introducing a high-entropy state. Certain RNi compounds demonstrate a significant magnetocaloric effect in proximity to the temperature of liquid hydrogen [13; 15]. This observation holds promise for magnetic refrigeration-based hydrogen liquefaction and is significant in realizing a hydrogen society. The magnetocaloric effects of HEAs have garnered considerable attention [16; 17; 18; 9; 19]. Notably, the equimolar quinary alloy GdTbDyHoEr exhibits a remarkable magnetocaloric effect [20]. A recent investigation into the configurational entropy dependence of magnetocaloric effects in rare-earth HEAs has revealed that magnetic properties depend on the intrinsic magnetic characteristics of rare-earth elements [21]. Another study [19] suggests a reduction in the peak value of magnetic entropy change with an increase in configurational entropy in HEAs containing Dy. Transition-metal-based HEAs, such as FeMnNiGeSi, have emerged as a novel material class enabling the manipulation of magnetocaloric effects by introducing magnetocaloric transformations [22]. To the best of our knowledge, reports on the magnetocaloric effects of crystalline high-entropy rare-earth intermetallic compounds are rare, while there are many reports for amorphous HEAs containing rare-earth and transition-metal elements [19].
It is well established that the lattice parameters and the number of 4\(f\) electrons significantly impact the magnetic properties of rare-earth intermetallic compounds. We therefore examined the configurational entropy dependence of the magnetic properties of DyNi through a successive replacement of Dy with a pair of rare-earth elements located on both sides of Dy in the periodic table: partial replacement by Tb+Ho or Gd+Tb+Ho+Er. Within this replacement sequence, we can maintain the lattice constants and the average number of 4\(f\) electrons. Consequently, we could explore the high-entropy effect at the rare-earth site while keeping the electronic state of DyNi intact. Among RNi compounds, GdNi and (Dy, Ho, or Er)Ni crystallize into the orthorhombic CrB-type and the orthorhombic FeB-type structure, respectively [13; 23; 24]. The crystal structure of TbNi might be controversial: a monoclinic structure with the space group \(P2_{1}m\) or an orthorhombic one [13]. All RNi (R=Gd to Er) compounds are ferromagnets, with the Curie temperature \(T_{\rm C}\)=71 K for R=Gd, 67 K for R=Tb, 62 K for R=Dy, 37 K for R=Ho, and 13 K for R=Er, respectively [13; 25]. Despite the changes in crystal structure that occur upon going from R=Gd
to R=Tb and from R=Tb to R=Dy, we synthesized DyNi, (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni, and (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni, which are predominantly composed of the FeB-type structure components.
In this paper, we report on the structural and magnetic properties of RNi (R=Dy, Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\), and Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\)). Our findings confirm that ferromagnetic ordering is robust, and that \(T_{\rm C}\) is relatively unaffected by the increase of configurational entropy at the rare-earth site. (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni shows an additional magnetic anomaly below \(T_{\rm C}\), which suggests a possible spin reorientation. We evaluated the configurational entropy dependence of the magnetocaloric effect, which is discussed along with the anisotropy of rare-earth magnetic moments.
## II Materials and Methods
Polycrystalline samples were prepared using a homemade arc furnace, as detailed in Table 1. The starting materials were the rare earths (Gd, Tb, Dy, Ho, and Er; 99.9%) and Ni (99.9%). The constituent elements in the stoichiometric ratio were melted on a water-cooled Cu hearth under an Ar atmosphere. The button-shaped samples were remelted several times and flipped each time to ensure homogeneity. Each as-cast sample was then annealed in an evacuated quartz tube at 800 °C for four days. Room-temperature X-ray diffraction (XRD) patterns of powdered samples were obtained using an X-ray diffractometer (XRD-7000L, Shimadzu) with Cu-Kα radiation.
The temperature dependence of the dc magnetization \(\chi_{\rm dc}\) (\(T\)) between 50 K and 300 K was measured using the VSM (vibrating-sample magnetometer) option of a VersaLab (Quantum Design). The isothermal magnetization curves between 50 K and 110 K were also taken using the VersaLab.
## III Results and Discussion
Figure 1 displays the XRD patterns of DyNi, (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni, and (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni, along with the simulated pattern of DyNi with the FeB-type structure taken from the ICSD database (Coll. Code: 103332). All experimental patterns match the simulation. As mentioned in the Introduction, GdNi and TbNi crystallize into structures different from the FeB type of RNi (R=Dy, Ho, and Er). However, the FeB-type structure is stabilized when the dominant elements Dy, Ho, and Er are present. We note that extra diffraction peaks assigned to the R\({}_{2}\)O\({}_{3}\) (R=Dy, Tb+Dy+Ho, or Gd+Tb+Dy+Ho+Er) phase are detected (see * in Fig. 1). Table 1 lists the lattice parameters determined with the help of a Rietveld refinement program [26; 27]. While the \(c\)-axis length is almost independent of the configurational entropy change at the rare-earth site, the \(a\)-axis (\(b\)-axis) exhibits a slight expansion (contraction) with increasing configurational entropy.
Figure 2 depicts \(\chi_{\rm dc}\) (\(T\)) under an external field of 100 Oe for the RNi system. Each sample exhibits a steep increase in \(\chi_{\rm dc}\) below approximately 70 K, indicative of ferromagnetic ordering. \(T_{\rm C}\) is defined by the minimum of the temperature derivative of \(\chi_{\rm dc}\) (see the inset of Fig. 2 and Table 1), which is one of the effective ways to determine the \(T_{\rm C}\) of ferromagnets [28; 29]. DyNi undergoes a ferromagnetic transition at \(T_{\rm C}\)=59 K, consistent with the literature [25]. (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni possesses \(T_{\rm C}\)=63 K, slightly enhanced compared to DyNi, and the \(T_{\rm C}\) value remains unchanged in (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni. We note that \(\chi_{\rm dc}\) (\(T\)) of (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni shows a small anomaly around 57 K, which is discussed later. The results of \(\chi_{\rm dc}\) (\(T\)) indicate that the ferromagnetic ordering is robust against atomic disorder at the rare-earth site.
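Since \(T_{\rm C}\) is read off as the minimum of d\(\chi_{\rm dc}\)/d\(T\), the criterion is straightforward to apply numerically to digitized \(\chi_{\rm dc}(T)\) data; a minimal sketch, with array names and units chosen for illustration:

```python
import numpy as np


def curie_temperature(T, chi):
    """Return T at the minimum of the temperature derivative of chi_dc.

    T   : (n,) temperatures in K, monotonically increasing
    chi : (n,) dc susceptibility measured at each temperature
    """
    dchi_dT = np.gradient(chi, T)   # finite-difference d(chi)/dT
    return T[np.argmin(dchi_dT)]    # steepest drop marks T_C
```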
Figure 1: XRD patterns of DyNi, (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni, and (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni. The origin of each pattern is shifted by a value for clarity.

DyNi, HoNi, and ErNi, which possess the orthorhombic FeB-type structure, exhibit a non-collinear magnetic structure at \(T_{\rm C}\) = 62 K, 37 K, and 13 K, respectively [13; 25]. In these compounds, rare-earth magnetic moments have a ferromagnetic arrangement parallel to the \(a\)-axis and an antiferromagnetic arrangement parallel to the \(c\)-axis. The angle between the
rare-earth moment and the \(a\)-axis is 29\({}^{\circ}\) for DyNi, 25\({}^{\circ}\) for HoNi, or 61\({}^{\circ}\) for ErNi [13; 25]. Although the crystal structures of GdNi and TbNi differ from the FeB-type (GdNi: CrB-type, TbNi: monoclinic or orthorhombic) [13], they are also ferromagnets with \(T_{\rm C}\) = 69 K and 67 K, respectively [13]. The magnetic ordering temperatures of RNi (=Dy, Ho, and Er) compounds follow the de Gennes scaling, which suggests a weak effect of energy-level splitting of the \(J\)-multiplet due to the crystalline-electric-field effect [30; 31] at \(T_{\rm C}\). In such a case, the 4\(f\) electron distribution of a single rare-earth ion would be responsible for the magnetic structure [32; 33]. The 4\(f\) electron distribution of a single R\({}^{3+}\) ion (R=Dy or Ho) is oblate, and the direction of the rare-earth magnetic moment is perpendicular to the equatorially expanded 4\(f\)-electron charge cloud [32]. On the other hand, the 4\(f\) electron distribution of a single Er\({}^{3+}\) ion is prolate [32], causing the magnetic moment of Er ion to be perpendicular to that of the R\({}^{3+}\) ion (R=Dy or Ho). In fact, the magnetic moments of DyNi and HoNi are nearly parallel to the \(a\)-axis and the direction of the Er\({}^{3+}\) moment tilts toward the \(c\)-axis. The 4\(f\) electron distribution of a single Tb\({}^{3+}\) ion is oblate, which is the same as Dy\({}^{3+}\) or Ho\({}^{3+}\). Therefore, the magnetic structure of (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni would not be significantly different from that of DyNi. However, in (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni, competition between easy magnetization axes might occur, potentially leading to a spin reorientation as observed in (Gd\({}_{0.38}\)Tb\({}_{0.27}\)Dy\({}_{0.20}\)Ho\({}_{0.15}\))Mn\({}_{5}\)Sn [34]. As shown in Fig. 2, \(\chi_{\rm dc}\) (\(T\)) of (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni shows a small anomaly around 57 K, which is clearly detected by d\(\chi\)/d\(T\) with a double-dip structure (see the inset of Fig. 2). We speculate that the anomaly at a lower temperature of 57 K suggests a change of magnetic structure like a spin reorientation.
The isothermal magnetization curves (\(M\): magnetization and \(H\): external field) measured around \(T_{\rm C}\) are shown in Fig.3(a) for DyNi, Fig. 3(b) for (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni, and Fig.3(c) for (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni, respectively. In each sample, the pronounced steep increase of magnetization at lower external fields below approximately \(T_{\rm C}\) supports the ferromagnetic ground state. We note that the noticeable irreversibility is not observed in any of the samples. Figure 3(d) provides a comparison of the magnetization curves among the three compounds at temperatures of 50 K, 70 K, 90 K, and 110 K. With decreasing temperature, the \(M\)-\(H\) curve of (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni deviates from the other curves, albeit displaying a resemblance to that of (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni. As illustrated in Fig. 2, \(\chi_{\rm dc}\) (\(T\)) of (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni is smaller compared to the other two compounds at low temperatures, indicating a relatively weaker magnetic response in (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni. Consequently, this might lead to the lowest \(M\) for (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni at a fixed \(T\) (temperature) and \(H\). It should be noted that the variation in magnetic moment associated with each sample is another contributing factor to the differences in \(M\) at fixed \(T\) and \(H\). Further investigation is required to elucidate the individual element's specific contribution. Moreover, Fig. 3(d) reveals the intersection of magnetization curves between DyNi and (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni at 50 K or 70 K. Such phenomena may be attributed to changes in magnetic anisotropy energy and saturation magnetic moment.
The magnetic entropy change \(\Delta S_{\rm mag}\) (\(T\),\(H\)) is obtained by using Maxwell's relation as follows:
\[\Delta S_{\rm mag}(T,H)=\int_{0}^{H_{\rm max}}\left[\frac{\partial M(T,H)}{ \partial T}\right]_{H}dH \tag{1}\]
where \(H_{\rm max}\) is the maximum external field. The temperature dependences of -\(\Delta S_{\rm mag}\) (\(T\)) at \(H_{\rm max}\)=10 kOe, 20 kOe, and 30 kOe for the RNi system are summarized in Fig. 4(a). All samples show a maximum of -\(\Delta S_{\rm mag}\) (\(T\)) at approximately \(T_{\rm C}\). According to Eq. (1), \(\Delta S_{\rm mag}\) (\(T\)) is governed by \([\frac{\partial M(T,H)}{\partial T}]_{H}\). This implies that a significant change in \(M\) with decreasing temperature at a fixed \(H\) is necessary to enhance \(\Delta S_{\rm mag}\) (\(T\)). Therefore, it is worthwhile to compare -\(\Delta S_{\rm mag}\) (\(T\)) with \(-\)d\(\chi_{\rm dc}\)/d\(T\) (refer to Figs. 4(a) and 4(b)), as the latter represents the change in the initial slope of the \(M\)-\(H\) curve with temperature. A larger value of \(-\)d\(\chi_{\rm dc}\)/d\(T\) has the potential to contribute to a more significant change in \(M\) when the temperature changes. The dependence of \(-\)d\(\chi_{\rm dc}\)/d\(T\) on configurational entropy resembles that of -\(\Delta S_{\rm mag}\) (\(T\)), particularly at \(H_{\rm max}\)=10 kOe. At each \(H_{\rm max}\), while the peak value of -\(\Delta S_{\rm mag}\) for (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni diminishes compared to DyNi, the temperature dependence of -\(\Delta S_{\rm mag}\) becomes broader. This broadening is advantageous for magnetic refrigeration applications. The presence of a spin reorientation below \(T_{\rm C}\) contributes to this advantage, as the modification of the magnetic structure gives rise to an additional \(-\)d\(\chi_{\rm dc}\)/d\(T\). As mentioned earlier, this spin reorientation likely arises from the interaction between rare-earth elements with distinct magnetic anisotropy. Consequently, the present study suggests the potential to enhance magnetocaloric properties by manipulating rare-earth magnetic moment anisotropy
in the high-entropy state.

Figure 2: Temperature dependences of \(\chi_{\rm dc}\) of the RNi system. The external field is 100 Oe. The inset is the temperature derivative of \(\chi_{\rm dc}\) for each sample.
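Numerically, Eq. (1) is evaluated from a family of isothermal M-H curves by differentiating with respect to temperature and integrating over field. The sketch below assumes magnetization measured on a regular (T, H) grid; array shapes and units are left to the caller and are only illustrative.

```python
import numpy as np


def entropy_change(T, H, M):
    """Magnetic entropy change Delta S_mag(T, H_max) from Eq. (1).

    T : (nT,) temperatures
    H : (nH,) fields from 0 up to H_max
    M : (nT, nH) magnetization M(T, H) on the grid
    Returns an array of shape (nT,): Delta S_mag at each temperature.
    """
    dM_dT = np.gradient(M, T, axis=0)   # [dM/dT]_H at each fixed field
    return np.trapz(dM_dT, H, axis=1)   # integrate over field up to H_max
```

A maximum of the resulting -ΔS_mag(T) near \(T_{\rm C}\), as seen in Fig. 4(a), follows from the steepest drop of M(T) around the transition.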
In this discussion, we aim to compare the magnetocaloric effect between RNi and rare-earth HEAs. Specifically, we examine the peak value of -\(\Delta S_{\rm mag}\), denoted as -\(\Delta S_{\rm mag}^{\rm peak}\). In the RNi system, the configurational entropy dependence of -\(\Delta S_{\rm mag}^{\rm peak}\) exhibits a non-systematic trend. -\(\Delta S_{\rm mag}^{\rm peak}\) decreases on going from DyNi, (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni to (Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\))Ni. In certain rare-earth HEAs [19], -\(\Delta S_{\rm mag}^{\rm peak}\) decreases in the order of GdTbDy, GdTbDyHo, GdTb, and GdTbDyHoEr (GdTbHoEr). In these rare-earth HEAs, changes occur in the average number of 4\(f\) electrons and lattice constants, resulting in varying magnetic ordering temperatures ranging from 184 K to 258 K [19]. In contrast, our RNi system maintains a nearly constant \(T_{\rm C}\), likely due to minimal alterations in lattice parameters and the average number of 4\(f\) electrons. However, both RNi and rare-earth HEAs exhibit a non-systematic configurational entropy dependence of -\(\Delta S_{\rm mag}^{\rm peak}\). Therefore, it appears that factors other than configurational entropy may influence the control of -\(\Delta S_{\rm mag}^{\rm peak}\). Here we comment on the -\(\Delta S_{\rm mag}^{\rm peak}\) value of (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni. It is widely acknowledged that -\(\Delta S_{\rm mag}^{\rm peak}\) follows a power law dependence on the magnetic field [35], represented as -\(\Delta S_{\rm mag}^{\rm peak}\)\(\propto\)\(H^{n}\). By applying this relation to (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni (see also Fig.4 (a)) and deducing the exponent \(n\) to be 0.89, we can estimate -\(\Delta S_{\rm mag}^{\rm peak}\) value at \(H_{\rm max}\)=50 kOe to be 10.6 J/kg-K. This value is larger compared to equimolar quinary rare-earth HEAs such as GdTbDyHoEr and GdTbHoErPr, which exhibit -\(\Delta S_{\rm mag}^{\rm peak}\) values of 8.6 J/kg-K and 6.92 J/kg-K, respectively, at \(H_{\rm max}\)=50
kOe [20, 21].
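The power-law extrapolation \(-\Delta S_{\rm mag}^{\rm peak}\propto H^{n}\) used above amounts to a straight-line fit in log-log coordinates. In the sketch below, the peak values are hypothetical placeholders (chosen only to be roughly consistent with the reported \(n\approx 0.89\)), since the figure data are not tabulated here.

```python
import numpy as np

H_max = np.array([10e3, 20e3, 30e3])   # Oe
peaks = np.array([2.5, 4.7, 6.7])      # -DeltaS^peak in J/(kg K); placeholders
n, log_c = np.polyfit(np.log(H_max), np.log(peaks), 1)  # log-log linear fit
peak_at_50kOe = np.exp(log_c) * (50e3) ** n  # extrapolate to H_max = 50 kOe
print(f"n = {n:.2f}, extrapolated peak = {peak_at_50kOe:.1f} J/(kg K)")
```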
## IV Summary
We have studied the effect of configurational entropy on the structural and magnetic properties of DyNi by successively replacing Dy with a pair of R elements located on both sides of Dy in the periodic table. This elemental substitution preserves the lattice parameters and the average number of 4\(f\) electrons. Although the crystal structures of GdNi and TbNi differ from the FeB type of RNi (R=Dy, Ho, and Er), all RNi (R=Dy, Tb\({}_{1/3}\)Dy\({}_{1/3}\)Ho\({}_{1/3}\), and Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\)) samples crystallize into the FeB-type structure. The \(T_{\rm C}\) of DyNi is almost unchanged by increasing the configurational entropy at the rare-earth site, and the ferromagnetic ordering is robust in the high-entropy state. In (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni, an additional magnetic anomaly is observed, which would be attributed to a spin reorientation resulting from the introduction of Gd+Er and the emergence of competing magnetic interactions. The competition does not disrupt the ferromagnetic ordering, even in the high-entropy state, but rather leads to a spin-reorientation transition. Furthermore, we assessed the magnetocaloric effect of the RNi system. Although the peak value of -\(\Delta S_{\rm mag}\) of (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni is reduced compared to DyNi, the temperature dependence of -\(\Delta S_{\rm mag}\) becomes broader. Additionally, we observed a strong correlation between the configurational entropy dependence of -\(\Delta S_{\rm mag}\) (\(T\)) and that of \(-\)d\(\chi_{\rm dc}\)/d\(T\). Hence, the broadening of -\(\Delta S_{\rm mag}\) (\(T\)) in (Gd\({}_{1/5}\)Tb\({}_{1/5}\)Dy\({}_{1/5}\)Ho\({}_{1/5}\)Er\({}_{1/5}\))Ni can be attributed to the spin reorientation arising from the mixing of rare-earth elements with distinct magnetic anisotropy. Consequently, our study suggests the potential for enhancing the magnetocaloric properties by designing the anisotropy of rare-earth magnetic moments in the high-entropy state.
###### Acknowledgements.
J.K. is grateful for the support provided by the Comprehensive Research Organization of Fukuoka Institute of Technology.
## Author declarations
### Conflict of Interest
The authors have no conflicts to disclose.
### Author Contributions
Yuito Nakamura: Investigation, Formal analysis. Koshin Takeshita: Investigation, Formal analysis. Terukazu Nishizaki: Investigation, Formal analysis, Writing - reviewing & editing. Jiro Kitagawa: Supervision, Formal analysis, Writing - original draft, Writing - reviewing & editing.
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2309.05507 | A Co-design Study for Multi-Stakeholder Job Recommender System
Explanations | Recent legislation proposals have significantly increased the demand for
eXplainable Artificial Intelligence (XAI) in many businesses, especially in
so-called `high-risk' domains, such as recruitment. Within recruitment, AI has
become commonplace, mainly in the form of job recommender systems (JRSs), which
try to match candidates to vacancies, and vice versa. However, common XAI
techniques often fall short in this domain due to the different levels and
types of expertise of the individuals involved, making explanations difficult
to generalize. To determine the explanation preferences of the different
stakeholder types - candidates, recruiters, and companies - we created and
validated a semi-structured interview guide. Using grounded theory, we
structurally analyzed the results of these interviews and found that different
stakeholder types indeed have strongly differing explanation preferences.
Candidates indicated a preference for brief, textual explanations that allow
them to quickly judge potential matches. On the other hand, hiring managers
preferred visual graph-based explanations that provide a more technical and
comprehensive overview at a glance. Recruiters found more exhaustive textual
explanations preferable, as those provided them with more talking points to
convince both parties of the match. Based on these findings, we describe
guidelines on how to design an explanation interface that fulfills the
requirements of all three stakeholder types. Furthermore, we provide the
validated interview guide, which can assist future research in determining the
explanation preferences of different stakeholder types. | Roan Schellingerhout, Francesco Barile, Nava Tintarev | 2023-09-11T14:51:20Z | http://arxiv.org/abs/2309.05507v1 | # A Co-design Study for Multi-Stakeholder Job Recommender System Explanations
###### Abstract
Recent legislation proposals have significantly increased the demand for eXplainable Artificial Intelligence (XAI) in many businesses, especially in so-called 'high-risk' domains, such as recruitment. Within recruitment, AI has become commonplace, mainly in the form of job recommender systems (JRSs), which try to match candidates to vacancies, and vice versa. However, common XAI techniques often fall short in this domain due to the different levels and types of expertise of the individuals involved, making explanations difficult to generalize. To determine the explanation preferences of the different stakeholder types - candidates, recruiters, and companies - we created and validated a semi-structured interview guide. Using grounded theory, we structurally analyzed the results of these interviews and found that different stakeholder types indeed have strongly differing explanation preferences. _Candidates_ indicated a preference for brief, textual explanations that allow them to quickly judge potential matches. On the other hand, _hiring managers_ preferred visual graph-based explanations that provide a more technical and comprehensive overview at a glance. _Recruiters_ found more exhaustive textual explanations preferable, as those provided them with more talking points to convince both parties of the match. Based on these findings, we describe guidelines on how to design an explanation interface that fulfills the requirements of all three stakeholder types. Furthermore, we provide the validated interview guide, which can assist future research in determining the explanation preferences of different stakeholder types.
Keywords:Explainable AI, Job Recommender Systems, User Studies, Grounded Theory
## 1 Introduction
Within the emerging field of explainable artificial intelligence (XAI), a substantial amount of research has attempted to make the inner workings of AI models more transparent [11, 18]. While such information can assist developers in understanding their model (e.g., by enabling the detection of bugs and biases, or by revealing feature importance), it is often complicated and requires considerable a priori knowledge of AI to interpret. However, the use of AI has become
commonplace in user-controlled environments, such as the recommender systems used by different commercial platforms (e.g., YouTube, TikTok, Amazon). In such environments, explanations cannot assume AI knowledge, as the majority of explainees are lay users. Moreover, different types of users interact with such systems - the stakeholders. These stakeholders consist of every individual or group who affects, or is affected by, the delivery of recommendations to users [1]. Stakeholders can be strongly diverse, coming from different backgrounds and having distinct expertise. As such, the way in which an explanation is conveyed to each stakeholder individually should be fine-tuned to their specific needs.
One field where such fine-tuned explanations are especially crucial is recruitment. Recruitment is inherently a multi-stakeholder domain, as users (candidates) need to be linked to vacancies (provided by companies) by recruiters. These three main stakeholders all rely on the same recommendations but can require widely different explanations. For example, telling a candidate that a vacancy is relevant for them as it comes with a high salary can be an acceptable explanation. However, the same explanation will be useless for the company, as that salary will be provided to every other potential candidate. Furthermore, a candidate and a recruiter might only look at a handful of recommendations per session, while a company could receive hundreds of applicants for a single vacancy. Therefore, the explanation requirements of each stakeholder are unique and require a tailored design.
This paper attempts to determine the explanation preferences of the stakeholders of a job recommender system: job seekers, companies, and recruiters. This is done through the execution of a co-design study, which allows stakeholder representatives to manually indicate how they prefer an explanation to be presented to them. Therefore, this research aims to answer the following research question:
**RQ:**: _What are the explanation preferences of recruiters, candidates, and company representatives for job recommender systems?_
Our results show interesting differences in the preferences of the different stakeholders. Regarding the preferred types of explanations, _candidates_ preferred brief written explanations, as their main interest is to be able to quickly judge the potential matches proposed by the system. On the contrary, company's _hiring managers_ preferred visual, graph-based explanations, as these allow a comprehensive overview at a glance. Finally, _recruiters_ preferred more exhaustive textual explanations, as those provided them with more talking points to convince both parties of the match. These results allow us to provide design guidelines for an interface that fulfills the requirements of all three stakeholder types. Furthermore, the co-design study allowed us to validate and improve the used interview guide.
## 2 Related work
Within the field of explainable AI, there is no single agreed-upon method to provide explanations [2]. Different use cases require different approaches, each with their own strengths and weaknesses.
One of the most common methods of providing explanations is through text [24, 5]. Textual explanations consist of brief sections of text that explain the rationale of the XAI model. Such texts often contain information on the impact different features had on the prediction and how those features interacted with each other. There are multiple ways to generate such texts, e.g., through the use of large language models (LLMs) [19] or predefined templates [36].
Another popular approach is the use of feature attribution maps: visualizations that show the importance of different features to the prediction [23]. Such maps can take different forms, depending on the specific task and data involved. With tabular data, bar charts are often used to show the contribution of each feature type to the prediction. When multi-dimensional data, such as images or time series, are used, heatmaps can provide an overview of the importance of the different dimensions interacting with each other [9].
A further explanation type that has been gaining popularity recently, is the knowledge graph-based explanation [31]. These explanations depend on the connections within a knowledge graph to explain the rationale behind a prediction. This is usually done by highlighting important nodes and edges within the graph, which provide 'paths' from the subject to the recommended item, accompanied by their importance to the model's prediction [35].
### Challenges in multi-stakeholder explainability
In multi-stakeholder environments, explanations need to meet additional requirements [1]. An explanation that is sufficient for a developer, is not necessarily understandable for a user or provider, and vice versa [30]. There are multiple strategies to deal with this discrepancy, each with its own strengths and weaknesses. The most obvious solution is to create individual explanations for the different stakeholders [37]. Although this leads to the most fine-tuned explanations, it introduces an additional layer of complexity to the system as a whole. Another approach would be to simply use a single explanation, but to present it differently based on the stakeholders' level of expertise [1]. Unfortunately, it can be difficult to incorporate the different stakeholder perspectives simultaneously - some facts could be confidential or sensitive for a specific stakeholder, making it challenging to incorporate them in the explanation, even when they are relevant. Similarly, a highly specific overview of how the model came to the prediction might be useful for a developer, but will be too confusing for a lay user or provider.
### Explainability in job recommender systems
Explaining reciprocal recommendations, such as job recommendations, tends to be more difficult than standard recommendations, as the preferences of both |
2309.11572 | Architecture Knowledge Representation and Communication Industry Survey | Background: The literature offers various methods for capturing software
architectural knowledge (AK), including views, viewpoints, and architecture
decision records (ADRs). In parallel, sustainability has gained prominence in
software engineering, especially concerning software architecture.
Nevertheless, practical industry reviews on these subjects seem to be lacking.
Aim: In this research we aim to understand the current practice in architecture
knowledge, and to explore where sustainability can be applied to address
sustainability in software architecture in the future. Method: We used a
survey, which utilized a questionnaire containing 34 questions and collected
responses from 45 architects working at a prominent bank in the Netherlands,
aimed to evaluate the practical representation and communication of
architectural knowledge and sustainability. Result: Our analysis yielded two
primary discoveries and several intriguing detailed results regarding how AK is
captured and conveyed to diverse stakeholders. Firstly, it seems crucial to
develop a new architectural element that connects various architectural
features and perspectives tailored for different stakeholders. Secondly,
providing clear guidance, references, and goals is essential to motivate
architects to adopt Sustainable Software Engineering practices. Conclusion:
After analysing the data collected through this survey, we have concluded that:
a) There are no established domain-specific AK methods/tools in the financial
domain. Most practitioners use domain-generic tools. b) A new architectural
element that links the various architectural features and viewpoints created
for various stakeholders appears to be necessary. c) There is sufficient
sustainability awareness and motivation among software architects. However,
what they lack are clear guidance, references, and goals to practice
sustainable software engineering. | Haben Birhane Gebreweld | 2023-09-20T18:17:16Z | http://arxiv.org/abs/2309.11572v1 | # Architecture Knowledge Representation and Communication Industry Survey
###### Abstract.
_Background:_ The literature presents several approaches, such as views, viewpoint, and architecture decision records (ADRs), to describe software architectural knowledge (AK). On the other hand, sustainability is a subject that is receiving increasing attention in software engineering, particularly in relation to software architecture. However, there appears to be a lack of industry reviews on these topics from a practical perspective.
_Aim:_ In this research we aim to understand the current practice in architecture knowledge, and to explore where sustainability can be applied to address sustainability in software architecture in the future.
_Method:_ We used a survey, which utilized a questionnaire containing 34 questions and collected responses from 45 architects working at a prominent bank in the Netherlands, aimed to evaluate the practical representation and communication of architectural knowledge and sustainability.
_Result:_ Our analysis yielded two primary discoveries and several intriguing detailed results regarding how AK is captured and conveyed to diverse stakeholders. The report aims to communicate two essential messages to guide future research in the field. Firstly, it seems crucial to develop a new architectural element that connects various architectural features and perspectives tailored for different stakeholders. Secondly, providing clear guidance, references, and goals is essential to motivate architects to adopt Sustainable Software Engineering practices.
_Conclusion:_ After analysing the data collected through this survey, we have concluded that: **a)** There are no established domain-specific AK methods/tools in the financial domain. Most practitioners use domain-generic tools. **b)** A new architectural element that links the various architectural features and viewpoints created for various stakeholders appears to be necessary. **c)** There is sufficient sustainability awareness and motivation among software architects. However, what they lack are clear guidance, references, and goals to practice sustainable software engineering.
Software Engineering, Architecture Knowledge, Sustainability, Empirical Experiment
## 1. Introduction
Software architectural knowledge refers to the knowledge acquired while designing a software architecture, encompassing the assumptions, decisions, context, and other factors involved in that process (Birhane Gebreweld, 2022). Various approaches have been developed in both the literature and industry to depict this knowledge, such as views and viewpoints (Birhane Gebreweld, 2022), architecture decision records (ADRs) (Birhane Gebreweld, 2022), and standards like ISO 42010 (Birhane Gebreweld, 2022) and the C4 Model (Birhane Gebreweld, 2022). However, for this knowledge to be effective, it is important that all relevant stakeholders share information about the architecture and how it is represented. The way this information is communicated depends on the organizational structures involved and can take various forms, such as wikis, workshops, emails, etc. Understanding how architectural knowledge is represented and communicated in professional practice is important to identify appropriate relationships that address sustainability elements in software architecture. By studying how this knowledge is represented and shared, we can gain insights into best practices for ensuring that this knowledge is effectively communicated and can be used to make informed decisions about the sustainability of software architecture.
As researchers, we develop intriguing methods and techniques for managing architectural knowledge, while practitioners have their own preferred means of capturing and sharing architectural information. To ensure that our methods do not end up as unused 'silver bullets' that practitioners never adopt, it is crucial to conduct industry reviews. By building upon existing industry practices and filling in any missing pieces, we can develop effective and useful methods that practitioners will embrace.
The purpose of this research is to gain insight into the current practices related to architecture knowledge and explore how sustainability can be integrated into software architecture in the future. Our objective is to characterize architecture knowledge and sustainability from the perspective of software architects, specifically with regards to representation and communication in the professional practice context. To achieve this, we conducted a questionnaire survey and gathered responses from architects working at a prominent bank in the Netherlands about their experiences in the industry, focusing on how they represent and communicate architectural
knowledge and sustainability. Regarding scientific contributions, as far as we are aware, our study is the first of its kind to explore how software architects perceive and utilize sustainability aspects within software architecture in an industrial context, along with examining architectural knowledge representation and communication.
This study offers several significant contributions, including:
* A practical review of architectural knowledge representation and communication techniques utilized in the industry.
* An assessment of how practitioners approach representing and communicating sustainability aspects in software architecture.
* This study presents a particular collection of AK representation and communication techniques utilized by software architects who work in the financial industry.
The paper is structured in the following manner. In section 2, we review previous studies that have explored the relationship between architectural knowledge and sustainability in software architecture. Section 3 outlines how we designed and executed the survey. We present a summary of the survey results in section 4, and in section 5, we provide a detailed analysis of the findings. This analysis aims to make sense of the results and convey the main insights gained from the study. Finally, in section 6, we discuss the threats to validity of our study, and we provide our conclusions in section 7.
## 2. Related Work
There is a wide range of architectural modeling languages available, but it is unclear whether they are capable of adequately describing software architecture to meet users' requirements. Additionally, the strengths, limitations, and needs of these languages are uncertain, creating a gap between what is offered and what users need. Malavolta et al. (2019) aimed to address this gap by surveying 48 practitioners from 40 IT businesses in 15 countries in order to plan for the next generation of these languages. The study examined the perceived benefits, drawbacks, and requirements related to existing languages. While practitioners were generally satisfied with the design capabilities of the architectural languages they employed, the study revealed dissatisfaction with the languages' analysis features and their capacity to describe extra-functional attributes. Moreover, the study found that the use of architectural languages in practice was mostly influenced by industry development rather than academic research, and that there was a need for a more formal and practical architectural language. Our research shares similarities with the aforementioned study, as we, too, are investigating how architectural knowledge is represented from the perspective of industry practitioners. To achieve this, we conducted a survey among software architects working for a leading bank in the Netherlands. Our study is distinct in that it delves into sustainability and the communication of architectural knowledge among stakeholders. In addition to exploring a specific domain (the financial domain), we go beyond the use of architecture description languages and investigate how architects communicate and share their knowledge.
Despite a significant amount of research and the development of various models and tools, the widespread adoption of Architectural Knowledge Management (AKM) in software companies is lacking due to the cost of capturing AK (Capilla et al., 2018). Determining what the industry needs from AK to get through this barrier, and identifying the advantages and disadvantages of current AK techniques, is therefore necessary. Capilla et al. (2018) undertook an informal retrospective analysis based on their prior work as researchers and proponents of numerous AK research methodologies in order to address this. By conducting a series of interviews with various software businesses, they also looked into the trends and problems for a future research agenda to support the usage of AK in contemporary software development methods, arriving at several interesting observations. Our study parallels this research in that we also look into the tools and techniques practitioners use to capture and communicate architectural knowledge, which helps us understand current trends in the industry. In contrast to the aforementioned study, however, our research focuses keenly on comprehending how software architects represent and communicate both architectural knowledge and sustainability.
While secondary research studies have been conducted on sustainability in software engineering, none have particularly addressed software architecture. The systematic mapping study by Andrikopoulos et al. (2019) seeks to fill this research gap by exploring the confluence between sustainability and software architecture. The study's findings showed that current studies have neglected the holistic viewpoint required to resolve such a complex issue by placing excessive emphasis on particular sustainability-related dimensions. To develop the maturity of the field, more reflective research studies and improved coverage of the architecting life cycle activities are required. The study proposes a research agenda for sustainability-aware software architecture based on these findings. Our research is similar in that we also aim to explore the incorporation of sustainability aspects into software architecture. However, our study takes a unique approach by focusing on how sustainability aspects of software can be effectively represented and communicated from the perspective of software architecture practitioners, through an industry survey with software architects.
## 3. Study Design and Execution
By conducting this study, we aim to provide software architects and the research community with a useful evaluation of how Architecture Knowledge (AK) and Sustainability are represented and communicated from a practical point of view. Our research objective is formally characterized by employing the structure proposed by Basili et al. (2019) as follows.
\begin{tabular}{p{142.3pt} p{142.3pt}} _Analyze_ & Architecture Knowledge \& Sustainability \\ _For the purpose of_ & Characterizing \\ _With respect to_ & Representation \& Communication \\ _From the point of view of_ & Software Architects \\ _In the context of_ & Professional practice in the financial industry \\ \end{tabular}
Therefore, the present study defines the primary research questions (RQs) and their corresponding sub-research questions as follows:
* **RQ1:** _How is software architecture knowledge represented and communicated in practice?_
When addressing _RQ1_, we investigate and evaluate the entire industrial context of how AK is represented and communicated. In Section 4, we provide additional information on the tools, techniques, standards, and documentation utilized by software architects in the industry. This helps us understand the industrial structure of AK representation and communication, which we can exploit to integrate sustainability elements into software architecture in the future.
* **RQ1.1:** _How is software architecture knowledge represented in the financial domain?_
Our goal through _RQ1.1_ is to achieve a more comprehensive understanding of the frequently utilized and advantageous AK elements in the financial domain. Moreover, we intend to gain insight into the tools utilized in the financial industry to supplement the AK representation component of _RQ1_.
* **RQ1.2:** _How is software architecture knowledge communicated in the financial domain?_
Similar to _RQ1.1_, our objective with _RQ1.2_ is to gain an understanding of the tools and techniques utilized by practitioners in the financial industry to communicate AK effectively.
* **RQ1.3:** _What architecture knowledge methods are domain-specific or domain-generic?_
Our aim with _RQ1.3_ is twofold: to identify any architecture knowledge methods that are specific to the financial sector and to identify any domain-generic methods that can be applied to the financial industry.
* **RQ2:** _How can sustainability aspects be represented and communicated in software architecture?_
Through _RQ2_, we aim to gather insights from software architects on how they would incorporate sustainability aspects into their daily work and software architecture. Given their wealth of expertise, we aim to challenge software architects to provide possible ways for integrating sustainability into software architecture.
We follow a three-phase research methodology to answer these RQs. Figure 1 shows an overview, which is expanded upon below.
In **Step (1)**, we design the survey. The first step is to identify the research questions and map them to a set of questions that will be used to gather information from software architects about their industry practices. To accomplish this, we include a variety of questions, including commonly used ones from the literature that pertain to demographics and architectural knowledge. This initial step is crucial because it impacts the quality of the information we obtain from the population. Upon completion, we have a comprehensive survey to be distributed to the sizable community of software architects at one of the largest banks in the Netherlands.
We ask respondents **34 questions** in this survey, with 91% of them being open-ended, allowing respondents to candidly express their unique experiences. Figure 2 depicts the flow of the survey's eight question blocks. The blocks in Figure 2 are labeled with the research questions they intend to answer, except for the consent, demographics, and conclusion blocks. The following blocks are included:
**Consent:** We explain the purpose of the study, which is to understand current practice in architecture knowledge and to explore where sustainability can be applied in software architecture in the future. We include the researchers' contact details as well as the confidentiality policy.
**Demographics (Q1.1-Q1.4):** This section aims to learn more about the participants by probing their professional backgrounds, particularly their involvement in software projects (see Figure 3), their experience in their present organizations, and their specific roles or positions within the organization (see Figure 4).
**AK in Practice (Q2.1-Q2.5):** We start this part with a formal explanation of our interpretation of architectural knowledge (AK)1 to avoid any misunderstandings, while acknowledging that there may be different ways to understand AK. In this section, we ask participants about the kinds of AK they document and retain, the AK components they believe are missing,
Figure 1. Study Design.
and the AK elements they consider to be the most valuable. Our objective in this section is to comprehend how participants are representing and communicating AK.
**AK representation in the financial domain (Q3.1-Q3.4):** In a similar manner, we begin this section by providing a formal description of our interpretation of AK representation2. We ask participants about the notations, languages, methods, and tools they use to capture AK in general, and specifically in their most recent project, as well as the architectural documentation they find most useful. As we specifically target architects working in a bank, our goal is to understand how AK is represented in the financial domain.
Footnote 2: _Architecture Knowledge (AK) Representation_ is defined as capturing and preserving AK in a particular form (e.g., diagrams, PowerPoint, views, viewpoints, or principles)
**AK communication in Practice (Q4.1-Q4.3):** In this section, similar to the one above, we provide a description of what is meant by AK communication3. We also ask the participants about the stakeholders involved in their roles, as well as the tools and methods they use to communicate with different stakeholders. By doing so, we gather first-hand information on how AK is communicated in practice.
Footnote 3: _Architecture Knowledge (AK) Communication_ describes how the knowledge is shared among the involved stakeholders (e.g., via workshops or corporate sharing platforms)
**Domain-Specific vs Domain-Generic AK (Q5.1-Q5.3):** We ask the participants about their familiarity with AK methods that are unique to their specific business domain, as well as the regulations they keep in mind while representing or communicating AK within their domain. Our goal is to distinguish between AK tools and methods that are specific to their business domain and those that have a more general purpose.
**Sustainability aspects within software architecture (Q6.1-Q6.5):** In this section of the survey, our goal is to explore how software practitioners incorporate sustainability aspects into their architectural decisions. To achieve this, we have included a series of questions designed to better understand the participant's perception of IT sustainability. We begin by asking what the concept of IT sustainability means to the participant. Following this, we ask architects whether they consider sustainability aspects during their work, and depending on their response, we delve further to understand which aspects they consider or the reasons for not considering them. We also ask whether participants are aware of any IT sustainability targets set by their company or department, and if they integrate sustainability aspects into their daily work. Through these questions, we seek to gain insights into how architects interpret sustainability, both generally and specifically in the context of software architecture.
**Survey Conclusion (Q7.1-Q7.2):** Participants are encouraged to provide any additional information they feel is important in this section. Specifically, we inquire about what they believe is necessary to accurately represent and communicate AK, and whether they have any comments about the study itself. These questions are designed to capture any issues that may be of significance to participants but have not been addressed in the survey.
In **Step (2)**, we first conduct a pilot survey with a small group of research and industry experts to check the quality of the survey and eliminate possible pitfalls and unclear aspects before reaching out to the main survey population. This step generates feedback that is incorporated into the original survey design. Then, we conduct the survey with the main population, consisting of software architects working for a leading bank in the Netherlands. The objective is to produce a set of architecture and sustainability representation and communication (ARC) techniques used in industry. To **determine the main population** of the survey and ensure it was conducted effectively, we first established the objective of the study, which was to conduct a practical review of architectural knowledge and how it is represented and communicated in the financial industry. After analyzing the possible population for our survey, we determined that software architects would be the most suitable participants. This is because they possess extensive knowledge about AK and its application in the industry (see Figure 3), and because the organization already has architects in different software architecture roles with decades of experience (see Figure 4).
Next, we reached out to the identified population using a two-fold approach. The first approach involved obtaining a mailing list of software architects within the organization, while the second involved asking team leads of different structures within the organization to provide us with a list of architects in their departments. By consolidating the information from these two sources,
Figure 2. Survey Questionnaire Flow
we generated a draft list of 145 architects. After eliminating redundant names and removing the architects reserved for an extensive interview, we arrived at a set of 124 architects. Through these two steps, we tried, to the best of our abilities, to reach out to all software architects working in the bank. However, we did not take any further steps to verify that we had indeed reached all architects.
The survey was conducted using **Qualtrics4** and designed to be anonymous to alleviate concerns about judgment or consequences. We reached out to the 124 architects via email and received 45 (39%) survey responses, with eight indicating they no longer worked at the company and 69 not replying.
Footnote 4: [https://www.qualtrics.com/](https://www.qualtrics.com/)
We compiled all recorded responses into a spreadsheet where each column represents a specific question and each row represents a participant's response. We decided to make all questions optional, except for the demographic and consent questions: the consent questions are crucial to obtain legal permission from participants to record their responses, and the demographic data is essential for interpreting the remaining optional questions. As a result, there are variations in the responses we received from each participant, as some attempted to answer all questions, while others chose to skip questions they did not want to answer.
We analyzed the responses given to each question to find trends and prevailing viewpoints about the specific topic raised by the question. To facilitate analysis, we divided the questions into two categories: closed-ended and open-ended. For closed-ended questions, we simply counted the number of occurrences. For open-ended questions, we categorized responses by interpreting each response's intent (the complete analysis of the responses is available in the replication package5). We created these categories using information from various sources, including the literature (such as the dimensions of sustainability), the study question, and similarities in the responses' intended purposes (see Table 6). Our main objective was to encode open-ended responses so that quantitative and conventional statistical analyses (Bradley et al., 2015) could be applied to the data; a minimal sketch of this encoding step is shown after the footnotes below.
Footnote 5: [https://docs.google.com/spreadsheets/d/1KIMtIwXGCASJXzWJIRGAGwDrIntbyrH_p/edit/tusp-share_link:kouid-117224618040701995271&rtpfpf-true&sd-true](https://docs.google.com/spreadsheets/d/1KIMtIwXGCASJXzWJIRGAGwDrIntbyrH_p/edit/tusp-share_link:kouid-117224618040701995271&rtpfpf-true&sd-true)
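To make the encoding step concrete, the following is a minimal sketch of how open-ended answers can be mapped to categories and counted for quantitative analysis. The category names, keyword lists, and sample responses here are illustrative assumptions only; in the study itself the coding was done manually by interpreting each response's intent.

```python
from collections import Counter

# Hypothetical category keywords, distilled (for illustration) from the
# literature and from similarities in the responses' intent.
CATEGORIES = {
    "communication & bridge": ["stakeholder", "views", "communicat"],
    "context & status": ["context", "current state", "status"],
    "sustainability aspects": ["sustainab", "energy", "green"],
}

def code_response(response: str) -> str:
    """Assign a free-text answer to the first category whose keyword matches."""
    text = response.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"

# Illustrative answers, loosely modeled on the survey data.
responses = [
    "We tend to speak our own language and not the language of stakeholders.",
    "I miss elements that give the context and status of the architecture.",
    "Nothing about energy consumption is captured today.",
]

# Count how often each category occurs and report percentages.
counts = Counter(code_response(r) for r in responses)
for category, n in counts.most_common():
    print(f"{category}: {n} ({n / len(responses):.0%})")
```

Keyword matching like this is only a rough proxy for manual interpretation, but once responses are encoded into categories, the simple occurrence counts above are exactly what conventional statistical analysis operates on.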
In **Step (3)**, the final step, we reflect on the results to address our core research questions. Our objective is to summarize the methods for architectural representation and communication that are currently used, as well as how architects address sustainability issues when designing software. We end by summarizing the takeaways from this practical review.
Based on the data presented in Figure 3, it is evident that **94%** of the survey participants have been engaged in software projects for a minimum of **10 years**, with their experience ranging from 10 to 41 years. This suggests that the results of our study were derived from experts with extensive and valuable experience gained over long industrial careers. Only **two** of the respondents reported having fewer than 10 years of experience, with 7 and 9 years respectively.
We were fortunate to have received participation from a diverse group of architects in the bank, encompassing a broad range of roles. This enabled us to gain insights into software architecture from various levels of abstraction, as well as the experiences of different stakeholders. As illustrated in Figure 4, our participants spanned 7 different architectural roles, with the majority of them being Domain Architects (37%) and IT Architects (26%). The next most frequent roles were Solution Architects (11%) and Enterprise Architects (8%).
## 4. Results
This section presents the main results we inferred from the data6 we gathered, following the five question blocks **Q2** to **Q6** and the survey structure specified in Section 3.
Footnote 6: Raw Data of Survey: [https://bit.ly/3EWC2H3](https://bit.ly/3EWC2H3)
**Table 1** displays the responses to questions in the **Q2 block**, which focuses on the practical application of AK. We initiated the block by asking participants, "What does AK mean to you?" (Cheng et al., 2017). Most participants provided definitions that complemented the formal definition we provided. For instance, one participant explained that _"AK is a group of elements, such as problem statements, triggers to have a solution, reasons for creating a new solution, scope, requirements, assumptions, current state architecture, transition state, and target state. It involves defining and registering exceptions/deviations to deliver a solution building block."_ However, a few participants shared unique perspectives. One stated that _"AK is also about knowing what kind of architectural artifacts (e.g., future state architectures, current state architecture, guidelines, standards) exist in the organization and identifying any missing artifacts. But most importantly, it involves interpreting and using them correctly."_
We present the results for the **Q3 block**, which pertains to the practical representation of AK, in **Table 2**. **Table 3** displays the results of the **Q4 block**, which examines AK communication in practice. We began by asking participants about the stakeholders with whom they need to communicate AK in their present position. The stakeholders that participants engage with, in addition to their peers, differ depending on their current role. For example, business architects communicate with business and IT management, business change teams, and IT delivery teams. Domain architects, on the other hand, engage with product owners, enterprise architects, principal architects, developers, business analysts, IT leads, the Security CISO, the Design Authority, and external vendors. IT architects communicate with the Grid Owner, Product Owner, Business Analyst, Enterprise Architects, and Domain Architects.
The results for the **Q5 block**, which pertain to domain-specific versus domain-generic AK, are presented in **Table 4**. Finally, we summarize the results for the **Q6 block**, which concerns sustainability aspects within software architecture, in **Table 5**.
## 5. Discussion
We conducted a survey with software architects in one of the leading banks in the Netherlands and identified two main findings, along with a list of detailed results from analyzing each cluster of questions designed to address our research questions, as discussed in
Section 3. Overall, the survey yielded valuable insights into the representation and communication of software architecture knowledge in practice. The findings are: (1) the need for a new architectural element that links the different features and viewpoints created for various stakeholders, and (2) the need for clear guidance, references, and goals to motivate architects to practice sustainable software engineering.
During the survey, we asked participants about the architectural elements they felt were missing in their work. This question proved revealing, eliciting many interesting responses, which we categorized and analyzed (see Table 6) to facilitate quantitative data analysis. Most participants identified the need for architectural elements that can facilitate communication and bridge different viewpoints and features for various stakeholders. For example, one participant stated, "_As architecture is about communication, with different views, we tend to develop views that are understood by architects (which gives grip on the architecture), but less meaningful for (other) stakeholders. Linking architecture views to (non-architectural) views of different stakeholders is now lacking. We tend to speak our own language and not the language of different stakeholders._" This view is not isolated: as shown in Figure 5, 8 out of 31 respondents (26%) shared similar views. Another participant expressed the desire for "more linkage to use non-architecture viewpoints" to better represent and communicate AK.
Figure 4. Current Roles of Participants within the bank.
_Where DA stands for domain architect, IA for IT architect, SA for solution architect, EA for enterprise architect, SWA for software architect, BA for business architect, and HA for hybrid cloud architect._
Figure 3. Survey Population Experience in Years
\begin{table}
\begin{tabular}{|p{34.1pt}|p{34.1pt}|} \hline Q2.2 & **Question:** What type of AK do you document and keep? Capilla et al.[5] \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** 26 out of 32 respondents (81\%) mentioned Solution Intent as the AK to keep and document. Meanwhile, Current State Architecture was mentioned by 10 participants (31\%), and Design Decisions were mentioned by 25\% of respondents. Future State Architecture was mentioned by 22\% of the participants. Five participants mentioned both Guidelines and Standards as the AK to capture and retain. Additionally, several other AK documents mentioned by the participants are worth noting, including High-level Design, Information Architecture, Architectural Landscape, Problem Statements, and Architectural Review Comments.} \\ \hline Q2.3 & **Question:** Do you capture AK consistently in all projects? Capilla et al.[5] \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** Of the 31 respondents to this question, 18 participants (58\%) answered ”Yes” while the remaining 13 individuals (42\%) answered ”No”. Among the 13 who answered ”No”, some specified their reasons, including 31\% who said that every project is different, 31\% who stated that AK is not required, 15\% who believed that it can sometimes be overkill, and others who mentioned that it is labor-intensive and that they don’t have enough time.} \\ \hline Q2.4 & **Question:** In your experience, what are the AK elements that you miss? \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** Of the 31 participants who answered this question, 14\% reported being satisfied with the existing AK elements. However, 26\% (8 individuals) identified AK elements that act as a means of communication and a bridge between diverse stakeholders as the ones they miss the most in their work, while 23\% pointed out that they miss AK elements that give the context and status of the architecture. Additionally, 13\% of participants mentioned missing AK elements related to Clarity, Detail, and Guidance, and 10\% mentioned missing elements related to Design Decisions. Finally, 6\% of participants missed AK elements related to Sustainability Aspects.} \\ \hline Q2.5 & **Question:** In your experience, what are the AK elements that you find particularly useful? \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** Out of the 31 participants who responded to the question, 12 (39\%) found AK elements related to Standards, References, and Guidelines to be the most beneficial. Furthermore, 20\% of participants each chose Architecture Model and Business Architecture as the most useful. Thirteen percent of participants found Solution Intent[7] to be the most beneficial, while 10\% each chose Design Decisions and the context \& status of the architecture as the most useful AK elements.} \\ \hline \end{tabular}
\end{table}
Table 1. Architecture Knowledge (AK) in practice
\begin{table}
\begin{tabular}{|p{34.1pt}|p{34.1pt}|} \hline Q3.1 & **Question:** Do you know any standard notation or language to capture AK? Capilla et al.[5] \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** Of the 31 participants who responded to the question, 90\% (28 participants) indicated ”Yes”. Among these 28 participants, 82\% (23 individuals) specified that they use ArchiMate to capture AK, while 5 participants (18\%) specified using UML. Additionally, 2 participants each (7\%) mentioned using Sparx EA, Draw.io, PowerPoint, and BPMN as their preferred languages and notations for capturing AK.} \\ \hline Q3.2 & **Question:** In your experience, what is the most useful architectural documentation? Capilla et al.[5] \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** Of the 31 participants who responded to this question, 9 individuals (29\%) identified ArchiMate as the most useful architecture documentation. Additionally, 5 participants each (16\%) mentioned Current State Architecture and Diagrams as being useful. Three participants each (10\%) identified Solution Intent, Views/Viewpoints, and PowerPoint as the most useful. Finally, 2 participants each (7\%) mentioned Design Decision Documents and Visio as being useful.} \\ \hline Q3.3 & **Question:** If you think about your last project, in what format was the knowledge stored? Capilla et al.[5] \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** 31 participants responded to the question on how they store AK, mentioning various formats and methods. ArchiMate, Word documents, and PowerPoint were equally popular, each named by 9 participants (29\%) as the format used to store AK in their last project. Solution Intent was mentioned by 8 participants (26\%), while Confluence was used by 6 participants (19\%).} \\ \hline Q3.4 & **Question:** What tools or methods do you use to capture and represent AK? \\ \hline \multicolumn{2}{|p{34.1pt}|}{**Answer:** Of the 31 respondents, 17 individuals (55\%) use ArchiMate, 15 (48\%) use PowerPoint, 14 (45\%) use Sparx EA, and 11 (35\%) use Confluence to capture and represent AK. Additionally, 10 participants use Microsoft Word and 9 use Visio for this purpose.} \\ \hline \end{tabular}
\end{table}
Table 2. Architecture Knowledge (AK) representation in financial domain
When asked what IT sustainability means to them, several participants demonstrated a comprehensive understanding, referring to at least two sustainability aspects or sustainable software engineering principles, as depicted in Figure 6. Moreover, other participants referred to various sustainability dimensions, indicating a high level of awareness on the subject.
Subsequently, we inquired whether participants were aware of their organization or department's sustainability targets, and the majority responded negatively, highlighting a lack of awareness in this regard. Nevertheless, when we asked whether they integrate sustainability aspects into their work, 75% of participants responded affirmatively, which appears to contradict their lack of awareness of sustainability targets. However, this discrepancy may be attributed to their advanced level of understanding of the concept.
Upon further investigation, most participants reported that they do not incorporate sustainability aspects into their daily work, citing a dearth of clear guidelines, references, criteria, and goals. Additionally, they expressed a need for more support in bridging the knowledge gap on how to implement sustainability aspects in their work. Overall, our study underscores the importance of the organization providing clear guidelines, references, criteria, and goals on sustainability aspects, so as to leverage the motivation and high level of sustainability awareness of its architects.
\begin{table}
\begin{tabular}{|p{28.5pt}|p{28.5pt}|} \hline Q5.1 & **Question:** Do you know certain methods for AK which are exclusively valid or applied to your business domain? \\ \cline{2-3} & **Answer:** Out of 31 respondents to this question, only 5 (16\%) responded “Yes”, specifying BPMN, TOGAF, SAFe, and Target State Design as AK exclusive to the finance area. \\ \cline{2-3} Q5.2 & **Question:** Can you think of any other AK methods that are general-purpose and that have not already been mentioned? \\ \cline{2-3} & **Answer:** In response to the question, 20 participants provided feedback. Of these, 11 participants (55\%) answered “No.” The participants who provided an answer to the specific inquiry mentioned the following frameworks: TOGAF, Agile Design Thinking, BPMN, Service Oriented Architecture, and Standardized Architecture Decision Records (ADRs). \\ \cline{2-3} Q5.3 & **Question:** In general, do you have to keep certain regulations in mind (e.g., GDPR, sustainability targets, etc.) while representing or communicating AK in your business domain? \\ \cline{2-3} & **Answer:** Out of 30 respondents to this question, 87\% (26 participants) disclosed that they consider certain regulations while representing and communicating AK. \\ \hline \end{tabular}
\end{table}
Table 4. Domain-specific vs domain-generic AK
\begin{table}
\begin{tabular}{|p{28.5pt}|p{28.5pt}|} \hline Q6.1 & **Question:** What is IT sustainability for you? \\ \cline{2-3} & **Answer:** Among the 23 individuals who responded to the question, 30\% (7 participants) demonstrated a comprehensive understanding by referring to at least two aspects of sustainability or software engineering principles. Specifically, 39\% (9 participants) associated IT sustainability with the environmental dimension, while 22\% (5 participants) focused on the technical dimensions. The remaining two participants understood IT sustainability in terms of its economic dimension and its ability to cope with changing environments across all domains. \\ \cline{2-3} Q6.2 & **Question:** Are you aware of any IT related sustainability targets or measures in your organization/department? \\ \cline{2-3} & **Answer:** Out of 28 respondents to this question, 18 (64\%) reported not being aware of any sustainability targets. The remaining 10 participants (36\%) reported having knowledge of some targets, with the most commonly mentioned being cloud-related policies and targets related to energy consumption, both at 33\%. \\ \cline{2-3} Q6.3 & **Question:** Do you consider sustainability aspects in your current role? \\ \cline{2-3} & **Answer:** Of the 28 participants who responded to the question, 75\% (21 participants) said “Yes”. Out of these 21 participants, 48\% (10 participants) considered the technical aspect of sustainability, while 14\% (3 participants) considered the environmental aspect, and 9\% (2 participants) considered the economic aspect. Additionally, 14\% (3 participants) considered other aspects such as business and quality requirements, and adapting to changes. \\ \cline{2-3} Q6.4 & **Question:** If sustainability had to be addressed, how and where would you incorporate it into your daily work? \\ \cline{2-3} & **Answer:** Some participants suggested including sustainability as a quality attribute in the Solution Intent or providing guidance on what to assess. Others suggested integrating it into the business architecture, domain architecture, intake phase, data center, and patterns and designs. \\ \hline \end{tabular}
\end{table}
Table 5. Sustainability aspects of software architecture
\begin{table}
\end{table}
Table 3. Architecture Knowledge (AK) communication in practice
Two main research questions and three sub-questions served as the framework for the survey reported here. The following provides a summary of the findings.

RQ1 _How is software architecture knowledge represented and communicated in practice?_

During our research, we discovered that architecture knowledge (AK) is communicated and represented through various documentation and artifacts such as Solution Intent, Current State Architecture, Design Decisions, and Guidelines. However, not all projects consistently capture AK, and some participants mentioned missing AK elements related to communication, context and status, design decisions, and sustainability aspects. On the other hand, participants found AK elements related to standards, references, guidelines,
\begin{table}
\begin{tabular}{|p{142.3pt}|p{142.3pt}|p{142.3pt}|p{142.3pt}|} \hline \multicolumn{4}{|c|}{**Q.2.4 In your experience, what are the AK elements that you miss?**} \\ \hline
**Category** & **Description** & **Response Example** & **No** & **(\%)** \\ \hline Communication \& Bridge & AK elements that serve as a bridge between various view/viewpoints intended for different stakeholders and enable effective communication to provide a comprehensive understanding of the architecture. & _“As architecture is about communication, with different views, we tend to develop views that are understood by archicets (which gives grip on the architecture), but less meaningful for (other) stakeholders, linking architecture views to (non architectural) views of different stakeholders is now lacking. We tend to speak our own language and not the language of different stakeholders.”_ & _“Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _“Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _“Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _“_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _“_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _“_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _“_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _“_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _“_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _“_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _“_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. 
So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. 
So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _”_Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. 
So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is implemented in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that that is implemented in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. 
So you can validate that that is implemented in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that is implemented in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented is in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that is implemented in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that is implemented in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that are implemented in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that is implemented in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that is implemented in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that that is implemented in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that what is implemented in line with the architectural guidelines.”_ & _Enterprise level Architectural Fitness Function, that are functions that can be added to the pipelines. So you can validate that that is implemented in line with the architectural
architecture models, and business architecture to be particularly useful.
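One concrete suggestion from Table 6, the "architectural fitness function" added to a delivery pipeline, can be made tangible with a small sketch. The following is a minimal, hypothetical example; the layering rule, directory layout, and module names are our own assumptions for illustration, not the participant's: a check that fails a CI run when code in a `domain` package imports from an `infrastructure` package.

```python
import ast
import pathlib
import sys

# Hypothetical layering rule: files in a "domain" directory must not
# import anything from the "infrastructure" package.
FORBIDDEN = {"domain": {"infrastructure"}}

def violations(src_root: str):
    """Yield a message for every import that breaks the layering rule."""
    for path in pathlib.Path(src_root).rglob("*.py"):
        layer = path.parent.name            # directory the file lives in
        banned = FORBIDDEN.get(layer, set())
        if not banned:
            continue
        tree = ast.parse(path.read_text(encoding="utf-8"))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                names = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                names = [node.module or ""]
            else:
                continue
            for name in names:
                if name.split(".")[0] in banned:
                    yield f"{path}: '{layer}' code imports '{name}'"

if __name__ == "__main__":
    problems = list(violations("src"))
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)  # non-zero exit fails the pipeline step
```

Run as a pipeline step, such a check turns an architectural guideline from passive documentation into an executable constraint, which is precisely the gap the participant's quote points at.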
RQ1.1 _How is software architecture knowledge represented in the financial domain?_
Based on the responses, it can be concluded that standard notations or languages, such as ArchiMate and UML, are commonly used to capture and represent software architecture knowledge in the financial domain. ArchiMate was reported as the most commonly used notation. Various tools, including PowerPoint, Sparx EA, Confluence, Word documents, and Visio, were identified as useful for capturing and representing architecture knowledge. The most useful architectural documentation included ArchiMate, current state architecture and diagrams, solution intent, views/viewpoints, and PowerPoint.
RQ1.2 _How is software architecture knowledge communicated in the financial domain?_
The findings suggest that software architecture knowledge in the financial domain is communicated through various methods and tools, including written documents, meetings, presentations, email, and workshops. The choice of communication method may depend on the stakeholder, the level of involvement, and the complexity of the information. Overall, PowerPoint is the most commonly used tool for sharing and communicating AK, followed by email and Confluence. Meetings and documents are also frequently used, with some participants reporting the use of workshops. However, it is important to note that the specific methods and tools for communicating software architecture knowledge may differ depending on the industry or domain.
RQ1.3 _What architecture knowledge methods are domain-specific or domain-generic?_
Architecture knowledge (AK) methods are employed in practice through a combination of domain-specific and domain-generic approaches. While most respondents did not identify any AK elements specific to the finance domain, a few mentioned techniques such as BPMN, TOGAF, SAFe, and Target State Design as being exclusive to finance. However, it is worth noting that TOGAF and BPMN are not solely utilized in finance. Only a few general-purpose AK frameworks, such as Agile Design Thinking, Service Oriented Architecture, and Standardized Architecture Decision Records (ADRs), were mentioned. This suggests that there may be a lack of awareness of domain-specific AK or that many methods are considered applicable across different domains. Notably, 84% of respondents did not mention any domain-specific AK, highlighting the need for further exploration of the AK methods unique to specific domains.
RQ2 _How can sustainability aspects be represented and communicated in software architecture?_
To incorporate sustainability aspects into software architecture, clear guidelines, references, and goals are required to capture these aspects in daily work. Participants suggest integrating sustainability into the business and domain architecture, intake phase, data center, and patterns and designs. Quality attributes in the Solution Intent can also be used to represent sustainability aspects. However, a comprehensive understanding of the various dimensions and principles of sustainable software engineering is necessary for effective representation and communication. Providing clear guidance, references, and goals can motivate architects to practice sustainable software engineering and integrate sustainability into their daily work.
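As a concrete illustration of the participants' suggestion to treat sustainability as an explicit quality attribute in architectural documentation, the following sketch extends a Standardized Architecture Decision Record (ADR) with sustainability attributes. The record layout, field names, and example values are hypothetical assumptions for illustration; they do not reflect the bank's actual templates.

```python
from dataclasses import dataclass, field

@dataclass
class SustainabilityAttribute:
    """One explicit, assessable sustainability concern attached to a decision."""
    dimension: str   # e.g. "environmental", "technical", "economic"
    goal: str        # measurable target the decision must respect
    measure: str     # how the target is assessed

@dataclass
class ArchitectureDecisionRecord:
    """A standard ADR extended with first-class sustainability attributes."""
    title: str
    status: str
    context: str
    decision: str
    consequences: str
    sustainability: list[SustainabilityAttribute] = field(default_factory=list)

# Hypothetical example record.
adr = ArchitectureDecisionRecord(
    title="Move batch reporting to spot instances",
    status="accepted",
    context="Nightly reports run on permanently provisioned VMs.",
    decision="Schedule reporting jobs on spot capacity in off-peak hours.",
    consequences="Jobs must tolerate interruption and retry.",
    sustainability=[
        SustainabilityAttribute(
            dimension="environmental",
            goal="Reduce compute energy for reporting by 30%",
            measure="kWh per reporting run, tracked monthly",
        )
    ],
)
print(adr.title, "->", [a.dimension for a in adr.sustainability])
```

Making the sustainability dimension, goal, and measure first-class fields would give architects the clear criteria and goals the respondents asked for, inside an artifact they already produce.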
Figure 5. Architectural Elements missed by Architects
Figure 6. Architects understanding of Sustainable IT
## 6. Threats to Validity
While we made every effort to conduct a rigorous study, several limitations must be acknowledged. The _external validity_ of our study may be limited by the fact that we only targeted participants from a single organization, even though we were able to attract a decent number of participants with significant experience in software architecture. Although the organization we targeted is large and likely representative of the industry as a whole, the lack of diversity in our population may limit the generalizability of our findings. It may therefore be necessary to replicate our study with a more diverse sample to confirm the external validity of our results.
The _internal validity_ of a study can be threatened by various factors that impact the accuracy and reliability of its results. One potential threat to internal validity in a survey is the use of non-mandatory questions. In our study, we designed most of the questions to be non-mandatory to avoid obliging participants to answer questions they may not be qualified to answer or may have a distaste for. However, this design choice can impact the overall quality of the responses received, as participants may choose not to answer certain questions, resulting in missing data and potentially biased results. To address this threat, we took a careful approach to analyzing the survey responses. Rather than using the total recorded responses for each question, we only considered the number of respondents who answered each specific question. By doing this, we accounted for missing data and ensured that the responses analyzed for each question came only from participants who chose to answer that particular question. This approach allowed us to mitigate the potential impact of non-mandatory questions on the study's internal validity and to keep our results as accurate and reliable as possible.
_Construct validity_ is a critical aspect of any research study that seeks to examine and measure theoretical concepts or constructs. In our study, we aimed to explore the perception of software architecture and architectural knowledge related to sustainability aspects, and we focused on highly experienced software architects to gather insights. While software architects may be the ideal candidates to respond to questions related to software architecture, it can be challenging to determine the best approach for measuring and analyzing sustainability aspects in software architecture due to the lack of an established view on the combination of these two areas. As researchers, we made every effort to define the theoretical concepts and constructs we wished to study and to determine how to measure them in a valid and reliable way. However, the lack of consensus on the combination of sustainability and software architecture posed a significant challenge in this regard. Therefore, we opted to investigate how architects perceive sustainability concepts and where they might apply them to address sustainability in software architecture. This approach allowed us to explore the perceptions and perspectives of experienced software architects, even in the absence of a well-established theoretical framework for the combination of sustainability and software architecture. This construct validity threat must nevertheless be considered when interpreting our findings, and further research is needed to establish a more robust theoretical foundation for the study of sustainability in software architecture.
## 7. Conclusion and Future Work
This paper presents the findings of a survey we conducted on the representation and communication of architectural knowledge (AK) in practice. Our study targeted software architects working for a leading bank in the Netherlands with extensive industry experience in various architectural roles. Our analysis of the survey results yielded two main findings: the need for a new architectural element that links different features and viewpoints created for various stakeholders, and the need for clear guidance, references, and goals to motivate architects to practice sustainable software engineering. These findings offer valuable insights for future research in the field. We recommend further investigation into the development of this new architectural element and how it can be integrated into existing practices. Additionally, we suggest exploring ways to promote sustainable software engineering practices among architects through the establishment of clear guidance and goals. Our study highlights the importance of effective AK representation and communication in the software industry and the potential benefits of incorporating sustainable practices into architectural decision-making.
# CRIL: A Concurrent Reversible Intermediate Language

Shunya Oguchi, Shoji Yuen. 2023-09-13. http://arxiv.org/abs/2309.07310v1
###### Abstract
We present a reversible intermediate language with concurrency for translating a high-level concurrent programming language to another lower-level concurrent programming language, keeping reversibility. Intermediate languages are commonly used in compiling a source program to an object code program closer to the machine code, where an intermediate language enables behavioral analysis and optimization to be decomposed in steps. We propose CRIL (Concurrent Reversible Intermediate Language) as an extension of RIL used by Mogensen for a functional reversible language, incorporating a multi-thread process invocation and the synchronization primitives based on the P-V operations. We show that the operational semantics of CRIL enjoy the properties of reversibility, including the causal safety and causal liveness proposed by Lanese et al., checking the axiomatic properties. The operational semantics is defined by composing the bidirectional control flow with the dependency information on updating the memory, called _annotation DAG_. We show a simple example of 'airline ticketing' to illustrate how CRIL preserves the causality for reversibility in imperative programs with concurrency.
## 1 Introduction
Reversible programming languages have been proposed to describe reversible computation, where the control flows both forward and backward [25, 5, 24, 7]. They describe reversible computation directly and open new possibilities for software development, since reversibility preserves all information at any point of execution. In forward-only execution, a computation may overwrite parts of its intermediate history for efficiency unless they are used later. In analyzing the behavior, such as in debugging, it is common to replay the execution up to the point of interest to recreate the lost part of the history. For a concurrent program, replaying an execution is usually difficult, since updates to shared resources among multiple threads of control depend on the runtime environment.
Intermediate languages mediate the translation from the source language to a low-level machine language for execution. Step-by-step translation via intermediate languages is a common technique for optimization in compilers. The intermediate language in LLVM [15] is often used as a behavioral model for program analysis.
Mogensen uses RIL [17] as an intermediate language with reversibility for a functional reversible language in the memory usage analysis. RSSA [18] based on RIL is used for compiling and optimizing Janus programs [10, 4]. Reversibility with concurrency has been studied in process calculi [3, 21, 12, 11], in event structures [19, 20, 22, 16] and recently in programming languages such as Erlang [13] and a simple imperative programming language [7, 9].
We propose a reversible intermediate language CRIL by extending RIL. CRIL extends RIL by allowing multiple blocks to run concurrently and by adding synchronization primitives based on the P-V operations. In CRIL, concurrent blocks interact with each other via shared variables. To establish reversibility for concurrent programs, the causality among shared variables has to be preserved. Unlike sequential
reversible programs, even if one step of a program is reversible, the whole program is not reversible in general since shared variables may not be reversed correctly.
To make a CRIL program reversible, we give the operational semantics as a labeled transition system, \(\mathit{LTSI}_{CRIL}\), composed of an operational semantics with one-step reversibility and a data structure called an 'annotation DAG'. An annotation DAG accumulates the causality of memory updates in a forward execution and rolls the causality back to control the reversed flow in a backward execution. We show that \(\mathit{LTSI}_{CRIL}\) has the basic properties for reversibility proposed in [14]. Using the approach of [14], it is shown that \(\mathit{LTSI}_{CRIL}\) enjoys _Causal Safety_ and _Causal Liveness_, which are important in analyzing CRIL programs compositionally.
By translating a high-level programming language to CRIL, \(\mathit{LTSI}_{CRIL}\) works as a virtual machine, and its behavior is guaranteed to be reversible. CRIL enables fine-grained behavioral analysis such as optimization and reversible debugging. In section 4, we present a simple example of airline ticketing given in [6] to enable reversible debugging.
The paper is organized as follows. Section 2 presents the syntax of CRIL and the operational semantics for control flow. Section 3 introduces annotation DAG as a data structure to store the causality of updating memory. We define \(\mathit{LTSI}_{CRIL}\) as the operational semantics for CRIL and show the reversibility of \(\mathit{LTSI}_{CRIL}\), which is followed by the airline ticketing example in section 4. Section 5 presents concluding remarks.
## 2 CRIL
The syntax of CRIL is defined in figure 1. Following RIL [17], a CRIL program consists of an unordered set of basic blocks. Given a set of labels \(\mathcal{L}\), a block has an entry point followed by a block body and an exit point with labels. A block body is either a basic instruction or a call statement.
### Basic block
We assume all references to variables have a global scope and there exists a heap memory M indexed by integers, where M[x] denotes the \(x\)-th element of M. An expression \(e\) is either an arithmetic expression or a boolean expression with the usual operators +, -, ^, ==, !=, <, <=, >, >=, !, &&, || of the C language, where ^ is the bitwise exclusive OR operation. The boolean operators and logical connectives treat 0 as false and any non-0 value as true. An expression can contain integer constants, which are denoted by \(k\).
**Entry/exit point.** An entry/exit point of a basic block has one of the following forms:

| Entry point | Exit point |
| --- | --- |
| (1) \(l\) <- | (1') -> \(l\) |
| (2) \(l_{1};l_{2}\) <- \(e\) | (2') \(e\) -> \(l_{1};l_{2}\) |
| (3) begin \(l\) | (3') end \(l\) |
where \(l,l_{1},l_{2}\in\mathscr{L}\). We write \(\mathsf{entry}(b)\) for the entry point of a basic block \(b\), and \(\mathsf{exit}(b)\) for the exit point of a basic block \(b\).
The informal meaning of each item is explained as follows:
(1) and (1'): \(l\) <- receives the control at \(l\) unconditionally in a forward execution. In a backward execution, it sends the control to the block that receives the control at \(l\). -> \(l\) dually works in the reversed way of \(l\) <-.
(2) and (2'): \(l_{1};l_{2}\) <- \(e\) receives the control at \(l_{1}\) when \(e\) is evaluated to a non-0 value and at \(l_{2}\) otherwise in a forward execution. In a backward execution, it returns the control to the block that receives the control at \(l_{1}\) when \(e\) is evaluated to non-0 and at \(l_{2}\) otherwise. \(e\) -> \(l_{1};l_{2}\) dually works in the reversed way of \(l_{1};l_{2}\) <- \(e\).
(3) and (3'): begin \(l\) receives the control from the call statement labeled by \(l\) in a forward execution. In a backward execution, it returns the control to the statement labeled by \(l\). end \(l\) dually works in the reversed way of begin \(l\).
A basic block is either an instruction block or a call statement.
**Instruction block.** A basic instruction is in one of the following forms:

(1) _left_ \(\oplus\)= \(e\) &emsp; (3) V \(x\) &emsp; (5) assert \(e\)
(2) _left_\({}_{1}\) <-> _left_\({}_{2}\) &emsp; (4) P \(x\) &emsp; (6) skip

We write \(\mathsf{inst}(b)\) for the basic instruction in \(b\). The informal semantics is explained as follows:
(1): _left_ \(\oplus\)= \(e\) is an _update_ statement where _left_ is a left-value and \(\oplus\in\{+,-,\texttt{\^{}}\}\). _left_ is relatively updated by \(e\), where +=, -=, and ^= have the same semantics as in the C language. If _left_ \(=x\), \(x\) must not appear in \(e\). If _left_ \(=\) M[\(x\)], heap references must not appear in \(e\).
(2): _left_\({}_{1}\) <-> _left_\({}_{2}\) is an _exchange_ where _left_\({}_{1}\) and _left_\({}_{2}\) are left-values. It swaps the values specified by _left_\({}_{1}\) and _left_\({}_{2}\). The same variable must not appear on both sides of <->.
(3) and (4): V \(x\) and P \(x\) are the V and P operations for synchronization, which correspond to those commonly used in operating systems. We assume variables in P and V instructions only appear as the parameters of P and V. In a forward execution, V \(x\) is enabled when \(x\) is 0 and terminates with \(x\) being 1, and P \(x\) is enabled when \(x\) is 1 and terminates with \(x\) being 0. In a backward execution, V \(x\) and P \(x\) work as P \(x\) and V \(x\) of the forward execution, respectively.
(5): assert \(e\) aborts the execution if \(e\) evaluates to 0, and does nothing otherwise.
(6): skip does nothing in either direction.
We call \(\mathscr{R}=\mathit{Vars}\cup\{\mathbb{M}\}\)_memory resources_. Let \(\mathsf{Var}(E)\) be the set of memory resource references appearing in \(E\), where \(E\) is one of _entry_, _exit_, or _inst_ in the grammar of figure 1. For example, \(\mathsf{Var}(\mathsf{z-=M[x]+y})=\{\mathbb{M},\mathtt{x},\mathtt{y},\mathtt{ z}\}\). \(\mathsf{read}(b)\) is the memory resources that \(b\) uses, and \(\mathsf{write}(b)\) is the memory resources that \(b\) updates.
\[\mathsf{read}(b)=\mathsf{Var}(\mathsf{entry}(b))\cup\mathsf{Var}(\mathsf{inst}(b))\cup\mathsf{Var}(\mathsf{exit}(b))\]
\[\mathsf{write}(b)=\begin{cases}\{x\}&\text{if }\mathsf{inst}(b)=x\oplus\texttt{=}\,e\\ \{\mathbb{M}\}&\text{if }\mathsf{inst}(b)=\mathbb{M}[x]\oplus\texttt{=}\,e\\ \{x,y\}&\text{if }\mathsf{inst}(b)=x\,\texttt{<->}\,y\\ \{x,\mathbb{M}\}&\text{if }\mathsf{inst}(b)\in\{x\,\texttt{<->}\,\mathbb{M}[y],\ \mathbb{M}[y]\,\texttt{<->}\,x\}\\ \{\mathbb{M}\}&\text{if }\mathsf{inst}(b)=\mathbb{M}[x]\,\texttt{<->}\,\mathbb{M}[y]\\ \{x\}&\text{if }\mathsf{inst}(b)\in\{\mathtt{P}\ x,\ \mathtt{V}\ x\}\\ \varnothing&\text{otherwise.}\end{cases}\]
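To make these definitions concrete, the following is a small Python sketch; it is our illustration, not part of the paper or its artifact. The tuple encoding of instructions and all function names are our own assumptions, and the entry/exit contributions to \(\mathsf{read}(b)\) are omitted for brevity.

```python
# A sketch (ours, not the paper's artifact) of the read/write-set
# computation for basic instructions. Instructions are encoded as tuples,
# e.g. ("update", "M[x]", "+", ["y"]) for M[x] += y; "M" is the heap
# resource, and entry/exit contributions to read(b) are omitted.

def rw_sets(inst):
    """Return (read, write) memory-resource sets of a basic instruction."""
    target = lambda l: "M" if l.startswith("M[") else l
    index = lambda l: {l[2:-1]} if l.startswith("M[") else set()
    kind = inst[0]
    if kind == "update":                     # left ⊕= e
        _, left, _op, evars = inst
        return {target(left)} | index(left) | set(evars), {target(left)}
    if kind == "swap":                       # left1 <-> left2
        _, l1, l2 = inst
        write = {target(l1), target(l2)}
        return write | index(l1) | index(l2), write
    if kind in ("V", "P"):                   # synchronization on a variable
        return {inst[1]}, {inst[1]}
    if kind == "assert":                     # assert e reads, never writes
        return set(inst[1]), set()
    return set(), set()                      # skip

# z -= M[x] + y reads {z, M, x, y} and writes {z}:
print(rw_sets(("update", "z", "-", ["M", "x", "y"])))
```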
**Call statement.** A _call statement_ is a basic block whose block body is of the form

\[\mathsf{call}\ l_{1},\cdots,l_{n}\]

It invokes the process blocks labeled by \(l_{1},\cdots,l_{n}\) as subprocesses running concurrently, and the control passes to its exit point after all the subprocesses terminate.
### Basic operational semantics
The set of process identifiers \(\mathsf{PID}\) is \((\mathbb{N}_{+})^{*}\) where \(\mathbb{N}_{+}\) is the set of positive integers. \(p\in\mathsf{PID}\) denotes an identifier uniquely assigned to a process. When \(p\) executes a process block \(\mathsf{PB}(b,Pg)\), we also write \(\mathsf{PB}(p)\). If \(p\) is labeled by \(l\), \(\mathsf{PB}(p)=\mathsf{PB}(b,Pg)\) where \(\mathsf{entry}(b)=\mathsf{begin}\,l\). A special _root_ process has the identifier \(\varepsilon\). The runtime invokes the root process and sends the control to a process block labeled by \(\mathtt{main}\) to start an execution of a CRIL program. For a process \(p\), \(p\cdot i\) is assigned to the \(i\)-th subprocess invoked by a call statement of process \(p\). \(\preceq\) is the prefix relation. A process set \(PS\) is a set of process identifiers satisfying (1) \(\varepsilon\in PS\); (2) \(p\in PS\) implies \(p^{\prime}\in PS\) for \(p^{\prime}\preceq p\); and (3) \(p\cdot i\in PS\) implies \(p\cdot j\in PS\) for \(j<i\). For a process set \(PS\) and a process id \(p\), \(\mathsf{isleaf}(PS,p)\) holds if for all \(p^{\prime}\in PS\), \(p\preceq p^{\prime}\) implies \(p=p^{\prime}\).
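As an illustration, the following Python sketch (all names are ours) models pids as tuples of positive integers, with the prefix order \(\preceq\) and the \(\mathsf{isleaf}\) check used by the call rules.

```python
# A sketch (names ours) of process identifiers as tuples of positive
# integers: () is the root ε, and p·i is p + (i,).

def prefix(p, q):
    """p ⪯ q: p is a prefix of q."""
    return q[:len(p)] == p

def isleaf(ps, p):
    """p has no proper descendant in the process set ps."""
    return all(p == q for q in ps if prefix(p, q))

ps = {(), (1,), (2,), (3,)}   # ε and three active subprocesses
print(isleaf(ps, ()))         # False: ε must wait for its children
print(isleaf(ps, (1,)))       # True: process 1 may step
```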
A _process configuration_ is \((l,stage)\), where \(l\in\mathcal{L}\) and \(stage\in\{\mathsf{begin},\mathsf{run},\mathsf{end}\}\) give the location of the control in a process block. If \(stage=\mathsf{begin}\), the control is before executing the process block; if \(stage=\mathsf{run}\), it is executing the process block; and if \(stage=\mathsf{end}\), it has terminated the process block. \(\mathsf{PC}\) is the set of process configurations.
A _program configuration_ is \((Pg,\rho,\sigma,Pr)\), where \(Pg\) is the program (which never changes), \(\rho:\mathit{Vars}\rightarrow\mathbb{Z}\) maps a variable to its value, \(\sigma:\mathbb{N}\rightarrow\mathbb{Z}\) maps a heap memory address to its value. A _process map_\(Pr:\mathsf{PID}\rightarrow\mathsf{PC}\cup\{\bot\}\) maps a process to a process configuration. We assume \(Pr_{act}\) is a process set where \(Pr_{act}=\{p\in\mathsf{PID}|Pr(p)\in\mathsf{PC}\}\). \(\mathcal{C}\) is the set of all program configurations.
A transition relation over program configurations
\[(Pg,\rho,\sigma,Pr)\xrightleftharpoons[\mathrm{prog}]{p,Rd,Wt}(Pg,\rho^{\prime},\sigma^{\prime},Pr^{\prime})\]
is defined in figure 2. \((Pg,\rho,\sigma,Pr)\) steps forward to \((Pg,\rho^{\prime},\sigma^{\prime},Pr^{\prime})\) by the process \(p\), reading the memory resources \(Rd\) and updating the memory resources \(Wt\); and \((Pg,\rho^{\prime},\sigma^{\prime},Pr^{\prime})\) steps backward to \((Pg,\rho,\sigma,Pr)\) in the same way.
We explain the SOS rules in figure 2. **AssVar** and **AssArr** present the update behavior. The exchange behavior is presented by **SwapVarVar**, **SwapVarArr**, **SwapArrVar**, and **SwapArrArr**. **SwapVarArr** and **SwapArrVar** are reversible since \(y\) is evaluated to the same value on both sides of the transition. **SwapVarVar** and **SwapArrArr** are clearly reversible. **Skip** presents the skip behavior. **Assert** presents the assertion behavior, which stops when \(e\) is evaluated to \(0\).
**V-op** and **P-op** present the behavior of \(\mathbb{V}\)\(x\) and \(\mathbb{P}\)\(x\) for synchronization by \(x\) shared among concurrent processes. In forward execution, \(\mathbb{V}\)\(x\) sets \(x=1\) when \(x=0\), and waits otherwise. In backward execution, \(\mathbb{V}\)\(x\) sets \(x=0\) when \(x=1\), and waits otherwise. \(\mathbb{P}\) behaves in a symmetrical fashion. By the pair of \(\mathbb{V}\)\(x\) and \(\mathbb{P}\)\(x\), \(x\) can be used as a semaphore to implement the mutual exclusion for both directions of execution.
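The following minimal Python sketch (ours, with hypothetical naming) models the **V-op**/**P-op** rules: a step that is not enabled returns nothing, modelling the waiting behaviour, and reversing a V makes it behave as a forward P, and vice versa.

```python
# A sketch (ours) of the V-op/P-op rules: forward V flips x from 0 to 1
# and P from 1 to 0; the backward direction swaps the two roles.

def step(op, x, forward=True):
    """Return the new value of x, or None if the operation must wait."""
    if not forward:                       # backward: swap the two roles
        op = {"V": "P", "P": "V"}[op]
    if op == "V":
        return 1 if x == 0 else None      # V: 0 -> 1
    return 0 if x == 1 else None          # P: 1 -> 0

x = step("V", 0)                          # forward V: x becomes 1
assert step("V", x) is None               # a second V must wait
assert step("V", x, forward=False) == 0   # backward V restores x = 0
```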
**Inst** presents the one-step behavior of a basic block. The instruction updates \(\rho\) and \(\sigma\), and the entry and exit points give the status of the process. The process is running if \(stage\) is \(\mathsf{run}\); the process is at the initial or the final block if \(stage\) is \(\mathsf{begin}\) or \(\mathsf{end}\), respectively. The transition label \(Rd\) is \(\mathsf{read}(b)\) and the transition label \(Wt\) is \(\mathsf{write}(b)\).
**CallFork** presents that a call statement forks subprocesses. When \(p\) executes a call statement \(\mathsf{call}\)\(l_{1},\cdots,l_{n}\) forwards, it forks subprocesses labeled by \(l_{1},\cdots,l_{n}\) and \(p\) stores the label for returning the controls in \(Pr\). Note that the process map is changed to \(Pr^{\prime}\) with subprocesses after forking subprocesses. Since \(\mathsf{isleaf}(Pr^{\prime}_{act},p)\) does not hold, \(p\) does not pass the control to the next block until all the subprocesses are merged. **CallMerge** works dually to **CallFork**. In a forward execution, when all subprocesses reach the \(\mathsf{end}\) stage, all subprocesses are set to inactive and \(p\) resumes to pass the control to the next basic block. In a backward execution, **CallFork** behaves as **CallMerge** of forward execution and vice versa for **CallMerge**.
Figure 2: The basic operational semantics
In a program configuration of CRIL, there is no stack, as in RIL, to store the return label for subroutine calls. Instead, the process map stores the return label, which is not available until \(\mathsf{isleaf}(Pr_{act},p)\) holds; this plays the role of checking whether the label is on the top of the stack.
Figure 3 shows an example of a CRIL program \(Pg\). There are four process blocks: \(\{b_{1},b_{2},b_{3}\}\), \(\{b_{4},b_{5}\}\), \(\{b_{6}\}\), and \(\{b_{7}\}\). A process map assigns \(\varepsilon\) to \(\{b_{1},b_{2},b_{3}\}\). In the following execution, it assigns 1 to \(\{b_{4},b_{5}\}\), 2 to \(\{b_{6}\}\), and 3 to \(\{b_{7}\}\).
An example of the transitions for \(Pg\) is as follows:

\[\begin{array}{l}
(Pg,\rho_{0},\sigma_{0},[\varepsilon\mapsto(\mathtt{main},\mathsf{begin})])\\
\quad\xrightleftharpoons[\mathrm{prog}]{\varepsilon,\varnothing,\varnothing}\;(Pg,\rho_{0},\sigma_{0},[\varepsilon\mapsto(\mathtt{l1},\mathsf{run})])\\
\quad\xrightleftharpoons[\mathrm{prog}]{\varepsilon,\varnothing,\varnothing}\;(Pg,\rho_{0},\sigma_{0},[\varepsilon\mapsto(\mathtt{l1},\mathsf{run}),1\mapsto(\mathtt{sub0},\mathsf{begin}),2\mapsto(\mathtt{sub1},\mathsf{begin}),3\mapsto(\mathtt{sub2},\mathsf{begin})])\\
\quad\xrightleftharpoons[\mathrm{prog}]{1,\{\mathtt{x}\},\{\mathtt{x}\}}\;(Pg,\rho_{1},\sigma_{0},\ldots)\qquad\text{where }\rho_{1}=\rho_{0}[\mathtt{x}\mapsto 1]\\
\quad\xrightleftharpoons[\mathrm{prog}]{2,\{\mathtt{x},\mathtt{y}\},\{\mathtt{y}\}}\;(Pg,\rho_{2},\sigma_{0},\ldots)\qquad\text{where }\rho_{2}=\rho_{1}[\mathtt{y}\mapsto 1]\\
\quad\xrightleftharpoons[\mathrm{prog}]{3,\{\mathtt{x},\mathtt{z}\},\{\mathtt{z}\}}\;(Pg,\rho_{3},\sigma_{0},\ldots)\qquad\text{where }\rho_{3}=\rho_{2}[\mathtt{z}\mapsto 1]\\
\quad\xrightleftharpoons[\mathrm{prog}]{1,\{\mathtt{x}\},\{\mathtt{x}\}}\;(Pg,\rho_{4},\sigma_{0},[\varepsilon\mapsto(\mathtt{l1},\mathsf{run}),1\mapsto(\mathtt{sub0},\mathsf{end}),2\mapsto(\mathtt{sub1},\mathsf{end}),3\mapsto(\mathtt{sub2},\mathsf{end})])\quad\text{where }\rho_{4}=\rho_{3}[\mathtt{x}\mapsto 2]\\
\quad\xrightleftharpoons[\mathrm{prog}]{\varepsilon,\varnothing,\varnothing}\;(Pg,\rho_{4},\sigma_{0},[\varepsilon\mapsto(\mathtt{l2},\mathsf{run})])\\
\quad\xrightleftharpoons[\mathrm{prog}]{\varepsilon,\varnothing,\varnothing}\;(Pg,\rho_{4},\sigma_{0},[\varepsilon\mapsto(\mathtt{main},\mathsf{end})])
\end{array}\]
This forward execution ends with \(\mathtt{x}=2,\mathtt{y}=1,\mathtt{z}=1\). The operational semantics shows that the computation may be reversed to \((Pg,\rho_{0},\sigma_{0},[\varepsilon\mapsto(\mathtt{main},\mathsf{begin})])\). However, it is also possible to reverse to a different configuration, such as one with \(\mathtt{x}=0,\mathtt{y}=-1,\mathtt{z}=-1\), if the call statement is reversed in a different order. Thus, this operational semantics alone is not reversible. In the next section, we combine it with an annotation that records the dependency information as a DAG, so that the basic properties for reversibility as well as Causal Safety and Causal Liveness hold.
## 3 Reversibility of CRIL
Table 1 (a) shows the transitions of the store \(\rho\) by the sequence of basic blocks in the forward computation of the example in the previous section. Process \(p\) makes the forward (left-to-right) transition of \(\xrightleftharpoons[\mathrm{prog}]{p,Rd,Wt}\). The program configuration at the end is \((Pg,[\mathtt{x}\mapsto 2,\mathtt{y}\mapsto 1,\mathtt{z}\mapsto 1],\sigma_{0},[\varepsilon\mapsto(\mathtt{main},\mathsf{end})])\). The configuration may lead to a different store by the backward (right-to-left) transitions of \(\xrightleftharpoons[\mathrm{prog}]{p,Rd,Wt}\), as shown in table 1 (b). Although each step of the operational semantics keeps local reversibility, it does not preserve the causality of shared memory. The forward step of \(\xrightleftharpoons[\mathrm{prog}]{p,Rd,Wt}\) updates \(Wt\) reading \(Rd\), making the causality from \(Rd\) to \(Wt\). Our idea is to control processes so as to keep the causality, by observing \(Rd\) and \(Wt\) combined with the operational semantics.
Figure 3: A CRIL program \(Pg\)
The order between \(b_{4}\) and \(b_{6}\), and the order between \(b_{5}\) and \(b_{6}\), affect the causality. We say \(b_{i}\) _conflicts_ with \(b_{j}\), where \(i\neq j\), if \(\mathsf{read}(b_{i})\cap\mathsf{write}(b_{j})\neq\varnothing\) or \(\mathsf{read}(b_{j})\cap\mathsf{write}(b_{i})\neq\varnothing\). Since \(b_{6}\) and \(b_{7}\) do not conflict with each other, the order between \(b_{6}\) and \(b_{7}\) does not affect the causality. Thus, for the forward execution in table 1 (a), the reversed execution \(b_{3}b_{2}b_{5}b_{6}b_{7}b_{4}b_{2}b_{1}\) reaches \(\rho_{0}\) as a legitimate reversed computation.
### Annotation DAG
We shall present a data structure called 'annotation DAG' (Directed Acyclic Graph) that keeps the conflicting information in forward execution and controls the backward execution by matching the causality, observing the memory \(Wt\) updated by reading the memory \(Rd\).
**Definition 1**.: _An annotation DAG is \(A=(V,E_{R},E_{W})\) satisfying the following conditions:_
1. \(V\subseteq(\mathsf{PID}\times\mathbb{N})\cup\{\bot\}\) _where_ \(\mathbb{N}\) _is the set of natural numbers,_ \(\bot\in V\)_, and if_ \((p,n)\in V\) _then for all_ \(n^{\prime}\leq n\)_,_ \((p,n^{\prime})\in V\)_;_
2. \(E_{R},E_{W}\subseteq V\times\mathcal{R}\times V\) _where_ \((v^{\prime},r,v),(v^{\prime\prime},r,v)\in E_{R}\cup E_{W}\) _implies_ \(v^{\prime}=v^{\prime\prime}\)_;_
3. \(E_{R}\cap E_{W}=\varnothing\) _and_ \((V,E_{R}\uplus E_{W})\) _is a DAG with the finite set of nodes_ \(V\)_;_
4. \((v^{\prime},r,v)\in E_{W}\) _and_ \(v^{\prime}\neq\bot\) _imply_ \((v^{\prime\prime},r,v^{\prime})\in E_{W}\)_; and_
5. \((v,r,v^{\prime}),(v,r,v^{\prime\prime})\in E_{W}\) _implies_ \(v^{\prime}=v^{\prime\prime}\)__
\(\mathcal{A}\) _is the set of all annotation DAGs, and \(A_{\text{init}}\) is \((\{\bot\},\varnothing,\varnothing)\)._
We write \(v\stackrel{r}{\rightarrow}v^{\prime}\) for \((v,r,v^{\prime})\in E_{W}\) and \(v\stackrel{r}{\dashrightarrow}v^{\prime}\) for \((v,r,v^{\prime})\in E_{R}\). Condition 5 with conditions 3 and 2 ensures that when \(v^{\prime}\stackrel{r}{\rightarrow}v\), there is a unique sequence of \(E_{W}\) with the label \(r\) from \(\bot\) to \(v\): \(\bot\stackrel{r}{\rightarrow}v_{1}\stackrel{r}{\rightarrow}\cdots\stackrel{r}{\rightarrow}v_{n}=v\). \(\mathsf{last}(r,E_{W})\) denotes the last node \(v\) of such a sequence. When \(\mathsf{last}(r,E_{W})=v\neq\bot\), \(v^{\prime}\stackrel{r}{\rightarrow}v\) for a unique \(v^{\prime}\), and \(v\stackrel{r}{\rightarrow}v^{\prime\prime}\) for no \(v^{\prime\prime}\). \(\mathsf{last}(r,\varnothing)=\bot\) for all \(r\in\mathcal{R}\). Since \(V\) is finite, for \((p,n)\in V\)
there is the maximum number \(n\) for process \(p\) if such \((p,n)\) exists. Given \(V\subseteq(\mathsf{PID}\times\mathbb{N})\cup\{\bot\}\), we write \(\mathsf{max}_{p}(V)\) for \(\max\{n\mid(p,n)\in V\}\), and \(\mathsf{max}_{p}(V)=-1\) when \((p,n)\notin V\) for all \(n\).
**Definition 2**.: _For \(A_{1},A_{2}\in\mathcal{A}\) with \(A_{1}=(V_{1},E_{R1},E_{W1})\) and \(A_{2}=(V_{2},E_{R2},E_{W2})\), \(A_{1}\xrightleftharpoons[\mathrm{ann}]{p,Rd,Wt}A_{2}\) holds when, for \(v=(p,\mathsf{max}_{p}(V_{1})+1)\),_

\[V_{2}=V_{1}\cup\{v\},\qquad E_{W2}=E_{W1}\cup\{(\mathsf{last}(r,E_{W1}),r,v)\mid r\in Wt\},\qquad E_{R2}=E_{R1}\cup\{(\mathsf{last}(r,E_{W1}),r,v)\mid r\in Rd\setminus Wt\}.\]

_In the forward direction, the fresh node \(v\) and its incoming edges are added to \(A_{1}\); in the backward direction, they are removed from \(A_{2}\). Since \(A_{2}\) must be an annotation DAG, the backward direction applies only when \(v\) has no outgoing edges in \(A_{2}\); we call such a node removable._
We illustrate the behavior controlled by the annotation DAG for the simple example of the previous section. Starting from the initial configuration \((C_{0},A_{0})=((Pg,\rho_{0},\sigma_{0},[\varepsilon\mapsto(\mathtt{main},\mathsf{begin})]),(\{\bot\},\varnothing,\varnothing))\), it ends up with \((C_{8},A_{8})\), where \(C_{8}=(Pg,[\mathtt{x},\mathtt{y},\mathtt{z}\mapsto 2,1,1],\sigma_{0},[\varepsilon\mapsto(\mathtt{main},\mathsf{end})])\).
**Forward accumulation of causality.** We present the construction of annotation DAGs as follows:
1. After process \(\varepsilon\) executes \(b_{1}\) and \(b_{2},A_{2}=(\{\bot,(\varepsilon,0),(\varepsilon,1)\},\varnothing,\varnothing)\);
2. The call statement in \(b_{2}\) forks three subprocesses. Then, process \(1\) executes \(b_{4}\), \((1,0)\) is added to \(V\) and \(\bot\xrightarrow[]{\mathtt{x}}(1,0)\) is added since \(\mathtt{read}(b_{4})=\mathsf{write}(b_{4})=\{\mathtt{x}\}\) to make \(A_{3}\), meaning \(\mathtt{x}\) is updated by the initial \(\mathtt{x}\), and the store is updated as \([\mathtt{x},\mathtt{y},\mathtt{z}\mapsto 1,0,0]\).
3. Next, process \(2\) executes \(b_{6}\) where \(\mathsf{read}(b_{6})=\{\mathtt{x},\mathtt{y}\}\) and \(\mathsf{write}(b_{6})=\{\mathtt{y}\}\). \(\xrightleftharpoons[\mathrm{ann}]{2,\{\mathtt{x},\mathtt{y}\},\{\mathtt{y}\}}\) adds a fresh node \((2,0)\), \(\bot\xrightarrow{\mathtt{y}}(2,0)\), and \((1,0)\overset{\mathtt{x}}{\dashrightarrow}(2,0)\). The causality of \((2,0)\) means \(\mathtt{y}\) is updated by the initial \(\mathtt{y}\) and the \(\mathtt{x}\) of \((1,0)\), to make \(A_{4}\).
4. Then, process \(3\) executes \(b_{7}\) where \(\mathsf{read}(b_{7})=\{\mathtt{x},\mathtt{z}\}\) and \(\mathsf{write}(b_{7})=\{\mathtt{z}\}\). \(\xrightleftharpoons[\mathrm{ann}]{3,\{\mathtt{x},\mathtt{z}\},\{\mathtt{z}\}}\) adds \((3,0)\), \(\bot\xrightarrow{\mathtt{z}}(3,0)\), and \((1,0)\overset{\mathtt{x}}{\dashrightarrow}(3,0)\), to make \(A_{5}\), shown in figure 4 (a), meaning the causality at \((3,0)\) updates the initial \(\mathtt{z}\) using the initial \(\mathtt{z}\) and the \(\mathtt{x}\) of \((1,0)\).
5. At last, process \(1\) executes \(b_{5}\) where \(\mathsf{read}(b_{5})=\mathsf{write}(b_{5})=\{\mathtt{x}\}\). \(\xrightleftharpoons[\mathrm{ann}]{1,\{\mathtt{x}\},\{\mathtt{x}\}}\) just adds \((1,1)\) and \((1,0)\xrightarrow{\mathtt{x}}(1,1)\) to form \(A_{6}\), shown in figure 4 (b), meaning \(\mathtt{x}\) is updated by the \(\mathtt{x}\) of \((1,0)\).
6. No more causality is created after merging the subprocesses. Just the relation adds \((\varepsilon,2)\) and \((\varepsilon,3)\) with no edges to form \(A_{8}\) shown in figure 4 (c).
**Backward rollback of causality.** The following is the summary of the corresponding backward execution.
1. The removable nodes of \(A_{8}\) are \(\{(\varepsilon,3),(1,1)\}\). Here, \(C_{8}\) specifies \(\varepsilon\) to remove \((\varepsilon,3)\), followed by removing \((\varepsilon,2)\), back to \((C_{6},A_{6})\), where \(C_{6}=(Pg,[\mathtt{x},\mathtt{y},\mathtt{z}\mapsto 2,1,1],\sigma_{0},[\varepsilon\mapsto(\mathtt{l2},\mathsf{run}),1\mapsto(\mathtt{sub0},\mathsf{end}),2\mapsto(\mathtt{sub1},\mathsf{end}),3\mapsto(\mathtt{sub2},\mathsf{end})])\).
2. \(C_{6}\) may reverse any subprocess, but \(A_{6}\) allows only \((1,1)\) to be removed by \(\xrightleftharpoons[\mathrm{ann}]{p,Rd,Wt}\), to obtain \(A_{5}\).
3. After removing \((1,1)\) and \((1,0)\xrightarrow[]{\mathtt{x}}(1,1)\) from \(A_{6}\), we obtain \(A_{5}\) whose removable nodes are \((2,0)\) and \((3,0)\). \((1,0)\) is not removable since \((1,0)\) has two outgoing edges, although \((1,0)=\mathtt{last}(\mathtt{x},E_{W})\).
Figure 4: Annotation DAGs along with forward execution
4. \(C_{5}\) may reverse either process 2 or process 3; let process 2 reverse to become \(C_{4}^{\prime}\). Then, remove \((2,0)\), \(\bot\xrightarrow{\mathtt{y}}(2,0)\), and \((1,0)\overset{\mathtt{x}}{\dashrightarrow}(2,0)\) to obtain \(A_{4}^{\prime}\) and \([\mathtt{x},\mathtt{y},\mathtt{z}\mapsto 1,0,1]\) as the store \(\rho\). Note that \((C_{4}^{\prime},A_{4}^{\prime})\) did not appear in the forward execution.
5. From \((C_{4}^{\prime},A_{4}^{\prime})\), process 3 is reversed to remove \((3,0)\), \(\bot\xrightarrow{\mathtt{z}}(3,0)\), and \((1,0)\overset{\mathtt{x}}{\dashrightarrow}(3,0)\) to obtain \(A_{3}\) and \([\mathtt{x},\mathtt{y},\mathtt{z}\mapsto 1,0,0]\).
6. Then, process 1 is reversed by removing \((1,0)\) and \(\bot\xrightarrow{\mathtt{x}}(1,0)\) to obtain \(A_{2}=(\{\bot,(\varepsilon,0),(\varepsilon,1)\},\varnothing,\varnothing)\).
7. At last, process \(\varepsilon\) reverses \(b_{2}\) and \(b_{1}\) to obtain \((C_{init},A_{init})\).
In step 4, there are two possibilities: reversing process 3 or process 2. In the above, \(A_{5}\) is reversed by process 2 to \(A_{4}^{\prime}\), followed by process 3.
For a CRIL program \(Pg\), let \(B\) be the basic blocks in \(Pg\). Let \(\mathcal{O}=\mathsf{PID}\times\bigcup_{b\in B}\mathsf{read}(b)\times\bigcup_{b \in B}\mathsf{write}(b)\). Proposition 2 ensures there is always a removable node along with removable edges.
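The following runnable Python sketch (our illustration, not the paper's implementation) models annotation DAGs following Definition 1 and the behavior illustrated above; the read-edge rule for \(Rd\setminus Wt\) is inferred from steps 2 to 5, and all class and method names are our own.

```python
# A sketch (ours) of annotation DAGs: forward steps add a fresh node
# (p, n) with a write edge from last(r, E_W) for each r in Wt and a read
# edge for each r in Rd \ Wt; a node is removable backwards only if no
# edge goes out of it.
BOT = None  # stands for the bottom node ⊥

class AnnotationDAG:
    def __init__(self):
        self.V = {BOT}
        self.ER = set()   # read edges  (v', r, v)
        self.EW = set()   # write edges (v', r, v)

    def last(self, r):
        """Last node of the unique r-labelled write chain from ⊥."""
        v = BOT
        while True:
            nxt = [w for (u, s, w) in self.EW if u == v and s == r]
            if not nxt:
                return v
            v = nxt[0]

    def max_p(self, p):
        ns = [v[1] for v in self.V if v is not None and v[0] == p]
        return max(ns, default=-1)

    def forward(self, p, rd, wt):
        """Add the fresh node (p, max_p + 1) with its causal edges."""
        v = (p, self.max_p(p) + 1)
        for r in wt:
            self.EW.add((self.last(r), r, v))
        for r in rd - wt:
            self.ER.add((self.last(r), r, v))
        self.V.add(v)
        return v

    def removable(self, v):
        """A node may be undone iff no edge goes out of it."""
        return v in self.V and all(u != v for (u, r, w) in self.ER | self.EW)

    def backward(self, p):
        """Undo the newest node of process p if the DAG allows it."""
        v = (p, self.max_p(p))
        if v[1] < 0 or not self.removable(v):
            return False
        self.ER = {e for e in self.ER if e[2] != v}
        self.EW = {e for e in self.EW if e[2] != v}
        self.V.remove(v)
        return True

A = AnnotationDAG()
A.forward((1,), {"x"}, {"x"})        # step 2: b4
A.forward((2,), {"x", "y"}, {"y"})   # step 3: b6
A.forward((3,), {"x", "z"}, {"z"})   # step 4: b7
A.forward((1,), {"x"}, {"x"})        # step 5: b5
print(A.removable(((1,), 1)))        # True : (1,1) may be undone
print(A.removable(((1,), 0)))        # False: (1,0) has outgoing edges
print(A.backward((2,)))              # True : process 2 may roll back first
```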
### Properties for reversibility
We show that the operational semantics controlled by annotation DAGs satisfies the proper properties for reversibility. We focus on the following two properties, which are considered fundamental for reversibility [14].
**Causal Safety (CS):**: An action can not be reversed until any actions caused by it have been reversed.
**Causal Liveness (CL):**: We should allow actions to reverse in any order compatible with Causal Safety, not necessarily the exact inverse of the forward order.
[14] shows that those properties hold in an LTSI (LTS with Independence) provided that a small number of axioms are valid in the LTSI. We follow this approach by defining an LTS from \(\xrightleftharpoons{p,Rd,Wt}\) and adding the independence relation to obtain the LTSI for the CRIL behavior. We then show that the axioms for **CS** and **CL** hold.
**Definition 4**.: \((\mathcal{C}\times\mathcal{A},\mathsf{Lab},\rightharpoonup)\) _is the forward LTS for CRIL where:_

* \(\mathsf{Lab}=\mathsf{PID}\times 2^{\mathcal{R}}\times 2^{\mathcal{R}}\)_; and_
* \((C,A)\overset{(p,Rd,Wt)}{\rightharpoonup}(C^{\prime},A^{\prime})\) _if_ \((C,A)\xrightleftharpoons{p,Rd,Wt}(C^{\prime},A^{\prime})\)_._
**Definition 5**.: _The (combined) LTS for CRIL is \((\mathcal{C}\times\mathcal{A},\mathsf{Lab}\uplus\underline{\mathsf{Lab}},\rightarrow)\) where:_

* \(\underline{\mathsf{Lab}}=\{\underline{(p,Rd,Wt)}\mid(p,Rd,Wt)\in\mathsf{Lab}\}\)_; and_
* _for_ \(a\in\mathsf{Lab}\)_,_ \((C,A)\xrightarrow{a}(C^{\prime},A^{\prime})\) _iff_ \((C,A)\overset{a}{\rightharpoonup}(C^{\prime},A^{\prime})\)_, and_ \((C,A)\xrightarrow{\underline{a}}(C^{\prime},A^{\prime})\) _iff_ \((C^{\prime},A^{\prime})\overset{a}{\rightharpoonup}(C,A)\)_._
Figure 5: Annotation DAGs in backward execution
\(\mathsf{Lab}\uplus\underline{\mathsf{Lab}}\) is ranged over by \(\alpha,\beta,\cdots\), and \(\mathsf{Lab}\) by \(a,b,\cdots\). \(\mathsf{und}:\mathsf{Lab}\uplus\underline{\mathsf{Lab}}\rightarrow\mathsf{Lab}\) is defined by \(\mathsf{und}(a)=a\) and \(\mathsf{und}(\underline{a})=a\), and \(\underline{\underline{a}}=a\). Given \(t:P\xrightarrow{\alpha}Q\), \(\underline{t}\) stands for \(Q\xrightarrow{\underline{\alpha}}P\).
For CRIL, the independence of transitions is defined as independent memory updates among concurrent processes. Processes running concurrently are not in the subprocess relation. Note that the pids \(p\cdot 1\), \(p\cdot 2\), \(\cdots\) are assigned to the subprocesses of the process with pid \(p\). The process with pid \(p\) is concurrent to the process with pid \(q\) if \(p\not\preceq q\) and \(q\not\preceq p\). Hence, we give the independence relation for labels as follows.
**Definition 6**.: _For \(\alpha,\beta\in\mathsf{Lab}\uplus\underline{\mathsf{Lab}}\) such that \(\mathsf{und}(\alpha)=(p_{1},Rd_{1},Wt_{1})\) and \(\mathsf{und}(\beta)=(p_{2},Rd_{2},Wt_{2})\), \(\alpha\ \iota_{\mathrm{lab}}\ \beta\) iff_

\[p_{1}\not\preceq p_{2}\ \wedge\ p_{2}\not\preceq p_{1}\ \wedge\ Rd_{1}\cap Wt_{2}=\varnothing\ \wedge\ Rd_{2}\cap Wt_{1}=\varnothing\]
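Definition 6 can be transcribed directly as a predicate on labels; the following Python sketch (names ours) checks it for transitions of the running example, with pids as tuples as in the earlier sketches. Forward and backward labels carry the same triple, so \(\mathsf{und}\) is left implicit.

```python
# Definition 6 as a predicate on labels (p, Rd, Wt).

def prefix(p, q):
    return q[:len(p)] == p

def independent(l1, l2):
    (p1, rd1, wt1), (p2, rd2, wt2) = l1, l2
    return (not prefix(p1, p2) and not prefix(p2, p1)
            and not (rd1 & wt2) and not (rd2 & wt1))

# b6 and b7 of the running example are independent; b4 and b6 are not,
# since b6 reads x while b4 writes it.
print(independent(((2,), {"x", "y"}, {"y"}), ((3,), {"x", "z"}, {"z"})))  # True
print(independent(((1,), {"x"}, {"x"}), ((2,), {"x", "y"}, {"y"})))       # False
```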
The independence of transitions in LTS is defined as the transitions with independent labels. We define the Labeled Transition System with Independent transitions as the operational semantics of CRIL.
**Definition 7**.: _For \(t:(C_{1},A_{1})\xrightarrow{\alpha}(C^{\prime}_{1},A^{\prime}_{1})\) and \(u:(C_{2},A_{2})\xrightarrow{\beta}(C^{\prime}_{2},A^{\prime}_{2})\) in the combined LTS for CRIL, \(t\) and \(u\) are independent of each other, written as \(t\ \iota\ u\), if \(\alpha\ \iota_{\mathrm{lab}}\ \beta\)._
\((\mathcal{C}\times\mathcal{A},\mathsf{Lab}\uplus\underline{\mathsf{Lab}},\rightarrow,\iota)\) _is the LTS of CRIL with independence._
In the sequel, we write '\(\mathit{LTSI}_{\mathit{CRIL}}\)' for the LTS of CRIL with independence.
#### 3.3.1 Basic properties for reversibility
We take the axiomatic approach of [14], where the combination of the basic properties gives the proper reversibility. The first step is to show that the \(\mathit{LTSI}_{\mathit{CRIL}}\) is _pre-reversible_. For this purpose, we show \(\mathit{LTSI}_{\mathit{CRIL}}\) satisfies the following axioms: "**Square Property (SP)**", "**Backward Transitions are Independent (BTI)**", "**Well-Foundedness (WF)**", and "**Coinitial Propagation of Independence (CPI)**".
**Square Property (SP).** For \(a\in\mathsf{Lab}\), when \(C\xrightleftharpoons[\mathrm{prog}]{a}C^{\prime}\), we write \(C\xrightarrow{a}_{\mathrm{prog}}C^{\prime}\) and \(C^{\prime}\xrightarrow{\underline{a}}_{\mathrm{prog}}C\). Similarly, when \(A\xrightleftharpoons[\mathrm{ann}]{a}A^{\prime}\), we write \(A\xrightarrow{a}_{\mathrm{ann}}A^{\prime}\) and \(A^{\prime}\xrightarrow{\underline{a}}_{\mathrm{ann}}A\).
By the definition of independent transitions, the square property of \(\xrightarrow{\alpha}_{\mathrm{prog}}\) follows immediately.
**Proposition 3**.: _Suppose \(C\xrightarrow{\alpha}_{\mathrm{prog}}C^{\prime}\), \(C\xrightarrow{\beta}_{\mathrm{prog}}C^{\prime\prime}\), and \(\alpha\ \iota_{\mathrm{lab}}\ \beta\). Then there are the cofinal transitions \(C^{\prime}\xrightarrow{\beta}_{\mathrm{prog}}C^{\prime\prime\prime}\) and \(C^{\prime\prime}\xrightarrow{\alpha}_{\mathrm{prog}}C^{\prime\prime\prime}\)._
For annotation DAGs, we need to trace the difference of nodes and edges added or deleted by \(\stackrel{{\alpha}}{{\rightarrow}}_{\mathrm{ann}}\) to show the square property. We use the following notation to present differences in annotation DAGs:
\[\text{For }o:(V,E_{R},E_{W})\xrightarrow{\alpha}_{\mathrm{ann}}(V^{\prime},E^{\prime}_{R},E^{\prime}_{W}),\quad\operatorname{diff}(o)=\begin{cases}(V^{\prime}-V,\,E^{\prime}_{R}-E_{R},\,E^{\prime}_{W}-E_{W})&\text{if }\alpha\in\mathsf{Lab},\\(V-V^{\prime},\,E_{R}-E^{\prime}_{R},\,E_{W}-E^{\prime}_{W})&\text{if }\alpha\in\underline{\mathsf{Lab}}\end{cases}\]

\[(V,E_{R},E_{W})\odot^{\alpha}(\Delta V,\Delta E_{R},\Delta E_{W})=\begin{cases}(V\cup\Delta V,\,E_{R}\cup\Delta E_{R},\,E_{W}\cup\Delta E_{W})&\text{if }\alpha\in\mathsf{Lab},\\(V-\Delta V,\,E_{R}-\Delta E_{R},\,E_{W}-\Delta E_{W})&\text{if }\alpha\in\underline{\mathsf{Lab}}\end{cases}\]
**Proposition 4**.: _Let \(\operatorname{diff}(A\xrightarrow{\alpha}_{\mathrm{ann}}A^{\prime})=(\Delta V^{\alpha},\Delta E^{\alpha}_{R},\Delta E^{\alpha}_{W})\) and \(\operatorname{diff}(A\xrightarrow{\beta}_{\mathrm{ann}}A^{\prime\prime})=(\Delta V^{\beta},\Delta E^{\beta}_{R},\Delta E^{\beta}_{W})\) with \(\alpha\ \iota_{\mathrm{lab}}\ \beta\). Then, \(\Delta V^{\alpha}\cap\Delta V^{\beta}=\Delta E^{\alpha}_{R}\cap\Delta E^{\beta}_{R}=\Delta E^{\alpha}_{W}\cap\Delta E^{\beta}_{W}=\varnothing\)._
Proof.: For some \(v_{\alpha}\) and \(v_{\beta}\), \(\Delta V^{\alpha}=\{v_{\alpha}\}\) and \(\Delta V^{\beta}=\{v_{\beta}\}\). \(\alpha\)\(\iota_{\text{lab}}\)\(\beta\) implies \(v_{\alpha}\neq v_{\beta}\). All the edges of \(\Delta E^{\alpha}_{R}\uplus\Delta E^{\alpha}_{R}\) come into \(v_{\alpha}\) and all the edges of \(\Delta E^{\beta}_{R}\uplus\Delta E^{\beta}_{W}\) come into \(v_{\beta}\). Therefore, \(\Delta V^{\alpha}\cap\Delta V^{\beta}=\Delta E^{\alpha}_{R}\cap\Delta E^{\beta }_{R}=\Delta E^{\alpha}_{W}\cap\Delta E^{\beta}_{W}=\varnothing\).
**Proposition 5**.: _Suppose \(A\stackrel{{ a}}{{\rightarrow}}_{\text{ann}}A^{\prime}\) and \(A\stackrel{{\beta}}{{\rightarrow}}_{\text{ann}}A^{\prime\prime}\) with a \(\iota_{\text{lab}}\)\(\beta\). Then there is \(A^{\prime\prime\prime}\) such that \(A^{\prime\prime}\stackrel{{ a}}{{\rightarrow}}_{\text{ann}}A^{ \prime\prime\prime}\) and \(\operatorname{diff}(A\stackrel{{ a}}{{\rightarrow}}_{\text{ann}}A^{ \prime})=\operatorname{diff}(A^{\prime\prime}\stackrel{{ a}}{{ \rightarrow}}_{\text{ann}}A^{\prime\prime\prime})\)._
Proof.: Assume \(A=(V,E_{R},E_{W})\), \(A^{\prime\prime}=(V^{\prime\prime},E^{\prime\prime}_{R},E^{\prime\prime}_{W})\), and \(a=(p_{a},Rd_{a},Wt_{a})\). \(A^{\prime\prime}\stackrel{{ a}}{{\rightarrow}}_{\text{ann}}A^{ \prime\prime\prime}\) for some \(A^{\prime\prime\prime}\) since \(a\in\mathsf{Lab}\). \(a\)\(\iota_{\text{lab}}\)\(\beta\) implies that \(\max_{p_{a}}(A)=\max_{p_{a}}(A^{\prime\prime})\) and \(\mathsf{last}(r,E_{W})=\mathsf{last}(r,E^{\prime\prime}_{W})\) for \(r\in Rd_{a}\). Therefore, \(\operatorname{diff}(A\stackrel{{ a}}{{\rightarrow}}_{\text{ann}}A^{ \prime})=\operatorname{diff}(A^{\prime\prime}\stackrel{{ a}}{{ \rightarrow}}_{\text{ann}}A^{\prime\prime\prime})\).
**Proposition 6**.: _Suppose \(A\xrightarrow{\underline{a}}_{\text{ann}}A^{\prime}\) and \(A\xrightarrow{\beta}_{\text{ann}}A^{\prime\prime}\) with \(\underline{a}\ \iota_{\text{lab}}\ \beta\). Then there is \(A^{\prime\prime\prime}\) such that \(A^{\prime\prime}\xrightarrow{\underline{a}}_{\text{ann}}A^{\prime\prime\prime}\) and \(\operatorname{diff}(A\xrightarrow{\underline{a}}_{\text{ann}}A^{\prime})=\operatorname{diff}(A^{\prime\prime}\xrightarrow{\underline{a}}_{\text{ann}}A^{\prime\prime\prime})\)._
Proof.: Assume \(\operatorname{diff}(A\xrightarrow{\beta}_{\text{ann}}A^{\prime\prime})=(\Delta V^{\beta},\Delta E^{\beta}_{R},\Delta E^{\beta}_{W})\) and \(a=(p_{a},Rd_{a},Wt_{a})\). Let \(v=(p_{a},\mathsf{max}_{p_{a}}(V))\). \(\underline{a}\ \iota_{\text{lab}}\ \beta\) implies that no edges in \(\Delta E^{\beta}_{R}\uplus\Delta E^{\beta}_{W}\) go out from \(v\) or from \(v^{\prime}\) such that \(v^{\prime}\xrightarrow{r}v\) in \(A\). Therefore, \(A^{\prime\prime}\xrightarrow{\underline{a}}_{\text{ann}}A^{\prime\prime\prime}\) for some \(A^{\prime\prime\prime}\). \(\underline{a}\ \iota_{\text{lab}}\ \beta\) and \(\underline{a}\in\underline{\mathsf{Lab}}\) derive \(\operatorname{diff}(A\xrightarrow{\underline{a}}_{\text{ann}}A^{\prime})=\operatorname{diff}(A^{\prime\prime}\xrightarrow{\underline{a}}_{\text{ann}}A^{\prime\prime\prime})\).
**Proposition 7**.: _Suppose \(A\stackrel{{\alpha}}{{\rightarrow}}_{\text{ann}}A^{\prime}\) and \(A\stackrel{{\beta}}{{\rightarrow}}_{\text{ann}}A^{\prime\prime}\) with \(\alpha\)\(\iota_{\text{lab}}\)\(\beta\). Then \(A^{\prime\prime}\stackrel{{\alpha}}{{\rightarrow}}_{\text{ann}}A^{ \prime\prime\prime}\), where \(A^{\prime\prime\prime}=A^{\prime\prime}\odot^{\alpha}\)\(\operatorname{diff}(A\stackrel{{\alpha}}{{\rightarrow}}_{\text{ann}}A^{ \prime})\)._
Proof.: Propositions 5 and 6 derive \(A^{\prime\prime}\xrightarrow{\alpha}_{\text{ann}}A^{\prime\prime\prime}\).
**Proposition 8**.: _Suppose \(A\stackrel{{\alpha}}{{\rightarrow}}_{\text{ann}}A^{\prime}\), \(A\stackrel{{\beta}}{{\rightarrow}}_{\text{ann}}A^{\prime\prime}\), and \(\alpha\)\(\iota_{\text{lab}}\)\(\beta\). Then there are the cofinal transitions \(A^{\prime}\stackrel{{\beta}}{{\rightarrow}}_{\text{ann}}A^{ \prime\prime\prime}\) and \(A^{\prime\prime}\stackrel{{\alpha}}{{\rightarrow}}_{\text{ann}}A^{ \prime\prime\prime}\)._
Proof.: By proposition 4, \(\operatorname{diff}(A\xrightarrow{\alpha}_{\text{ann}}A^{\prime})\) and \(\operatorname{diff}(A\xrightarrow{\beta}_{\text{ann}}A^{\prime\prime})\) are disjoint if \(\alpha\ \iota_{\text{lab}}\ \beta\). Hence, the order of addition and deletion to/from \(A\) does not affect the result. Therefore, \((A\odot^{\alpha}\operatorname{diff}(A\xrightarrow{\alpha}_{\text{ann}}A^{\prime}))\odot^{\beta}\operatorname{diff}(A\xrightarrow{\beta}_{\text{ann}}A^{\prime\prime})=(A\odot^{\beta}\operatorname{diff}(A\xrightarrow{\beta}_{\text{ann}}A^{\prime\prime}))\odot^{\alpha}\operatorname{diff}(A\xrightarrow{\alpha}_{\text{ann}}A^{\prime})=A^{\prime\prime\prime}\). By proposition 7, \(A\xrightarrow{\alpha}_{\text{ann}}A^{\prime}\xrightarrow{\beta}_{\text{ann}}A^{\prime\prime\prime}\) and \(A\xrightarrow{\beta}_{\text{ann}}A^{\prime\prime}\xrightarrow{\alpha}_{\text{ann}}A^{\prime\prime\prime}\) hold for such \(A^{\prime\prime\prime}\).
Combining proposition 3 with proposition 8 by **ProgAnn**, the square property holds.
**Lemma 1** (Square Property).: _Whenever \(t:(C_{P},A_{P})\stackrel{{\alpha}}{{\rightarrow}}(C_{Q},A_{Q})\), \(u:(C_{P},A_{P})\stackrel{{\beta}}{{\rightarrow}}(C_{R},A_{R})\), and \(t\)\(\iota\)\(u\), then there are cofinal transitions \(u^{\prime}:(C_{Q},A_{Q})\stackrel{{\beta}}{{\rightarrow}}(C_{S},A_{S})\), and \(t^{\prime}:(C_{R},A_{R})\stackrel{{\alpha}}{{\rightarrow}}(C_{S},A_{S})\)._
**Backward Transitions are Independent (BTI).** BTI is useful for reversibility because an available backward transition does not depend on any other backward transition. In CRIL, a label of \(\mathit{LTSI}_{CRIL}\) gives the information to establish BTI.
**Lemma 2** (Backward Transitions are Independent).: _Whenever \(t:(C_{P},A_{P})\xrightarrow{\underline{a}}(C_{Q},A_{Q})\), \(u:(C_{P},A_{P})\xrightarrow{\underline{b}}(C_{R},A_{R})\), and \(t\neq u\), then \(t\ \iota\ u\)._
Proof.: Assume \(A_{P}=(V,E_{R},E_{W})\), \(a=(p_{a},Rd_{a},Wt_{a})\), and \(b=(p_{b},Rd_{b},Wt_{b})\). Let \(v_{a}=(p_{a},\max_{p_{a}}(V))\) and \(v_{b}=(p_{b},\max_{p_{b}}(V))\).
Assume \(p_{a}\preceq p_{b}\). Then \(p_{a}=p_{b}\) holds from the operational semantics. \(p_{a}=p_{b}\) derives \(t=u\), which contradicts \(t\neq u\). Therefore, \(p_{a}\not\preceq p_{b}\) holds. Similarly, \(p_{b}\not\preceq p_{a}\) also holds.
Assume \(Rd_{a}\cap Wt_{b}\neq\varnothing\). There exists \(r\in Rd_{a}\cap Wt_{b}\). If \(r\in Wt_{a}\), then \(\mathsf{last}(r,E_{W})=v_{a}\) and \(\mathsf{last}(r,E_{W})=v_{b}\). Therefore \(p_{a}=p_{b}\), however it contradicts \(p_{a}\not\preceq p_{b}\). If \(r\not\in Wt_{a}\), then \(\mathsf{last}(r,E_{W})\stackrel{{ r}}{{\dashrightarrow}}v_{a} \in E_{R}\). \(r\in Wt_{b}\) derives \(\mathsf{last}(r,E_{W})=v_{b}\). Therefore \(v_{b}\stackrel{{ r}}{{\dashrightarrow}}v_{a}\in E_{R}\), however it contradicts that no edges go out from \(v_{b}\) derived from \(u\). Therefore \(Rd_{a}\cap Wt_{b}=\varnothing\). Similarly, \(Rd_{b}\cap Wt_{a}=\varnothing\) also holds.
**Well-Foundedness (WF).** For a backward transition \((C,A)\xrightarrow{\underline{a}}(C^{\prime},A^{\prime})\), the number of nodes of \(A^{\prime}\) is strictly less than that of \(A\). Since the number of nodes of an annotation DAG is finite, it is not possible to remove nodes infinitely often.
**Coinitial Propagation of Independence (CPI).** Given a commuting square with independence at one corner, CPI allows us to deduce independence between coinitial transitions at the other three corners.
**Lemma 3** (Coinitial Propagation of Independence).: _Suppose \(t:(C_{P},A_{P})\xrightarrow{\alpha}(C_{Q},A_{Q})\), \(u:(C_{P},A_{P})\xrightarrow{\beta}(C_{R},A_{R})\), \(u^{\prime}:(C_{Q},A_{Q})\xrightarrow{\beta}(C_{S},A_{S})\), \(t^{\prime}:(C_{R},A_{R})\xrightarrow{\alpha}(C_{S},A_{S})\), and \(t\ \iota\ u\). Then \(u^{\prime}\ \iota\ t\)._

Proof.: \(t\ \iota\ u\) implies \(\alpha\ \iota_{\mathrm{lab}}\ \beta\). Since \(\beta\ \iota_{\mathrm{lab}}\ \alpha\), \(u^{\prime}\ \iota\ t\).
#### 3.3.2 Events
The properties above make \(\mathit{LTSI}_{\mathit{CRIL}}\) pre-reversible. Next, we check if \(\mathit{LTSI}_{\mathit{CRIL}}\) can derive events for establishing reversibility. Following [14], events in \(\mathit{LTSI}_{\mathit{CRIL}}\) are derived as an equivalence over transitions.
**Definition 8**.: _Let \(\sim\) be the smallest equivalence relation on transitions satisfying: if \(t:(C_{P},A_{P})\xrightarrow{\alpha}(C_{Q},A_{Q})\), \(u:(C_{P},A_{P})\xrightarrow{\beta}(C_{R},A_{R})\), \(u^{\prime}:(C_{Q},A_{Q})\xrightarrow{\beta}(C_{S},A_{S})\), \(t^{\prime}:(C_{R},A_{R})\xrightarrow{\alpha}(C_{S},A_{S})\), and \(t\ \iota\ u\), then \(t\sim t^{\prime}\). The equivalence classes of forward transitions, \([(C_{P},A_{P})\xrightarrow{a}(C_{Q},A_{Q})]\), are the events. The equivalence classes of backward transitions, \([(C_{P},A_{P})\xrightarrow{\underline{a}}(C_{Q},A_{Q})]\), are the reverse events._
Given \(\gamma=\alpha_{1}\cdots\alpha_{n}\in(\mathsf{Lab}\uplus\underline{\mathsf{Lab}})^{*}\), a sequence of transitions \((C_{0},A_{0})\xrightarrow{\alpha_{1}}\cdots\xrightarrow{\alpha_{n}}(C_{n},A_{n})\) is written as \(s:(C_{0},A_{0})\xrightarrow{\gamma}_{*}(C_{n},A_{n})\).
Since the transitions of program configurations \(\xrightarrow{\alpha}_{\text{prog}}\) in \(\mathit{LTSI}_{\mathit{CRIL}}\) have no control for reversibility, events are substantially derived from the operations on annotation DAGs.
**Definition 9**.: _Let \(\sim_{\text{ann}}\) be the smallest equivalence relation over operations of annotation DAGs satisfying: if \(o_{1}:A_{P}\xrightarrow{\alpha}_{\text{ann}}A_{Q}\), \(o_{2}:A_{P}\xrightarrow{\beta}_{\text{ann}}A_{R}\), \(o^{\prime}_{2}:A_{Q}\xrightarrow{\beta}_{\text{ann}}A_{S}\), \(o^{\prime}_{1}:A_{R}\xrightarrow{\alpha}_{\text{ann}}A_{S}\), and \(\alpha\ \iota_{\mathrm{lab}}\ \beta\), then \(o_{1}\sim_{\text{ann}}o^{\prime}_{1}\). \([A\xrightarrow{a}_{\text{ann}}A^{\prime}]_{\text{ann}}\) and \([A\xrightarrow{\underline{a}}_{\text{ann}}A^{\prime}]_{\text{ann}}\) are the forward and backward equivalence classes by \(\sim_{\text{ann}}\)._
**Proposition 9**.: _For \(t:(C_{P},A_{P})\xrightarrow{\alpha}(C_{Q},A_{Q})\) and \(t^{\prime}:(C_{R},A_{R})\xrightarrow{\alpha}(C_{S},A_{S})\), the following holds: \(t\sim t^{\prime}\) iff \(o\sim_{\text{ann}}o^{\prime}\) and there exists \(\gamma\) such that \((C_{P},A_{P})\xrightarrow{\gamma}_{*}(C_{R},A_{R})\), where \(o:A_{P}\xrightarrow{\alpha}_{\text{ann}}A_{Q}\) and \(o^{\prime}:A_{R}\xrightarrow{\alpha}_{\text{ann}}A_{S}\)._
Intuitively, operations for annotation DAGs are independent if they add or remove nodes and edges at unrelated places. If \(o_{1}\sim_{\text{ann}}o_{2}\), then \(o_{1}\) and \(o_{2}\) add or remove the same fragment of annotation DAGs to or from the nodes of the same causality. In \(\mathit{LTSI}_{\mathit{CRIL}}\), the equivalence over operations of annotation DAGs is considered as an _event_. This shows that events for reversibility are consistently defined over \(\mathit{LTSI}_{\mathit{CRIL}}\), meaning the operational semantics is detailed enough to give the **IRE** property as follows, which is necessary for our objectives.
**Independence Respects Events (IRE)**
**Lemma 4** (Independence Respects Events).: _Suppose \(t\sim t^{\prime}\ \iota\ u\). Then \(t\ \iota\ u\)._

Proof.: If \(t\sim t^{\prime}\), then \(t\) has the same label as \(t^{\prime}\). Hence \(t\ \iota\ u\).
#### 3.3.3 Causal Safety and Causal Liveness
Let \(\sharp(s,[A\xrightarrow{a}A^{\prime}]_{\text{ann}})\) be the number of occurrences of transitions \(t\) in \(s\) such that \(t\in[(C,A)\xrightarrow{a}(C^{\prime},A^{\prime})]\), minus the number of occurrences of transitions \(t\) in \(s\) such that \(t\in[(C,A)\xrightarrow{\underline{a}}(C^{\prime},A^{\prime})]\).
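As a small aid to reading the statements below, the following Python sketch (ours) computes the count \(\sharp\) over a trace, assuming the identification of the \(\sim_{\text{ann}}\)-classes is given.

```python
# A sketch (ours) of the count ♯: each element of the trace s is a pair
# (event class, direction); forward occurrences count +1, backward -1.

def sharp(s, ev):
    return sum(+1 if d == "fwd" else -1 for (e, d) in s if e == ev)

s = [("e1", "fwd"), ("e2", "fwd"), ("e2", "bwd")]
print(sharp(s, "e1"))   # 1: e1 is still done
print(sharp(s, "e2"))   # 0: e2 was done and then undone
```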
Using the result of [14], the properties **SP** (Lemma 1), **BTI** (Lemma 2), **WF**, **CPI** (Lemma 3), and **IRE** (Lemma 4) make **Causal Safety (CS)** and **Causal Liveness (CL)** hold. Since the causality is stored in the annotation DAGs, the properties can be stated for \(\textit{LTSI}_{CRIL}\) as below.
**Theorem 1** (Causal Safety).: _Whenever \((C_{P},A_{P})\xrightarrow{a}(C_{Q},A_{Q})\), \(s:(C_{Q},A_{Q})\xrightarrow{\gamma}_{*}(C_{R},A_{R})\) with \(\sharp(s,[A_{P}\xrightarrow{a}A_{Q}]_{\text{ann}})=0\), and \((C_{S},A_{S})\xrightarrow{a}(C_{R},A_{R})\), then \((C_{P},A_{P})\xrightarrow{a}(C_{Q},A_{Q})\ \iota\ t\) for all \(t:(C,A)\xrightarrow{b}(C^{\prime},A^{\prime})\) in \(s\) such that \(\sharp(s,[A\xrightarrow{b}A^{\prime}]_{\text{ann}})>0\)._
**Theorem 2** (Causal Liveness).: _Whenever \((C_{P},A_{P})\xrightarrow{a}(C_{Q},A_{Q})\), \(s:(C_{Q},A_{Q})\xrightarrow{\gamma}_{*}(C_{R},A_{R})\) with \(\sharp(s,[A_{P}\xrightarrow{a}A_{Q}]_{\text{ann}})=0\), and \((C_{P},A_{P})\xrightarrow{a}(C_{Q},A_{Q})\ \iota\ t\) for all \(t:(C,A)\xrightarrow{b}(C^{\prime},A^{\prime})\) in \(s\) such that \(\sharp(s,[A\xrightarrow{b}A^{\prime}]_{\text{ann}})>0\), then we have \((C_{S},A_{S})\xrightarrow{a}(C_{R},A_{R})\) with \((C_{P},A_{P})\xrightarrow{a}(C_{Q},A_{Q})\sim(C_{S},A_{S})\xrightarrow{a}(C_{R},A_{R})\)._
Based on these properties, \(\textit{LTSI}_{CRIL}\), with the control pointers of processes managed by a process map along with annotation DAGs, can be implemented correctly as the operational semantics of CRIL.
## 4 Example: Airline ticketing
We show a version of the airline ticketing program [6] in CRIL in figure 6. Two agents attempt to sell three seats of an airline. This program has a data race on the variable seats, which holds the number of remaining seats, because the two agents may check the remaining seats simultaneously before making sales. Since the data race does not always happen, it is useful to roll back to the point where the check of the remaining seats went wrong. Here, agent1 and agent2 record the number of tickets sold by each agent.
Figure 6: An airline ticketing program in CRIL
| node | basic block | seats | agent1 | agent2 |
| --- | --- | --- | --- | --- |
| \((\varepsilon,0)\) | \(b_{1}\) | 3 | 0 | 0 |
| \((\varepsilon,1)\) | \(b_{2}\) | 3 | 0 | 0 |
| \((1,0)\) | \(b_{4}\) | 3 | 0 | 0 |
| \((2,0)\) | \(b_{9}\) | 3 | 0 | 0 |
| \((1,1)\) | \(b_{5}\) | 3 | 0 | 0 |
| \((1,2)\) | \(b_{6}\) | 2 | 0 | 0 |
| \((1,3)\) | \(b_{7}\) | 2 | 1 | 0 |
| \((2,1)\) | \(b_{10}\) | 2 | 1 | 0 |
| \((2,2)\) | \(\mathbf{b_{11}}\) | 1 | 1 | 0 |
| \((2,3)\) | \(b_{12}\) | 1 | 1 | 1 |
| \((2,4)\) | \(\mathbf{b_{10}}\) | 1 | 1 | 1 |
| \((1,4)\) | \(\mathbf{b_{5}}\) | 1 | 1 | 1 |
| \((2,5)\) | \(b_{11}\) | 0 | 1 | 1 |
| \((1,5)\) | \(b_{6}\) | -1 | 1 | 1 |
| \((2,6)\) | \(b_{12}\) | -1 | 1 | 2 |
| \((2,7)\) | \(b_{10}\) | -1 | 1 | 2 |
| \((2,8)\) | \(b_{13}\) | -1 | 1 | 2 |
| \((1,6)\) | \(b_{7}\) | -1 | 2 | 2 |
| \((1,7)\) | \(b_{5}\) | -1 | 2 | 2 |
| \((1,8)\) | \(b_{8}\) | -1 | 2 | 2 |
| \((\varepsilon,2)\) | \(b_{2}\) | -1 | 2 | 2 |
| \((\varepsilon,3)\) | \(b_{3}\) | -1 | 2 | 2 |

Table 3: A faulty execution
Table 3 shows a forward execution that ends with \(\mathtt{seats}=-1\). Figure 7 is the annotation DAG when terminated at 'end main' in \(b_{3}\). To investigate the cause of the data race, we focus on the edges labeled with \(\mathtt{seats}\). The solid edges indicate that \(\mathtt{seats}\) is written at \((\varepsilon,0)\), \((1,2)\), \((2,2)\), \((2,5)\), and \((1,5)\). In particular, the \(\mathtt{seats}\) defined at \((2,2)\) is used for updates by both processes 1 and 2, causing the data race. (The steps in bold are involved in the problem.) To resolve the data race, each value of \(\mathtt{seats}\) should be checked exactly once, except for the last value of \(\mathtt{seats}\).
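The race can also be replayed outside CRIL; the following plain-Python sketch (ours, with variable names following figure 6) reproduces the interleaving of Table 3 around the last seat, where both agents pass the check on the same value of seats before either decrement lands.

```python
# A replay (ours) of the faulty schedule around the last seat.

seats = 1                # the state reached after step (2,3)
sold = {1: 1, 2: 1}      # tickets sold by agent1 and agent2 so far

def check():             # b5 / b10: the guard of the selling loop
    return seats != 0

def sell(agent):         # b6+b7 / b11+b12: decrement, record the sale
    global seats
    seats -= 1
    sold[agent] += 1

assert check()           # agent 2 sees one seat left        (2,4)
assert check()           # agent 1 sees the same seat        (1,4)
sell(2)                  # agent 2 sells it: seats = 0       (2,5)
sell(1)                  # agent 1 sells it too: seats = -1  (1,5)
print(seats, sold)       # -1 {1: 2, 2: 2}: one seat oversold
```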
Figure 8 shows the airline program where \(\mathtt{sub1}\) and \(\mathtt{sub2}\) are replaced by versions with the V-P operations. The parameter of the V-P operations works as a semaphore so that checking and updating \(\mathtt{seats}\) form a critical region. Figure 9 is the annotation DAG for the forward execution in which \(\mathtt{sub1}\) runs first once and then \(\mathtt{sub2}\) runs twice. Process 1 executes \(b^{\prime}_{5}\), setting \(\mathtt{semaphore}=1\), at \((1,1)\) first. (\(\mathtt{sem}\) stands for \(\mathtt{semaphore}\) in the figure.) This prevents process 2 from executing \(b^{\prime}_{10}\) at \((2,1)\), since \(\mathtt{semaphore}\) must be 0. Backwards, \(b^{\prime}_{14}\) and \(b^{\prime}_{15}\) work as V \(\mathtt{semaphore}\). In the backward execution, the order of basic blocks is stored in the annotation DAG. It works as follows (a Python analogue is sketched after figure 9):
* The sequence of \(\stackrel{{\mathtt{sem}}}{{\rightarrow}}\) edges alternates between V and P operations in the forward execution: \(\bot\stackrel{{\mathtt{sem}}}{{\longrightarrow}}(1,1)\) is by \(b^{\prime}_{5}\) and \((1,1)\stackrel{{\mathtt{sem}}}{{\longrightarrow}}(1,3)\) by \(b^{\prime}_{14}\), \(\cdots\), \((1,3)\stackrel{{\mathtt{sem}}}{{\longrightarrow}}(2,1)\) by \(b^{\prime}_{10}\), \((2,1)\stackrel{{\mathtt{sem}}}{{\longrightarrow}}(2,3)\) by \(b^{\prime}_{15}\), \(\cdots\).
* When \(\mathtt{seats}=0\), \(\mathtt{semaphore}\) is released with no operation. \((2,7)\stackrel{{\mathtt{sem}}}{{\longrightarrow}}(1,5)\stackrel{{ \mathtt{sem}}}{{\longrightarrow}}(1,6)\) by \(b^{\prime}_{5}\) and \(b^{\prime}_{8}\) and \((1,6)\stackrel{{\mathtt{sem}}}{{\longrightarrow}}(2,9)\stackrel{{ \mathtt{sem}}}{{\longrightarrow}}(2,10)\) by \(b^{\prime}_{10}\) and \(b^{\prime}_{13}\).
* In the backward execution, \(\mathtt{sub2}\) is ready first since \((2,10)\) is \(\mathtt{last}(E_{W},\mathtt{sem})\).
* Then, \(\mathtt{sub1}\) is done with no operation and \((2,7)\) is the P operation in \(\mathtt{sub2}\). The order of V and P is kept until reaching \(\bot\).
Figure 8: An airline ticketing with semaphore
Figure 9: The annotation DAG after the forward execution with semaphore
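For reference, the following hypothetical Python analogue mirrors the corrected program of figure 8. Python's `threading.Semaphore` acquires when its counter is positive, whereas CRIL's P operation waits for `semaphore` to be 0 and sets it to 1, so the sketch inverts the counter convention while preserving the critical-region discipline.

```python
# Hypothetical analogue of figure 8: the check and update of 'seats'
# now form a critical region guarded by a semaphore, removing the race.
import threading

seats = 3
sem = threading.Semaphore(1)   # counterpart of the 'semaphore' variable

def agent(sold):
    global seats
    while True:
        sem.acquire()          # P: enter the critical region
        if seats <= 0:
            sem.release()      # V: release with no operation when sold out
            break
        seats -= 1             # check and sale can no longer interleave
        sold.append(1)
        sem.release()          # V: leave the critical region

a1, a2 = [], []
t1 = threading.Thread(target=agent, args=(a1,))
t2 = threading.Thread(target=agent, args=(a2,))
t1.start(); t2.start(); t1.join(); t2.join()
print(seats, len(a1), len(a2))  # seats never drops below 0
```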
## 5 Concluding remarks
We have proposed CRIL as a reversible concurrent intermediate language. CRIL is an extension of RIL [17] that enables running multiple subroutines as processes in parallel. CRIL is intended to be fairly low-level, in that each instruction is at a level similar to three-address code, to mediate the translation from a high-level program to a machine-oriented code. The operational semantics of CRIL, defined as \(\textit{LTSI}_{\textit{CRIL}}\), is shown to have the properties of Causal Safety and Causal Liveness under the independence of concurrent processes and shared memory updates. By the results of [14], \(\textit{LTSI}_{\textit{CRIL}}\) also satisfies other properties: the Parabolic Lemma, Causal Consistency, Unique Transition, and Independence of Diamonds.
As related work, [2] provides a compiler from ROOPL++ to PISA [23] with no intermediate language, where the translation from an object-oriented source program to the low-level PISA code is a substantial task. [7] proposes an annotation scheme for concurrent imperative programs during forward execution, where the annotation is attached directly to the source program for reversing the execution; [8] investigates its reversibility properties. CRIL uses a similar idea to Hoey's, but CRIL is at a rather lower level to provide a finer granularity for detailed analysis in translation, such as optimization. [9] presents a collection of simple stack machines with a fork and merge mechanism, where the causality is embedded in the runtime.
For future work, we have so far focused only on the fundamental properties; we will investigate how further reversibility properties contribute to behavioral analysis of concurrent programs. Currently, the dependency on the heap memory M is treated as a single memory resource, and more fine-grained dependency tracking is necessary for practical use. Deriving optimization techniques for the front-end part of compilers via a reversible version of SSA, such as RSSA [18], for concurrent imperative programs is also future work. CRIL is based on the shared memory model; incorporating channel-based communication is future work as well, to support message-passing models such as Erlang [13].
Acknowledgement. We thank Dr. Irek Ulidowski of the University of Leicester for valuable suggestions on the draft. We also thank Prof. Nobuko Yoshida of the University of Oxford, Prof. Hiroyuki Seki, Prof. Koji Nakazawa, and Prof. Yuichi Kaji of Nagoya University for fruitful discussions. We thank the anonymous reviewers for providing fruitful comments. This work is supported by JSPS Kakenhi 21H03415.
|
2301.13547 | Machine learning of evolving physics-based material models for
multiscale solid mechanics | In this work we present a hybrid physics-based and data-driven learning
approach to construct surrogate models for concurrent multiscale simulations of
complex material behavior. We start from robust but inflexible physics-based
constitutive models and increase their expressivity by allowing a subset of
their material parameters to change in time according to an evolution operator
learned from data. This leads to a flexible hybrid model combining a
data-driven encoder and a physics-based decoder. Apart from introducing
physics-motivated bias to the resulting surrogate, the internal variables of
the decoder act as a memory mechanism that allows path dependency to arise
naturally. We demonstrate the capabilities of the approach by combining an FNN
encoder with several plasticity decoders and training the model to reproduce
the macroscopic behavior of fiber-reinforced composites. The hybrid models are
able to provide reasonable predictions of unloading/reloading behavior while
being trained exclusively on monotonic data. Furthermore, in contrast to
traditional surrogates mapping strains to stresses, the specific architecture
of the hybrid model allows for lossless dimensionality reduction and
straightforward enforcement of frame invariance by using strain invariants as
the feature space of the encoder. | I. B. C. M. Rocha, P. Kerfriden, F. P. van der Meer | 2023-01-31T10:50:07Z | http://arxiv.org/abs/2301.13547v1 | # Machine learning of evolving physics-based material models for multiscale solid mechanics
###### Abstract
In this work we present a hybrid physics-based and data-driven learning approach to construct surrogate models for concurrent multiscale simulations of complex material behavior. We start from robust but inflexible physics-based constitutive models and increase their expressivity by allowing a subset of their material parameters to change in time according to an evolution operator learned from data. This leads to a flexible hybrid model combining a data-driven encoder and a physics-based decoder. Apart from introducing physics-motivated bias to the resulting surrogate, the internal variables of the decoder act as a memory mechanism that allows path dependency to arise naturally. We demonstrate the capabilities of the approach by combining an FNN encoder with several plasticity decoders and training the model to reproduce the macroscopic behavior of fiber-reinforced composites. The hybrid models are able to provide reasonable predictions of unloading/reloading behavior while being trained exclusively on monotonic data. Furthermore, in contrast to traditional surrogates mapping strains to stresses, the specific architecture of the hybrid model allows for lossless dimensionality reduction and straightforward enforcement of frame invariance by using strain invariants as the feature space of the encoder.
**Keywords:** Concurrent multiscale (FE\({}^{2}\)) modeling, Surrogate modeling, Hybrid learning
## 1 Introduction
Recent advances in materials science and manufacturing techniques are paving the way for the design of materials with highly-tailored microstructures, including metamaterials [1, 2], novel composite material systems [3, 4], printed cementitious materials [5] and multifunctional living materials [6]. The common thread in these new developments is a shift from traditional design focused on tailoring structures to material constraints towards tailoring material microstructures to macroscopic constraints. This shift in turn requires the development of highly-detailed models of material behavior across spatial scales and a shift to virtual structural certification, as trial-and-error design becomes infeasible [7, 8, 9].
Scale bridging has been traditionally performed through a bottom-up approach: physics-based constitutive models at smaller scales are calibrated using experiments and used to perform numerical simulations (using _e.g._ the Finite Element (FE) method) on representative lower-scale domains from which higher-scale physics-based models can be calibrated [10, 11]. However, physics-based constitutive models come with _a priori_ assumptions that often fail to reproduce complex lower-scale behavior [10]. The alternative is to opt for an FE\({}^{2}\) (or Computational Homogenization) approach: lower-scale FE models are embedded at every Gauss point of a higher-scale model and material behavior is directly upscaled with no constitutive assumptions at the higher scale [12, 13, 14]. Yet, the computational cost associated with repeatedly solving a large number of micromodels quickly becomes a bottleneck, in particular for many-query procedures such as design exploration and optimization that require several higher-scale simulations to be performed.
Since the bottleneck of FE\({}^{2}\) lies in computing lower-scale models, a popular approach to reduce computational effort is to substitute the original FE micromodels with either structure-preserving reduced-order models [15, 16, 17, 18, 19, 20, 21] or purely data-driven surrogates [22, 23, 24, 25, 26, 27] trained offline. More recently, Recurrent Neural Networks (RNN) have become the model of choice especially for strain path-dependent materials, with a large body of literature dedicated to their use and tuning to different applications [28, 29, 30, 31, 32, 33, 34]. RNNs can reproduce complex long-term time dependencies in material behavior by learning latent representations of the material state, making them fast and flexible surrogates. However,
these learned representations are not a priori related to actual thermodynamic internal state variables and the model is therefore poorly interpretable (see [35] for an interesting discussion on the subject). Furthermore, training for path dependency requires sampling from a potentially infinite-dimensional space of arbitrarily-long strain paths. This means training RNNs to reproduce complex material behavior often requires an inordinate amount of data (_curse of dimensionality_) and their purely data-driven nature limits their ability to extrapolate away from paths seen during training.
In order to address these drawbacks, a growing number of recent works are shifting focus to models with a fusion of data-driven and physics-based components. Inspired by physics-informed neural networks ([36]), the authors in [37] opt for data-driven models with physics-inspired bias by enforcing thermodynamic principles in a weak sense through an augmented loss function. In a similar vein, the model in [38] learns hyperelasticity by linking together several carefully crafted neural nets to represent quantities with clear physical meaning, improving the interpretability of the resulting model. In [39] the authors extend a similar hyperelastic surrogate with a network that learns plastic flow direction and the evolution of a yield surface parametrized by a level set function, resulting in a hyperelastic-plastic model with superior extrapolation capabilities. A common thread in the aforementioned approaches, however, is that their learning architectures are heavily dependent on the type of model being learned (_e.g._ hyperelasticity, plasticity), making extensions to other models a convoluted task. In contrast, the authors in [40, 41] propose a surrogate for heterogeneous micromodels constructed by directly employing unmodified versions of the constitutive models used for the micro constituents and using a customized network architecture to infer a homogenization operator from data that combines their responses. Nevertheless, the method employs a highly-specialized iterative online prediction routine requiring extra implementation effort and with increased computational overhead when compared to that of traditional surrogates mapping strains to stresses. Finally, in [42, 43, 44] a dictionary of candidate physics-based models is assumed and the role of machine learning shifts instead to that of performing model selection and/or design of experiments.
In this work we explore an alternative approach for constructing hybrid surrogate models for path-dependent multiscale simulations. We start from the premise that existing physics-based models -- _e.g._ the ones used to describe microscale constituents -- are not flexible enough to reproduce macroscale behavior but nonetheless encapsulate crucial physical features such as frame invariance and loading/unloading conditions. It is our aim to avoid learning these features directly from data, as that would require either an excessively large dataset or a highly-specialized learning architecture. We therefore opt for keeping the constitutive model as intact as possible and instead increasing flexibility by allowing some (or all) of its material parameters to evolve in time. The resulting model can be seen in Fig. 1: a data-driven encoder that learns the evolution of a set of material properties is linked to a physics-based material model decoder that maps strains to stresses. In contrast to other strategies in literature, we keep the architecture as general as possible: a general feature extractor parses macroscopic strains into features for the encoder -- which can be as simple as the strains themselves or other derived quantities (_e.g._ strain invariants) -- and any type of constitutive model can in principle act as decoder (_e.g._ hyperelasticity, plasticity, damage). By relegating stress computations to the decoder, we effectively introduce physics-based bias to the model.1 Furthermore, by letting the material model handle the evolution of its own internal variables, the model benefits from a recurrent component with interpretable memory structure that allows path dependency to arise naturally. The strategy we explore here is related to the one we propose in [46], but in that work we let an encoder learn local strain distributions for several virtual material points with fixed properties. We see the two approaches as being complementary, and therefore with potential for being used in combination to form a flexible range of hybrid surrogates.
Footnote 1: In purely data-driven surrogates, we accept some bias in exchange for reduced variance — _e.g._ by employing regularization or adopting prior distributions for model parameters [45] — in order to counter overfitting and improve generalization. But in that case the bias is merely a way to reduce complexity, with no physical interpretation and no _a priori_ impact on the extrapolation capabilities of the model.
The remainder of the work is organized as follows. Section 2 contains a primer on concurrent multiscale (FE\({}^{2}\)) modeling and discusses the difficulties of training purely data-driven surrogates. In Section 3, we particularize the model of Fig. 1 to the case of a feedforward neural network encoder and discuss aspects related to offline training and online numerical stabilization. In Section 4 we assess the performance of the hybrid model in reproducing the behavior of fiber-reinforced composites using different encoder features and decoder models. Finally, some concluding remarks and future research directions are discussed in Section 5.
## 2 Concurrent multiscale (FE\({}^{2}\)) modeling
In this section we present a short discussion on FE\({}^{2}\) modeling. The goal is not to be comprehensive -- the interested reader is referred to [13, 14] for detailed discussions on the subject -- but rather to expose the computational bottleneck associated with the method and pinpoint where surrogate models can be used to alleviate the issue. We then demonstrate how a Recurrent Neural Network (RNN) can be used as surrogate model and showcase some of the difficulties associated with their training and their extrapolation capabilities.
### Scale separation and coupling
In FE\({}^{2}\) we assume the problem being solved can be split into a homogeneous macroscopic domain \(\Omega\) and a heterogeneous microscopic domain \(\omega\ll\Omega\) where small-scale geometric features are resolved. Here we opt for a first-order homogenization approach assuming the displacements on both scales can be related by:
\[\mathbf{u}^{\omega}=\boldsymbol{\varepsilon}^{\Omega}\mathbf{x}^{\omega}+ \widetilde{\mathbf{u}} \tag{1}\]
where microscopic displacements \(\mathbf{u}^{\omega}\) are split into a linear contribution proportional to the macroscopic strains \(\boldsymbol{\varepsilon}^{\Omega}\) and a fluctuation term \(\widetilde{\mathbf{u}}\) that accounts for microscopic heterogeneities.
Since \(\boldsymbol{\varepsilon}^{\Omega}\) varies throughout the macroscopic domain, a micromodel for \(\omega\) is embedded at each Gauss point in \(\Omega\) and a microscopic boundary-value equilibrium problem assuming small displacements and strains is solved:
\[\nabla\cdot\boldsymbol{\sigma}^{\omega}=\mathbf{0}\qquad\boldsymbol{ \varepsilon}^{\omega}=\frac{1}{2}\left(\nabla\mathbf{u}^{\omega}+\left( \nabla\mathbf{u}^{\omega}\right)^{\mathrm{T}}\right) \tag{2}\]
where the microscopic stress \(\boldsymbol{\sigma}^{\omega}\) is related to the microscopic strain \(\boldsymbol{\varepsilon}^{\omega}\) with traditional physics-based constitutive models for each phase in the heterogeneous domain. In the general case where the material models feature internal variables \(\boldsymbol{\alpha}\), we can write the constitutive update for the microscale domain as:
\[\mathcal{M}^{\omega}\begin{cases}\boldsymbol{\alpha}_{t}^{\omega}=\mathcal{A} \left(\boldsymbol{\varepsilon}_{t}^{\omega},\boldsymbol{\alpha}_{t-1}^{\omega },\boldsymbol{\theta}^{\omega}\right)\\ \boldsymbol{\sigma}_{t}^{\omega}=\mathcal{S}\left(\boldsymbol{\varepsilon}_{t}^ {\omega},\boldsymbol{\alpha}_{t}^{\omega},\boldsymbol{\theta}^{\omega}\right) \end{cases} \tag{3}\]
where \(\boldsymbol{\theta}^{\omega}\) are the material parameters of the microscopic constituents, the operators \(\mathcal{A}\) and \(\mathcal{S}\) can be split into an arbitrary number of blocks with different models (_e.g._ elasticity, elastoplasticity, damage) for the different material phases, and \(\boldsymbol{\alpha}^{\omega}\) is a concatenation of the internal variables of every microscopic Gauss point and therefore fully describes the path-dependent state of the microscopic problem.
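To make the interface of Eq. (3) concrete, the following sketch implements the operators \(\mathcal{A}\) and \(\mathcal{S}\) for a hypothetical one-dimensional elastic/perfectly-plastic phase; the function names and the scalar model are illustrative stand-ins for the actual per-phase models used at the microscale.

```python
# A minimal 1D instance of Eq. (3): operator A updates the internal
# variable (the plastic strain) and operator S returns the stress.
import numpy as np

def update_alpha(eps, alpha_prev, theta):     # operator A of Eq. (3)
    E, sig_y = theta
    eps_p = alpha_prev
    sig_trial = E * (eps - eps_p)             # elastic predictor
    if abs(sig_trial) > sig_y:                # plastic corrector
        eps_p += np.sign(sig_trial) * (abs(sig_trial) - sig_y) / E
    return eps_p

def stress(eps, alpha, theta):                # operator S of Eq. (3)
    E, _ = theta
    return E * (eps - alpha)
```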
In order to determine the strains \(\boldsymbol{\varepsilon}^{\Omega}\) that serve as boundary conditions for the micromodels, a macroscopic small-strain equilibrium problem is solved:
\[\nabla\cdot\boldsymbol{\sigma}^{\Omega}=\mathbf{0}\qquad\boldsymbol{ \varepsilon}^{\Omega}=\frac{1}{2}\left(\nabla\mathbf{u}^{\Omega}+\left( \nabla\mathbf{u}^{\Omega}\right)^{\mathrm{T}}\right) \tag{4}\]
but this time no constitutive assumptions are adopted. Macroscale stresses are instead directly homogenized from the microscopic response:

\[\mathbf{\sigma}^{\Omega}=\frac{1}{|\omega|}\int_{\omega}\mathbf{\sigma}^{\omega}\mathrm{d}\omega \tag{5}\]

Figure 1: The hybrid surrogate combining a data-driven encoder for material parameters and a physics-based material model decoder.
which couples the macroscopic strain \(\mathbf{\varepsilon}^{\Omega}\) with the microscopic solution. Since Eq. (1) also couples the solutions in the opposite direction, a bidirectional coupling is formed which requires the two-scale equilibrium problem to be solved iteratively.
### Data-driven surrogate modeling
The coupled problem of Section 2.1 is extremely computationally demanding. The lower-scale domain \(\omega\) usually features complicated geometric features and must therefore be modeled with dense FE meshes in order to ensure accuracy. Worse yet, an independent microscopic problem must be solved at every integration point in \(\Omega\) for every iteration of every time step of the simulation. This nested nature quickly forms a computational bottleneck.
Since the bulk of the computational effort lies in solving the micromodels, a popular approach to make multiscale analysis viable for practical applications is to substitute the microscopic FE models by data-driven surrogates. The idea is to perform a number of micromodel simulations under representative boundary conditions and use the resulting stress-strain pairs to train a machine learning model to be deployed when performing the actual two-scale simulations of interest. Naturally, the approach tacitly assumes that the number of offline micromodel computations required to train the model is much smaller than the number of times the microscopic behavior will be computed online. In the following, we use a simple example to demonstrate a number of difficulties associated with training such a model to reproduce path-dependent material behavior.
### Example: A one-dimensional RNN surrogate
For this demonstration, we train a Long Short-term Memory (LSTM) network [47] to reproduce one-dimensional (single stress/strain component) elastoplasticity. The architecture of the model is shown in Fig. 2a and is implemented in PyTorch [48]. In order to minimize the risk of overfitting, a pragmatic model selection procedure is performed by first training the model with several non-monotonic strain paths and gradually increasing cell size until reasonable accuracy is obtained. This leads to a parsimonious model with a single LSTM cell with 5 latent units.
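A sketch of such a surrogate in PyTorch is shown below, assuming (as described above) a single LSTM cell with 5 latent units followed by a dense output layer; the class and variable names are illustrative and not taken from the paper's code.

```python
# Sketch of the 1D LSTM surrogate of Fig. 2a: one strain component in,
# one stress component out, with a 5-unit latent state.
import torch

class LSTMSurrogate(torch.nn.Module):
    def __init__(self, latent=5):
        super().__init__()
        self.cell = torch.nn.LSTM(input_size=1, hidden_size=latent,
                                  batch_first=True)
        self.dense = torch.nn.Linear(latent, 1)

    def forward(self, strain_path):
        # strain_path: (batch, time, 1); the hidden and cell states act
        # as a learned, non-physical analogue of the variables in Eq. (3)
        h_seq, _ = self.cell(strain_path)
        return self.dense(h_seq)   # stress at every time step
```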
At this point it is interesting to draw a parallel between the network and the micromodel whose behavior is being reproduced: the concatenation of the hidden state \(\mathbf{h}\) and cell state \(\mathbf{c}\) of the LSTM cell can be seen as a lower-dimensional surrogate for the set of microscopic internal variables \(\mathbf{\alpha}^{\omega}\) of Eq. (3). However, in contrast to the variables in \(\mathbf{\alpha}\), the latent variables \(\mathbf{h}\) and \(\mathbf{c}\) have no physical interpretation and evolve purely according to heuristic memory mechanisms that mimic patterns inferred during training.
First, we train the LSTM using only monotonic data. Since only one strain component is being modeled, this initial dataset is composed simply of one strain path in tension and one in compression. The trained model is then used to predict a tension path with one unloading-reloading cycle. Having never seen unloading during training, the network reverses course and unloads on top of its loading path (Fig. 2b). This result is hardly surprising, but sheds light on the potentially deceiving nature of the training procedure: even though we are only concerned with a single strain component, predictions actually take place in an augmented space that describes strain paths in time which can be arbitrarily high-dimensional (as paths can be arbitrarily long).
We can further demonstrate this manifestation of the _curse of dimensionality_ with the two additional examples of Fig. 3. In Fig. 3a we train the network with two unloading paths and it fails to predict a third one at an intermediate strain level. Here it can be deceiving to assume the third path can be interpolated from the other two: in the 48-dimensional space of strain paths (we use paths with 48 time steps each) the network is actually operating far away from training data. In Fig. 3b the network tries to reproduce a path seen during training but we first let the material rest at zero strain for five time steps before loading starts and for another five time steps at the end of the path. With purely data-driven latent dynamics, the initial rest disturbs the memory structure of the network and causes large deviations for a large portion of the path. For the rest at the end of the path, we see that the surrogate fails to predict the characteristic that the stress does not change upon constant deformation.
Training data-driven models to accurately reproduce path dependency is therefore not straightforward: their latent representations of material state are not interpretable and even phenomena as trivial as resting at zero strain must be learned from data. At the core of successful applications of RNNs to this task are either extensive datasets obtained with carefully crafted sampling strategies [33, 49] or highly tailored datasets for specific macroscopic problems [28]. Alternatively, active learning frameworks may be used to skip offline training altogether [50, 51], but at the cost of producing slower surrogates.

Figure 2: An LSTM recurrent neural network as surrogate for 1D path-dependent material behavior trained with only monotonic data.

Figure 3: 1D LSTM surrogate trained with unloading/reloading and used to predict unseen unloading paths.
## 3 A hybrid surrogate model
In this work we attempt to avoid the curse of dimensionality by relegating to a physics-based material model some of the tasks the RNN of Section 2.3 has to explicitly learn from data. In this section, we further formalize the hybrid approach of Fig. 1 by looking at the roles of each model component and their dependencies in time. We then particularize the model for the case of a feedforward neural network (FNN) encoder and discuss feature selection and numerical stabilization strategies.
### Evolving material parameters
Physics-based material models are traditionally formulated with a fixed set of parameters \(\mathbf{\theta}\) either directly computed from a specific set of (numerical) experiments or indirectly from stress-strain measurements in a Maximum Likelihood Estimation (MLE) approach2. Here we start from the premise that letting (part of) \(\mathbf{\theta}\) evolve in time increases flexibility and allows the model to capture more complex material behavior. Conversely, keeping the remainder of the model intact improves interpretability and provides physics-based bias to the data-driven model tasked to learn this evolution.
Footnote 2: The parameters \(\mathbf{\theta}\) can also be estimated through Bayesian inference and would therefore be described by a multivariate probability density instead of a fixed set of values. Regardless, that density would still be stationary in time.
In Fig. 4, the hybrid model of Fig. 1 is unrolled in time for a number of consecutive time steps and represented as a graph showing the dependencies between variables. Filled and hollow nodes represent observed and latent variables, respectively, and are color coded to represent the different model components in Fig. 1. Similar to the microscale models of Eq. (3), we assume the constitutive behavior at the macroscale is given by a physics-based material model:
\[\mathcal{M}^{\Omega}\begin{cases}\mathbf{\alpha}_{t}^{\Omega}=\mathcal{A}\left( \mathbf{\varepsilon}_{t}^{\Omega},\mathbf{\alpha}_{t-1}^{\Omega},\mathbf{\theta}_{t}^{ \Omega}\right)\\ \mathbf{\sigma}_{t}^{\Omega}=\mathcal{S}\left(\mathbf{\varepsilon}_{t}^{\Omega},\mathbf{ \alpha}_{t}^{\Omega},\mathbf{\theta}_{t}^{\Omega}\right)\end{cases} \tag{6}\]
but now with time-dependent parameters \(\mathbf{\theta}_{t}\). Note that the model response at time \(t\) depends on the material state at time \(t-1\) through a set of internal variables \(\mathbf{\alpha}_{t-1}^{\Omega}\) (Fig. 4). This gives the model a recurrent nature not unlike that of the RNN of Fig. 2a with its state variables \(\mathbf{c}\) and \(\mathbf{h}\). The advantage here is that \(\mathbf{\alpha}\) has clear physical interpretation (plastic strains, damage variables, etc.) and its evolution is handled by the fixed operator \(\mathcal{A}\) composed of clearly interpretable algorithmic steps grounded in physics and/or classical material phenomenology (_e.g._ a return mapping algorithm).
Figure 4: Graph representation of the hybrid model architecture combining a data-driven encoder and a physics-based decoder. Filled circles represent observable variables and hollow circles represent latent variables.
On the encoder side, we let the material properties \(\mathbf{\theta}\) evolve according to an evolution operator \(\mathcal{D}\) whose shape is learned from data:
\[\mathbf{\theta}_{t}=\mathcal{D}\left(\mathbf{\varphi}_{t}\right) \tag{7}\]
as a function of a set of features \(\mathbf{\varphi}\) that are themselves obtained from the macroscopic strains through a feature extractor \(\mathcal{F}\):
\[\mathbf{\varphi}_{t}=\mathcal{F}\left(\mathbf{\varepsilon}_{t}^{\Omega}\right) \tag{8}\]
where \(\mathbf{\varphi}_{t}\) could be simply the strains themselves or other quantities derived from it. More importantly, note that \(\mathbf{\theta}_{t}\) depends only on the current features \(\mathbf{\varphi}_{t}\) and we therefore assume the encoder is not recurrent (Fig. 4). This choice effectively limits the flexibility of \(\mathcal{D}\) and makes the hybrid surrogate fully rely on the more robust model \(\mathcal{M}^{\Omega}\) to explain path-dependent phenomena, helping counter the curse of dimensionality associated with sampling strain paths. For instance, it opens up the possibility to train the surrogate exclusively with monotonic data, as we will demonstrate in the examples of Section 4.
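Putting Eqs. (6)-(8) together, one time step of the hybrid model can be sketched as below; the wiring is the point here, and any decoder (for instance the toy 1D operators from the sketch in Section 2.1) can be plugged in. All names are illustrative.

```python
# One time step of Fig. 4: a memoryless encoder maps features to theta,
# and the physics-based decoder carries all path dependency through alpha.
def hybrid_step(eps, alpha_prev, encoder, extract_features,
                decoder_update, decoder_stress):
    phi = extract_features(eps)                      # Eq. (8): extractor F
    theta = encoder(phi)                             # Eq. (7): evolved parameters
    alpha = decoder_update(eps, alpha_prev, theta)   # Eq. (6), operator A
    sigma = decoder_stress(eps, alpha, theta)        # Eq. (6), operator S
    return sigma, alpha                              # alpha is the only carried state
```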
In the following sections, we particularize the model for the case of \(\mathcal{D}\) being a fully-connected neural network and for specific choices of \(\mathcal{F}\) and \(\mathcal{M}\). Nevertheless, the general architecture of Figs. 1 and 4 is meant to be as flexible as possible:
* The nature and dimensionality of \(\mathbf{\varphi}\) is not tied to that of \(\mathbf{\varepsilon}^{\Omega}\) since strains are also given directly to \(\mathcal{M}^{\Omega}\);
* Other machine learning models for regression can also be used as \(\mathcal{D}\), and it could in principle be split into different models handling the evolution of different subsets of \(\mathbf{\theta}\). Any number of model parameters may also be left out of \(\mathbf{\theta}\) and either fixed as constants or optimized to constant values during training;
* No assumption is made on the form of \(\mathcal{M}^{\Omega}\) or the nature or dimensionality of \(\mathbf{\alpha}^{\Omega}\). Instead of a single model, it could also for instance be a mixture of physics-based models combined with analytical homogenization techniques.
### Feature extractors
A pragmatic choice for \(\mathcal{F}\) is to simply assume \(\mathbf{\varphi}\) is the macroscopic strain vector \(\mathbf{\varepsilon}^{\Omega}\) itself. It is also a familiar one, as we can then relate the resulting model to conventional surrogates mapping strains to stresses. However, since macroscopic strains are also directly passed on to the decoder, the architecture gives us the freedom to experiment with different features.
Fig. 5 shows the two model architectures we explore in this work. For the two variants in Fig. 5a we either use \(\mathbf{\varepsilon}^{\Omega}\) itself or a set of small-strain invariants of the macroscopic strain tensor of increasing dimensionality:
\[\textbf{I}_{\varepsilon}^{\Omega}=\begin{bmatrix}I_{1}^{\varepsilon}\end{bmatrix} \quad\mathrm{or}\quad\textbf{I}_{\varepsilon}^{\Omega}=\begin{bmatrix}I_{1}^{ \varepsilon}&I_{2}^{\varepsilon}\end{bmatrix} \tag{9}\]
where the invariants are given by the well-known expressions:
\[I_{1}^{\varepsilon}=\mathrm{tr}\left(\mathbf{\varepsilon}\right),\quad I_{2}^{ \varepsilon}=\frac{1}{2}\left(\mathrm{tr}\left(\mathbf{\varepsilon}\right)^{2}- \mathrm{tr}\left(\mathbf{\varepsilon}^{2}\right)\right) \tag{10}\]
Additionally, since the current study focuses on elastoplasticity, it is also interesting to explore feature spaces including invariants of the deviatoric strain tensor:
\[\textbf{I}_{\varepsilon}^{\Omega}=\begin{bmatrix}J_{2}^{\varepsilon}\end{bmatrix} \quad\mathrm{or}\quad\textbf{I}_{\varepsilon}^{\Omega}=\begin{bmatrix}I_{1}^{ \varepsilon}&J_{2}^{\varepsilon}\end{bmatrix} \tag{11}\]
where:
\[J_{2}^{\varepsilon}=\frac{1}{3}\left(I_{1}^{\varepsilon}\right)^{2}-I_{2}^{ \varepsilon} \tag{12}\]
By using features based on invariants and since the decoder material model is itself already frame invariant for small strains, it follows that the resulting surrogate will naturally inherit this beneficial characteristic. This stands in contrast with traditional black-box surrogates mapping strains to stresses. Furthermore, opting for invariant-based features can be seen as a physics-based dimensionality reduction operation that can potentially reduce the amount of data needed to train the hybrid model.
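For reference, the invariants of Eqs. (10) and (12) can be computed directly from the strain tensor; the small helper below is a sketch (the paper's implementation is in C++, not Python).

```python
# Strain invariants used by the invariant-based feature extractors.
import numpy as np

def strain_invariants(eps):
    """eps: strain tensor as a square numpy array."""
    I1 = np.trace(eps)                                   # Eq. (10)
    I2 = 0.5 * (np.trace(eps)**2 - np.trace(eps @ eps))  # Eq. (10)
    J2 = I1**2 / 3.0 - I2                                # Eq. (12)
    return I1, I2, J2
```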
We also investigate the possibility of extracting features from the outputs of a precalibrated physics-based material model \(\overline{\mathcal{M}}\) subjected to the same strain path seen at the macroscale (Fig. 5(b)). Note that this specific architecture introduces an additional recurrent component to the model through the set \(\overline{\mathbf{\alpha}}\) of internal variables of \(\overline{\mathcal{M}}\). From a machine learning perspective, the role of \(\overline{\mathcal{M}}\) would be analogous to that of a temporal convolution operator or an RNN cell appended to the encoder. The key difference, however, is that \(\overline{\mathcal{M}}\) is fixed _a priori_ and therefore should not require extra sampling effort with respect to the more straightforward extractor in Fig. 5(a).
Naturally, different choices for \(\overline{\mathcal{M}}\) yield models with distinct learning capabilities, and we therefore assume \(\overline{\mathcal{M}}\) encapsulates relevant information not only about the current values of \(\mathbf{\epsilon}^{\Omega}\) but also about their history. In the present scenario where the data is coming from micromodel computations, we opt for the intuitive choice of having \(\overline{\mathcal{M}}\) be one of the known constitutive models used to describe the microscopic material phases. We can therefore conceptually see \(\overline{\mathcal{M}}\) as an imaginary representative material point at the microscale that is always subjected to the average micromodel strain. We then use either a subset of its internal variables \(\overline{\mathbf{\alpha}}\) or a set of invariants \(\mathbf{\Gamma}_{\mathbf{\sigma}}\) of its stress outputs as features.
### Neural network encoder
For simplicity, we opt for modeling the evolution of \(\mathbf{\theta}\) using classical feedforward neural networks with fully-connected layers. As both architectures in Fig. 5 ultimately compute macroscopic stresses given macroscopic strains, we can use supervised learning to train the model with a straightforward Maximum Likelihood approach. Gathering the complete set of network weights in a vector \(\mathbf{w}\) and seeing the complete surrogate as a monolithic model that computes an approximation \(\widehat{\mathbf{\sigma}}\) for stresses, we adopt the following observation model for the snapshot stresses \(\mathbf{\sigma}\):
\[\mathbf{\sigma}=\widehat{\mathbf{\sigma}}\left(\mathbf{\epsilon},\mathbf{w}\right)+\xi, \quad\xi\sim\mathcal{N}\left(\xi|\mathbf{0},\beta^{-1}\mathbf{I}\right) \tag{13}\]
where the superscript \(\Omega\) is dropped for convenience, \(\mathbf{I}\) is an identity matrix, and \(\xi\) is an additive Gaussian noise3. Under the assumption of a squared loss, maximizing the likelihood of a training dataset with \(N\) observations amounts to minimizing the loss function [45]:
Footnote 3: Even though our observations come from a computer model and can be considered noiseless, the surrogate \(\widehat{\mathbf{\sigma}}\) is in general not arbitrarily flexible and the random variable \(\xi\) is therefore still necessary to explain why the model does not exactly fit every single observation in the dataset.
\[L=\frac{1}{2}\sum_{n=1}^{N}\|\mathbf{\sigma}_{n}-\widehat{\mathbf{\sigma}}\left(\mathbf{ \epsilon}_{n},\mathbf{w}\right)\|^{2} \tag{14}\]
with the precision of the noise that explains the data misfit being simply \(\beta=N/(2L)\). The resulting loss function is the same one used for conventional data-driven surrogates and is therefore straightforward to implement.
Figure 5: The two types of FNN-based model architectures explored in this work, with different feature extraction steps.

Nevertheless, it is worth noting that since we cannot directly observe \(\mathbf{\theta}\), computing the gradients of \(L\) with respect to \(\mathbf{w}\) involves backpropagating derivatives through the decoder \(\mathcal{M}\). Furthermore, since \(\mathbf{w}\) affects the evolution of the internal variables \(\mathbf{\alpha}\), backpropagation in time becomes necessary. Starting from Eq. (14) and walking back through the graph of Fig. 4, the gradient of the loss at time step \(t\) of a given strain path is given by:
\[\frac{\partial L_{t}}{\partial\mathbf{w}}=\frac{\partial L}{\partial\widehat{\boldsymbol{\sigma}}_{t}}\left\{\frac{\partial\widehat{\boldsymbol{\sigma}}_{t}}{\partial\boldsymbol{\theta}_{t}}\frac{\partial\boldsymbol{\theta}_{t}}{\partial\mathbf{w}}+\frac{\partial\widehat{\boldsymbol{\sigma}}_{t}}{\partial\boldsymbol{\alpha}_{t}}\frac{\partial\boldsymbol{\alpha}_{t}}{\partial\boldsymbol{\theta}_{t}}\frac{\partial\boldsymbol{\theta}_{t}}{\partial\mathbf{w}}+\frac{\partial\widehat{\boldsymbol{\sigma}}_{t}}{\partial\boldsymbol{\alpha}_{t}}\sum_{i=t-1}^{1}\left[\left(\prod_{\tau=t}^{i+1}\frac{\partial\boldsymbol{\alpha}_{\tau}}{\partial\boldsymbol{\alpha}_{\tau-1}}\right)\frac{\partial\boldsymbol{\alpha}_{i}}{\partial\boldsymbol{\theta}_{i}}\frac{\partial\boldsymbol{\theta}_{i}}{\partial\mathbf{w}}\right]\right\} \tag{15}\]
where the remaining gradient chain \(\partial\boldsymbol{\theta}/\partial\mathbf{w}\) is computed with conventional backpropagation through the network. If \(\mathcal{M}\) is implemented in a code base that allows for automatic differentiation (_e.g._ in PyTorch), these time dependencies are naturally taken into account as long as a persistent gradient tape is used within each strain path4. In this work we instead implement network training directly into an existing FE code, and therefore opt for the pragmatic approach of computing all partial derivatives of quantities derived from \(\mathcal{M}\) using finite differences.
Footnote 4: This is already the case for RNNs, so switching from RNNs to the present model should require little to no changes to the way training is performed.
Finally, in order to enforce upper and lower bounds for \(\boldsymbol{\theta}\) and avoid unphysical parameter values (_e.g._ negative elasticity moduli), we apply sigmoid activation to the final layer of the network and scale the parameters back from a \([0,1]\) range using predefined bounds:
\[\theta_{i}=\theta_{i}^{\mathrm{low}}+\theta_{i}^{\sigma}\left(\theta_{i}^{ \mathrm{upp}}-\theta_{i}^{\mathrm{low}}\right) \tag{16}\]
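In code, this bound enforcement amounts to a one-line rescaling of the sigmoid outputs; the snippet below is a sketch using the Young's modulus and Poisson's ratio bounds quoted in Section 4.2 as illustrative values.

```python
# Eq. (16): map sigmoid-activated outputs in [0, 1] to physical bounds.
import torch

def scale_parameters(theta_sigmoid, low, upp):
    return low + theta_sigmoid * (upp - low)

# e.g. the elastic-decoder bounds of Section 4.2
low = torch.tensor([1e1, 0.0])    # lower bounds on [E, nu]
upp = torch.tensor([1e5, 0.5])    # upper bounds on [E, nu]
```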
### Material decoders
As previously mentioned, any constitutive model can in principle be used as \(\mathcal{M}\). For the present study we focus on reproducing elastoplasticity and therefore narrow our choices down to the following set of potential decoders with increasing levels of complexity. The simplest one is a linear-elastic isotropic material with no internal variables:
\[\sigma_{ij}=D_{ijkl}\varepsilon_{kl}\quad\mathrm{with}\quad D_{ijkl}=G\left(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\right)+\left(K-\frac{2}{3}G\right)\delta_{ij}\delta_{kl} \tag{17}\]
where index notation is used for convenience. For this model, \(\boldsymbol{\theta}\) comprises only the bulk and shear moduli \(K\) and \(G\), or equivalently the Young's modulus \(E\) and the Poisson's ratio \(\nu\).
The second decoder option is a simple plasticity model with \(J_{2}\) (von Mises) flow. The stress update in this case becomes:
\[\sigma_{ij}=D_{ijkl}\left(\varepsilon_{kl}-\varepsilon_{kl}^{\mathrm{p}}\right) \tag{18}\]
where the strain is additively decomposed into elastic and plastic (\(\varepsilon^{\mathrm{p}}\)) contributions. The yield criterion and plastic flow rule are given by:
\[\phi=\sqrt{3J_{2}^{\sigma}}-\sigma_{\mathrm{y}}\leq 0\quad\mathrm{and}\quad \Delta\varepsilon_{ij}^{\mathrm{p}}=\Delta\gamma\sqrt{\frac{3}{2}}\frac{S_{ij} }{\left\|S_{ij}\right\|_{\mathrm{F}}} \tag{19}\]
where \(\mathbf{S}\) is the deviatoric part of the stresses, \(\gamma\) is a plastic multiplier, \(\sigma_{\mathrm{y}}\) is a yield stress parameter and we write the Frobenius norm as \(\left\|\cdot\right\|_{\mathrm{F}}\). In order to keep the model as simple as possible, we assume \(\sigma_{\mathrm{y}}\) is a material constant and therefore end up with a perfectly-plastic model with associative flow. The internal variables of this model are components of the plastic strain vector \(\varepsilon^{\mathrm{p}}\) and the only new material parameter is the yield stress \(\sigma_{\mathrm{y}}\).
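A radial-return sketch of this perfectly-plastic decoder is given below for a full 3x3 strain tensor; the actual implementation works on Voigt vectors inside an FE code, so the layout and function name here are illustrative.

```python
# Return mapping for the perfectly-plastic J2 decoder of Eqs. (18)-(19).
import numpy as np

def j2_update(eps, eps_p, K, G, sig_y):
    eps_e = eps - eps_p                              # elastic strain
    vol = np.trace(eps_e) / 3.0
    dev_e = eps_e - vol * np.eye(3)
    s = 2.0 * G * dev_e                              # deviatoric trial stress
    q = np.sqrt(1.5) * np.linalg.norm(s)             # von Mises equivalent stress
    if q > sig_y:                                    # yield check, Eq. (19)
        dgamma = (q - sig_y) / (3.0 * G)             # perfect plasticity
        n = s / np.linalg.norm(s)
        eps_p = eps_p + dgamma * np.sqrt(1.5) * n    # flow rule, Eq. (19)
        s = s * (sig_y / q)                          # map back to the yield surface
    sigma = s + 3.0 * K * vol * np.eye(3)            # Eq. (18)
    return sigma, eps_p
```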
Finally, we also consider the more complex pressure-dependent, non-associative plasticity model proposed by Melro _et al._[52]. Stress update is the same as in Eq. (18), but yield surface and plastic flow are given by:
\[\phi=6J_{2}^{\sigma}+2I_{1}^{\sigma}\left(\sigma_{\mathrm{c}}-\sigma_{\mathrm{t }}\right)-2\sigma_{\mathrm{c}}\sigma_{\mathrm{t}}\leq 0\quad\mathrm{and}\quad\Delta \varepsilon_{ij}^{\mathrm{p}}=\Delta\gamma\left(3S_{ij}+\frac{1-2\nu_{\mathrm{ p}}}{1+\nu_{\mathrm{p}}}I_{1}^{\sigma}\delta_{ij}\right) \tag{20}\]
where \(\delta_{ij}\) is the Kronecker delta, \(\sigma_{\mathrm{t}}\) and \(\sigma_{\mathrm{c}}\) are yield stresses in tension and compression, respectively, and \(\nu_{\mathrm{p}}\) is a new parameter controlling plastic contraction and allowing for compressible plastic flow. Hardening can be described by making the yield stresses general functions of \(\varepsilon^{\mathrm{p}}\), but when used as a decoder we assume \(\sigma_{\mathrm{t}}\) and \(\sigma_{\mathrm{c}}\) do not depend on \(\varepsilon^{\mathrm{p}}\) and instead let the decoder \(\mathcal{D}\) describe their evolution.
The model by Melro _et al._[52] is also the one used to describe the microscopic material phase responsible for the nonlinear behavior observed when homogenizing micromodel response, and can therefore be seen as the natural choice for \(\mathcal{M}\). Nevertheless, the other two decoders can provide interesting insights on the effect of introducing different levels of bias to the hybrid model.
### Online predictions and inherited stability
The architecture of Fig. 1 is developed to be minimally intrusive and allow for existing material models to be used as decoders with minimum effort. We therefore implement the online routine of the model as a wrapper around an existing implementation of \(\mathcal{M}\). The basic structure of the wrapper can be seen in Algorithm 1. The hybrid nature of the model allows for a robust approach that ensures the numerical stability of the original model \(\mathcal{M}\) is inherited by the surrogate. This is achieved by only updating \(\boldsymbol{\theta}\) at the end of each time step, after the global implicit Newton-Raphson scheme converges. Material properties are therefore fixed while the global solver is iterating, and that means the tangent stiffness \(\mathbf{D}\) comes directly from \(\mathcal{M}\) and inherits its stability features.
```
Input: strain \(\boldsymbol{\varepsilon}_{\mathrm{new}}^{\Omega}\) at macroscopic Gauss point
Output: stress \(\boldsymbol{\sigma}^{\Omega}\) and stiffness \(\mathbf{D}^{\Omega}\) at macroscopic Gauss point

1  use nested model with converged parameters and internal state: \(\left(\boldsymbol{\sigma}^{\Omega},\mathbf{D}^{\Omega},\boldsymbol{\alpha}_{\mathrm{new}}\right)\leftarrow\mathcal{M}\left(\boldsymbol{\varepsilon}_{\mathrm{new}}^{\Omega},\boldsymbol{\alpha}_{\mathrm{old}},\boldsymbol{\theta}\right)\);
2  if global solver has converged:
3      store latest converged strain: \(\boldsymbol{\varepsilon}_{\mathrm{old}}\leftarrow\boldsymbol{\varepsilon}_{\mathrm{new}}\);
4      commit material history: \(\boldsymbol{\alpha}_{\mathrm{old}}\leftarrow\boldsymbol{\alpha}_{\mathrm{new}}\);
5      compute new features: \(\boldsymbol{\varphi}_{\mathrm{new}}\leftarrow\mathcal{F}\left(\boldsymbol{\varepsilon}_{\mathrm{new}}\right)\);
6      update model parameters for the upcoming time step: \(\boldsymbol{\theta}\leftarrow\mathcal{D}\left(\boldsymbol{\varphi}_{\mathrm{new}}\right)\);
7  if first global iteration of time step and Gauss point is unstable:
8      stabilize encoder: \(\mathcal{D}\leftarrow\mathtt{stabilizeNetwork}\left(\boldsymbol{\varepsilon}_{\mathrm{new}}^{\Omega}\right)\);
9      recompute features: \(\boldsymbol{\varphi}_{\mathrm{old}}\leftarrow\mathcal{F}\left(\boldsymbol{\varepsilon}_{\mathrm{old}}^{\Omega}\right)\);
10     recompute model parameters for the current time step: \(\boldsymbol{\theta}\leftarrow\mathcal{D}\left(\boldsymbol{\varphi}_{\mathrm{old}}\right)\);
11 return \(\boldsymbol{\sigma}^{\Omega},\mathbf{D}^{\Omega}\)
```
**Algorithm 1** Material wrapper implementing the online component of the hybrid surrogate.
As an example, the \(J_{2}\) plasticity model of Eq. (19) is unconditionally stable as long as its hardening modulus \(h\geq 0\) for any \(\left(\boldsymbol{\varepsilon}_{t}^{\Omega},\boldsymbol{\alpha}_{t}^{\Omega}\right)\), which is the case for the perfectly-plastic version we consider here. It then follows that any hybrid surrogate with a \(J_{2}\) decoder is also unconditionally stable. Note that this is only possible because strains are directly passed on to the decoder and would therefore not be an option for conventional surrogates (_e.g._ the RNN of Fig. 2). For those surrogates, the tangent stiffness would come directly from the jacobian of a highly-flexible data-driven model, often at the cost of numerical stability.
### Numerical stabilization
Nevertheless, the decoder \(\mathcal{M}\) may be inherently unstable even with fixed material constants. This is for instance the case for the model by Melro _et al_. [52]: the non-associative flow rule of Eq. (20) can cause the tangent stiffness \(\mathbf{D}^{\Omega}\) to lose positive definiteness under certain strain conditions and for certain combinations of model parameters. To accommodate such a scenario and open up the possibility for online model adaptivity in other contexts, we propose a scheme for updating the encoder \(\mathcal{D}\) on the fly in order to enforce extra constraints locally.
Back to Algorithm 1, at the beginning of a new time step we keep \(\boldsymbol{\theta}\) fixed to the one obtained with converged strains from the previous step and let the solver make a first strain prediction. After this first iteration, a stability criterion is checked and used to define a new loss function that can be used to update network weights in case instability is detected. Here we employ the determinant of the acoustic tensor \(\mathbf{Q}\):
\[\mathbf{Q}=\mathbf{n}_{d}^{\mathrm{T}}\mathbf{D}^{\Omega}\mathbf{n}_{d} \tag{21}\]
where \(\mathbf{n}_{d}\) is the vector normal to the strain localization direction creating the instability, which we find through an angle sweep procedure as in [53]. We then use \(\det\left(\mathbf{Q}\right)\) as a metric of stability and trigger a retraining procedure in case a negative value is detected. We then introduce a new loss function:
\[L_{\mathrm{Q}}=-\frac{\left\langle\det\left(\mathbf{Q}\right)\right\rangle_{-}} {\det\left(\mathbf{Q}_{0}\right)} \tag{22}\]
where \(\langle\cdot\rangle_{-}\) extracts the negative part of its argument and \(\mathbf{Q}_{0}\) is the acoustic tensor at the start of the simulation. We minimize this new loss at every unstable point for a small number of epochs with a low learning rate, and to discourage significant drifts from the original model we finish the stabilization procedure by updating the network using the original loss of Eq. (14) for a single minibatch. Finally, \(\boldsymbol{\theta}\) is updated using the retrained model and is kept fixed for the remaining iterations5. Note that the local constraint of Eq. (22) is therefore only enforced in a soft way and remaining instabilities might still cause the global solver to diverge, in which case we cancel the current increment, go back to the beginning of the time step and allow for the procedure to be triggered again.
Footnote 5: Changing \(\mathcal{D}\) and therefore \(\boldsymbol{\theta}\) after every iteration would not work in favor of improving stability, but rather have the opposite effect.
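The sweep over localization directions can be sketched as follows, assuming a 2D Voigt tangent with engineering shear (ordering [xx, yy, xy]); the sampling resolution and helper name are illustrative choices, not the paper's code.

```python
# Angle sweep for the stability check of Eqs. (21)-(22): build the 2x2
# acoustic tensor Q = N^T D N for candidate directions and track det(Q).
import numpy as np

def min_det_acoustic(D, n_angles=180):
    best = np.inf
    for phi in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        n1, n2 = np.cos(phi), np.sin(phi)
        # N maps a direction vector to Voigt strain-like components
        N = np.array([[n1, 0.0],
                      [0.0, n2],
                      [n2, n1]])
        best = min(best, np.linalg.det(N.T @ D @ N))
    return best   # a negative value triggers the retraining of Eq. (22)
```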
## 4 Numerical examples
The proposed model was implemented in an in-house Finite Element code developed using the open-source C++ numerical analysis library Jem/Jive [54]. In order to allow for seamless online retraining, network training was also implemented within the same code. We start this section by describing the datasets and model selection strategies used to build the surrogates. We then investigate the performance of the approach under several choices of encoders and decoders. Finally, we use the model within an FE\({}^{2}\) simulation and demonstrate the online stabilization procedure of Section 3.5. All simulations are performed on cluster nodes equipped with Xeon E5-2630V4 processors and \(128\,\mathrm{GB}\) RAM running CentOS 7.
### Data sampling and model selection
Models are trained to reproduce the behavior of the fiber-reinforced composite micromodel shown in Fig. 6. Fibers are modeled as linear-elastic and the matrix is described by the pressure-dependent non-associative elastoplastic model by Melro _et al._ [52] (Section 3.4). Microscale material properties are adopted from [10]. The microscopic geometry shown in Fig. 6 results from an RVE study performed in [10] and is therefore considered representative. Following the discussion in Section 3, our aim is to investigate to what extent it is possible to circumvent the curse of dimensionality associated with path dependency by training surrogates exclusively on monotonic strain paths and having time-dependent behavior arise naturally from a physics-based decoder. We therefore limit ourselves to monotonic paths for training. For consistency, we also employ exclusively monotonic data to perform model selection.
For efficiency, we limit the present investigation to 2D simulations (_i.e._ three strain components) in the plane perpendicular to the fibers, but nevertheless expect the discussion and conclusions to generalize to 3D simulations as long as appropriate orthotropic decoders are employed. Datasets with \(2000\) monotonic strain paths are generated under both plane strain and plane stress assumptions. Fig. 7 shows the complete plane strain dataset, with a similar one also being generated for plane stress. Each path is generated with an FE\({}^{2}\) simulation of a single macroscopic element under displacement control along a fixed direction in strain space sampled from a uniform distribution. To circumvent convergence issues, we employ an adaptive time stepping technique that progressively reduces time step size when the simulation does not converge and gradually increases it back for subsequent increments. The simulations are stopped once a strain norm of \(10\,\%\) is reached. As the adaptive scheme leads to paths with different numbers of time increments, we balance the dataset by ensuring every path is composed of \(30\) steps with strain norms as equally spaced as possible.

Figure 6: The micromodel used in the examples of this work.
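One plausible reading of this balancing step is sketched below: each adaptively-stepped path is resampled, by linear interpolation over the strain norm, to 30 increments with equally spaced norms. The interpolation choice is an assumption; the paper only states that the norms are made as equally spaced as possible.

```python
# Resample a monotonic path to n_steps increments with equally spaced
# strain norms (a sketch; interpolation is an assumed implementation).
import numpy as np

def balance_path(strains, stresses, n_steps=30):
    norms = np.linalg.norm(strains, axis=1)   # increasing for monotonic paths
    targets = np.linspace(norms[0], norms[-1], n_steps)
    eps = np.stack([np.interp(targets, norms, strains[:, i])
                    for i in range(strains.shape[1])], axis=1)
    sig = np.stack([np.interp(targets, norms, stresses[:, i])
                    for i in range(stresses.shape[1])], axis=1)
    return eps, sig
```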
To keep model selection straightforward and avoid the need for cumbersome k-fold cross validation or bootstrapping, we train a preliminary model with enough flexibility and an extensive training dataset and gradually increase the size of the validation set until the validation error converges to a good estimate of the expected prediction error [55]. This results in validation sets with \(500\) paths selected at random from the original datasets, leaving \(1500\) paths to be used for training. We then perform model selection by gradually increasing the complexity of our FNN encoders until the validation error stabilizes. From experimenting with different architectures, we find that encoders with 5 hidden layers of 50 units each with Scaled Exponential Linear Unit (SELU) [56] activation provide enough flexibility for all the examples treated here. To ensure enough regularization when computing learning curves with small datasets, we employ Bernoulli dropout layers with a rate of \(1\,\%\) after every hidden layer. Networks are trained for \(20\,000\) epochs and the model with lowest historical validation error is kept after training, further reducing the risk of overfitting on small datasets.
To assess the capabilities of the trained surrogates, we compute an additional test dataset comprising \(50\) monotonic, \(50\) unloading-reloading and \(50\) slow cycling paths, examples of which are shown in Fig. 8. To keep the comparisons fair, none of these paths are used to perform model selection and are therefore only considered after the surrogates are trained. We will use example curves like those from Fig. 8 for visual inspection of the model performance, but also the complete sets of \(50\) curves each for more rigorous statistical analysis.

Figure 7: The complete plane strain dataset used to train the surrogates, comprising \(2000\) monotonic strain-stress paths. A similar dataset is generated under plane stress conditions.

Figure 8: Examples from a test dataset with 50 paths of each type. They are not used to train any of the networks or perform model selection.
### Elastic decoder
It is interesting to first consider the simple linear-elastic decoder of Eq. (17), as it has no internal variables and therefore leads to a surrogate model comparable in nature to a conventional FNN trained on stress-strain pairs. As we will demonstrate, however, the limited physical bias provided by such a simple model already proves advantageous. Here we let both elastic properties be controlled by the learned encoder:
\[\boldsymbol{\theta}=\begin{bmatrix}E&\nu\end{bmatrix} \tag{23}\]
where the bounds \(10^{1}<E<10^{5}\) and \(0<\nu<0.5\) are enforced as described in Eq. (16).
We first perform a feature selection study and investigate how efficiently the model learns as the size of the dataset is increased. From the original plane strain training dataset of \(1500\) monotonic strain paths, we draw datasets with sizes ranging between \(1\) and \(150\) paths without replacement and use them to train networks with different encoder features. To get a reliable estimate of the expected prediction error, we repeat this process \(50\) times for each dataset size and encoder type, and for comparison we also do the same for conventional FNNs trained directly on stress targets (keeping the same architecture but going directly from the final hidden layer to stresses). This amounts to a total of \(3400\) trained networks from which we can compute an estimate of the prediction error by averaging \(\|\boldsymbol{\sigma}-\widehat{\boldsymbol{\sigma}}\|\) over the \(500\) paths left for validation.
Fig. 9(a) plots averages of the validation error over the \(50\) training datasets used for each size. Although the hybrid architecture does not show an advantage over the FNN when the encoder is trained on strain features, there is a clear gain in learning speed when using only the first two strain invariants as features. Apart from accelerating learning and resulting in lossless dimensionality reduction, using invariants also results in a surrogate which is frame invariant under small strains. For comparison, we also train a conventional FNN on the same set of features, but those are unsurprisingly not enough to describe general strain states and much of the material response is interpreted by the FNN as observation noise. We zoom into the first part of the learning curves in Fig. 9(b), this time also showing single standard deviation uncertainty bands coming from the variance among the \(50\) training datasets. The hybrid network outperforms conventional FNNs in the low data regime and tends to be less sensitive to changes in the dataset starting from about \(20\) training paths. Nevertheless, the extra flexibility of conventional FNNs allows them to achieve lower validation errors if significantly more training paths are used.
Training the invariant-based hybrid network with the complete dataset of \(1500\) curves leads to surrogates with validation errors of about \(4\,\mathrm{MPa}\), accurately representing the monotonic behavior of the original micromodel. Fig. 10 shows representative predictions of this model for paths from the test set. As expected, this surrogate with no internal variables is not capable of predicting non-monotonic strain paths, and effectively behaves like a hyperelastic material model just as the conventional FNN would.

Figure 9: Learning curves of models with elastic decoders and conventional FNN models. Mean error over the 500 validation monotonic paths.
Nevertheless, the flexible and interpretable encoder-decoder architecture of Fig. 1 allows for new creative approaches in feature selection. As a demonstration, we keep the trained network of Fig. 10 intact and only modify its feature extractor to introduce a simple path-dependent mechanism:
\[\boldsymbol{\varphi}_{T}\equiv\begin{bmatrix}\overline{I}_{1}^{\varepsilon}&\overline{I}_{2}^{\varepsilon}\end{bmatrix}_{T}=\operatorname*{argmax}_{0<t<T}\left(\left(I_{1}^{\varepsilon}\right)_{t}^{2}+\left(J_{2}^{\varepsilon}\right)_{t}\right) \tag{24}\]
which freezes the evolution of \(\boldsymbol{\theta}\) if the path becomes non-monotonic. Note that the network does not need to be retrained and this modification can be employed exclusively when making online predictions, as the new features reduce to the original ones for the monotonic paths used for training.
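A possible implementation of this modified extractor is sketched below as a small self-contained example; the plane-strain invariant formulas are standard, but the function names and Voigt conventions are our own illustrative choices:

```python
import numpy as np

def strain_invariants(eps):
    """Invariants of a plane-strain tensor in Voigt form [exx, eyy, gxy]."""
    exx, eyy, gxy = eps
    i1 = exx + eyy                          # trace (eps_zz = 0)
    i2 = exx * eyy - (0.5 * gxy) ** 2       # second invariant
    em = i1 / 3.0
    dev = np.array([exx - em, eyy - em, -em])
    j2 = 0.5 * (dev @ dev) + (0.5 * gxy) ** 2   # second deviatoric invariant
    return i1, i2, j2

def history_aware_features(strain_path):
    """Sketch of Eq. (24): the encoder sees the invariants of the historically
    'largest' strain state, measured by I1^2 + J2, so the learned properties
    theta freeze as soon as the path becomes non-monotonic."""
    frozen, best = None, -np.inf
    for eps in strain_path:
        i1, i2, j2 = strain_invariants(eps)
        measure = i1 ** 2 + j2
        if measure > best:                  # running argmax over the history
            best, frozen = measure, np.array([i1, i2])
        yield frozen
```

At prediction time, `list(history_aware_features(path))` yields the (possibly frozen) features fed to the encoder at every time step, and reduces to the original invariants for monotonic paths.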
We plot two representative non-monotonic paths predicted by the modified model in Fig. 11. Starting from the hyperelastic behavior of Fig. 10, the modified surrogate now behaves as a damage model: the non-linear material behavior is explained by a loss of stiffness which is made persistent by the history-aware feature extractor. Nevertheless, although an improvement over the original model, it is unreasonable to expect the physical bias introduced by a purely elastic model to reliably represent an elastoplastic micromodel. We therefore move to decoders with more relevant physics.
### \(J_{2}\) decoder
In this section we choose as decoder \(\mathcal{M}\) the elastoplastic model of Eq. (19) with \(J_{2}\) plastic flow. Standing on its own, the model is _a priori_ perfectly plastic (constant \(\sigma_{y}\)), but here we let its yield stress be controlled by the data-driven
Figure 11: Predicting unloading with a linear-elastic decoder through history-aware feature extraction.
Figure 10: Performance of the elastic decoder model for different test scenarios.
encoder:
\[\mathbf{\theta}=\left[\sigma_{y}\right] \tag{25}\]
while enforcing \(10^{1}<\sigma_{y}<10^{3}\) and keeping the Young's modulus and Poisson's ratio fixed to values obtained from a single linear micromodel simulation. In contrast to the model with elastic decoder of the previous section, we now employ prior knowledge of the micromodel behavior: we assume that all non-linearity should be explained by plasticity and therefore do not let the elastic properties be dictated by the encoder. Still, the assumption of isotropic and incompressible plastic flow is a departure from the more complex pressure-dependent and non-associative behavior shown by the micromodel. Here we are therefore concerned with the effect of trading the flexibility of an elastic decoder for significantly more physical bias from a lower-fidelity representation of material behavior.
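To make the role of the physics-based internal variables concrete, the sketch below implements one strain increment of such a perfectly plastic \(J_{2}\) decoder via a standard radial-return mapping, with the yield stress supplied by the encoder and held fixed within the step. For readability it works with full 3D small-strain tensors rather than the plane-stress setting used in this section, and the elastic constants are placeholders:

```python
import numpy as np

def j2_decoder_step(eps, eps_p_old, sigma_y, E=3760.0, nu=0.3):
    """One step of a perfectly plastic J2 decoder (radial return, small strain,
    3x3 tensors). sigma_y comes from the learned encoder; eps_p is the
    physics-based internal variable giving the surrogate its memory."""
    G = E / (2 * (1 + nu))                  # shear modulus
    K = E / (3 * (1 - 2 * nu))              # bulk modulus
    eps_e = eps - eps_p_old                 # trial elastic strain
    vol = np.trace(eps_e)
    dev = eps_e - vol / 3.0 * np.eye(3)
    s_trial = 2 * G * dev                   # trial deviatoric stress
    q_trial = np.sqrt(1.5 * np.sum(s_trial * s_trial))   # von Mises stress
    if q_trial <= sigma_y:                  # elastic step
        return K * vol * np.eye(3) + s_trial, eps_p_old
    dgamma = (q_trial - sigma_y) / (3 * G)  # plastic multiplier (no hardening)
    n = 1.5 * s_trial / q_trial             # flow direction
    eps_p = eps_p_old + dgamma * n
    s = (1 - 3 * G * dgamma / q_trial) * s_trial         # return to the surface
    return K * vol * np.eye(3) + s, eps_p
```

Letting the encoder update `sigma_y` between steps is precisely what extends this perfectly plastic base model to the nonlinear hardening and pressure dependency seen below.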
At this point it is interesting to compare the performance of the hybrid surrogate with predictions coming from the state-of-the-art mesoscale material model for polymer composites proposed by Vogler _et al_. [57]. It is an orthotropic elastoplastic model with pressure-dependent non-associative flow precalibrated with a small number of monotonic uniaxial and biaxial stress-strain curves obtained from simulations on the exact same micromodel of Fig. 6 (see [10] for details on the calibration procedure). For this section, we switch to a dataset in plane stress, allowing the \(J_{2}\) model to describe richer nonlinear behavior under biaxial strain states.
Fig. 12 shows the evolution of the validation set loss when training the \(J_{2}\)-decoded model with \(1500\) plane stress training paths. The error quickly stabilizes at around \(20\,\mathrm{MPa}\), significantly lower than the \(44\,\mathrm{MPa}\) average prediction error obtained with the precalibrated mesomodel. The added flexibility with respect to the original perfectly-plastic \(J_{2}\) model can be seen in the test set curves plotted in Fig. 13: the data-driven encoder leads to correct predictions of nonlinear hardening (Fig. 13a) and pressure-dependent plastic flow (Fig. 13b). The figures also highlight the inability of the mesomodel to predict the behavior in certain regions of the strain space, particularly under compression-dominated scenarios.
The minimum validation error attained by the model is nevertheless significantly higher than the \(4\,\mathrm{MPa}\) obtained with the elastic decoder of the previous section. This result is not entirely surprising, as the elastic decoder introduces much less bias into the model and therefore allows for a greater degree of flexibility when fitting monotonic data. On the other hand, what cannot be directly gleaned from Fig. 12 is that the \(J_{2}\) decoder benefits from physics-based memory coming from its internal variables: it can make predictions of non-monotonic behavior based solely on our assumption that nonlinearities come from plastic strain, without ever having seen such behavior during training.
In Fig. 14 we plot predictions of the \(J_{2}\) surrogate for two different unloading-reloading paths from the test dataset. The model predicts unloading very well without being trained for it. Nevertheless, as Fig. 12 suggests, the model struggles to predict monotonic behavior under a number of different scenarios, from which it follows that any non-monotonic predictions along the same directions will also be inaccurate. Fig. 15 shows three examples of this behavior.
The choice of decoder therefore involves a tradeoff between bias and flexibility that can be deceiving to base solely on validation error computed on monotonic data. Indeed, the decoder used in the next section outperforms \(J_{2}\)-based
Figure 12: Evolution of the mean validation loss for the first 200 training epochs of a network with \(J_{2}\) decoder. Single dataset with 1500 monotonic paths.
Figure 14: Network predictions with \(J_{2}\) decoder for unloading paths after being trained exclusively with monotonic paths.
Figure 13: Predictions from the network with \(J_{2}\) decoder. Letting the yield stress evolve extends the model to more complex plasticity behavior.
Figure 15: Examples of strain paths not well predicted by the \(J_{2}\) decoded model.
decoders in most situations, but a choice for the simpler decoder might still be justified -- _e.g_. if the unconditional numerical stability of a \(J_{2}\) decoder is desirable.
### Non-associative pressure-dependent elastoplastic decoder
As one final exploration of model selection, we use as decoder the same elastoplastic model by Melro _et al_. used to describe the matrix material at the microscale [52]. As mentioned in Section 3.4, this model is the natural choice for \(\mathcal{M}\), as it attempts to explain the observed microscopic non-linear behavior with the same model from which the behavior arises. As before, we keep the elastic properties of the model intact and let only the yield stresses and the plastic Poisson's ratio change in time:
\[\boldsymbol{\theta}=\begin{bmatrix}\sigma_{\text{t}}&\frac{\sigma_{\text{c}}} {\sigma_{\text{t}}}&\nu_{\text{p}}\end{bmatrix} \tag{26}\]
where \(10^{1}<\sigma_{\text{t}}<10^{4}\), \(0<\nu_{\text{p}}<0.5\) and \(1<\frac{\sigma_{\text{c}}}{\sigma_{\text{t}}}<100\). We opt for the ratio \(\frac{\sigma_{\text{c}}}{\sigma_{\text{t}}}\) instead of simply \(\sigma_{\text{c}}\) in order to also enforce \(\sigma_{\text{c}}>\sigma_{\text{t}}\).
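A minimal sketch of this parameterization (again assuming the sigmoid squashing of Eq. (16)) makes explicit how the ratio guarantees \(\sigma_{\text{c}}>\sigma_{\text{t}}\):

```python
import torch

def decode_properties(raw):
    """Map unbounded encoder outputs to the bounded Melro properties of Eq. (26).
    Parameterizing the ratio sigma_c/sigma_t (rather than sigma_c itself)
    automatically enforces sigma_c > sigma_t for any raw network output."""
    lo = torch.tensor([1e1, 1.0, 0.0])      # sigma_t, sigma_c/sigma_t, nu_p
    hi = torch.tensor([1e4, 100.0, 0.5])
    theta = lo + (hi - lo) * torch.sigmoid(raw)
    sigma_t = theta[..., 0]
    sigma_c = theta[..., 0] * theta[..., 1]  # recover sigma_c from the ratio
    nu_p = theta[..., 2]
    return sigma_t, sigma_c, nu_p
```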
We expand upon the feature selection study of Fig. 9 by looking at several feature extractors coming both directly from strains and from the output of a precalibrated Melro model \(\overline{\mathcal{M}}\) with the same properties used at the microscale (Fig. 5b). Aside from the familiar choice of strain features (\([\varepsilon_{xx}~{}\varepsilon_{yy}~{}\gamma_{xy}]\to\textit{Melro}\)), we look into invariants of the strain tensor (\([I_{1}^{\varepsilon}~{}I_{2}^{\varepsilon}]\to\textit{Melro}\)), combinations including invariants of the deviatoric strain tensor (\([J_{2}^{\varepsilon}]\to\textit{Melro}\), \([I_{1}^{\varepsilon}~{}J_{2}^{\varepsilon}]\to\textit{Melro}\)), plastic strain internal variables coming from the precalibrated feature extractor (\([\overline{\varepsilon}_{xx}^{\,\mathrm{p}}~{}\overline{\varepsilon}_{yy}^{\,\mathrm{p}}~{}\overline{\varepsilon}_{xy}^{\,\mathrm{p}}]\to\textit{Melro}\)) and stress invariants coming from the extractor (\([I_{1}^{\overline{\sigma}}~{}J_{2}^{\overline{\sigma}}]\to\textit{Melro}\)). We also include the precalibrated mesomodel by Vogler _et al_. [57] and selected curves from Fig. 9a for comparison purposes. As before, we train \(50\) networks of each type for each size of dataset ranging from \(1\) to \(150\) paths drawn from the original dataset with \(1500\) paths. Each trained network is then used to compute the validation error over the \(500\) monotonic validation paths and the \(150\) test paths (\(50\) extra monotonic paths, \(50\) paths with unloading-reloading and \(50\) slow cycling paths). This results in an extensive study comprising \(6800\) trained networks and over one million test set simulations.
Results are summarized in Fig. 16, with each point in a curve being the average over \(50\) networks. Once again, using invariants as features proves beneficial, leading to lossless dimensionality reduction and frame invariant surrogates. All tested models perform better than the precalibrated mesomodel, with a gap of more than one order of magnitude for the best performing surrogates. Interestingly, models with Melro-based decoders seem to learn as fast and be as flexible as models with elastic decoders, already for the monotonic curves in the validation dataset. This suggests that the new decoder does not impose extra undesirable bias in learning the specific material behavior treated here, beyond the assumptions that had already been introduced by elasticity (_e.g_. symmetries and couplings encoded by the elastic stiffness tensor). Any benefits reaped when extrapolating to non-monotonic paths, as we will see in the following, are therefore obtained at a negligible price in terms of monotonic accuracy. This stands in contrast with the discussion on the \(J_{2}\) decoder of the previous section.
Figure 16: Expected validation errors for Melro-decoded surrogates with different feature extractors (averages over \(50\) datasets).
Figure 17: Monotonic test set predictions from feature-deficient Melro models (complete training dataset with 1500 paths).
Figure 18: Learning curves for unloading-reloading test errors of Melro-decoded surrogates (averages of \(50\) datasets).
Although Fig. 16 is not enough to discern between several of our encoder choices, it is interesting to take a closer look at the two clearly underperforming options. Fig. 17 shows predictions from \([J_{2}^{\varepsilon}]\to\mathit{Melro}\) and \([\overline{\varepsilon}_{xx}^{\,\mathrm{p}}\ \overline{\varepsilon}_{yy}^{\,\mathrm{p}}\ \overline{\varepsilon}_{xy}^{\,\mathrm{p}}]\to\mathit{Melro}\) for the same monotonic test path. The model with a single feature struggles to predict the entirety of the path, indicating that further reducing the dimensionality of the feature space is not possible for this dataset. The oscillatory stress predictions make this model unsuitable for online stress evaluation in a multiscale setting. For the model with plastic strain features, the feature extractor shows no plastic strains until high stress levels, while in the micromodel plasticity starts much earlier; this forces the surrogate to remain in the elastic regime until a sudden jump brings it back to the expected path.
Moving to unloading-reloading paths, we compare the performance of different feature sets by plotting the average test error over the \(50\) unloading-reloading paths in Fig. 18a. Here an interesting observation can be made: even the surrogate \([I_{1}^{\varepsilon}\ I_{2}^{\varepsilon}]\to\mathit{Elastic}\) -- which cannot predict unloading at all -- attains a lower test error than the precalibrated mesomodel. This apparent contradiction can be explained by plotting in Fig. 18b the average error computed only at unloading or reloading time steps: an elastic decoder -- and likewise a conventional FNN or an RNN trained with insufficient data -- excels at predicting monotonic response but is consistently inaccurate for non-monotonic paths and shows little improvement when more monotonic paths are added to the training dataset.
In contrast, the best-performing Melro models are consistently more accurate than the precalibrated mesomodel even when trained on very little data. We plot in Fig. 19 selected representative unloading paths from the test dataset for four of the surrogates. Unloading is once again well captured without having been seen during training, and since it emerges from a purely physical mechanism, it is reasonable to expect unloading at different points along the path to yield comparable results (_c.f._ Fig. 3a). Nevertheless, relatively small differences in unloading slope can still lead to large differences in stress at the end of the unloading branches. Furthermore, the model can struggle with tension-compression switches and predict spurious hysteresis loops.
Indeed, we observe a consistent inability of the models to properly predict switches between tension and compression within the same path. This becomes clear when looking at slow cycling test paths composed of several of these switches (Fig. 8c). We plot learning curves for the test error on slow cycling paths in Fig. 20, for complete paths
Figure 19: Response of Melro-decoded surrogates with different features for selected unloading/reloading test paths (\(1500\) monotonic training paths).
as well as exclusively for the non-monotonic branches of the paths. In contrast with results up until now, here we see larger differences in performance for different feature sets. As expected, elastic decoders are once again shown to be unsuitable for predicting non-monotonic paths, and the difference here is even more pronounced than for single-unloading paths (_c.f._ Fig. 18), as most of the path is composed of unloading/reloading branches. The model encoded with stress invariants coming from an elastoplastic feature extractor performs best among the models we test. But crucially, none of the surrogates manages to surpass the precalibrated mesomodel in this case.
As a demonstration, we select a representative path from the test dataset and plot predictions made with four different feature sets in Fig. 21. As expected, larger errors are observed for more pronounced tension-compression switches as models either over- or undershoot the stress levels at compression-tension switch points. Interestingly, most models manage to converge back to the correct stress path after reloading, since hardening behavior is completely dictated by their non-recurrent data-driven encoders. The exception is the model with stress invariant features (\([I_{1}^{\overline{\sigma}}\ J_{2}^{\overline{\sigma}}]\to\textit{Melro}\)), performing significantly better than the rest but showing a number of undesired oscillations in stress response due to the (physically) recurrent nature of its features forcing its neural network encoder to operate in extrapolation.
### \(\text{FE}^{2}\) example
We conclude our discussion with an \(\text{FE}^{2}\) demonstration using the proposed hybrid surrogate. We model the tapered macroscopic bar with geometry and boundary conditions shown in Fig. 22. The model is meshed with 1620 linear triangles with a single Gauss point each and is loaded in tension until plastic strain localization takes place. The combination of the tapered geometry with the several circular voids along the model results in a complex range of stress states throughout the model. In contrast to the cases considered so far, this example also covers non-proportional strain paths. To facilitate convergence, the substepping approach proposed in [58] is employed and an adaptive stepping algorithm is used at the macroscale that automatically reduces the time step size and recomputes the current increment if either the micro- or macroscopic Newton-Raphson solver fails to converge.
We use the \(\left[I_{1}^{\overline{\sigma}}\ J_{2}^{\overline{\sigma}}\right]\to\textit{Melro}\) model of the previous section as surrogate, trained on the complete set of \(1500\) monotonic training strain paths. The global load-displacement curve at the right edge of the model is plotted for the full-order \(\text{FE}^{2}\) solution and using the hybrid surrogate in Fig. 23. Since we update decoder properties in an explicit fashion (_i.e_. once per time step, see Algorithm 1), we use a displacement increment \(\Delta u=3.5\times 10^{-3}\,\mathrm{mm}\) for the approximate model, \(10\) times smaller than the one used for the full-order model.
As mentioned in Section 3.5, the model by Melro _et al_. can suffer from numerical stability issues even with fixed material properties, and it is reasonable to expect these issues to become worse when letting properties evolve with time. Indeed, with no additional stabilization the simulation using the network fails to converge at the point marked in Fig. 23. In contrast, the stabilization procedure of Section 3.5 allows for a complete path to be obtained. For this first result, we stabilize the network for 5 epochs with a learning rate of \(1\times 10^{-5}\) for the stabilization loss (Eq. (22)) and \(1\times 10^{-9}\) for retraining on a single monotonic training path selected at random. We also consider a model with an unloading/reloading switch after the onset of macroscopic plasticity. Results are shown in Fig. 23. The surrogate approximates the full-order behavior fairly accurately and runs several orders of magnitude faster than the full-order model.
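In outline, one stabilization pass as we apply it can be sketched as follows; since the exact form of the stabilization loss of Eq. (22) is not reproduced in this section, it is passed in as a callable, and all names are illustrative:

```python
import random
import torch

def stabilize_online(encoder, unstable_states, train_paths,
                     stabilization_loss, training_loss,
                     n_epochs=5, lr_stab=1e-5, lr_retrain=1e-9):
    """Sketch of the online stabilization: a few gradient steps on the
    stabilization loss over the flagged Gauss-point states, each followed by
    a gentle retraining step on one randomly chosen monotonic training path
    so the network does not drift from its pretrained state (c.f. Fig. 24)."""
    opt_stab = torch.optim.Adam(encoder.parameters(), lr=lr_stab)
    opt_train = torch.optim.Adam(encoder.parameters(), lr=lr_retrain)
    for _ in range(n_epochs):
        opt_stab.zero_grad()
        stabilization_loss(encoder, unstable_states).backward()
        opt_stab.step()
        # retrain on a single path (out of the original 1500) to keep accuracy
        path = random.choice(train_paths)
        opt_train.zero_grad()
        training_loss(encoder, path).backward()
        opt_train.step()
```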
Figure 20: Slow cycling test errors for Melro-decoded surrogates (averages of \(50\) datasets for each size).
Figure 21: Response of Melro-decoded surrogates with different features for selected slow cycling test paths (\(1500\) monotonic training paths).
Figure 22: FE\({}^{2}\) example: geometry, mesh and boundary conditions. Full-order (left) and surrogate-based (right) FE\({}^{2}\) simulations are compared.
Figure 23: \(\mathrm{FE}^{2}\)example: load-displacement curves with and without online stabilization, compared to the ground-truth solution.
Figure 24: Performance of the surrogate model for stabilization strategies of varying intensities with and without retraining after stabilization.
We now look more closely at the performance of the proposed online stabilization approach. We empirically find that retraining the network until every violating material point is fully stabilized is not strictly necessary in order to achieve convergence, and therefore opting for a small number of stabilization epochs proves to be an efficient approach. It is nevertheless interesting to investigate the impact of the number of stabilization epochs and of the subsequent retraining minibatch on the original dataset. We solve the monotonic example of Fig. 23a with different numbers of stabilization/retraining epochs ranging from \(2\) to \(100\) and compute the validation loss (on the \(500\)-path validation set used for model selection) at the end of every macroscopic time increment in order to keep track of how much the stabilized network deviates from its original pretrained state.
Results are shown together with the corresponding load-displacement curves in Fig. 24. All curves remain stable at first, as stabilization is only triggered when the first unstable points are detected. From that point, models which do not undergo retraining after stabilization lose accuracy at a rate proportional to the number of stabilization epochs. However, this counter-intuitively does not lead to improved global stability: the loss of accuracy by the surrogate leads to spurious global softening (_c.f._ Fig. 24b), which in turn leads to further need for stabilization. Models stabilized for \(50\) and \(100\) epochs continuously fail to converge and we opt for terminating the simulation after \(100\) cancelled time increments. On the other hand, models retrained with as little as a single strain path (out of the original \(1500\)) after each stabilization epoch are able to maintain the original model accuracy while offering enough stability gains to allow the simulation to converge until the final step, with little change in global behavior for different stabilization regimes.
More insight can be obtained on the different stabilization strategies by plotting the cumulative execution time of the simulation and the cumulative number of detected unstable strain states with time increments for different numbers of stabilization epochs. Results can be seen in Fig. 25. In general, simulations without retraining tend to run faster and result in improved stability, although any gains are quickly overshadowed by losses in accuracy (_c.f._ Fig. 24). Stabilizing for more epochs results in a reduction in the total number of unstable points detected, but beyond 5 epochs this does not result in an overall reduction in the computational cost of the simulation given the increased effort spent on individual stabilization operations.
As one final result, we run the monotonic simulation with the hybrid surrogate for different time step sizes. As previously mentioned, the hybrid approach allows for explicit update of \(\mathbf{\theta}\) within an implicit simulation by obtaining the tangent stiffness matrix directly from the decoder. This however introduces a time step size dependency whose impact merits investigation. We plot in Fig. 26 predictions with step sizes spanning four orders of magnitude, including the same one used to obtain the full-order response. The combination of the explicit property update with the online stabilization procedure indeed introduces an upper bound for time step size for this specific problem. It stands to reason that the sensitivity to time step size also depends on the choice of decoder and on which material properties are included in \(\mathbf{\theta}\). Further investigation into the matter in future works is therefore warranted.
Figure 25: Impact of stabilization regime on execution time and number of unstable points throughout the simulation.
## 5 Conclusions
In this paper, we propose a hybrid surrogate modeling architecture for multiscale modeling of heterogeneous materials. The model is composed of a data-driven encoder for material properties and a physics-based decoder that computes stresses. In the resulting architecture, the encoder increases the flexibility of existing material models by letting their properties evolve in time, while the decoder provides beneficial bias and interpretability to the model. The model is conceived with flexibility in mind, allowing existing implementations of physics-based material models to be used with no extra modifications. Furthermore, by letting the decoder directly receive strain inputs, the encoder architecture is highly flexible and allows for preservation of frame independence. A semi-explicit online prediction algorithm is also proposed that allows for imposing extra constraints to model behavior in a semi-supervised way.
We demonstrate the architecture by reproducing pressure-dependent elastoplastic behavior coming from homogenized fiber-reinforced composite micromodels. The simple model with a linear-elastic decoder learned faster than conventional data-driven surrogates, allowed for lossless feature space dimensionality reduction through the use of strain invariants, and was able to approximate path-dependent behavior through a simple history-aware feature extractor. Models with perfectly-plastic \(J_{2}\) decoders were shown to successfully learn nonlinear hardening and pressure dependency and predict unloading-reloading while being trained exclusively on monotonic data, outperforming a state-of-the-art mesomodel for composites in accuracy for arbitrary loading directions. Employing as decoder the same plasticity model used at the microscale led to highly-accurate monotonic response and fairly accurate extrapolation to unloading/reloading behavior. Finally, the model was used to solve a complex FE\({}^{2}\) model and the benefit of the online stabilization procedure was demonstrated.
We find the approach to be a promising new way to build hybrid surrogates, one which merits further research on a number of fronts. The current architecture is not by construction concerned with enforcing unconditional thermodynamic consistency or other physical constraints of interest. Although we do find empirically that well-trained surrogates with thermodynamically consistent decoders tend to perform well, some constitutive models might not be suitable for having their properties evolve in time. Fortunately, the framework can cope with extra constraints without necessarily giving up on its flexibility, by enforcing them locally through online retraining. Although training exclusively on monotonic paths already allows for path dependency to be fairly well captured, some decoders might perform better in extrapolation if trained with a (small) number of extra non-monotonic and non-proportional strain paths -- for instance when encoder and decoder can each explain the same phenomenon on their own (_e.g._ pressure dependency in the model by Melro _et al._). We also foresee combining the present approach with the one in [46] into a unified family of flexible hybrid surrogates with a range of possible combinations of feature extractors for physics-rich time convolution, fixed-property models with learned strain distributions and evolving material models.
Figure 26: FE\({}^{2}\) example: Effect of time step size on surrogate predictions.
## Acknowledgements
The authors gratefully acknowledge the TU Delft AI Initiative for their support through the SLIMM AI Lab. FM also acknowledges financial support from the Netherlands Organization for Scientific Research (NWO) under Vidi grant nr. 16464.
|
2308.00030 | Electroweak mass difference of mesons | We consider electroweak (EW) gauge boson corrections to the masses of
pseudoscalar mesons to next to leading order (NLO) in $\alpha_s$ and $1/N_C$.
The pion mass shift induced by the $Z$-boson is shown to be
$m_{\pi^\pm}-m_{\pi^0} = -0.00201(12)$ MeV. While being small compared to the
electromagnetic mass shift, the prediction lies about a factor of $\sim 4$
above the precision of the current experimental measurement, and a factor
$O(10)$ below the precision of current lattice calculations. This motivates
future implementations of these EW gauge boson effects on the lattice. Finally,
we consider BSM contributions to the pion mass difference. | Antonio Pich, Arthur Platschorre, Mario Reig | 2023-07-31T18:00:01Z | http://arxiv.org/abs/2308.00030v1 | # Electroweak mass difference of mesons
###### Abstract
We consider electroweak (EW) gauge boson corrections to the masses of pseudoscalar mesons to next to leading order (NLO) in \(\alpha_{s}\) and \(1/N_{C}\). The pion mass shift induced by the \(Z\)-boson is shown to be \(m_{\pi^{\pm}}-m_{\pi^{0}}=-0.00201(12)\) MeV. While being small compared to the electromagnetic mass shift, the prediction lies about a factor of \(\sim 4\) above the precision of the current experimental measurement, and a factor \(O(10)\) below the precision of current lattice calculations. This motivates future implementations of these EW gauge boson effects on the lattice. Finally, we consider BSM contributions to the pion mass difference.
## I Introduction
At very low energies, the strong interaction of mesons is successfully described by the chiral Lagrangian, a perturbative expansion in derivatives of the Goldstone fields and light quark masses. The effective action is entirely determined by the symmetries, and once the parameters of the theory are fixed by observation of several meson quantities, a highly predictive theory emerges, chiral perturbation theory [1; 2; 3].
In QCD with 3 light flavours, the global symmetry is \(SU(3)_{L}\times SU(3)_{R}\), giving 8 Goldstone bosons after spontaneous symmetry breaking by the formation of quark condensates. Turning on quark masses, \(M_{q}=\mathrm{diag}(m_{u},m_{d},m_{s})\), explicitly breaks the flavour symmetry and the meson fields get a mass. The effective action does not allow one to obtain the meson masses purely as a function of quark masses, but it is possible to find relations that connect ratios of the meson masses to (renormalization-scheme independent) ratios of quark masses, one example being the renowned Gell-Mann-Oakes-Renner relation \(\frac{m_{KK}^{2}-m_{K0}^{2}}{m_{s}^{2}}=\frac{m_{s}-m_{d}}{m_{u}+m_{d}}\).
The process of gauging part of the global symmetries also breaks the chiral flavour symmetry, generating masses for the pseudoscalar mesons. This is well-known for the case of electromagnetism (EM) which breaks the shift symmetries of the charged mesons, thereby generating the pion and kaon mass shifts: \(\delta m_{\pi}=m_{\pi^{\pm}}-m_{\pi^{0}}\). This quantity has been computed using current algebra [4] and in chiral perturbation theory with explicit resonance fields [5], giving \(\delta m_{\pi}\) compatible with the experimental result [6],
\[\delta m_{\pi}|_{\mathrm{exp}}=m_{\pi^{\pm}}-m_{\pi^{0}}=4.5936\pm 0.0005\ \mathrm{MeV}\,. \tag{1}\]
The pion mass shift is a quantity that can also be computed on the lattice. This direction was initiated in [7] and currently has reached a level of considerable accuracy [8; 9]. The most precise lattice result [8]:
\[\delta m_{\pi}=m_{\pi^{\pm}}-m_{\pi}^{0}=4.534\pm 0.042\pm 0.043\ \mathrm{MeV}\,, \tag{2}\]
is compatible with the experimental measurement in Eq. 1. While the error on the lattice still has to be substantially reduced to reach the experimental precision, given the rate of improvement of lattice precision in recent years it is not unreasonable to think that in a near future the size of both errors might be comparable.
In this letter we show that heavy EW gauge bosons induce small, but possibly _observable_ mass shifts between the neutral and charged mesons, for both the pion and the kaon. Due to the chiral structure of the weak interaction, to leading order (LO) in \(G_{F}\), only the \(Z\) boson contributes to the mass shifts. Similar results to LO in \(\alpha_{s}\) were noted in [10].
By doing a calculation at NLO in both \(\alpha_{s}\) and \(1/N_{c}\), our results will show that the expected mass shift induced by the \(Z\) lies well above the uncertainty of the current experimental measurement and slightly below the lattice uncertainties. This implies that future lattice simulations should be sensitive to the effects of the EW gauge bosons, reflecting the need for an implementation on the lattice. This direction is particularly interesting to learn about flavour symmetry breaking by the weak interaction in the chiral limit. Finally, we discuss future directions including effects of new physics on the mass differences of mesons.
## II Electroweak interaction and the pion mass difference
QCD with 3 light flavours has a \(SU(3)_{L}\times SU(3)_{R}\) global flavour symmetry. Starting at order \(O(p^{2})\), and neglecting momentarily quark masses, the effective Lagrangian below the chiral symmetry breaking scale is of the form:
\[\mathcal{L}_{2}=\frac{F^{2}}{4}\mathrm{Tr}\left(D^{\mu}U\left(D_{\mu}U\right)^ {\dagger}\right)\,, \tag{3}\]
where \(F\) is the chiral coupling constant and the \(SU(3)\) matrix \(U=\exp\left[i\frac{\sqrt{2}}{F}\Phi\right]\) incorporates the pseudoscalar
Goldstone octet
\[\Phi=\begin{pmatrix}\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta^{0}}{\sqrt{6}}&\pi^{+}&K^{+}\\ \pi^{-}&-\frac{\pi^{0}}{\sqrt{2}}+\frac{\eta^{0}}{\sqrt{6}}&K^{0}\\ K^{-}&\overline{K}^{0}&-\frac{2}{\sqrt{6}}\eta^{0}\end{pmatrix}\,. \tag{4}\]
In the SM, the \(SU(2)\times U(1)\) subgroup of this flavour symmetry is gauged. In general, gauging a subgroup of \(SU(3)_{L}\times SU(3)_{R}\) by gauge bosons \(L\) and \(R\) is done by introducing a covariant derivative of the form:
\[D_{\mu}U=\partial_{\mu}U-iQ_{L}\ell_{\mu}U+iUr_{\mu}Q_{R}\,. \tag{5}\]
For the SM gauge bosons this amounts to introducing:
\[D_{\mu}U= \partial_{\mu}U-i\frac{g}{\sqrt{2}}\left(W_{\mu}^{+}T_{W}^{-}+W_{ \mu}^{-}T_{W}^{+}\right)U-ie\left(A_{\mu}-\tan\theta_{W}Z_{\mu}\right)[Q_{\rm em },U]-i\frac{g}{\cos\theta_{W}}Z_{\mu}T_{3L}U\,, \tag{6}\]
where we have explicitly included the photon and the EW gauge bosons with the generators:
\[T_{W}^{-}=\left(T_{W}^{+}\right)^{\dagger}=\begin{pmatrix}0&V_{ud}&V_{us}\\ 0&0&0\\ 0&0&0\end{pmatrix}\,, \tag{7}\]
and the diagonal matrices \(T_{3L}={\rm diag}(1/2,-1/2,-1/2)\) and \(Q_{\rm em}={\rm diag}(2/3,-1/3,-1/3)\). The heavy EW gauge bosons are introduced as spurions in order to track the pattern of explicit symmetry breaking. However, since these particles lie well above the cut-off of the effective theory, usually taken to be \(\Lambda_{\chi{\rm SB}}\sim 4\pi F\), special care has to be taken in deriving explicit results from this Lagrangian. We shall return to this issue momentarily.
Expanding Eq. 3 to quadratic order in \(\Phi\), we can see that non-zero Goldstone masses are generated by terms of the form:
\[-\frac{F^{2}}{2}{\rm Tr}\left(Q_{L}UQ_{R}U^{\dagger}\right)\dot{=}\frac{1}{2} {\rm Tr}\left([Q_{L},\Phi][\Phi,Q_{R}]\right) \tag{8}\]
where \(Q_{L}\) and \(Q_{R}\) are spurion matrices representing the action of gauge fields.
One notices that not all of these terms break the shift symmetries in the chiral limit: meson self-energies are generated by loop diagrams with no external gauge bosons, so terms involving two different gauge bosons do not contribute at LO to the meson masses. Since the \(W^{\pm}\) couplings are purely left-handed, they cannot contribute to \(Q_{R}\) and, therefore, do not generate any meson mass shift.
The only contribution to \(Q_{R}\) comes from the spurion \(Q_{\rm em}\), which as seen from Eq. 6 occurs for both the photon and the \(Z\), and acts as:
\[[Q_{\rm em},\Phi]=\begin{pmatrix}0&\pi^{+}&K^{+}\\ -\pi^{-}&0&0\\ -K^{-}&0&0\end{pmatrix}\,. \tag{9}\]
This implies that only charged mesons can get a mass and this occurs through the interaction with neutral gauge bosons, which contribute as:
\[\frac{eg}{2\cos\theta_{W}}{\rm Tr}\left([T_{3L},\Phi]\,[\Phi,Q_{\rm em}] \right)\left(A_{\mu}-\tan\theta_{W}Z_{\mu}\right)Z^{\mu}\,, \tag{10}\]
and:
\[\frac{e^{2}}{2}{\rm Tr}\left([Q_{\rm em},\Phi][\Phi,Q_{\rm em}]\right)\left(A _{\mu}-\tan\theta_{W}Z_{\mu}\right)(A^{\mu}-\tan\theta_{W}Z^{\mu})\,. \tag{11}\]
Again, the term involving \(A_{\mu}Z^{\mu}\) cannot contribute to meson masses. Combining Eq. 10 and Eq. 11, and retaining only the relevant terms involving \(A_{\mu}A^{\mu}\) and \(Z_{\mu}Z^{\mu}\), the interaction reads:
\[e^{2}\left(\pi^{+}\pi^{-}+K^{+}K^{-}\right)\left(A_{\mu}A^{\mu}-Z_{\mu}Z^{\mu} \right). \tag{12}\]
An order of magnitude estimate can be given at this point for the \(Z\)-boson induced mass shift using naive dimensional analysis:
\[\Delta m_{\pi}^{2}=\frac{e^{2}}{4\pi^{2}M_{Z}^{2}}\Lambda_{\chi{\rm SB}}^{4} \rightarrow\delta m_{\pi}\sim 0.002\ {\rm MeV}\,. \tag{13}\]
The fact that this estimate lies above the current experimental uncertainty and is comparable to the lattice precision motivates us to perform a more careful analysis.
As in the electromagnetic (EM) contribution [5], we capture the effects of both \(A_{\mu}\) and \(Z_{\mu}\) by adding the following local operators involving the spurion matrices \(Q_{\rm em}\) and \(Q_{L,R}^{Z}\equiv\frac{g}{\cos\theta_{W}}\,\mathcal{Q}_{L,R}\):
\[\mathcal{L}_{2}^{C}=e^{2}C_{em}\langle Q_{\rm em}UQ_{\rm em}U^{\dagger} \rangle+4\sqrt{2}G_{F}C_{Z}\langle\mathcal{Q}_{L}U\mathcal{Q}_{R}U^{\dagger} \rangle\,, \tag{14}\]
with \(4\sqrt{2}G_{F}\) the low-energy coupling of the \(Z\) boson,
\[\mathcal{Q}_{L}=\begin{pmatrix}\frac{1}{2}-\frac{2}{3}x&0&0\\ 0&-\frac{1}{2}+\frac{1}{3}x&0\\ 0&0&-\frac{1}{2}+\frac{1}{3}x\end{pmatrix}\,, \tag{15}\] \[\mathcal{Q}_{R}=\begin{pmatrix}-\frac{2}{3}x&0&0\\ 0&\frac{1}{3}x&0\\ 0&0&\frac{1}{3}x\end{pmatrix} \tag{16}\]
and \(x=\sin^{2}\theta_{W}\). The determination of \(C_{Z}\) to NLO in \(\alpha_{s}\) and \(1/N_{c}\) is the goal of this letter.
The coefficients \(C_{\rm em}\) and \(C_{Z}\) are low-energy constants determined from the high-energy theory and determine the electromagnetic and electroweak meson mass differences \(\Delta m_{P}^{2}\equiv m_{P^{\pm}}^{2}-m_{P^{0}}^{2}\) of pions and kaons in the chiral limit:
\[\Delta m_{\pi}^{2}=\Delta m_{K}^{2}=\frac{2e^{2}}{F^{2}}\left(C_{\rm em}-\frac{ C_{Z}}{M_{Z}^{2}}\right). \tag{17}\]
In [5] it was shown that the EM mass shift from resonance exchange saturates the constant \(C_{\rm em}\) and is given in terms of the resonance parameters \(F_{V},M_{V}\) by:
\[\Delta m_{\pi}^{2}|_{\rm em}=\frac{3\alpha_{\rm em}}{4\pi F^{2}}F_{V}^{2}M_{V} ^{2}\ln\frac{F_{V}^{2}}{F_{V}^{2}-F^{2}}\,. \tag{18}\]
A corresponding resonance loop calculation including the \(Z\) boson in order to determine \(C_{Z}\) is subtle. The reason is that the parameter \(M_{Z}\) lies well above the cut-off, \(\Lambda_{\chi\rm SB}\), and the \(Z\) therefore must be integrated out.
The resulting EFT is QCD with four-fermion operators that encode all the information of the chiral symmetry breaking by the EW bosons. Using the renormalization group (RG) to run the Wilson coefficients of these operators down to a scale \(\mu\sim 1\) GeV allows matching to the operators in Eq. 14 of the chiral Lagrangian and thereby a determination of \(C_{Z}\).
### Z-induced left-right four quark operators
Integrating out the \(Z\) boson introduces 4-fermion operators that break the chiral \(SU(3)_{L}\times SU(3)_{R}\) symmetry. The relevant left-right (LR) operators are:
\[[Q_{1}^{LR}]_{ijk\ell}=(\overline{q}_{Li}\gamma^{\mu}q_{Lj})\left(\overline{q}_{Rk}\gamma_{\mu}q_{R\ell}\right) \tag{19}\]
\[[Q_{2}^{LR}]_{ijk\ell}=(\overline{q}_{Li}q_{Rk})\left(\overline{q}_{R\ell}q_{Lj}\right)\,, \tag{20}\]
with \(i,j,k,\ell\) being light-quark flavour indices. While \(Q_{1}^{LR}\) is generated by a \(Z\)-exchange at tree level, \(Q_{2}^{LR}\) is obtained after applying a Fierz-identity on the gluon corrections to \(Q_{1}^{LR}\).
The effective Lagrangian below \(M_{Z}\) reads:
\[\mathcal{L}_{\rm eff}=-4\sqrt{2}G_{F}\sum_{ijk\ell}\left(\mathcal{Q}_{L} \right)_{ij}\left(\mathcal{Q}_{R}\right)_{k\ell}\left[C_{1}Q_{1}^{LR}+C_{2}Q_ {2}^{LR}\right]_{ijk\ell}\,, \tag{21}\]
with \(C_{1,2}\) being the Wilson coefficients.
When QCD effects are taken into account, the renormalised Wilson coefficients at the \(M_{Z}\) scale become [11]:
\[C_{1} =1+\frac{\alpha_{s}}{4\pi}\frac{3}{N_{c}}\left[-\ln\frac{M_{Z}^{2 }}{\mu^{2}}-\frac{1}{6}\right]\,, \tag{22}\] \[C_{2} =\frac{\alpha_{s}}{4\pi}\left[-6\ln\frac{M_{Z}^{2}}{\mu^{2}}-1 \right]\,, \tag{23}\]
where the non-logarithmic corrections are scheme dependent. The operators above will mix under RG flow and their evolution down to the scale of interest (\(\sim 1\) GeV) can be calculated by standard procedures [12], using their anomalous dimension matrices:
\[\frac{d\vec{C}}{d\ln\mu}=\gamma^{T}\vec{C}\,. \tag{24}\]
Up to order \(O(\alpha_{s}^{2})\), this matrix can be expanded as:
\[\gamma=\frac{\alpha_{s}}{4\pi}\gamma^{0}+\left(\frac{\alpha_{s}}{4\pi}\right) ^{2}\gamma^{1}+O(\alpha_{s}^{3})\,, \tag{25}\]
with \(\gamma^{0},\gamma^{1}\) given by [13]:
\[\gamma^{0}=\left(\begin{array}{cc}\frac{6}{N_{c}}&12\\ 0&-6N_{c}+\frac{6}{N_{c}}\end{array}\right)\,,\quad\gamma^{1}=\left(\begin{array}{cc}\frac{137}{6}+\frac{15}{2N_{c}}-\frac{22}{3N_{c}}f&\frac{200}{3}N_{c}-\frac{6}{N_{c}}-\frac{44}{3}f\\ \frac{71}{4}N_{c}+\frac{6}{N_{c}}-2f&-\frac{203}{6}N_{c}^{2}+\frac{479}{6}+\frac{15}{2N_{c}^{2}}+\frac{10}{3}N_{c}f-\frac{22}{3N_{c}}f\end{array}\right)\,. \tag{26}\]
Solving Eq. 24 yields the evolution:
\[\vec{C}(\mu)=T\,\exp\left[\int_{\alpha_{s}(M_{Z})}^{\alpha_{s}(\mu)}d\alpha_{ s}\frac{\gamma^{T}}{\beta(\alpha_{s})}\right]\vec{C}(M_{Z})\,, \tag{27}\]
where we have introduced the QCD \(\beta\) function as:
\[\beta=-2\alpha_{s}\left[\beta_{0}\frac{\alpha_{s}}{4\pi}+\beta_{1}\left( \frac{\alpha_{s}}{4\pi}\right)^{2}+O(\alpha_{s}^{3})\right]\,. \tag{28}\]
The coefficients used are given by \(\beta_{0}=\frac{11N_{c}-2f}{3}\) and \(\beta_{1}=\frac{34}{3}N_{c}^{2}-\frac{10}{3}N_{c}f-\frac{N_{c}^{2}-1}{N_{c}}f\)[14] where \(f\) is the number of active flavours.
To NLO and after integrating out the \(b\) and \(c\) quarks, the Wilson coefficients at the scale \(\mu\sim 1\) GeV are:
\[C_{1}=0.92\,,\;\;\;C_{2}=-2.45\,. \tag{29}\]
Similar enhancements of \(C_{2}\) are noticed in [15].
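As an illustration of the procedure, the sketch below numerically integrates Eq. (27) at leading order only (one-loop \(\beta\) function and \(\gamma^{0}\), with crude flavour-threshold matching at \(m_{b}\) and \(m_{c}\)); reproducing the NLO numbers quoted above additionally requires \(\gamma^{1}\), \(\beta_{1}\) and the scheme-dependent matching corrections of Eqs. (22)-(23):

```python
import numpy as np
from scipy.integrate import solve_ivp

NC = 3.0
GAMMA0 = np.array([[6 / NC, 12.0],               # LO matrix of Eq. (26)
                   [0.0, -6 * NC + 6 / NC]])

def beta0(f):                                     # LO beta coefficient, Eq. (28)
    return (11 * NC - 2 * f) / 3.0

def run_down(C, a_hi, mu_hi, mu_lo, f):
    """Evolve (C1, C2) from mu_hi to mu_lo with f active flavours, using
    one-loop running for both alpha_s and the Wilson coefficients."""
    def a_of(t):                                  # t = ln(mu / mu_hi)
        return a_hi / (1.0 + a_hi * beta0(f) * t / (2 * np.pi))
    rhs = lambda t, C: a_of(t) / (4 * np.pi) * GAMMA0.T @ C
    t_end = np.log(mu_lo / mu_hi)
    sol = solve_ivp(rhs, (0.0, t_end), C, rtol=1e-10)
    return sol.y[:, -1], a_of(t_end)

C, a = np.array([1.0, 0.0]), 0.1184               # tree-level matching at MZ
for mu_hi, mu_lo, f in [(91.19, 4.18, 5), (4.18, 1.27, 4), (1.27, 1.0, 3)]:
    C, a = run_down(C, a, mu_hi, mu_lo, f)
print(C)                                          # LO estimate of (C1, C2) at 1 GeV
```

Running this yields values of the expected sign and size, with \(C_{2}\) generated entirely by the operator mixing through the off-diagonal entry \(\gamma^{0}_{12}=12\).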
### Matching to the chiral Lagrangian at large \(N_{c}\)
We proceed to match the resulting EFT to the chiral Lagrangian. We do so by evaluating the matrix elements of the 4-fermion operators in the large-\(N_{c}\) limit, in which products of colour-singlet currents factorise.
In this limit, the operator \(Q_{1}^{LR}\) reduces to the product of a left and a right currents:
\[[Q_{1}^{LR}]_{ijk\ell}=\mathcal{J}_{L,ji}^{\mu}\,\mathcal{J}_{\mu,\ell k}^{R}\,. \tag{30}\]
Since the low-energy representation of these currents starts at \(O(p)\) in the chiral-perturbation-theory expansion, the large-\(N_{C}\) expression of \(Q_{1}^{LR}\) is of \(O(p^{2})\) and, therefore, does not contribute to the \(O(p^{0})\) operator in Eq. 14. Owing to its different scalar-pseudoscalar structure, the operator \(Q_{2}^{LR}\) does contribute at \(O(p^{0})\), receiving a chiral enhancement of the form:
\[[Q_{2}^{LR}]_{ijk\ell} =\langle\overline{q}_{L}^{i}q_{R}^{k}\rangle\langle\overline{q}_ {R}^{\ell}q_{L}^{j}\rangle\,\left\{1+O\left(\frac{1}{N_{c}}\right)\right\} \tag{31}\] \[=\frac{1}{4}B_{0}^{2}F^{4}U_{ki}U_{j\ell}^{\dagger}\,\left\{1+O \bigg{(}\frac{1}{N_{c}}\bigg{)}\right\}+O\big{(}p^{2}\big{)}\,, \tag{32}\]
with \(B_{0}=-\langle\bar{q}q\rangle/F^{2}=m_{\pi^{\pm}}^{2}/(m_{u}+m_{d})\).
Matching the contribution of \(Q_{2}^{LR}\) to the effective theory, an LO estimate in \(N_{c}\) can be given for \(C_{Z}\):
\[C_{Z}=-\frac{1}{4}\,B_{0}^{2}(\mu)\,F^{4}\,C_{2}(\mu)\,. \tag{33}\]
One can easily check that, in the large-\(N_{c}\) limit, the \(\mu\) dependence of \(C_{2}(\mu)\) is exactly cancelled by the quark-mass factors in \(B_{0}^{2}(\mu)\), as it should.
### 1/\(N_{c}\) corrections to \(Q_{1}^{LR}\)
As shown in [10], the low-energy constants in Eq. 14 can be related to the two-point correlation function of a left and a right QCD current, \(\Pi_{LR}(Q^{2})\), which converges nicely in the UV. This fact allows one to evaluate the leading non-zero \(O(p^{0})\) contributions of \(Q_{1}^{LR}\), originating from loops of Goldstone bosons and vector and axial-vector resonance fields, which are NLO corrections in \(1/N_{c}\). The full details of the calculation are given in the Appendix. Integrating only the low-energy region \(0\leq Q^{2}\leq\mu^{2}\) (contributions from \(Q^{2}>\mu^{2}\) are already included in the Wilson coefficients), one finds
\[\Delta C_{Z}|_{Q_{1}^{LR}}=\frac{3}{32\pi^{2}}\left\{\sum_{A}F_{A_{i}}^{2}M_{A_{i}}^{4}\log\left(1+\frac{\mu^{2}}{M_{A_{i}}^{2}}\right)-\sum_{V}F_{V_{i}}^{2}M_{V_{i}}^{4}\log\left(1+\frac{\mu^{2}}{M_{V_{i}}^{2}}\right)\right\}C_{1}(\mu)\,. \tag{34}\]
Since we are interested in the matrix element of the operator \(Q_{1}^{LR}\) at around the \(\mu\sim 1\) GeV scale, we work in the lightest-resonance approximation with their couplings fixed through the Weinberg conditions [16; 17]:
\[F_{V}^{2}=\frac{M_{A}^{2}}{M_{A}^{2}-M_{V}^{2}}\,F^{2}\,,\qquad F_{A}^{2}= \frac{M_{V}^{2}}{M_{A}^{2}-M_{V}^{2}}\,F^{2}\,. \tag{35}\]
Within the single-resonance approximation that we have adopted, \(M_{A}=\sqrt{2}M_{V}\)[17]. For the numerical evaluation we will take \(M_{V}=M_{\rho}=775.26\pm 0.23\) MeV and \(F=F_{\pi}=92.1\pm 0.8\) MeV [14]. As expected from its loop suppression, \(\left.\Delta C_{Z}\right|_{Q_{1}^{LR}}\) is of \(O(F^{2})\sim O(N_{c})\) and, therefore, is a NLO correction in \(1/N_{c}\) of about \(O(10\%)\) with respect to the leading \(O(F^{4})\sim O(N_{c}^{2})\) contribution from \(Q_{2}^{LR}\) in Eq. 33.
### EW contribution to the pion mass difference
Using Eq. 17 and the results above in Eqs. 33, 34 and 35, the pion mass shift induced by the \(Z\) reads:
\[\Delta m_{\pi}^{2}|_{Z}=\frac{e^{2}}{M_{Z}^{2}}\left\{\frac{F^{2}}{2}B_{0}^{2} (\mu)C_{2}(\mu)+\frac{3}{16\pi^{2}}C_{1}(\mu)\frac{M_{A}^{2}M_{V}^{2}}{M_{A}^ {2}-M_{V}^{2}}\left[M_{V}^{2}\log\left(1+\frac{\mu^{2}}{M_{V}^{2}}\right)-M_{A }^{2}\log\left(1+\frac{\mu^{2}}{M_{A}^{2}}\right)\right]\right\}. \tag{36}\]
This translates into a \(Z\)-induced pion mass difference:
\[\delta m_{\pi}|_{Z}\approx\frac{\Delta m_{\pi}^{2}|_{Z}}{2m_{\pi}}=-0.00201(7)( 2)(10)\,\,\mbox{MeV}\,, \tag{37}\]
where we have used \(m_{\pi}=134.9768\pm 0.0005\) MeV [14] and \((m_{u}+m_{d})/2=3.381\pm 0.040\) MeV [18]. The first error displays the parametric uncertainty induced by the
different inputs. The second uncertainty accounts for the renormalization-scale dependence in the interval \(\mu\in[0.8,1.2]\) GeV which, as shown in the figure, is tiny. We have added half the difference between the LO and NLO results as an estimate of unknown higher-order effects (third error).
We notice that the \(Z\)-boson contribution is about a factor of \(\sim 4\) larger than the experimental error in Eq. 1 and \(\sim O(10)\) smaller than the current lattice precision in Eq. 2, reinforcing the motivation to incorporate these effects on the lattice. The renormalization scale dependence of this result for energies in the range \([0.8,1.2]\) GeV is plotted in figure 1.
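For transparency, the short script below evaluates Eqs. (36)-(37) with the central input values quoted in the text. The light-quark masses, quoted at 2 GeV, are crudely run to \(\mu\simeq 1\) GeV with an assumed LO factor, so the output only reproduces the central value approximately:

```python
import numpy as np

# central inputs, in MeV
F, MV = 92.1, 775.26                        # F_pi and M_rho
MA = np.sqrt(2.0) * MV                      # single-resonance relation M_A = sqrt(2) M_V
MZ, MPI0, MPIC = 91187.6, 134.9768, 139.57039
e2 = 4.0 * np.pi / 137.036                  # e^2 = 4 pi alpha_em
mu = 1.0e3                                  # matching scale, 1 GeV
C1, C2 = 0.92, -2.45                        # Wilson coefficients at mu

# B0 = m_{pi+-}^2 / (m_u + m_d); quark masses quoted at 2 GeV are run to
# 1 GeV with an assumed LO factor of ~1.35 (rough placeholder)
m_ud = (2.0 * 3.381) * 1.35
B0 = MPIC**2 / m_ud

bracket = (MV**2 * np.log(1 + mu**2 / MV**2)
           - MA**2 * np.log(1 + mu**2 / MA**2))
dm2 = e2 / MZ**2 * (0.5 * F**2 * B0**2 * C2
                    + 3 / (16 * np.pi**2) * C1
                    * MA**2 * MV**2 / (MA**2 - MV**2) * bracket)
print(dm2 / (2 * MPI0))                     # delta m_pi |_Z in MeV, ~ -0.002
```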
## Discussion
Before closing we comment on several points that deserve mention.
* The estimate in Eq. 37 is based on a NLO evaluation of the Wilson coefficients \(C_{1,2}(\mu)\), which depends on the precise values of the strong coupling at \(M_{Z}\), \(\alpha_{s}(M_{Z})=0.1184\pm 0.0008\)[18], and at the different matching scales (known to percent level or better).
* Our result \(\delta m_{\pi}|_{Z}\) appears to be of the same order as the two-loop EM effect, which naively one expects to be: \[\delta m_{\pi}|_{\rm em}^{(2)}\approx\left(\frac{\alpha_{\rm em}}{2\pi}\right) \delta m_{\pi}|_{\rm em}^{(1)}\,.\] (38)
* BSM models that generate 4-quark LR operators at energies below the new physics scale, \(\Lambda_{\rm NP}\gg\Lambda_{\chi\rm SB}\), will induce similar pion mass shifts. This is the case, for example, of the \(Z^{\prime}\) models studied in [11], and similar SM extensions. Since the QCD corrections dominate near the GeV scale, a reasonable estimate is just the rescaling: \[\delta m_{\pi}|_{\rm NP}=\frac{g_{\rm NP}^{2}}{\Lambda_{\rm NP}^{2}}\frac{ \delta m_{\pi}|_{Z}}{4\sqrt{2}G_{F}}\,.\] (39) If new physics is instead light, as proposed in [19; 20], one should rescale the resonance calculation for EM effects [5].
## Acknowledgments
We would like to thank Prateek Agrawal, Hector Gisbert, Victor Miralles and Fernando Romero for helpful discussions and enlightening comments on the early drafts of this letter. Antonio Pich is supported by Generalitat Valenciana, Grant No. Prometeo/2021/071, and MCIN/AEI/10.13039/501100011033, Grant No. PID2020-114473GB-I00. Arthur Platschorre is supported by a STFC Studentship No. 2397217 and Prins Bernhard Cultuurfondsbeurs No. 40038041 made possible by the Pieter Beijer fonds and the Data-Piet fonds.
## Appendix
In the large-\(N_{C}\) limit, the strong interaction reduces to tree-level hadronic diagrams. Keeping only those terms that are relevant for our calculation, the effective Lagrangian describing the mesonic world contains the LO Goldstone term \(\mathcal{L}_{2}\) and the vector and axial-vector couplings (kinetic terms are omitted) [17]:
\[\mathcal{L}_{V,A}=\sum_{V_{i}}\frac{F_{V_{i}}}{2\sqrt{2}}\,\langle V_{i}^{\mu \nu}f_{+\mu\nu}\rangle+\sum_{A_{i}}\frac{F_{A_{i}}}{2\sqrt{2}}\,\langle A_{i} ^{\mu\nu}f_{-\mu\nu}\rangle\,, \tag{40}\]
where \(f_{\pm}^{\mu\nu}=u^{\dagger}F_{L}^{\mu\nu}u\pm uF_{R}^{\mu\nu}u^{\dagger}\) with \(U=u^{2}\) the Goldstone \(SU(3)\) matrix and \(F_{L,R}^{\mu\nu}\) the left (\(\ell^{\mu}\)) and right (\(r^{\mu}\)) field strengths. The spin-1 resonances are described through the antisymmetric tensors \(V_{i}^{\mu\nu}\) and \(A_{i}^{\mu\nu}\)[5; 21].
The left and right QCD currents are easily computed, taking derivatives with respect to the external \(\ell^{\mu}\) and \(r^{\mu}\) fields:
\[\mathcal{J}_{L}^{\mu} = i\frac{F^{2}}{2}\,D^{\mu}UU^{\dagger}+\sum_{V_{i}}\frac{F_{V_{i} }}{\sqrt{2}}\,\partial_{\nu}(uV_{i}^{\mu\nu}u^{\dagger}) \tag{41}\] \[+\sum_{A_{i}}\frac{F_{A_{i}}}{\sqrt{2}}\,\partial_{\nu}(uA_{i}^{ \mu\nu}u^{\dagger})+\cdots\]
while \(\mathcal{J}_{R}^{\mu}\) is obtained from this expression exchanging \(u\leftrightarrow u^{\dagger}\) and putting a negative sign in the axial contributions.
The bosonization of \([Q_{1}^{LR}]_{ijk\ell}\) is formally given by [22]
\[\langle[Q_{1}^{LR}(x)]_{ijkl}\rangle_{G}=\frac{\partial\Gamma}{\partial\ell _{\mu}^{ij}(x)}\,\frac{\partial\Gamma}{\partial r^{\mu,kl}(x)}-i\,\frac{ \partial^{2}\Gamma}{\partial\ell_{\mu}^{ij}(x)\,\partial r^{\mu,kl}(x)} \tag{42}\]
with \(\Gamma[\ell,r]\) the effective theory generating functional. The first term is just the product of the two currents and receives \(O(p^{0})\) contributions from loop diagrams with vector and axial-vector internal propagators. The second term (the derivative of \(\mathcal{J}_{L}^{\mu}\) with respect to \(r^{\mu}\)) generates an additional \(O(p^{0})\) contribution through Goldstone loops. The combined result can be written in the form:
\[\sum_{ijkl}\mathcal{Q}_{L}^{ij}\mathcal{Q}_{R}^{kl}\,\,[Q_{1}^{LR }]_{ijkl}=\frac{3}{32\pi^{2}}\,\langle\mathcal{Q}_{L}U\mathcal{Q}_{R}U^{\dagger}\rangle\] \[\quad\times\int_{0}^{\infty}dQ^{2}\,\left\{\sum_{V}\frac{F_{V_{i} }^{2}M_{V_{i}}^{4}}{M_{V_{i}}^{2}+Q^{2}}-\sum_{A}\frac{F_{A_{i}}^{2}M_{A_{i}}^ {4}}{M_{A_{i}}^{2}+Q^{2}}\right\}, \tag{43}\]
where the Weinberg conditions [16]
\[\sum_{i}\left(F_{V_{i}}^{2}-F_{A_{i}}^{2}\right)\,\,=\,\,F^{2}\,,\]
\[\sum_{i}\left(M_{V_{i}}^{2}F_{V_{i}}^{2}-M_{A_{i}}^{2}F_{A_{i}}^{2}\right)\ =\ 0\,, \tag{44}\]
have been used in order to simplify the final expression. Eq. 43 agrees with the result obtained in [10], using the alternative Proca description of spin-1 fields. Performing the integration in the low-energy region \(0\leq Q^{2}\leq\mu^{2}\) one obtains the result for \(\left.\Delta C_{Z}\right|_{Q_{1}^{LR}}\) in Eq. 34.
|
2309.05261 | Gall Bladder Cancer Detection from US Images with Only Image Level
Labels | Automated detection of Gallbladder Cancer (GBC) from Ultrasound (US) images
is an important problem, which has drawn increased interest from researchers.
However, most of these works use difficult-to-acquire information such as
bounding box annotations or additional US videos. In this paper, we focus on
GBC detection using only image-level labels. Such annotation is usually
available based on the diagnostic report of a patient, and do not require
additional annotation effort from the physicians. However, our analysis reveals
that it is difficult to train a standard image classification model for GBC
detection. This is due to the low inter-class variance (a malignant region
usually occupies only a small portion of a US image), high intra-class variance
(due to the US sensor capturing a 2D slice of a 3D object leading to large
viewpoint variations), and low training data availability. We posit that even
when we have only the image level label, still formulating the problem as
object detection (with bounding box output) helps a deep neural network (DNN)
model focus on the relevant region of interest. Since no bounding box
annotations is available for training, we pose the problem as weakly supervised
object detection (WSOD). Motivated by the recent success of transformer models
in object detection, we train one such model, DETR, using
multi-instance-learning (MIL) with self-supervised instance selection to suit
the WSOD task. Our proposed method demonstrates an improvement of AP and
detection sensitivity over the SOTA transformer-based and CNN-based WSOD
methods. Project page is at https://gbc-iitd.github.io/wsod-gbc | Soumen Basu, Ashish Papanai, Mayank Gupta, Pankaj Gupta, Chetan Arora | 2023-09-11T06:37:12Z | http://arxiv.org/abs/2309.05261v1 | # Gall Bladder Cancer Detection from US Images with Only Image Level Labels
###### Abstract
Automated detection of Gallbladder Cancer (GBC) from Ultrasound (US) images is an important problem, which has drawn increased interest from researchers. However, most of these works use difficult-to-acquire information such as bounding box annotations or additional US videos. In this paper, we focus on GBC detection using only image-level labels. Such annotation is usually available based on the diagnostic report of a patient, and do not require additional annotation effort from the physicians. However, our analysis reveals that it is difficult to train a standard image classification model for GBC detection. This is due to the low inter-class variance (a malignant region usually occupies only a small portion of a US image), high intra-class variance (due to the US sensor capturing a 2D slice of a 3D object leading to large viewpoint variations), and low training data availability. We posit that even when we have only the image level label, still formulating the problem as object detection (with bounding box output) helps a deep neural network (DNN) model focus on the relevant region of interest. Since no bounding box annotations is available for training, we pose the problem as weakly supervised object detection (WSOD). Motivated by the recent success of transformer models in object detection, we train one such model, DETR, using multi-instance-learning (MIL) with self-supervised instance selection to suit the WSOD task. Our proposed method demonstrates an improvement of AP and detection sensitivity over the SOTA transformer-based and CNN-based WSOD methods. Project page is at [https://gbc-iitd.github.io/wsod-gbc](https://gbc-iitd.github.io/wsod-gbc).
Keywords:Weakly Supervised Object Detection Ultrasound Gallbladder Cancer
## 1 Introduction
GBC is a deadly disease that is difficult to detect at an early stage [15, 12]. Early diagnosis can significantly improve the survival rate [14]. Non-ionizing radiation, low cost, and accessibility make US a popular non-invasive diagnostic modality for
patients with suspected gall bladder (GB) afflictions. However, identifying signs of GBC from routine US imaging is challenging for radiologists [11]. In recent years, automated GBC detection from US images has drawn increased interest [3, 5] due to its potential for improving diagnosis and treatment outcomes. Many of these works formulate the problem as object detection, since training an image classification model for GBC detection seems challenging due to the reasons outlined in the abstract (also see Fig. 1).
Recently, GBCNet [3], a CNN-based model, achieved SOTA performance on classifying malignant GB from US images. GBCNet uses a two-stage pipeline consisting of object detection followed by classification, and requires bounding box annotations for the GB as well as the malignant regions for training. Such bounding box annotations of the pathological regions are time-consuming to produce and require an expert radiologist. This makes it expensive and non-viable for curating large datasets for training large DNN models. In another recent work, [5] has exploited additional unlabeled video data for learning good representations for downstream GBC classification and obtained performance similar to [3] using a ResNet50 [13] classifier. The reliance of both SOTA techniques on additional annotations or data limits their applicability. On the other hand, the image-level malignancy label is usually available at a low cost, as it can be obtained readily from the diagnostic report of a patient without additional effort from clinicians.
Instead of training a classification pipeline, we propose to solve an object detection problem, which involves predicting a bounding box for the malignancy. The motivation is that running a classifier on a focused attention/proposal region in an object detection pipeline would help tackle the low inter-class and high intra-class variations. However, since we only have image-level labels available, we formulate the problem as a Weakly Supervised Object Detection (WSOD) problem. As transformers are increasingly outshining CNNs due to their ability to aggregate focused cues from a large area [9, 6], we choose to use transformers in our model. However, in our initial experiments, SOTA WSOD methods for transformers failed miserably. These methods primarily rely on training a classification pipeline and later generating activation heatmaps using attention and drawing a bounding box circumscribing the heatmaps [10, 2] to show localization.
Figure 1: (a) Low inter-class variability. The first two GBs show benign wall thickening, and the third one shows malignant thickening. However, the appearance of the GB in all three images is very similar. (b) High intra-class variability. All three images have been scanned from the same patient, but due to the sensor’s scanning plane, the appearances change drastically.
However, for GBC detection, this line of work is not helpful, as discussed earlier.
Inspired by the success of the Multiple Instance Learning (MIL) paradigm for weakly supervised training on medical imaging tasks [22, 20], we train a detection transformer, DETR, using the MIL paradigm for weakly supervised malignant region detection. In this setup, one generates region proposals for images, and then treats the images as bags and the region proposals as instances to solve instance classification (object detection) under the MIL constraints [8]. At inference, we use the predicted instance labels to predict the bag labels. Our experiments validate the utility of this approach in circumventing the challenges of US imaging and detecting GBC accurately using only image-level labels.
**Contributions:** The key contributions of this work are:
* We design a novel DETR variant based on MIL with self-supervised instance learning for the weakly supervised disease detection and localization task in medical images. Although MIL and self-supervised instance learning have been used for CNNs [24], such a pipeline has not been used for transformer-based detection models.
* We formulate the GBC classification problem as a weakly supervised object detection problem to mitigate the effect of low inter-class and large intra-class variances, and solve the difficult GBC detection problem on US images without using costly and difficult-to-obtain additional annotations (bounding boxes) or video data.
* Our method provides a strong baseline for weakly supervised GBC detection and localization in US images, which has not been tackled earlier. Further, to assess the generality of our method, we apply it to polyp detection from colonoscopy images.
## 2 Datasets
**Gallbladder Cancer Detection in Ultrasound Images:** We use the public GBC US dataset [3] consisting of 1255 image samples from 218 patients. The
Figure 2: Samples from the GBCU [3] and Kvasir-SEG [17] datasets. Four images from each of the disease and non-disease classes are shown on the left and right, respectively. Disease locations are shown by drawing bounding boxes.
dataset contains 990 non-malignant (171 patients) and 265 malignant (47 patients) GB images (see Fig. 2 for some sample images). The dataset contains image labels as well as bounding box annotations marking the malignant regions. Note that we use only the image labels for training. We report results on 5-fold cross-validation. The cross-validation splits were done at the patient level, so that all images of any patient appear either in the train or the validation split.
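Since only image-level labels and patient identifiers are needed, the split protocol is easy to reproduce. Below is a minimal sketch (not the authors' released code) of patient-level 5-fold cross-validation using scikit-learn's GroupKFold; the file names and labels are placeholders.

```python
# Patient-level 5-fold split: every image of a patient falls on exactly
# one side of each train/validation split. Data below are placeholders.
from sklearn.model_selection import GroupKFold

images = [f"img_{i}.png" for i in range(10)]   # placeholder file names
labels = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]        # image-level labels only
patient_ids = [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]   # group key per image

gkf = GroupKFold(n_splits=5)
for fold, (tr, va) in enumerate(gkf.split(images, labels, groups=patient_ids)):
    tr_patients = {patient_ids[i] for i in tr}
    va_patients = {patient_ids[i] for i in va}
    assert tr_patients.isdisjoint(va_patients)  # no patient leakage
    print(f"fold {fold}: {len(tr)} train / {len(va)} val images")
```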
**Polyp Detection in Colonoscopy Images:** We use the publicly available Kvasir-SEG [17] dataset consisting of 1000 white light colonoscopy images showing polyps (c.f. Fig. 2). Since Kvasir-SEG does not contain any control images, we add 600 non-polyp images randomly sampled from the PolypGen [1] dataset. Since the patient information is not available with the data, we use random stratified splitting for 5-fold cross-validation.
## 3 Our Method
**Revisiting DETR:** The DETR [6] architecture utilizes a ResNet [13] backbone to extract 2D convolutional features, which are flattened, combined with a positional encoding, and fed to the self-attention-based transformer encoder. The decoder uses cross-attention between learned object queries containing positional embeddings and the encoder output to produce output embeddings containing the class and localization information. The number of object queries, and hence of decoder output embeddings, is set to 100 in DETR. Subsequently, a feed-forward network generates predictions for object bounding boxes with their corresponding labels and confidence scores.
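For concreteness, the following sketch queries a pre-trained DETR through the public facebookresearch/detr torch.hub entry point; the output keys and the 100-query shapes follow that repository and are an assumption of this illustration, not part of the paper.

```python
# Minimal sketch: load COCO pre-trained DETR and inspect its 100 object
# queries. Requires network access for the torch.hub download.
import torch

model = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True)
model.eval()

x = torch.randn(1, 3, 480, 480)       # dummy stand-in for a US image
with torch.no_grad():
    out = model(x)

print(out['pred_logits'].shape)       # (1, 100, 92): class logits per query
print(out['pred_boxes'].shape)        # (1, 100, 4): normalized cxcywh boxes
```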
**Proposed Architecture:** Fig. 3 gives an overview of our method. We use a COCO pre-trained class-agnostic DETR as the proposal generator. The learned object queries contain the embedded positional information of the proposals. Class-agnostic
Figure 3: Overview of the proposed Weakly Supervised DETR architecture. The location information in the object queries learned by the class-agnostic DETR ensures generation of high-quality proposals. The MIL framework uses the proposal embeddings generated at the class-aware branch.
indicates that all object categories are treated as a single object class, since we are only interested in the object proposals. We then finetune a regular, class-aware DETR for the WSOD task. This class-aware DETR is initialized with the checkpoint of the class-agnostic DETR. The learned object queries from the class-agnostic DETR are frozen and shared with the WSOD DETR during finetuning to ensure that the class-aware DETR attends to similar locations of the object proposals. The class-agnostic DETR branch is frozen during the finetuning phase. We finally apply the MIL-based instance classification with self-supervised instance learning over the finetuning branch. For GBC classification, if the model generates bounding boxes for the input image, we predict the image to be malignant, since the only object present in the data is the cancer.
**MIL Setup:** The decoder of the fine-tuning DETR generates \(R\) \(d\)-dimensional output embeddings. Each embedding corresponds to a proposal generated by the class-agnostic DETR. We pass these embeddings as input to two branches with FC layers to obtain the matrices \(X^{c}\in\mathbb{R}^{R\times N_{c}}\) and \(X^{r}\in\mathbb{R}^{R\times N_{c}}\), where \(R\) is the number of object queries (same as proposals) and \(N_{c}\) is the number of object (disease) categories. Let \(\sigma(\cdot)\) denote the softmax operation. We then generate the class-wise and detection-wise softmax matrices \(C\in\mathbb{R}^{R\times N_{c}}\) and \(D\in\mathbb{R}^{R\times N_{c}}\), where \(C_{ij}=\sigma(X^{c}_{i})_{j}\) and \(D_{ij}=\sigma((X^{r})^{T}_{j})_{i}\), and \(X_{i}\) denotes the \(i\)-th row of \(X\). \(C\) provides the classification probabilities of each proposal, and \(D\) provides the relative score of the proposals corresponding to each class. The two matrices are element-wise multiplied and summed over the proposal dimension to generate the image-level classification predictions, \(\phi\in\mathbb{R}^{N_{c}}\):
\[\phi_{j}=\sum_{i=1}^{R}C_{ij}\cdot D_{ij} \tag{1}\]
Note that \(\phi_{j}\in(0,1)\) since \(C_{ij}\) and \(D_{ij}\) are normalized. Finally, the negative log-likelihood loss between the predicted labels and the image labels \(y\in\mathbb{R}^{N_{c}}\) is computed as the MIL loss:
\[\mathcal{L}_{\text{mil}}=-\sum_{i=1}^{N_{c}}[y_{i}\log\phi_{i}+(1-y_{i})\log{( 1-\phi_{i})}] \tag{2}\]
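A minimal PyTorch sketch of the two-branch MIL head and the losses of Eqs. (1)-(2) follows; the module and variable names are ours, and the two-category setting is only illustrative.

```python
# MIL head over R proposal embeddings: class-wise softmax (C) times
# detection-wise softmax (D), summed over proposals to get phi (Eq. 1).
import torch
import torch.nn as nn

R, d, n_cls = 100, 256, 2                      # illustrative sizes

class MILHead(nn.Module):
    def __init__(self, d, n_cls):
        super().__init__()
        self.fc_cls = nn.Linear(d, n_cls)      # produces X^c
        self.fc_det = nn.Linear(d, n_cls)      # produces X^r

    def forward(self, emb):                    # emb: (R, d)
        C = torch.softmax(self.fc_cls(emb), dim=1)  # softmax over classes
        D = torch.softmax(self.fc_det(emb), dim=0)  # softmax over proposals
        scores = C * D                         # per-instance scores
        return scores.sum(dim=0), scores       # phi (Eq. 1), instance scores

head = MILHead(d, n_cls)
phi, inst = head(torch.randn(R, d))
y = torch.tensor([1.0, 0.0])                   # image-level labels
loss_mil = -(y * torch.log(phi) + (1 - y) * torch.log(1 - phi)).sum()  # Eq. 2
```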
The MIL classifier is prone to overfitting to the most distinctive classification features due to the mismatch between classification and detection probabilities [24]. To tackle this, we further use a self-supervised module to refine the instance scores.
**Self-supervised Instance Learning:** Inspired by [24], we design an instance learning module with \(N_{r}\) blocks in a self-supervised framework to refine the instance scores with instance-level supervision. Each block consists of an FC layer. A class-wise softmax is used to generate instance scores \(x^{n}\in\mathbb{R}^{R\times(N_{c}+1)}\) at the \(n\)-th block, where the extra class among the \(N_{c}+1\) is the background/no-finding class. Instance supervision for each block \(n\) is obtained from the scores of the previous block (\(x^{(n-1)}\)); the supervision for the first block is obtained from the MIL head. Let \(\hat{y}^{n}\in\mathbb{R}^{R\times(N_{c}+1)}\) denote the pseudo-labels of the instances. An instance \(p_{i}\) is labeled 1 for class \(j\) if it overlaps the highest-scoring instance for that class by at least a chosen IoU threshold \(\tau\); otherwise it is labeled 0, as defined in Eq. (3):
\[m_{j}^{n}=\operatorname*{argmax}_{i}x_{ij}^{(n-1)}\ ;\qquad\hat{y}_{ij}^{n}=\begin{cases}1,&IoU(p_{i},p_{m_{j}^{n}})\geq\tau\\ 0,&\text{otherwise}\end{cases} \tag{3}\]
The loss over the instances is given by Eq. (4):
\[\mathcal{L}_{ins}=-\frac{1}{N_{r}}\sum_{n=1}^{N_{r}}\frac{1}{R}\sum_{i=1}^{R}\sum_{j=1}^{N_{c}+1}w_{i}^{n}\hat{y}_{ij}^{n}\log x_{ij}^{n} \tag{4}\]
Here \(x_{ij}^{n}\) denotes the score of the \(i\)-th instance for the \(j\)-th class at block \(n\). Following [24], the loss weight \(w_{i}^{n}=x_{m_{j}^{n}\,j}^{(n-1)}\), the score of the selected top instance, is applied to stabilize the loss. With \(\lambda\) a scaling factor, the overall loss function is given in Eq. (5):
\[\mathcal{L}=\mathcal{L}_{mil}+\lambda\mathcal{L}_{ins} \tag{5}\]
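The pseudo-labelling and refinement losses of Eqs. (3)-(5) can be sketched for a single block as follows, assuming proposal boxes in (x1, y1, x2, y2) format; the helper names and toy tensors are ours.

```python
# One refinement block: pick the top instance per class from the previous
# block's scores, pseudo-label instances by IoU with it (Eq. 3), and apply
# the weighted log loss (Eq. 4).
import torch
from torchvision.ops import box_iou

def refine_targets(prev_scores, boxes, tau=0.5):
    # prev_scores: (R, C+1), boxes: (R, 4)
    top = prev_scores.argmax(dim=0)                    # m_j^n per class j
    iou = box_iou(boxes, boxes[top])                   # (R, C+1)
    y_hat = (iou >= tau).float()                       # pseudo-labels
    w = prev_scores[top, torch.arange(prev_scores.shape[1])]  # loss weights
    return y_hat, w

R, C = 100, 1
prev = torch.softmax(torch.randn(R, C + 1), dim=1)     # block n-1 scores
cur = torch.softmax(torch.randn(R, C + 1), dim=1)      # block n scores
boxes = torch.rand(R, 4)
boxes[:, 2:] += boxes[:, :2]                           # ensure x2 > x1, y2 > y1

y_hat, w = refine_targets(prev, boxes)
loss_ins = -(w * y_hat * torch.log(cur)).sum() / R     # Eq. 4, one block
# The total training loss would be loss_mil + lambda * loss_ins (Eq. 5).
```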
## 4 Experiments and Results

**Comparison with SOTA:** Tab. 1 shows the bounding box localization results of the WSOD task. Our method surpasses the latest SOTA WSOD techniques by 9 AP points and establishes itself as a strong WSOD baseline for GBC localization in US images. Our method also achieves a 7-point higher AP score for polyp detection. We present visualizations of the predicted bounding boxes in Fig. 4, which show that the localization by our method is more precise and clinically relevant compared to the baselines.
**Generality of the Method:** We assess the generality of our method by applying it to polyp detection on colonoscopy images. The applicability of our method to two different tasks, (1) GBC detection from US and (2) polyp detection from colonoscopy, indicates the generality of the method across modalities.
**Ablation Study:** We show the sensitivity of detection to the self-supervised instance learning module in Tab. 2 for two variants: (1) a vanilla MIL head on DETR, and (2) MIL with self-supervised instance learning on DETR. Tab. 2 shows the Average Precision and detection sensitivity for both diseases. The results establish the benefit of using the self-supervised instance learning. Other ablations related to hyper-parameter sensitivity are given in Supplementary Fig. S1.
**Classification Performance:** We compare our model with standard CNN-based and transformer-based classifiers, SOTA WSOD-based classifiers, and SOTA classifiers using additional data or annotations (Tab. 3). Our method beats the SOTA weakly supervised techniques and achieves 1.2% higher sensitivity for GBC detection. The current SOTA GBC detection models require additional bounding box annotation [3] or US videos [5, 7]. However, even without these additional annotations/data, our method reaches 86.1% detection sensitivity. The results for polyp classification are reported in Tab. 4. Although our method has a slightly
Figure 4: Qualitative analysis of the predicted bounding boxes. Ground truths are in blue, and predictions are in green. We compare with SOTA WSOD techniques and our proposed method. Our method predicts much tighter bounding boxes that cover the clinically significant disease regions.
lower specificity, its sensitivity surpasses the baselines reported in the literature [16] and the SOTA WSOD-based baselines.
## 5 Conclusion
GBC is a difficult-to-detect disease that benefits greatly from early diagnosis. While automated GBC detection from US images has gained increasing interest from researchers, training a standard image classification model for this task is challenging due to the low inter-class variance and high intra-class variability of malignant regions. Current SOTA models for GBC detection require costly bounding box annotation of the pathological regions or additional US video data, which limits their applicability. We proposed to formulate GBC detection as a weakly supervised object detection/localization problem using a DETR with self-supervised instance learning in a MIL framework. Our experiments show that the approach achieves competitive performance without requiring additional annotation or data. We hope that our technique will simplify model training at hospitals with easily available local data, enhancing the applicability and impact of automated GBC detection.
| **Method** | **Acc.** | **Spec.** | **Sens.** |
| --- | --- | --- | --- |
| TS-CAM [10] | 0.704 ± 0.017 | 0.394 ± 0.042 | 0.891 ± 0.054 |
| SCM [2] | 0.751 ± 0.026 | 0.523 ± 0.014 | 0.523 ± 0.016 |
| OD-WSCL [21] | 0.805 ± 0.056 | 0.609 ± 0.076 | 0.923 ± 0.034 |
| WS-DETR [19] | 0.857 ± 0.071 | 0.812 ± 0.088 | 0.882 ± 0.034 |
| Point-Beyond-Class [18] | 0.953 ± 0.007 | 0.993 ± 0.004 | 0.924 ± 0.011 |
| Ours | 0.878 ± 0.067 | 0.785 ± 0.102 | 0.932 ± 0.022 |

Table 4: Comparison with SOTA WSOD baselines in classifying polyps from colonoscopy images.
| **Type** | **Method** | **Acc.** | **Spec.** | **Sens.** |
| --- | --- | --- | --- | --- |
| CNN Classifier | ResNet50 [13] | 0.867 ± 0.031 | 0.926 ± 0.069 | 0.672 ± 0.147 |
| CNN Classifier | InceptionV3 [23] | 0.869 ± 0.039 | 0.913 ± 0.032 | 0.708 ± 0.078 |
| Transformer Classifier | ViT [9] | 0.803 ± 0.078 | 0.901 ± 0.050 | 0.860 ± 0.068 |
| Transformer Classifier | DEIT [25] | 0.829 ± 0.030 | 0.900 ± 0.040 | 0.875 ± 0.063 |
| Transformer Classifier | PVTv2 [26] | 0.824 ± 0.033 | 0.887 ± 0.057 | 0.894 ± 0.076 |
| Transformer Classifier | RadFormer [4] | 0.921 ± 0.062 | 0.961 ± 0.049 | 0.923 ± 0.062 |
| Additional Data/Annotation | USCL [7] | 0.889 ± 0.047 | 0.895 ± 0.054 | 0.869 ± 0.097 |
| Additional Data/Annotation | US-UCL [5] | 0.920 ± 0.034 | 0.926 ± 0.043 | 0.900 ± 0.046 |
| Additional Data/Annotation | GBCNet [3] | 0.921 ± 0.029 | 0.967 ± 0.023 | 0.919 ± 0.063 |
| Additional Data/Annotation | Point-Beyond-Class [18] | 0.929 ± 0.013 | 0.983 ± 0.042 | 0.731 ± 0.077 |
| SOTA WSOD | TS-CAM [10] | 0.862 ± 0.049 | 0.879 ± 0.049 | 0.751 ± 0.045 |
| SOTA WSOD | SCM [2] | 0.795 ± 0.101 | 0.783 ± 0.130 | 0.849 ± 0.072 |
| SOTA WSOD | OD-WSCL [21] | 0.815 ± 0.144 | 0.805 ± 0.129 | 0.847 ± 0.214 |
| SOTA WSOD | WS-DETR [19] | 0.839 ± 0.042 | 0.843 ± 0.028 | 0.833 ± 0.034 |
| WSOD | Ours | 0.834 ± 0.057 | 0.817 ± 0.061 | 0.861 ± 0.089 |

Table 3: Performance comparison of our method and other SOTA methods in GBC classification. We report accuracy, specificity, and sensitivity.
|
2309.13403 | Game of Travesty: Decoy-based Psychological Cyber Deception for
Proactive Human Agents | The concept of cyber deception has been receiving emerging attention. The
development of cyber defensive deception techniques requires interdisciplinary
work, among which cognitive science plays an important role. In this work, we
adopt a signaling game framework between a defender and a human agent to
develop a cyber defensive deception protocol that takes advantage of the
cognitive biases of human decision-making using quantum decision theory to
combat insider attacks (IA). The defender deceives an inside human attacker by
luring him to access decoy sensors via generators producing perceptions of
classical signals to manipulate the human attacker's psychological state of
mind. Our results reveal that, even without changing the classical traffic data, strategically designed generators can leave insider attackers with a worse performance in identifying decoys than a deceptive scheme whose generators produce random information based on the input signals. The proposed framework leads to fundamental theories in
designing more effective signaling schemes. | Yinan Hu, Quanyan Zhu | 2023-09-23T15:27:26Z | http://arxiv.org/abs/2309.13403v1 | # Game of travesty: decoy-based psychological cyber deception for proactive human agents
###### Abstract
The concept of cyber deception has been receiving emerging attention. The development of cyber defensive deception techniques requires interdisciplinary work, among which cognitive science plays an important role. In this work, we adopt a signaling game framework between a defender and a human agent to develop a cyber defensive deception protocol that takes advantage of the cognitive biases of human decision-making, modeled via quantum decision theory, to combat insider attacks (IA). The defender deceives an inside human attacker by luring him to access decoy sensors via generators that produce perceptions of the classical signals to manipulate the human attacker's psychological state of mind. Our results reveal that, even without changing the classical traffic data, strategically designed generators can leave insider attackers with a worse performance in identifying decoys than a deceptive scheme whose generators produce random information based on the input signals. The proposed framework leads to fundamental theories for designing more effective signaling schemes.
## I Introduction
Cyber deception has been a growing class of proactive defense techniques over the past several decades, contributing to the fight against increasingly intelligent, stealthy, and sophisticated attacks. Important cyber deception technologies, including moving target defense [1] and honey-x [2] (such as honeypots and honeytokens), help defenders reach a better security outcome against ever more sophisticated attacks and threats, among which advanced persistent threats (APT) and insider threats [3] serve as two typical examples. Reports have revealed that cyber deception technologies reduced the cost arising from data breaches by 51% in 2022 [4]. Cyber deception techniques take advantage of human aspects to achieve a two-fold purpose: to conceal the truth, and to reveal the false. The ultimate goal of applying defensive cyber deception techniques is to delay, stop, or interrupt attacks. Many techniques can realize the concept of deception: dazzling, mimicking [5], inventing, and decoying [6]. Useful defensive deception protocols characterize the strategic interactions among three classes of agents: defenders, users, and adversaries. A useful framework for designing cyber deception mechanisms needs to capture several main features. First, the defender must strategically treat users and adversaries with different purposes: in general, the defender should enhance the efficacy of access for a normal user and reduce it for adversaries. In addition, a sophisticated adversary behaves intelligently but also suffers from limitations arising from human aspects.
Interdisciplinary work is needed to help develop next-generation deception techniques that incorporate psychological models to characterize the behaviors of human attackers and system users. The interdisciplinary nature of the concept of deception constitutes a major challenge for researchers in building cyber deceptive defense systems.
Many game-theoretical models [7] characterize the methods and mechanisms of deception in detection frameworks in cyber security. One major limitation of applying a purely game-theoretical formulation to threats is that such models often assume all agents are fully rational, while in practice the behaviors of attackers and defenders often deviate from rationality [7], in part because devices in the network are operated by humans.
One way to make breakthroughs in research on deception techniques is to adopt more accurate cognitive models to form better predictions of a human attacker's behavior. Such a direction is called cyber-psychology.
Studies have shown that humans exhibit bounded rationality in decision-making due to a variety of cognitive biases. As a result, biases have become a cornerstone of successful deception mechanisms, not only in cyber security but also in the social sciences.
There are other phenomena in cognitive science, such as the order effect, the disjunction effect, and violations of the total law of probability, that have been missed by previous deception mechanisms, and new models are needed to characterize them. Game-theoretical models [7] assume that both the sensor and the receiver are fully rational, which may lead to an overly conservative strategy for defensive systems that manipulate the data incoming to the sensors. One difference between human decision-making theories and general decision theories is that humans suffer from cognitive biases of various kinds, such as the marginal effect and the order effect, which lead human agents to choices with suboptimal outcomes.
There is literature capturing the cognitive biases of humans arising from risk preferences and applying alternative frameworks such as quantal response [8] or prospect theory [9]. In behavioral economics [10], human bounded rationality presents in a variety of ways. Recently, there have been experimental studies [11] where experts play attackers who aim to hack into systems to gather information while avoiding decoys, and the defense system adopts cyber deception and cyber psychology techniques to prevent real systems from being attacked and credentials from being stolen. The goal of such experimental studies is to verify or invalidate two hypotheses: one, defensive cyber tools and psychological deception impede attackers who seek to penetrate computer systems and exfiltrate information; and two, defensive deception tools are effective even if an attacker is aware of their use.
Experimental data essentially show that both hypotheses are true, but there is a lack of theories explaining why. Constructing theories that characterize human agents' behaviors and take advantage of bounded rationality is beneficial for understanding human behavior in order to counteract it.
Quantum decision theory [12] captures the bounded rationality arising from the order effect, the disjunction effect, and violations of the total law of probability. We are not arguing that the human brain acts like a quantum computer in the physical sense. Instead, we argue that quantum decision theory functions as a _parsimonious generative black-box model_ for human decision-making processes, as corroborated by experiments such as [13]. In this paper, we consider a scenario where sensors generate manipulated data for receivers, who are human agents. We assume the sensors constitute part of the defense systems and the human agents want to attack the sensors. The defensive system aims at deceiving the human agents to mislead them into attacking the wrong nodes. Such a system is called a human-sensor system in cybersecurity.
The purpose of this paper is to develop an appropriate framework for decoying as a method of cyber deception, characterizing the sensor's manipulation of the traffic data and the attacker's strategies for attacking the sensors. The challenge is that the receivers are human agents whose decisions suffer from various forms of bounded rationality arising from cognitive biases such as the marginal effect, the order effect, and violations of the total law of probability.
In this paper, we propose the 'travesty game' (TG), a signaling game framework based on quantum information for constructing a cyber defensive deception that brings forth a desirable security scenario: the defender interacts with human adversaries and reduces the efficacy of attacks by taking advantage of the bounded rationality of the human decision-making process. The defender, or the deceptive defensive system, has a private type that characterizes the state of the system: what connects the network and the human agent could be a regular network sensor or a decoy. It is common knowledge that a normal sensor and a decoy produce traffic data whose message volumes obey different distributions. The defensive system contains a sensor and a generator. The sensor collects original data traffic from the network and distorts it. The generator is a mechanism that produces verbal messages to manipulate the human agent's perception of the classical traffic data. The cyber deceptive defensive system associates the (possibly distorted) classical traffic data with the manipulated perception of that data to deliver composite signals to human agents, characterized as 'prospect states' [14] in quantum decision theory [12]. Upon receiving the prospect states, the human agent (receiver) formulates the true state of the defensive system (a normal sensor or a decoy) as a quantum hypothesis testing problem and designs optimal prospect operator-valued measurements to minimize his weighted risk. The human agent then decides whether to access the system or not. The goal of the human agent, no matter his type, is to access sensors and avoid decoys. We thus formulate the human agent's objective as a weighted Bayesian risk that depends on the misdetection and false alarm error rates. After the generator is implemented, both the defensive system and the human agent update their intelligence about each other through Bayes' rule. The optimal behavior of the defensive deceptive system is to guide the human agent to access the true sensor while preventing access to the decoy when the type is normal, and vice versa. Correspondingly, the optimal behavior of the human agent is to access the system if he finds the defensive system likely to be normal, and vice versa. Furthermore, we adopt the concept of repeated games with incomplete information [15][16] to study the temporal evolution of the strategies of both the defender and the human agent as both parties gather more information about each other.
We formulate the decision problem for the human agent and show that, under mild assumptions, the anticipated behavior of the human attacker resembles a quantum likelihood ratio test (QLRT). In the meantime, we formulate the defender's problem of designing optimal mixed type-dependent prospect states as a mixed-integer programming problem. We characterize how defense systems can exploit the weaknesses of attackers as human agents by taking advantage of their bounded rationality. In particular, we adopt the concept of prospect probabilities [14], where the likelihood consists of two terms: a utility factor and an attraction factor [17]. The utility factor represents the probability arising from the classical signals, while the attraction factor does not arise from the actual data traffic but from the perception of the data traffic, due to quantum interference between different psychological states corresponding to the same classical signal. The attraction factor can lead the human agent towards (or away from) a certain choice.
The main contribution of this work is two-fold. First, we develop a holistic framework to capture cyber-psychology techniques, specifying how a defender can implement cyber deception by manipulating perceptions of signals to mislead an inside human attacker using his bounded rationality. Second, we illustrate and analyze the human attacker's decoy-detection performance to show how strategically designed perceptions can influence human decision-making and thus mitigate insider attacks. Our analytical and numerical results provide hope for building next-generation deception mechanisms to combat human-related insider attacks in network security.
The rest of the paper is organized as follows. In section II we formulate the human-sensor system in cyber deception as a signaling game. In section II-E we characterize the optimal behavior of the human agent and the cyber defensive system using the concept of equilibrium. In section III we extend our signaling game formulation to a dynamic scenario, studying how the efficacy of the attacks evolves through time and how the defensive system and the human agent change their strategies as they gather more intelligence about each other. In section IV we provide a numerical case study on honeypots to illustrate our proposed framework. Finally, we conclude in section V.
### _Related work:_
_Game theory for cyber deception:_ In network security, game-theoretic frameworks have been widely applied for building proactive defense, particularly defensive cyber deception [18], to enhance the security and privacy of network users. Games with incomplete information [15] provide a generic protocol to characterize the asymmetry of information induced by deception. Typical game frameworks that have been applied in network security include zero-sum games [7], Bayesian Stackelberg security games [19], and partially observable stochastic games [20]. These complete- or incomplete-information game frameworks capture and interpret the attackers' and defenders' behaviors by computing appropriate concepts of equilibrium, depending on the information structure. In this work, we adopt the framework of a signaling game to characterize the relationship between the defensive deception system and the human attacker, and introduce quantum decision theory and the concept of quantum information to exploit the cognitive biases of human attackers.
_Cyber deception through manipulating psychological states:_ There have been surging studies in formulating cyber deception techniques via psychological manipulation. Authors in [11] have experimentally verified that not only the messages from the defensive system but also the perception of those messages influence the human attacker's behavior. Authors in [21] propose an instance-based-learning (IBL) model of a human insider attacker using adaptive-control-of-thought-rational (ACT-R) [22] as the cognitive architecture, which takes into consideration features from cognitive science: forgetting, the power law of practice, partial matching, etc. The IBL model also formulates how memory retrieval dynamics lead to cognitive biases. Our proposed framework adopts quantum decision theory, a parsimonious generative model that captures other biases in the human decision-making process [12], such as the order effect and the disjunction fallacy. In addition, our work focuses on how the defender takes advantage of the human attacker's biases by designing strategic generators in decoy systems to combat insider attacks.
_Insider threat/attack mitigation designs:_ Several works have proposed guidelines for adopting honeypots in insider threat mitigation programs [23][24]. Game-theoretical frameworks have been adopted for formulating insider threats. Authors in [25] use game-theoretical frameworks to develop detection mechanisms for insider threats. Authors in [26] adopt risk measures and extend risk analysis to incorporate organizational culture. These works seek to contribute an assessable understanding of the behaviors of adversarial insiders to develop more accurate best-responding strategies against insider threats, but they ignore the human aspects that cause deviations from fully rational behavior, such as the various cognitive biases in the human decision-making process. The authors in [27] adopt the framework of mechanism design to address compliance and non-compliance for selfish and adversarial insiders. Our work adopts the concept of decoys to detect and monitor the behavior of insider attackers, deterring them from accessing normal sensors by strategically designing different perceptions of messages that take advantage of insiders' cognitive biases to influence their decision-making.
### _Notations_
Throughout the paper, we use the following notations; additional notations may be introduced in specific paragraphs later:
\(\mathcal{H}_{\mathcal{C}}\): the (Hilbert) space over all signals;

\(|s\rangle\in\mathcal{H}_{\mathcal{C}}\): a generic state associated with signal \(s\in S\);

\(\mathcal{H}_{\mathcal{M}}\): the (Hilbert) space over all states of mind;

\(|\varphi\rangle\in\mathcal{H}_{\mathcal{M}}\): a generic state of mind;

\(\mathcal{H}=\mathcal{H}_{\mathcal{C}}\otimes\mathcal{H}_{\mathcal{M}}\): the Hilbert space (over the set of real numbers \(\mathbb{R}\)) of all 'prospects'.
\(\mathcal{H}^{*}\): the dual space of \(\mathcal{H}\);
\(\mathcal{S}\): the subset of positive, Hermitian, bounded operators on \(\mathcal{H}\) whose trace is \(1\);
\(S\): the space of signals;
\(\Delta(\cdot)\): the set of probability measures over the given space;
**1**: the identity operator. Its domain and range depend on the context;
\(p_{k}\in\Delta(X)\): the common prior/common posterior belief of the true state after \(k-1\) observations have been generated;
\(X=\{0,1\}\): the state space of the system. A generic state is denoted \(x\): \(x=1\) and \(x=0\) represent that the system is a decoy and normal, respectively. We denote \(\dim(X)=M=2\).
\(a,b\in\mathbb{R}^{S\times K}\): generic perception matrices of the defender, corresponding to its true types \(1\) and \(0\), respectively.
In addition, for any operator \(A\in B(\mathcal{H})\), we denote its conjugate transpose as \(A^{\dagger}\).
## II The Formulation of the Travesty Game
### _Purpose of formulation_
Insider threats [3] have long been an important issue in cyber-security because, in contrast to external attackers, insiders are aware of the structure of the defensive system, know about its vulnerabilities, and are more likely to launch strategic attacks that destroy the system effectively. Thus, defensive deception techniques such as decoy systems have been implemented for the detection and mitigation of insider threats [24]. The
Fig. 1: The human-sensor system in a network security scheme. The defensive cyber deception system consists of a normal sensor and a decoy, each of which is cascaded with a generator. The human agent is a receiver taking manipulated network traffic associated with perception messages. The normal sensor and the decoy produce manipulated traffic data obeying different distributions. The location of the decoy is a private type of the defensive system. The receiver is also associated with three private types: user, prospect attacker, and quantum attacker. The goal of the defensive system is to lure the human attacker into accessing the decoy rather than the normal sensor. The human attacker aims at recognizing and avoiding the decoy while making use of normal sensors.
goal of designing a defensive deception mechanism is to expose the vulnerabilities of the decoys so as to attract adversaries to access honeypots/decoys, trigger alerts, and thus let the defensive system gather their information. To address the challenge, we need a novel configuration of decoy and normal sensors to develop a next-generation defensive cyber deception system that exploits human biases to attract human attackers to the decoy. Previous literature has pointed out [7] that future defensive systems in network security must also consider human factors when predicting attackers' behaviors. Human agents are subject to decision-making biases and exhibit other non-rational behavior. To this end, it is effective to introduce cyber-psychological techniques. Cyber-psychology [28] is the scientific field that integrates human behavior and decision-making into the cyber domain, allowing us to understand, anticipate, and further influence attackers' behaviors. Experimental studies [11] have shown that, by providing information to the human factor, their mental model of the cyber defensive system was influenced and their decisions changed accordingly. To our best knowledge, there is still a lack of theoretical frameworks to interpret and address how those cyber-psychological methods can work effectively to mitigate attacks.
### _The game formulation_
In this section, we propose a game-theoretical framework for cyber defensive deception systems that mitigates insider threats by adopting cyber-psychological techniques. We show that cyber-psychological techniques can achieve a better deterrence of insider threats than their classical counterparts. We consider the protocols whose scheme is depicted in Figure 1. In short, the defensive deception system (she) and the receiver (human agent, he) play a signaling game \(\mathcal{G}\). The defensive system consists of two sensors, one normal and one decoy, each of which is cascaded with a generator that produces psychological messages reflecting the perception of the manipulated data traffic. The defensive system connects one of the sensors to the human agent. The (human) receiver knows there is a decoy sensor, but the placement of the decoy is unknown and serves as the defensive system's private type \(x\in\{0,1\}\). He can only make decisions based on the classical traffic data and the perception messages associated with the data. The normal sensor produces observations obeying distribution \(g_{0}\), while the decoy produces observations obeying \(g_{1}\). Denote by \(\hat{s}\) the random variable characterizing the random observations, with \(s\) the corresponding realizations. We say that the human agent faces a hypothesis testing problem:
\[H_{1}:\hat{s}\sim g_{1}(s),\ \ H_{0}:\hat{s}\sim g_{0}(s).\]
The goal of the defense system is to strategically configure the normal sensor, the decoy sensor, and the generators to attract human attackers to access the decoy. The defensive system earns a reward when the adversarial agent accesses the decoy, since such access provides information and triggers the alert of the defensive system [11].
Meanwhile, the cyber deception system obtains observations \(y\) from the network traffic (see figure 1). Both the normal and decoy sensors produce manipulated observations \(s^{\prime}\) and pass them into the cascading generators. Based on the manipulated observations \(s^{\prime}\) and the private type \(x\), the connecting generator produces psychological signals characterized by a set of coefficients \(\{a_{sk},b_{sk}\}_{s,k}\). The distorted observations together with the psychological signals constitute the prospect states \(|\Phi_{1}\rangle,|\Phi_{0}\rangle\) in the following way:
\[|\Phi_{1}(s)\rangle=\sum_{k}a_{sk}|s\varphi_{k}\rangle,\ |\Phi_{0}(s)\rangle= \sum_{k}b_{sk}|s\varphi_{k}\rangle, \tag{1}\]
where we inherit Dirac's notations as introduced in section I. Such a quantum state can be interpreted as messages announced to change the human agent's perceptions. The generator produces stochastic messages manipulating the user's perception of messages; one example is an announcement like 'the message comes from a real sensor'. For instance, authors in [29] conducted experiments where the test takers are informed of the existence of a decoy system. The quantum-mechanical representation of messages can be referred to in [12]. Upon observing \(|\Phi\rangle\), the human agent randomly measures the prospect using one of the prospect basis vectors \(|s\varphi_{k}\rangle\) and updates his prior belief on the defender's type. His mind is a composite prospect state [14]. The human agent arrives at a decision \(\alpha=\delta(|s\varphi_{k}\rangle)\in[0,1]\), indicating the probability that the attacker thinks the hypothesis \(H_{1}\) holds true.
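To make the construction concrete, the following numpy sketch builds the amplitudes in Eq. (1) and the density operators of Eqs. (2)-(3) over a toy signal space; the sizes, distributions, and random amplitudes are illustrative assumptions, not values from the paper.

```python
# Prospect states |Phi_x(s)> = sum_k amp[s, k] |s phi_k> live in an
# (S*K)-dimensional real Hilbert space; rho_x mixes them with weights f_x(s).
import numpy as np

S, K = 4, 2
rng = np.random.default_rng(0)

def row_normalized(m):
    return m / np.linalg.norm(m, axis=1, keepdims=True)

a = row_normalized(rng.random((S, K)))   # type x = 1 (decoy) amplitudes
b = row_normalized(rng.random((S, K)))   # type x = 0 (normal) amplitudes
f1 = np.array([0.1, 0.2, 0.3, 0.4])      # decoy signal distribution
f0 = np.array([0.4, 0.3, 0.2, 0.1])      # normal signal distribution

def density_operator(f, amp):
    rho = np.zeros((S * K, S * K))
    for s in range(S):
        vec = np.zeros(S * K)
        vec[s * K:(s + 1) * K] = amp[s]  # coordinates of |Phi_x(s)>
        rho += f[s] * np.outer(vec, vec)
    return rho

rho1, rho0 = density_operator(f1, a), density_operator(f0, b)
assert np.isclose(np.trace(rho1), 1.0) and np.isclose(np.trace(rho0), 1.0)
```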
_Sensor's (defender's) problem:_ The defender strategically designs manipulated classical observations from both sensors to mislead or steer human judgment. In the meantime, the defender creates type-dependent perceptions \(a=(a_{sk})_{s,k},b=(b_{sk})_{s,k}\in\mathbb{R}^{S\times K}\) regarding every signal \(s\in S\), corresponding to the types \(x=1\) and \(x=0\), respectively. The defender earns a positive reward when the normal user accesses the normal sensor, and a negative one when the attacker accesses the normal sensor or avoids the decoy.
_Sensor's actions and strategies:_ Depending on the true type \(x\) of the deception system as well as the classical signal \(s\), a generic defender's action involves a pair of prospect states \(|\Phi_{x}(s)\rangle\in\mathcal{H}\). We may equivalently characterize the sensor's actions as the two matrices \((a_{sk},b_{sk})\), since the prospect states can be written as in (1).
If the defensive system adopts mixed strategies, we can characterize them as density operators \(\rho_{1},\rho_{0}\) as follows:
\[\rho_{1}=\sum_{s,k,k^{\prime}}f_{1}(s)a_{sk}a_{sk^{\prime}}|s\varphi_{k}\rangle\langle s\varphi_{k^{\prime}}|, \tag{2}\] \[\rho_{0}=\sum_{s,k,k^{\prime}}f_{0}(s)b_{sk}b_{sk^{\prime}}|s\varphi_{k}\rangle\langle s\varphi_{k^{\prime}}|, \tag{3}\]
where \(f_{1},f_{0}\) are probability density functions over \(S\). Another way to characterize the sensor's actions is via the utility factor
and attraction factor. Denote
\[\begin{split}\langle\Phi_{1}|P|\Phi_{1}\rangle&=\sum_{s,k,k^{\prime}}a_{sk}a_{sk^{\prime}}\langle s\varphi_{k}|P|s\varphi_{k^{\prime}}\rangle\\ &=\sum_{s,k}a_{sk}^{2}\langle s\varphi_{k}|P|s\varphi_{k}\rangle+\sum_{s,k\neq k^{\prime}}a_{sk}a_{sk^{\prime}}\langle s\varphi_{k}|P|s\varphi_{k^{\prime}}\rangle\\ &\equiv u_{1}(s)+q_{1}(s),\\ \langle\Phi_{0}|P|\Phi_{0}\rangle&=\sum_{s,k,k^{\prime}}b_{sk}b_{sk^{\prime}}\langle s\varphi_{k}|P|s\varphi_{k^{\prime}}\rangle\\ &=\sum_{s,k}b_{sk}^{2}\langle s\varphi_{k}|P|s\varphi_{k}\rangle+\sum_{s,k\neq k^{\prime}}b_{sk}b_{sk^{\prime}}\langle s\varphi_{k}|P|s\varphi_{k^{\prime}}\rangle\\ &\equiv u_{0}(s)+q_{0}(s),\end{split}\]
where \(u\) is the utility factor and \(q\) is the attraction factor of the prospect state upon the decision operator \(P\). We here define
\[u_{1}(s)=\sum_{|s\varphi_{k}\rangle\in\mathcal{R}}a_{sk}^{2}, \tag{4}\] \[q_{1}(s)=\sum_{\begin{subarray}{c}|s\varphi_{k}\rangle,|s\varphi_{k^{\prime}}\rangle\in\mathcal{R}\\ k\neq k^{\prime}\end{subarray}}a_{sk}a_{sk^{\prime}}, \tag{5}\] \[u_{0}(s)=\sum_{|s\varphi_{k}\rangle\in\mathcal{R}}b_{sk}^{2}, \tag{6}\] \[q_{0}(s)=\sum_{\begin{subarray}{c}|s\varphi_{k}\rangle,|s\varphi_{k^{\prime}}\rangle\in\mathcal{R}\\ k\neq k^{\prime}\end{subarray}}b_{sk}b_{sk^{\prime}} \tag{7}\]
According to [30], we adopt a calibration rule to construct the attraction factor \(q\) so that it is related to the utility factor \(u\) as
\[q_{j}(s)=\zeta\min\{u_{j}(s),1-u_{j}(s)\},\ j=0,1,\]
where \(\zeta\in[-1,1]\), and note that \(u_{j}(s)\in[0,1]\). Furthermore, \(u_{j}(s)=1\) only when \(|s\varphi_{k}\rangle\in\mathcal{R}\) for all \(k\in K\) for the given \(s\); the opposite holds for \(u_{j}(s)=0\), and similarly for \(u_{0}(s)\). Here we use the parameter \(\zeta\) to simplify the hyperbolic tangent function used in [17]. We introduce the following assumption:
**Assumption 1**.: _The coefficients \(a,b\in\mathbb{R}^{S\times K}\) as in (1) exist for every \(u_{1},u_{0}\in[0,1]^{S}\)._
Assumption 1 guarantees that we can construct \(a_{sk},b_{sk}\) from these equalities (of course, there may be more than one choice of \(a,b\) attaining the same utility and attraction factors). It is then equivalent to use the quantities defined in (4)-(7) to characterize the defender system's behavior.
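The sketch below illustrates Eqs. (4)-(7) together with the calibration rule for a single signal \(s\); the amplitudes and the chosen region of rejection are illustrative assumptions.

```python
# Utility factor u: probability mass of the amplitudes whose basis vectors
# fall in the region of rejection R; attraction factor q is calibrated as
# q = zeta * min(u, 1 - u).
import numpy as np

zeta = 0.25
a_s = np.array([0.8, 0.5, np.sqrt(0.11)])   # unit-norm amplitudes for one s
in_R = np.array([True, True, False])        # which |s phi_k> lie in R

u1_s = np.sum(a_s[in_R] ** 2)               # Eq. (4)
q1_s = zeta * min(u1_s, 1.0 - u1_s)         # calibrated attraction factor
print(u1_s, q1_s, u1_s + q1_s)              # <Phi_1|P|Phi_1> = u + q
```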
_Defender's utility/loss function:_ We now formulate the defender's loss function. The goal of the defender is to degrade the human attacker's performance in identifying decoys, so her objective function is the genuine detection rate introduced in (16). This is because every time the human attacker commits an error, or equivalently accesses the decoy sensor, an alert is triggered and the defensive system can gather intelligence from the human agent [18].
The defensive deception system designs the type-dependent distributions \(\rho_{1},\rho_{0}\) under each type \(x\) by minimizing the following objective \(J_{S}^{x}:X\times\Delta(M)\times\mathcal{B}(\mathcal{H})\rightarrow\mathbb{R}\):
\[J_{S}^{x}(x,\rho_{x},P_{1}^{*})=\text{Tr}(\rho_{1}P_{1}^{*}),\]
where \(P_{1}^{*}\in B(\mathcal{H})\) denotes the optimal prospect-projection-based decision policy of the human agent. Using the theory of potential games [31], the defender equivalently minimizes the following objective \(J_{D}:\Delta(M)\times\mathcal{B}(\mathcal{H})\rightarrow\mathbb{R}\):
\[\min_{\begin{subarray}{c}a,b\\ f_{1},f_{0}\end{subarray}}J_{D}(a,b,P_{1}^{*})=J_{S}^{1}(1,\rho_{1},P_{1}^{*})+J_{S}^{0}(0,\rho_{0},P_{1}^{*})\ \Leftrightarrow\ \min_{\begin{subarray}{c}a,b\\ f_{1},f_{0}\end{subarray}}\sum_{|s\varphi_{k}\rangle:\ \delta^{*}(|s\varphi_{k}\rangle)>0}\langle s\varphi_{k}|\rho_{1}|s\varphi_{k}\rangle, \tag{8}\]
where we compute the trace using the prospect basis \(\{|s\varphi_{k}\rangle\}_{s,k}\). If we adopt \(u_{1},u_{0},f_{1},f_{0}\) as the defender's type-dependent strategy, we can introduce the objective function \(F:[0,1]^{S}\times[0,1]^{S}\times L^{1}(S)\times L^{1}(S)\rightarrow\mathbb{R}\) as follows:
\[\min_{\begin{subarray}{c}u_{1},u_{0}\\ f_{1},f_{0}\end{subarray}}F(u_{1},u_{0},f_{1},f_{0})\ \Leftrightarrow\ \min_{\begin{subarray}{c}u_{1},u_{0}\\ f_{1},f_{0}\end{subarray}}\sum_{s\in\mathcal{H}_{s}}f_{1}(s)u_{1}(s), \tag{9}\]
with \(\mathcal{H}_{s}:=\{s:\exists k,\ |s\varphi_{k}\rangle\in\mathcal{R}\}\).
**Proposition 1**.: _Let \((a^{*},b^{*})\) be an optimal solution of the optimization problem (8), and let \(u_{1}^{*},u_{0}^{*}:S\rightarrow[0,1]\), \(q_{1}^{*},q_{0}^{*}:S\rightarrow[-1,1]\) be an optimal solution of the optimization problem (9). Then they satisfy the relations (4)-(7)._
The proof can be viewed in the appendix VI-A.
_The belief updates:_ Upon receiving the prospect state \(|\Phi\rangle\in\mathcal{H}\), the human agent first updates the prior belief regarding the defender's type into the posterior belief:
\[p(H_{x}|\left|s\varphi_{k}\right\rangle)=\frac{p(H_{x})\text{Tr}(P_{sk}\rho_ {x}P_{sk}^{\dagger})}{p(H_{1})\text{Tr}(P_{sk}\rho_{1}P_{sk}^{\dagger})+p(H_{ 0})\text{Tr}(P_{sk}\rho_{0}P_{sk}^{\dagger})},\ x=0,1, \tag{10}\]
where \(P_{sk}\in B(\mathcal{H})\) is the projection operator onto the prospect basis vector \(|s\varphi_{k}\rangle\), that is, \(P_{sk}=|s\varphi_{k}\rangle\langle s\varphi_{k}|\).
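As a sanity check on Eq. (10), the sketch below performs the Bayes update after observing one prospect basis vector; for a rank-one projector \(P_{sk}\), \(\text{Tr}(P_{sk}\rho_{x}P_{sk}^{\dagger})\) is just the corresponding diagonal entry of \(\rho_{x}\). The toy density operators are illustrative.

```python
# Posterior p(H_1 | |s phi_k>) from diagonal entries of rho_1, rho_0.
import numpy as np

rho1 = np.diag([0.1, 0.2, 0.3, 0.4])     # toy decoy density operator
rho0 = np.diag([0.4, 0.3, 0.2, 0.1])     # toy normal density operator

def posterior_h1(prior1, rho1, rho0, idx):
    l1, l0 = rho1[idx, idx], rho0[idx, idx]   # Tr(P_sk rho_x P_sk^dagger)
    return prior1 * l1 / (prior1 * l1 + (1 - prior1) * l0)

print(posterior_h1(0.5, rho1, rho0, idx=3))   # belief after |s phi_k>
```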
_Human's actions:_ The human agent can first estimate the defender's strategies, characterized as the mixed prospect states \(\rho_{1}^{*},\rho_{0}^{*}\) at equilibrium. He can thus construct the two density operators under each hypothesis as the psychological prospects in (2)(3). The human's action \(\alpha\in[0,1]\) characterizes the probability that the human agent thinks the traffic data come from a decoy (and therefore should not be accessed). The human agent arrives at a decision rule \(\delta:\mathcal{H}\rightarrow[0,1]\), \(\alpha=\delta(|s\varphi_{k}\rangle)\), upon receiving the prospect state \(|s\varphi_{k}\rangle\in\mathcal{H}\) from the deceptive defense system, through a measurement operator \(P\in B(\mathcal{H})\) as follows:
\[\delta(\left|s\varphi_{k}\right\rangle)=\langle s\varphi_{k}|P|s\varphi_{k}\rangle. \tag{11}\]
Equivalently, the human agent's strategy space is the space of all projective operator-valued measurements (POVMs). The human agent applies the Neyman-Pearson hypothesis testing paradigm [32]: that is, he aims at maximizing the probability of detection (correctly identifying and avoiding the decoy) while constraining the probability of false alarm (mistaking a normal sensor for a decoy). Based on \(|s\varphi_{k}\rangle\), the human attacker's empirical false alarm rate is \(p(H_{0}|\,|s\varphi_{k}\rangle)\), so we can express his strategy space \(A_{H}\) as follows.
\[A_{H}=\{\delta:\mathcal{H}\rightarrow[0,1]\ :\ \delta(|s\varphi_{k}\rangle)\,p(H_{0}|\,|s\varphi_{k}\rangle)<\beta\},\]
where \(\beta\) is the tolerance that the human agent could have regarding his false alarm rate. The posterior belief \(p(H_{j}|\left|s\varphi_{k}\right\rangle)\) is expressed in (10).
_Human agent's type-dependent utility/loss function:_
The human attacker wants to avoid decoys and access normal sensors, and he suffers from the cognitive biases characterized by quantum decision theory. The prior belief \(p\) is constructed and updated with respect to the defense system's type \(x\). We now assume that the human agent arrives at a decision based on the posterior belief \(p(H_{1}|\Phi)\): if it is too high, the human agent will choose not to access, to avoid the cost of hitting a decoy. The human's optimization problem can be expressed as
\[\max_{\delta\in A_{H}}\delta(|s\varphi_{k}\rangle)p(H_{1}|\,|s\varphi_{k}\rangle)\]
### _Game elements_
We can now summarize our discussions in the previous section and propose our novel protocol for the game in the following definition.
**Definition 1** ('Travesty game' (TG)).: _We define the 'game of travesty' as a signaling game_
\[\mathcal{G}=\langle\mathcal{I},X,A_{S},A_{H},F_{S},J_{H},p\rangle,\]
_where \(\mathcal{I}=\{\text{defender},\ \text{human attacker}\}\) represents the set of players; \(x\in X=\{0,1\}\) is the defender's type (normal or decoy); \(A_{S}=S\times\mathcal{H}\) is the defender's message space, with \(S\) the classical message space and \(\mathcal{H}\) the space of perceptual messages from the generator; \(A_{H}\subset[0,1]\) is the human agent's action space; \(F_{S}:[0,1]^{2}\times[L^{1}(S)]^{2}\times B(\mathcal{H})\rightarrow\mathbb{R}\) is the defender's objective function; \(J_{H}:S\times\mathcal{H}\times A_{H}\rightarrow\mathbb{R}\) is the human agent's type-dependent objective function; and \(p\in\Delta(X)\) is the common prior belief on the defender's private type._
### _Relation to classical signaling games_
The proposed travesty game can be considered a generalization of the hypothesis testing game of [33], with two-sided incomplete information, heterogeneous receivers, and the adoption of a quantum probabilistic model. The framework in [33] consolidates the hypothesis testing formulation into a signaling game framework [34] where one party, knowing the true hypothesis, can strategically manipulate observations to undermine detection performance. If the defender cannot design perceptions of classical messages using generators, the travesty game reduces to the hypothesis testing game of [33]. The adoption of the prospect state enhances the cyber deception design by taking advantage of human bounded rationality, providing the defender extra degrees of freedom. These degrees of freedom characterize how the human agents' perceptions of classical messages contribute to their decision-making process.
There are several scenarios where the defender's strategies reduce to their classical counterparts. Denote by \(a,b\) the matrices of coefficients of the defender in (1). When \(a=R_{1}I,b=R_{0}I\), where \(R_{0},R_{1}\) are column permutation matrices and \(I\) is the identity matrix, the 'quantum effect' vanishes, as the defender associates a unique fundamental 'mindset' with every classical signal \(s\).
### _Equilibrium Analysis_
We aim at computing the perfect Bayesian Nash equilibrium [35] (PBNE) to characterize the behaviors of the defense system and the human agents. We can define PBNE of the game \(\mathcal{G}\) as follows:
**Definition 2** (Perfect Bayesian Nash equilibrium for the game \(\mathcal{G}\)).: _We define a perfect Bayesian Nash equilibrium (PBNE) of the signaling game \(\mathcal{G}\) as a tuple \((u_{1}^{*},u_{0}^{*},\delta^{*},p)\) satisfying the following requirements:_
1. _(Human agent's sequential rationality)_ \[\delta^{*}(|s\varphi_{k}\rangle)\in\arg\min_{\delta\in A_{H}}J_{H}(|s\varphi_{ k}\rangle,u_{1}^{*},u_{0}^{*},\delta),\] (12)
2. _(Defensive system's sequential rationality)_ \[(u_{1}^{*},u_{0}^{*})\in\arg\min_{u_{1},u_{0}}F(u_{1},u_{0},f_{1},f_{0},\delta^{*}),\] (13)
3. _(Belief consistency) The belief is updated according to Bayes' rule:_ \[p(H_{j}|\,|s\varphi_{k}\rangle)=\frac{p(H_{j})\langle s\varphi_{k}|\rho_{j}^{*}|s\varphi_{k}\rangle}{\sum_{j^{\prime}=0,1}p(H_{j^{\prime}})\langle s\varphi_{k}|\rho_{j^{\prime}}^{*}|s\varphi_{k}\rangle},\ \ j=0,1.\]
We can derive the human agent's optimal decision rule as follows.
**Proposition 2**.: _Consider the travesty game \(\mathcal{G}\) of Definition 1. Let \((a_{sk}^{*},b_{sk}^{*})_{(s,k)\in S\times K}\) be the defender's coefficients of the optimal type-dependent prospect states satisfying the utility factors \(u_{1}^{*},u_{0}^{*}\) in (4)(6), which characterize the defender's strategies at equilibrium (13). Then the human attacker's optimal decision rule \(\delta^{*}:\mathcal{H}\rightarrow[0,1]\) at equilibrium, defined in (12), upon receiving the prospect state \(|s\varphi_{k}\rangle\) reduced from the superposition state \(|\Phi\rangle\), can be derived as_
\[\delta^{*}(|s\varphi_{k}\rangle)=\begin{cases}1&\frac{f_{1}(s)(a_{sk}^{*})^{2}}{f_{0}(s)(b_{sk}^{*})^{2}}>\left(\frac{1}{\beta}-1\right)\frac{p(H_{0})}{p(H_{1})},\\ 0&\text{otherwise}\end{cases} \tag{14}\]
Proof.: See the appendix.
The optimal decision rule \(\delta^{*}\) decomposes the space of prospect states \(\mathcal{H}\) into the region of rejection \(\mathcal{R}\) and the region of acceptance \(\mathcal{R}^{\perp}\) as follows:
\[\mathcal{R}=\text{span}\{|s\varphi_{k}\rangle\}_{\delta(|s\varphi_{k}\rangle)= 1},\;\mathcal{R}^{\perp}=\text{span}\{|s\varphi_{k}\rangle\}_{\delta(|s \varphi_{k}\rangle)=0}.\]
Referring to the definition of the decision rule (11), we notice that the diagonal elements of \(P_{1}\) have been specified. We now set the off-diagonal elements as
\[\langle s\varphi_{k^{\prime}}|P_{1}|s\varphi_{k}\rangle=\begin{cases}\frac{1} {N_{s}}&|s\varphi_{k}\rangle,|s\varphi_{k^{\prime}}\rangle\in\mathcal{R},\\ 0&\text{otherwise},\end{cases} \tag{15}\]
where \(N_{s}\) is the number of vectors among \(\{|s\varphi_{k}\rangle\}\) that lie in \(\mathcal{R}\).
**Proposition 3**.: _The operator \(P_{1}\) defined in (15) and (14) is a projection operator._
Proof.: It is clear that \(P_{1}\geqslant 0\) from Proposition 2. From (15) we know \(P_{1}\) is symmetric. In addition, \(P_{1}^{2}|s\varphi_{k}\rangle=P_{1}\big(\sum_{|s\varphi_{k^{\prime}}\rangle\in\mathcal{R}}\frac{1}{N_{s}}|s\varphi_{k^{\prime}}\rangle\big)=P_{1}|s\varphi_{k}\rangle\), so \(P_{1}^{2}=P_{1}\). Thus \(P_{1}\) is a projection operator [36].
**Assumption 2** (No change of classical message).: _We assume that the defensive deception system does not change the classical message. That is, \(g_{1}=f_{1},\,g_{0}=f_{0}\)._
Equipped with the human agent's optimal decision rule \(\delta^{*}\) in Proposition 2, we can simplify (13) and derive the following.
**Proposition 4**.: _Let Assumption 2 hold, and let \(\mathcal{G}\) be the signaling game in Definition 1. Let \(\delta^{*}\) be the human attacker's optimal decision rule defined in (12) upon receiving prospect states with coefficients \(a^{*},b^{*}\) defined in (1). Denoting \(\tau_{s}=\frac{p(H_{0})f_{0}(s)}{p(H_{1})f_{1}(s)}\left(\frac{1}{\beta}-1\right)\), we derive the defender's strategies \(u^{*}_{1}(s),u^{*}_{0}(s)\) at equilibrium, defined in (13), by the following cases for every \(s\in S\):_
1. _When_ \(\tau_{s}>1\)_, we pick_ \(u^{*}_{1}(s)=0\) _and thus_ \(u^{*}_{0}(s)=0\)_;_
2. _When_ \(0<\tau_{s}<1\)_, we grow the region of acceptance until_ \[1-u_{1}(s)=\tau_{s},\] _so that_ \(1-u^{*}_{0}(s)=1\)_, or equivalently_ \(u^{*}_{0}(s)=0\)_. Then_ \(u^{*}_{1}(s)=1-\tau_{s}\)_._
_The corresponding region of classical rejection can be written as \(\mathcal{R}_{s}=\{s:\,0<\tau_{s}<1\}\)._
Proof.: The proof is provided in the appendix VI-C.
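A minimal numeric sketch of Proposition 4 follows, computing \(\tau_{s}\) and the equilibrium utility factors over a toy signal space; the tolerance \(\beta\), the prior, and the distributions are illustrative, and \(\tau_{s}\) follows the reconstruction above.

```python
# Defender's equilibrium utility factors: u_1*(s) = 0 when tau_s >= 1,
# and u_1*(s) = 1 - tau_s when 0 < tau_s < 1; u_0*(s) = 0 throughout.
import numpy as np

beta, p1 = 0.6, 0.5                      # attacker tolerance, prior p(H_1)
f1 = np.array([0.1, 0.2, 0.3, 0.4])      # decoy signal distribution
f0 = np.array([0.4, 0.3, 0.2, 0.1])      # normal signal distribution

tau = (1.0 / beta - 1.0) * ((1 - p1) * f0) / (p1 * f1)
u1_star = np.where(tau >= 1.0, 0.0, 1.0 - tau)
u0_star = np.zeros_like(u1_star)
print(np.round(tau, 3), np.round(u1_star, 3))
```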
After obtaining \(u^{*}_{1}(s),u^{*}_{0}(s)\), \(s\in S\), we can reconstruct the optimal prospect states \(a^{*},b^{*}\) by solving (4)-(7). To measure the efficacy of cyber deception systems in counteracting attacks, we define the genuine detection rate and false alarm rate
\[P_{D}(\tau)=\text{Tr}(\rho^{*}_{1}P^{*}_{1}(\tau)),\,P_{F}(\tau)=\text{Tr}( \rho^{*}_{0}P^{*}_{1}(\tau)). \tag{16}\]
As a comparison, we denote the vanilla detection rate and false alarm rate of the insider attack (IA) as
\[\bar{P}_{D}(\tau)=\sum_{s:\delta^{*}(s;\tau)=1}f_{1}(s),\,\bar{P}_{F}(\tau)= \sum_{s:\delta^{*}(s;\tau)=1}f_{0}(s). \tag{17}\]
We now show that the role of the generator is to create more room for the defender to deceive the human attacker and lower his probability of identifying the decoy system.
**Remark:** We also find that when \(\tau\rightarrow\infty\) (the whole of \(S\) is the classical acceptance region) or \(\tau\to 0\) (the whole of \(S\) is the classical rejection region), the detection rate \(P_{D}(\tau)\) is close to \(\bar{P}_{D}(\tau)\). That is, the quantum effect in decision-making vanishes when the prospect probability is close to 1 or 0, which is consistent with the discussion in Vincent's work [30] on quantum prospect theory.
### _Some metrics evaluating the quantum advantage/disadvantage_
Quantum advantage and quantum disadvantage. We define the following metric of quantum advantage.
**Definition 3** (Quantum advantage/disadvantage).: _We define the quantum advantage as_
\[QA(\tau)=P_{D}(\tau)/\bar{P}_{D}(\tau),\]
_where \(P_{D}:\mathbb{R}\rightarrow\mathbb{R}\) is the detection rate for the human attacker under manipulation defined in (16) and \(\bar{P}_{D}:\mathbb{R}\rightarrow\mathbb{R}\) is the counterpart for a non-adversarial human attacker without bounded rationality defined in (17)._
The quantum advantage (QA) is a crucial evaluation of the effect of introducing the generator in the defender system. It depends on the threshold \(\tau\) as well as on the calibration parameter \(\zeta\). It measures the impact of the manipulation of mind states upon the human attacker's performance in detecting decoys. We say that the human attacker gains a quantum advantage in identifying decoys if \(QA(\tau)>1\) and suffers a quantum disadvantage if \(QA(\tau)<1\).
**Proposition 5**.: _Let \(QA\) be the quantum advantage in Definition 3. Then for all choices of \(\tau>0\) and all choices of \(f_{1},f_{0}\in L^{1}(S)\), we have \(P_{D}(\tau)\leqslant\bar{P}_{D}(\tau)\) and_
\[0\leqslant QA(\tau)\leqslant 1+\zeta.\]
Proof.: See Appendix VI-D.
## III Dynamic scenario
We now extend \(\mathcal{G}\) into a multi-stage game \(\mathcal{G}^{N}\) with finite horizon \(N\). For each stage \(k\in[N]\), the sensor system generates manipulated observations and the human agent launches access to one of the sensors. After both the defense system and the human agent take actions, a cost/reward is incurred and the system belief on the defender's true type is updated. We assume that the defender never changes his type during the game. Therefore, the defender system exposes more about his type (normal sensor or decoy) as he produces more messages. We introduce the concept of the history of the actions taken by both the sensor and the human agent as follows.
**Definition 4** (History of action profiles).: _We define the history of action profiles up to stage \(j\), denoted as \(h^{(j)}\in\mathcal{H}^{\otimes j}\times[0,1]^{\otimes j},\,j\in[N]\), as follows:_
\[h^{(j)}=(\psi^{(j)},\delta^{(j)}),\]
_where \(\psi^{(j)}=(\psi_{1},\ldots,\psi_{j})\in\mathcal{H}^{\otimes j}\) is a generic history of base vectors from the prospect states up to stage \(j\) and \(\delta^{(j)}(\psi^{(j)})\in[0,1]^{\otimes j}=A^{\otimes j}_{H}\) refers to the history of the human agent's actions up to stage \(j\)._
In general, at the beginning of every stage \(j\), the defender's mixed strategy and the human agent's optimal decision rule should depend on the history \(h^{(j-1)}\). Here we denote by \(\psi\in\{|s\varphi_{k}\rangle\}_{s,k}\) a generic base vector in the prospect state basis. We assume the following:
**Assumption 3** (Action-independent assumption).: _At every stage \(j\in[N]\), the human attacker's optimal decision rule \(\delta^{(j)*}(\cdot|h^{(j)})\in\bar{\Gamma}^{(j)}\), the defender's optimal mixed strategies of generating manipulated messages \(u^{(j)*}_{1}(\cdot|h^{(j)}),u^{(j)*}_{0}(\cdot|h^{(j)})\in[0,1]^{S}\) and the posterior belief \(p(\cdot|h^{(j)})\) depend only on the history of prospect states. Specifically, we have for \(k\in\{0,1\}\),_
\[\bar{\delta}^{(j)*}(\psi_{j}|h^{(j)}) =\bar{\delta}^{(j)*}(\psi_{j}|\psi^{(j)}), \tag{18}\] \[u^{(j)*}_{k}(\cdot|\,h^{(j)}) =u^{(j)*}_{k}(\cdot|\,\psi^{(j)}),\] (19) \[p(H_{k}|h^{(j)}) =p(H_{k}|\,\psi^{(j)}). \tag{20}\]
The three assumptions (18)(19)(20) imply that the only useful information accumulated throughout the stages is the history of prospect states. The multi-stage game \(\mathcal{G}^{N}\) based on the base game \(\mathcal{G}\) is played as follows: before stage 1, the defender observes from Nature his type (\(H_{0}\) or \(H_{1}\)); at the beginning of stage \(j\in[N]\), the sensor observes the sender's message and sends a prospect state \(|\Phi\rangle\in\mathcal{H}\) according to his mixed strategies \(\rho_{1}^{(j)},\rho_{0}^{(j)}\in B(\mathcal{H})\) to the human agent, who makes a decision regarding the defender's type based on the current prospect state and the history of prospect states \(\psi^{(j)}\in\mathcal{H}^{\otimes j}\). We are now ready to define the hypothesis testing problem for the human agent at stage \(j\in[N]\) as
\[\max_{\delta^{(j)}\in\bar{\Gamma}^{(j)}} \delta^{(j)}(\psi_{j})p(H_{1}|\;\psi^{(j)}),\] (21) s.t. \[\delta^{(j)}(\psi_{j})p(H_{0}|\psi^{(j)})<\beta^{(j)}.\]
We still inherit the substitutions and characterize the defender's strategies at stage \(j\) as the pairs \(u_{1}^{j},u_{0}^{j}\in\mathbb{R}^{S}\). Then we can equivalently express the defender's problem at stage \(j\) upon knowing \(\delta^{(j)*}\) as follows:
\[\max_{u_{1}^{j},u_{0}^{j}\in\mathbb{R}^{S}}\;\sum_{s\in R_{1}^{j}}f_{1}(s)u_{1}^{j}(s) \tag{22}\]
with \(R_{1}^{j}=\{s:f_{1}(s)u_{1}^{j}(s)>\tau f_{0}(s)u_{0}^{j}(s)\}\). We now derive the sequential perfect Bayesian Nash equilibrium (s-PBNE) by applying the one-shot deviation principle [35] to (22) and (21).
**Proposition 6**.: _Let \(\mathcal{G}^{N}\) be the multistage game of finite horizon \(N\). Let Assumption 2 hold. The samples of signals generated during the \(j\) stages are denoted as \(\{s_{t}\}_{t\leq j}\). Then we derive the sequential perfect Bayesian Nash equilibrium as the following tuple \(\langle u_{1}^{j*},u_{0}^{j*},\delta^{(j)*},p\rangle\):_
\[u_{1}^{j*}(s)=\begin{cases}0&\tau_{s}^{(j)}>1,\\ 1-\tau_{s}^{(j)}&\text{otherwise}.\end{cases} \tag{23}\] \[u_{0}^{j*}(s)=\begin{cases}0&\tau_{s}^{(j)}>1,\\ 1&\text{otherwise}.\end{cases} \tag{24}\] \[\delta^{(j)*}(\psi_{j}|\;h^{(j-1)})=\begin{cases}1&\prod_{t\leq j-1}\frac{f_{1}(s_{t})(a_{0}^{(j)})^{2}}{f_{0}(s_{t})(b_{0}^{(j)})^{2}}>\left(\frac{1}{\beta^{(j)}}-1\right)\frac{p(H_{0})}{p(H_{1})},\\ 0&\text{otherwise}\end{cases}\]
Proof.: We can derive the equilibrium by backward induction [15], solving the optimization problems (21) and (22) at every stage \(j\in[N]\).
The equilibrium results in Proposition 6 imply how the defender should configure the prospect states produced by the generator based on the human attacker's action history and, similarly, how the human attacker adapts her optimal decision threshold based on the history of classical signals received.
## IV Case Study: honeypot detection
In this section, we apply the proposed cyber deception scheme discussed in Section II to implement cyber-psychological techniques to build next-generation honeypots [24] to mitigate insider human attacks. A honeypot is a monitored and regulated decoy disguised as a valuable asset to attract attackers to compromise, so as to detect and deflect cyber attacks in networks and to gather information about them. According to [37], honeypots can help enhance system security in the following ways: to begin with, honeypots squander the attacker's resources without giving away valuable information in return; also, honeypots serve as intrusion detection nodes, providing warnings for system administrators; last but not least, once compromised, honeypots provide useful information for network administrators to analyze the attacker. However, honeypots can also be identified by proactive attackers and become ineffective, especially when they are at fixed locations and isolated from the network system. Attackers can adopt proactive detection techniques, such as those in [33], to identify honeypots more accurately and further implement anti-honeypot techniques [38]. Here, inspired by the experiments introduced in [11], we undermine the attacker's performance in identifying honeypots using cyber-psychological techniques. Specifically, we adopt generators to produce verbal messages that change the attacker's perception of the type of the sensors from which they receive traffic data.
### _The dataset_
To simulate normal traffic and honeypot-related traffic, we select a portion of the KDD cup 1999 dataset [39], which was generated partially for the 1998 DARPA intrusion detection program. The raw dataset is binary, containing five million connection records from 7 weeks of network traffic. There are in total \(N=494021\) connection records in our selected dataset, of which 396093 come from honeypot-related traffic. We assume that all attack traffic is attracted by honeypots and all normal traffic is collected by normal sensors, since regular users have no reason to access honeypots. Thus we can estimate a prior belief regarding the type of sensors as \(p(H_{1})\approx 0.802,\;p(H_{0})\approx 0.198\).
The log-in attempt signal \(s\) is a feature that obeys a Bernoulli distribution: \(s=1\) means that the log-in attempt is successful and \(s=0\) means that it is a failure, so that \(\theta_{k}\) below is the success probability under hypothesis \(H_{k}\), consistently with the estimates that follow. Honeypots and normal sensors respond success/failure to the attacker's log-in attempts with different parameters \(\theta_{1},\theta_{0}\), i.e.
\[g_{1}(1) =\theta_{1},\;g_{1}(0)=1-\theta_{1}, \tag{25}\] \[g_{0}(1) =\theta_{0},\;g_{0}(0)=1-\theta_{0}.\]
In our selected dataset, there are 69939 out of 97928 successful log-in attempts in normal traffic, while there are 3298 out of 396093 successful log-in attempts in honeypot traffic. Thus we select \(\theta_{0}\approx 0.719,\theta_{1}\approx 0.008\). We adopt the scheme in Figure 1 to implement psychological techniques to mislead attackers by creating perceptions of messages based on the traffic data. For every message \(s\), the generator produces verbal messages leading the human attackers to generate perceptions regarding the type of sensor.
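For concreteness, the estimates quoted above follow directly from the connection counts; the short sketch below (plain Python, counts copied from the text) recovers the priors and, up to the rounding of the quoted values, the Bernoulli parameters.

```python
# Estimates of the priors and Bernoulli parameters from the counts above.
n_honeypot, n_normal = 396093, 97928
n_total = n_honeypot + n_normal                 # 494021 records in total
p_H1, p_H0 = n_honeypot / n_total, n_normal / n_total

theta_1 = 3298 / 396093    # success rate observed in honeypot traffic
theta_0 = 69939 / 97928    # success rate observed in normal traffic
print(f"p(H1)={p_H1:.3f}, p(H0)={p_H0:.3f}, theta1={theta_1:.3f}, theta0={theta_0:.3f}")
```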
We denote the number of base states in the mind space by \(K\), indicating how likely the human attacker thinks the defender is a honeypot: upon receiving the classical signal \(s\), the state \(|s\varphi_{1}\rangle\) indicates that the human thinks the defender is most likely a honeypot, while the state \(|s\varphi_{K}\rangle\) represents that the human thinks the defender is least likely a honeypot. If the generator wants to keep the human agent uninformed, as described in [11], he can produce a prospect state as a uniform superposition of all the basis vectors as follows:
\[|\Phi(s)\rangle=\frac{1}{\sqrt{K}}\sum_{j=1}^{K}|s\varphi_{j}\rangle.\]
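As a minimal illustration, this uninformative prospect state can be represented by a real coefficient vector (an assumption made for the sketch; the paper treats states abstractly in \(\mathcal{H}\)):

```python
import numpy as np

def uniform_prospect_state(K):
    """Uniform superposition over the K mind-space basis vectors, as above."""
    return np.full(K, 1.0 / np.sqrt(K))

phi = uniform_prospect_state(4)
assert np.isclose(phi @ phi, 1.0)  # the state is normalized
```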
For every realization of the message \(s\), the generator designs optimal type-dependent prospect state coefficients \(a^{*},b^{*}\in\mathbb{R}^{S\times K}\) in (4)-(7) via the PBNE in Proposition 4, indicating the perception of the likelihood of a honeypot that the defender imposes on the human agent upon delivering the message \(s\). We also analyze the optimal decision rules of human agents under the verbal messages of generators at equilibrium.
### _Numerical Results_
We select the parameters \(\beta=0.4,\zeta=0.2\) and the number of base states in the mind space \(K=4\). In Figure 2, we plot the cyber defender's optimal strategies \(u_{0}^{*},u_{1}^{*}\) at equilibrium in terms of various choices of \(\beta\). We observe that in the classical rejection region (i.e. the space of signals that causes a human attacker to identify that the sensor is a decoy), the generator in the defender system produces perceptions leading to 'reject' only with a certain probability. In Figure 5 we plot the defender's strategies at equilibrium in terms of the coefficients \(a,b\) of the prospect states produced by the generators. The coefficients suggest an optimal way of mixing different weights of psychological minds regarding every classical signal \(s\). We observe that when \(\beta\) becomes close to 1, the defender's equilibrium strategies are close to \(u_{1}(0)=1,u_{0}(0)=1\). On the other hand, if \(\beta\) is close to 0, the defender's strategies converge to \(u_{1}=0,u_{0}=0\), corresponding to the upper right and lower left corners of the ROC curves (described later) characterizing the detection performance.
In Figure 3, we plot the receiver-operating-characteristic (ROC) curve. We observe that, depending on the calibration parameter \(\zeta\), the human agent's detection performances vary, but in general they are all worse than the fully rational counterpart. In particular, higher \(\zeta\) leads to better detection performance, as with higher \(\zeta\) the quantum interference strengthens the probability of correct identification when the decoy sensor is connected to the human agent.
### _Multi-stage analysis_
To better understand the dynamic scenario, in Figure 4 we plot the evolution through time of the defender's optimal strategies \(\{u_{1}^{j*},u_{0}^{j*}\}_{j\in[N]}\) (that is, the optimal type-dependent utility factors) at equilibrium, as introduced in (23)(24). We select the time horizon \(N=30\) and fix the prior beliefs \(p(H_{1}),p(H_{0})\). We observe that the defender's stage equilibrium strategies converge to a pooling strategy \(u_{0}=u_{1}=1\), suggesting that the defender can make the prospect states totally uninformative to the human agent by designing false perceptions upon the signals.
## V Conclusion
In this work, we have proposed the game of travesty (TG) to design a novel defensive deception to combat the proactive detection of decoys by insider human attackers. The defensive deception system is a signaling game where the defender consists of a sensor or a decoy cascaded with a generator, which converts classical signals into prospect states to manipulate the human attacker's perception of messages. We have analyzed the behaviors of the insider human attacker as well as the defender by computing the perfect Bayesian Nash equilibrium. Furthermore, we have analyzed the human attacker's performance in detecting decoys at equilibrium and compared it with the one without manipulation of the perception of classical signals. We have illustrated via ROC curves that the insider human attacker performs worse than one with full rationality, giving the defender more room to evade detection when she implements decoys in the network.
Fig. 3: ROC curves of the human agent’s detection performance for different values of \(\zeta\). We choose the distributions of classical signals under each hypothesis as \(g_{1},g_{0}\) given in (25).
Fig. 2: The defender’s optimal strategies \(u_{1}^{*}\) (upper figure) and \(u_{0}^{*}\) (lower figure) at PBNE in \(\mathcal{G}\) under different choices of \(\beta\). We set the calibration parameter \(\zeta=0.2\) and the tolerance \(\beta=0.4\). The classical signal obeys the distribution in (25) with a support of size 2. The dimension of mind states is \(K=4\). |
2309.03604 | Estimating the Coverage Measure and the Area Explored by a Line-Sweep
Sensor on the Plane | This paper presents a method for determining the area explored by a
line-sweep sensor during an area-covering mission in a two-dimensional plane.
Accurate knowledge of the explored area is crucial for various applications in
robotics, such as mapping, surveillance, and coverage optimization. The
proposed method leverages the concept of coverage measure of the environment
and its relation to the topological degree in the plane, to estimate the extent
of the explored region. In addition, we extend the approach to uncertain
coverage measure values using interval analysis. This last contribution allows
for a guaranteed characterization of the explored area, essential considering
the often critical character of area-covering missions. Finally, this paper
also proposes a novel algorithm for computing the topological degree in the
2-dimensional plane, for all the points inside an area of interest, which
differs from existing solutions that compute the topological degree for single
points. The applicability of the method is evaluated through a real-world
experiment. | Maria Costa Vianna, Eric Goubault, Luc Jaulin, Sylvie Putot | 2023-09-07T09:57:26Z | http://arxiv.org/abs/2309.03604v1 | # Estimating the Coverage Measure and the Area Explored by a Line-Sweep Sensor on the Plane
###### Abstract
This paper presents a method for determining the area explored by a line-sweep sensor during an area-covering mission in a two-dimensional plane. Accurate knowledge of the explored area is crucial for various applications in robotics, such as mapping, surveillance, and coverage optimization. The proposed method leverages the concept of coverage measure of the environment and its relation to the topological degree in the plane, to estimate the extent of the explored region. In addition, we extend the approach to uncertain coverage measure values using interval analysis. This last contribution allows for a guaranteed characterization of the explored area, essential considering the often critical character of area-covering missions. Finally, this paper also proposes a novel algorithm for computing the topological degree in the 2-dimensional plane, for all the points inside an area of interest, which differs from existing solutions that compute the topological degree for single points. The applicability of the method is evaluated through a real-world experiment.
Plane exploration; topological degree; robotics; interval analysis.
## I Introduction
Mobile robots are increasingly being used to carry out dangerous tasks that otherwise would put human lives at risk, such as bomb disposal, firefighting, and search and rescue missions. Their use in these situations can considerably reduce the risk to human workers while providing more detailed and accurate information about the situation. Additionally, mobile robots can be equipped with specialized tools, such as cameras, grippers, and cutting devices, that enable them to perform a wide range of tasks that would be difficult or impossible for humans to do. In the context of these operations, the robotic platform often needs to perform an area-covering mission. During these missions, a designated part of the robot's environment is thoroughly searched or monitored to develop a complete understanding of the situation or identify potential threats or opportunities.
Determining the area explored by a mobile robot during an area-covering mission is important to establish if the mission is successful. It is also essential for validating path-planning algorithms that will lead to complete coverage of an area of interest [1] or complete avoidance of an area of risk. Overall, determining the explored area is essential for ensuring efficient and safe operations, planning future actions, and gaining valuable insights from the acquired data.
In addition, we are also interested in determining the coverage measure of a point in the environment. The coverage measure represents how many times this point was covered by the robot's sensors or tools, in other words, how many times it was explored.
Counting the number of times an area was explored is of interest for different reasons, for example, when assessing revisiting missions. In these missions the robot is required to come back to a previous point, therefore to revisit it, to improve the quality of information collected around this point through redundancy. Indeed, studies have shown that target classification improves dramatically when a multi-view approach is adopted. Usually, single-view approaches do not provide enough information to make a confident identification with, for example, Synthetic Aperture Sonars (SAS) [2] and Synthetic Aperture Radars [3]. A multi-view method is also essential when recognizing or reconstructing 3-dimensional objects from 2-dimensional data such as camera images [4]. In these examples, counting how many times a point or an area, as a set of points, has already been explored will be essential to determine the mission completeness. On the contrary, if the robot is not supposed to cover areas previously visited, the coverage measure will be useful for planning optimal paths, reducing unnecessary effort.
In this context, in this work, we present a technique for quantifying the extent of coverage achieved by a mobile robot during a sweep exploration in a two-dimensional environment. Sweep exploration refers to missions where the robot uses a line-sweep sensor. Line-sweep sensors are one-dimensional sensors that provide data along a single axis and must sweep the environment in order to create a two-dimensional representation of its surroundings. With this purpose, we establish a relation between the exploration problem and the topological degree and we demonstrate how it can be used to determine the coverage measure.
Topological concepts have already been explored for counting [5] and for addressing coverage problems in robotics contexts, e.g. [6, 7]. The main advantage of the approach presented in this paper is that we determine the number of times an area was explored, via the coverage measure, and, unlike more common approaches such as grid-based analysis, our topological method does not require a prior discretization of the environment into fixed cells. We demonstrate that the whole environment can be characterized from very basic information on the robot's state and on the range of visibility of the exploration sensors, resulting in
a method of low computational complexity. This approach has already been explored in [8], but here we deepen its mathematical definition and extend it to address previous limitations, such as the coverage measure of points on the maximal range of visibility and of points that are swept in the opposite direction of movement.
We also address the crucial issue of uncertainty in a robot's trajectory to achieve a guaranteed estimation of the explored area. In [9], a method to estimate the explored area considering the uncertain state of a robot was presented. We extend their method by introducing the concept of uncertain coverage measure.
Our last contribution is an algorithm for computing the winding number of a continuous cycle with respect to all the point in the two-dimensional plane. Algorithms for general topological degree computation have already been proposed by different works [10, 11]. However, methods available in the literature will compute the winding number of a cycle with respect to a single point, needing to be applied to each point individually for a full characterization of the plane. In this context, we present a set-membership approach that efficiently determines the winding number for a whole area of interest. The resulting algorithm and all the concepts defined in this work are applied to determine the area explored by a real autonomous underwater vehicle doing an exploration mission with two line-sweep sensors.
## II Problem Statement
We are interested in the problem of a mobile robot that explores an unknown planar environment. We assume that the robot's pose can be fully described by a function of time: \(\boldsymbol{x}:\mathbb{R}\rightarrow\mathbb{R}^{3}\) that is at least \(C^{2}\). The robot's visible area at time \(t\) is a subset \(\mathbb{V}(t)\subset\mathbb{R}^{2}\) of the environment that is sensed by the robot's embedded exteroceptive sensors.
We define \(\mathbb{V}\) as a set-valued function that depends on the robot's pose and the geometry and technology of the sensors employed. In this work, we focus on the problem of line-sweep exploration sensors and we treat the example of one that sweeps the environment on the robot's left side as it moves around the plane, Figure 1. In this context, the robot's pose at instant \(t\) can be represented by the vector
\[\boldsymbol{x}(t)=\begin{pmatrix}x(t)&y(t)&\psi(t)\end{pmatrix}^{T}\]
where the pair \((x,y)\) represents the robot's position in the plane and \(\psi\) its orientation. Let \(L\in\mathbb{R}^{+}\) be the sensor's visible range, the visible set in this configuration can be defined as
\[\mathbb{V}(t)=\{\boldsymbol{p}\in\mathbb{R}^{2}|p_{rx}=0\text{ and }0\leq p_{ry}\leq L\} \tag{1}\]
where
\[\boldsymbol{p}_{r}=\begin{pmatrix}p_{rx}&p_{ry}\end{pmatrix}^{T}=R^{-1}(\psi(t ))(\boldsymbol{p}-\begin{pmatrix}x&y\end{pmatrix}^{T}) \tag{2}\]
represents in the robot's coordinate frame a point \(\boldsymbol{p}\) in the environment and \(R(\psi(t))\) is the rotation matrix associated with the robot's orientation angle \(\psi(t)\).
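For illustration, the membership test of (1)-(2) can be coded directly; the sketch below assumes the left-side sensor of Figure 1 and uses the fact that the inverse of a rotation matrix is its transpose. The tolerance is an added assumption, since (1) requires \(p_{rx}=0\) exactly.

```python
import numpy as np

def is_visible(p, x, y, psi, L, tol=1e-9):
    """Test whether world point p belongs to the visible set (1): express p in
    the robot frame via (2) and check it lies on the left-side sensor segment."""
    R = np.array([[np.cos(psi), -np.sin(psi)],
                  [np.sin(psi),  np.cos(psi)]])
    p_r = R.T @ (np.asarray(p) - np.array([x, y]))  # R^{-1} = R^T for rotations
    return abs(p_r[0]) <= tol and -tol <= p_r[1] <= L + tol

# Robot at the origin heading along +X senses points on its left (+Y) side.
print(is_visible((0.0, 2.0), 0.0, 0.0, 0.0, L=5.0))  # True
```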
The set \(\mathbb{A}_{\mathbb{E}}\) corresponds to the area explored by the robot during a time interval \([0,T]\), for some maximal value \(T>0\). It can be defined as the union of the robot's visible area along its trajectory
\[\mathbb{A}_{\mathbb{E}}=\bigcup_{t\in[0,T]}\mathbb{V}(t) \tag{3}\]
Figure 2 shows the resultant \(\mathbb{A}_{\mathbb{E}}\) if we consider the illustrated robot's trajectory and the visible set function described by (1).
The robot's visibility region in this case can be parameterized by \(u\in U\subseteq\mathbb{R}\). In the considered example, \(U=[0,L]\) represents the lateral distance of a point in the visible area to the robot. We can define the sweep function \(\boldsymbol{f}:U\times[0,T]\rightarrow\mathbb{R}^{2}\) as a continuously differentiable function whose image of \(U\times\{t\}\), with \(t\in[0,T]\), represents the visible area \(\mathbb{V}(t)\),
\[\mathbb{V}(t)=\boldsymbol{f}(U,t) \tag{4}\]
By analogy to a common terminology adopted in sonar imagery [12], we name space \(W=U\times[0,T]\) the Waterfall
Fig. 1: (a): Mobile robot with a line sweep exploration sensor on the plane. At instant \(t\) the point \(\boldsymbol{p}\) is sensed by the robot ; (b): The point \(\boldsymbol{p}_{r}\) is the representation of point \(\boldsymbol{p}\) in the robot’s coordinate frame \(X_{r}Y_{r}\).
Fig. 2: Area explored by a line-sweep sensor on the robot’s left side along its trajectory.
Space. Points in \(W\) are of the form \((u,t)\), \(u\) representing the parameterization of the visible area, \(t\) the time of exploration. All points \((u,t)\in W\) are points that were in the robot's visible area at least once and therefore, points that were explored during the mission. The robot's pose \(\mathbf{x}\), its visible area \(\mathbb{V}\) and \(\mathbb{A}_{\mathbb{E}}\) are all defined inside an absolute coordinate system, the Mosaic Space \(M\subseteq\mathbb{R}^{2}\) or the World Frame, as it is usually called in robotics. The sweep function \(\mathbf{f}\) maps points from the Waterfall to the Mosaic space, Figure 3.
The coverage measure, or how many times a point in the environment was explored by the robot during a mission, is given by the function \(c_{m}:M\rightarrow\mathbb{N}_{0}\). A point is considered to be revisited if once in the robot's visibility range, it goes out of reach and then is sensed again later in time. In Figure 4, for example, point \(\mathbf{p}\) is sensed for the first time at instant \(t_{1}\) and revisited at instant \(t_{2}\), in this case, \(c_{m}(\mathbf{p})=2\).
Let \(det\) be the determinant function and \(J_{\mathbf{f}}\) represents the Jacobian matrix of the sweep function. We adopt the following condition:
\[\forall\mathbf{w}\in W,det(J_{\mathbf{f}}(\mathbf{w}))>0 \tag{5}\]
that implies that the robot is constantly moving and that the sensor sweeps the environment in the same direction as its advancement movement. Assuming this condition is met, the number of times that a point appears in the waterfall space corresponds to the number of times that this point was explored during a mission. If \(Ker\)\(\mathbf{f}\) is the kernel of the function \(\mathbf{f}\), considering the definitions stated in this Section, for \(\mathbf{p}\in M\) it can be concluded that
\[c_{m}(\mathbf{p})=\#Ker\ (\mathbf{f}-\mathbf{p}) \tag{6}\]
The explored area \(\mathbb{A}_{\mathbb{E}}\) can be characterized as the set of points that were sensed by the robot at least once and therefore in terms of the coverage measure of its points:
\[\mathbb{A}_{\mathbb{E}}=\{\mathbf{p}\in M|c_{m}(\mathbf{p})\geq 1\} \tag{7}\]
Describing the mosaic space using the coverage measure of its points is the method adopted in this work for defining the explored area. To achieve this, the following section establishes a connection between the topological degree and the coverage measure and this relation is explored with this purpose.
## III Coverage Measure and Topological Degree
In [8] a relation between the coverage measure of a point in the plane and the topological degree has been explored. Here we give a general axiomatic definition of the notion of topological degree and recap the main properties that we use.
**Definition 1** (Topological degree).: _Let \(D\) be an open subset of \(\mathbb{R}^{n}\) and \(\mathbf{f}\) a continuous function from its closure \(\overline{D}\) to \(\mathbb{R}^{n}\). A degree of \(\mathbf{f}\) is a family of functions \(deg:\ (\mathbf{f},D,\mathbf{p})\rightarrow\mathbb{Z}\) for all \(D\) open subsets of \(\mathbb{R}^{n}\), \(\mathbf{f}\) continuous and \(\mathbf{p}\in\mathbb{R}^{n}\backslash\mathbf{f}(\partial D)\) such that:_
* _(identity)_ \(deg(Id_{D},D,\mathbf{p})=1\) _if_ \(\mathbf{p}\in D\)__
* _(excision)_ \(deg(\mathbf{f},D,\mathbf{p})=deg(\mathbf{f},D_{1},\mathbf{p})+deg(\mathbf{f},D_{2},\mathbf{p})\) _where_ \(D_{1}\)_,_ \(D_{2}\) _are disjoint opens in_ \(D\) _with_ \(\mathbf{p}\not\in\mathbf{f}(\overline{D}\backslash(D_{1}\cup D_{2}))\)__
* _(homotopy invariance)_ \(deg(\mathbf{h}(\alpha,.),D,\mathbf{p}(\alpha))\) _is independent of_ \(\alpha\) _for any homotopy_ \(\mathbf{h}:\ [0,1]\times\overline{D}\rightarrow\mathbb{R}^{n}\)_, and_ \(\mathbf{p}(\alpha)\not\in\mathbf{h}(\alpha,\partial D)\) _for all_ \(\alpha\in[0,1]\)_._
When such a family of functions exists, it is known to be unique [13]. In particular, when \(\mathbf{f}\) is at least continuously differentiable, and \(\mathbf{p}\) is a regular value of \(\mathbf{f}\) (i.e. the determinant of the Jacobian of \(\mathbf{f}\), \(det(J_{\mathbf{f}})\), is non zero on each \(\mathbf{d}\) with \(\mathbf{f}(\mathbf{d})=\mathbf{p}\)):
\[deg(\mathbf{f},D,\mathbf{p})=\sum_{\mathbf{d}\in\mathbf{f}^{-1}(\mathbf{p})}sign(det(J_{\mathbf{f}}( \mathbf{d}))) \tag{8}\]
As well known in complex analysis, the topological degree of differentiable functions from the unit ball \(D^{2}\) in \(\mathbb{R}^{2}\) to \(\mathbb{R}^{2}\) is linked to the winding number of \(\mathbf{f}(\partial D^{2})\). We are going to take the homological view on winding numbers in this paper. Let \(S^{1}=\partial D^{2}\) be the 1-sphere and \(\mathbf{p}\) a point in the interior of the image by \(\mathbf{f}\) of \(D^{2}\). Function \(\mathbf{f}\) maps \(S^{1}\) onto a cycle in \(\mathbb{R}^{2}\), and the winding number is the number of times this cycle turns around \(\mathbf{p}\). By convention, counterclockwise turns count positively and clockwise turns negatively.
**Definition 2** (Winding number).: _Let \(\mathbf{f}:\ D^{2}\rightarrow\mathbb{R}^{2}\) be a continuous function and \(\mathbf{p}\in\mathbf{f}(D^{2})\backslash\mathbf{f}(S^{1})\). Consider its restriction \(\mathbf{f}_{|S^{1}}:\ S^{1}\rightarrow\mathbb{R}^{2}\backslash\{\mathbf{p}\}\). It induces a linear map in homology:_
\[\tilde{\mathbf{f}}:\ H_{1}(S^{1})\to H_{1}(\mathbb{R}^{2}\backslash\{\mathbf{p}\})\]
_i.e. from \(\mathbb{Z}\) to \(\mathbb{Z}\), i.e. is of the form \(\tilde{\mathbf{f}}(C)=\eta C\), where \(C\) represents an equivalence class in \(H_{1}(S^{1})\). This \(\eta\) is called the winding number of \(\gamma=\mathbf{f}(S^{1})\) around point \(\mathbf{p}\in\mathbf{f}(D^{2})\backslash\mathbf{f}(S^{1})\). For all other points in \(\mathbb{R}^{2}\backslash\partial D^{2}\) the winding number is set to zero._
Fig. 3: Waterfall and Mosaic Spaces for the line-sweep sensor example.
We can now state the relation between the topological degree and the winding number:
**Lemma 1**.: _Let \(\mathbf{f}\) be a continuously differentiable map from \(D^{2}\) to \(\mathbb{R}^{2}\) and let \(\mathbf{y}\in\mathbb{R}^{2}\backslash\mathbf{f}(\partial D^{2})\) such that \(\mathbf{f}^{-1}(\mathbf{y})\) is finite and \(\mathbf{y}\) is a regular point for \(\mathbf{f}\). Then \(deg(\mathbf{f},D^{2},\mathbf{y})\) is equal to the winding number \(\eta(\mathbf{f}(\partial D^{2}),\mathbf{y})\) of \(\mathbf{f}(\partial D^{2})\) at \(\mathbf{y}\)._
Proof.: For all \(\mathbf{y}\in\mathbb{R}^{2}\backslash\mathbf{f}(\partial D^{2})\), either there exists no \(\mathbf{d}\) such that \(\mathbf{y}=f(\mathbf{d})\), or there exists a finite, non-zero number of \(\mathbf{d}\), \(\mathbf{d}_{1},\ldots,\mathbf{d}_{m}\) in \(D^{2}\), such that \(\mathbf{f}(\mathbf{d}_{i})=\mathbf{y}\).
In the first case, both \(deg(\mathbf{f},D^{2},\mathbf{y})\) is zero and \(\mathbf{y}\) is in the complement of \(\mathbf{f}(D^{2})\), so the winding number \(\eta(\mathbf{f}(\partial D^{2}),\mathbf{y})\) is also zero.
In the second case, \(\mathbf{y}\) being regular for \(\mathbf{f}\), we have \(deg(\mathbf{f},D,\mathbf{y})=\sum\limits_{i=1}^{m}sign(det(J_{\mathbf{f}}(\mathbf{d}_{i})))\). Take small enough open neighborhoods \(U_{i}\) of \(\mathbf{d}_{i}\) in \(D\) such that the sign of \(det(J_{\mathbf{f}}(\mathbf{d}))\) is the same as the sign of \(det(J_{\mathbf{f}}(\mathbf{d}_{i}))\) for all \(\mathbf{d}\in U_{i}\). This is always possible since \(J_{\mathbf{f}}\) is continuous. Note that this implies that \(\mathbf{f}\) restricted to \(U_{i}\) induces an homeomorphism onto its image. Also we can always choose the \(U_{i}\) to have empty pairwise intersections and to have \(\mathbf{f}\) being an homeomorphism from \(\overline{U}_{i}\) onto its image, by taking them small enough (the \(\mathbf{d}_{i}\) are isolated points within \(D\)).
Now, the map \(\tilde{\mathbf{f}}\) is the same as the map induced in homology \(\tilde{\mathbf{f}}\) by \(\mathbf{f}:\,D^{2}\backslash\bigcup\limits_{i=1}^{m}U_{i}\rightarrow\mathbb{R}^{ 2}\backslash\{\mathbf{y}\}\). We note also that within \(D^{2}\backslash\bigcup\limits_{i=1}^{m}U_{i}\), the cycle \(\partial D^{2}\) is homologous to the sum of the \(\partial(U_{i})\), for \(i=1,\ldots,m\). Hence \(\tilde{\mathbf{f}}(\partial D^{2})=\sum\limits_{i=1}^{m}\tilde{\mathbf{f}}(\partial( U_{i}))\).
But \(\mathbf{f}(\partial(U_{i}))\) is a Jordan curve homeomorphic (by \(\mathbf{f}\)) to \(\partial(U_{i})\), since we chose \(U_{i}\) such that \(\mathbf{f}\) restricted to \(\overline{U_{i}}\) onto its image is a homeomorphism. Hence \(\tilde{\mathbf{f}}(\partial U_{i})\) is either plus or minus identity, according to the orientation of \(\tilde{\mathbf{f}}(\partial U_{i})\), i.e. \(\tilde{\mathbf{f}}(\partial U_{i})=sign(det(J_{\mathbf{f}}(\mathbf{d})))\) for any \(\mathbf{d}\in U_{i}\), which we know is equal to \(sign(det(J_{\mathbf{f}}(\mathbf{d}_{i}))\). Hence
\[\eta(\mathbf{f}(\partial D^{2}),\mathbf{y})=\sum\limits_{i=1}^{m}sign(det(J_{\mathbf{f}}( \mathbf{d}_{i})))=deg(\mathbf{f},D^{2},\mathbf{y})\]
Now let \(\mathbf{f}\) represent the sweep function, mapping from the Waterfall Space \(W\), which is homeomorphic to \(D^{2}\), to the Mosaic Space \(M\). According to (8) and under hypothesis (5), for \(\mathbf{p}\in\mathbb{R}^{2}\backslash\mathbf{f}(\partial W)\),
\[deg(\mathbf{f},W,\mathbf{p})=\sum\limits_{\mathbf{w}\in\mathbf{f}^{-1}(\mathbf{p})}+1=\#Ker\ (\mathbf{f}-\mathbf{p}) \tag{9}\]
Finally, from (6), it can be concluded that \(deg(\mathbf{f},W,\mathbf{p})=c_{m}(\mathbf{p})\). Moreover, from Definition 2,
\[\eta(\gamma,\mathbf{p})=c_{m}(\mathbf{p}), \tag{10}\]
where \(\gamma=\mathbf{f}(\partial W)\) represents the sensor's contour, a counter-clockwise oriented closed curve that surrounds all the points that have been explored, Figure 5, and \(\eta(\gamma,\mathbf{p})\) is its winding number with respect to \(\mathbf{p}\).
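For intuition, the winding number of a densely sampled contour around a point can be approximated by accumulating signed angle increments; the sketch below uses this standard angle-summation formula (not the guaranteed combinatorial algorithm of Section IV).

```python
import numpy as np

def winding_number(curve, p):
    """Winding number of a sampled closed curve (N x 2 array, last point equal
    to the first) around p, by summing the signed angle swept by curve - p."""
    v = np.asarray(curve) - np.asarray(p)
    ang = np.arctan2(v[:, 1], v[:, 0])
    dang = np.diff(ang)
    dang = (dang + np.pi) % (2 * np.pi) - np.pi  # wrap each increment to [-pi, pi)
    return int(round(dang.sum() / (2 * np.pi)))

t = np.linspace(0.0, 2 * np.pi, 500)
circle = np.c_[np.cos(t), np.sin(t)]          # counter-clockwise unit circle
print(winding_number(circle, (0.0, 0.0)))     # 1
print(winding_number(circle, (2.0, 0.0)))     # 0
```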
Throughout the remainder of this Section, we extend the relation between the coverage measure and the topological degree so it comprehends more general scenarios.
### _Coverage Measure for Points with Undefined Winding Numbers_
When the robot's pose and its visible set are well defined, the coverage measure of all the points in the environment during a mission can be uniquely determined. However, if we adopt the method proposed by [8], using relation (10), the coverage measure of a point \(\mathbf{p}\in\gamma\) will be undefined considering the definition of winding numbers.
For example, in Figure 6, point \(\mathbf{p}_{1}\in\gamma\) is the image by \(\mathbf{f}\) of a point \((0,t)\in W\), for some \(t\in[0,T]\). This point is inside the robot's visible area \(\mathbb{V}(t)\) and, according to the definition of the coverage measure in (6), \(c_{m}(\mathbf{p}_{1})=1\) even if \(\eta(\gamma,\mathbf{p}_{1})\) is undefined. In this context, to extend the validity of (10), we define a bounded function \(\overline{\eta}\) as the extension of the winding number function to the full domain \(\mathbf{f}(W)\). For that, we consider the following, adapted from [14]:
**Definition 3** (Limit Superior).: _Let \(M\) be a metric space and \(g\) a function from \(M\) to \(\mathbb{R}\). For any limit point \(\mathbf{y}\in M\) the limit superior, when it exists, is defined as:_
\[\underset{\mathbf{p}\rightarrow\mathbf{y}}{limsup}\ g(\mathbf{p})=\lim_{\epsilon\to 0}\ \left(sup\{g(\mathbf{p})\ |\ \mathbf{p}\in B(\mathbf{y},\epsilon)\backslash\{\mathbf{y}\}\}\right)\]
_where \(B(\mathbf{y},\epsilon)\) denotes the ball within \(M\), centered at \(\mathbf{y}\), of radius \(\epsilon\)._
The sweep function \(\mathbf{f}\) is a continuous map from a compact subset \(W\) to \(\mathbb{R}^{2}\), therefore \(\mathbf{f}(W)\backslash\mathbf{f}(\partial W)\) is composed of a
Fig. 5: The sensor’s contour \(\gamma\) for the mission represented in Figure 2.
Fig. 6: The coverage measure of point \(\mathbf{p}_{1}\) is equal to \(1\) and of point \(\mathbf{p}_{2}\) is equal to \(2\), but the winding number of \(\gamma\) with respect to these points is undefined.
disjoint union of opens \(V_{i}\), \(i\in I\), for some index set \(I\). All points of \(\mathbf{f}(\partial W)\) are limits of some sequence of points \(\mathbf{f}(\mathbf{y})\), with \(\mathbf{y}\) in the interior of \(W\). We can now state:
**Lemma 2**.: _Consider a function \(w:\ \bigcup\limits_{i\in I}V_{i}\rightarrow\mathbb{Z}\). Suppose that \(w\) is bounded on \(\bigcup\limits_{i\in I}V_{i}\); then there is an upper semi-continuous extension of \(w\), \(\overline{w}:\ \mathbf{f}(W)\rightarrow\mathbb{Z}\), defined as:_
\[\overline{w}(\mathbf{p})=\left\{\begin{array}{ll}w(\mathbf{p})&\text{ if }\mathbf{p}\in \bigcup\limits_{i\in I}V_{i}\\ \underset{\mathbf{p}^{\prime}\in\bigcup\limits_{i\in I}V_{i}\rightarrow\mathbf{p}}{ limsup}w(\mathbf{p}^{\prime})&\text{ otherwise}\end{array}\right.\]
Proof.: This is immediate: the limit sup exists since \(w\) is bounded on \(\bigcup\limits_{i\in I}V_{i}\), and the definition of \(\overline{w}\) precisely imposes that \(\overline{w}\) is upper semi-continuous.
Supposing that the number of connected components of \(\mathbf{f}(W)\backslash\mathbf{f}(\partial W)\) is finite, as the winding number is constant on each component, this defines a bounded function \(\eta\) that we can extend to the full domain \(\mathbf{f}(W)\) by Lemma 2 to obtain \(\overline{\eta}\). Finally, if the condition expressed in (5) is satisfied, we can say that for any \(\mathbf{p}\in M\),
\[\overline{\eta}(\gamma,\mathbf{p})=c_{m}(\mathbf{p}) \tag{11}\]
Considering Definition 3, if \(\mathbf{p}\in\gamma\), its coverage measure will be equal to the coverage measure of points on the open \(V_{i}\) with the largest winding number value for which \(\mathbf{p}\) is a limit, as expected from the original definition (6).
This new definition extends the applicability of the method, but condition (5) is still necessary for (11) to be true. The next section introduces new concepts to remove this constraint.
### _Coverage Measure for Points Swept Backwards_
Condition (5) is necessary for (11) to be true. It ensures that the area surrounded by the sensor's contour \(\gamma\) never shrinks during a mission and that \(\gamma\) is indeed an enclosing curve for \(\mathbb{A}_{\mathbb{E}}\).
If condition (5) is not satisfied, the inconsistency in the equality (11) is illustrated in Figures 7, 8 and 9. At the beginning of the mission, in Figure 7, the robot moves from its initial state \(\mathbf{x}(0)\) to state \(\mathbf{x}(t_{1})\), \(t_{1}>0\). During the interval \([0,t_{1}]\), condition (5) is satisfied. Point \(\mathbf{p}\in M\) is sensed for the first time at instant \(\hat{t}_{1}\in[0,t_{1}]\) and this occurrence is represented in the mission's Waterfall Space \(W\) by point \(\mathbf{w}_{1}\). The sensor's contour associated with this first part of the mission is the closed curve \(\gamma_{1}=\mathbf{f}(\partial([0,L]\times[0,t_{1}]))\) and \(\eta(\gamma_{1},\mathbf{p})=sign(det(J_{\mathbf{f}}(\mathbf{w}_{1})))=1\) is indeed equal to the coverage measure of \(\mathbf{p}\) at \(t_{1}\).
The mission continues as the robot advances to state \(\mathbf{x}(t_{2})\), \(t_{2}>t_{1}\) and point \(\mathbf{p}\) is revisited at \(\hat{t}_{2}\). For the time interval \([0,t_{2}]\), we have \(\mathbf{f}^{-1}(\mathbf{p})=\{\mathbf{w}_{1},\mathbf{w}_{2}\}\) and \(\gamma_{2}=\mathbf{f}(\partial([0,L]\times[0,t_{2}]))\) represents the sensor's contour. As illustrated in Figure 8, at \(\hat{t}_{2}\), point \(\mathbf{p}\) is swept in the opposite direction with respect to the robot's advancement movement. In this context, the determinant of the Jacobian of \(\mathbf{f}\) at \(\mathbf{w}_{2}\) is negative and
\[\eta(\gamma_{2},\mathbf{p})=\sum_{i=1}^{2}sign(det(J_{\mathbf{f}}(\mathbf{w}_{i})))=1-1=0\]
although, according to (6), \(c_{m}(\mathbf{p})=2\) at \(t_{2}\).
Exploration ends at state \(\mathbf{x}(T)\), \(T>t_{2}\) and the complete mission is represented in Figure 9. Point \(\mathbf{p}\) is sensed for the third and last time at \(\hat{t}_{3}\) and at the end of the mission \(\mathbf{f}^{-1}(\mathbf{p})=\{\mathbf{w}_{1},\mathbf{w}_{2},\mathbf{w}_{3}\}\). At \(\hat{t}_{3}\), point \(\mathbf{p}\) is sensed by a forward movement of the sensor on the plane, therefore,
\[\eta(\gamma,\mathbf{p})=\sum_{i=1}^{3}sign(det(J_{\mathbf{f}}(\mathbf{w}_{i})))=1-1+1=1\]
but \(c_{m}(\mathbf{p})=3\) is expected.
Fig. 8: Condition established in Equation (5) is not satisfied for all the points in \(W\). At \(t_{2}\), \(c_{m}(\mathbf{p})=2\).
Fig. 7: Mission during time interval \([0,t_{1}]\), point \(\mathbf{p}\) is sensed for the first time at \(\hat{t}_{1}\) and \(c_{m}(\mathbf{p})=1\).
Fig. 9: The mission ends at \(T\) and the point \(\mathbf{p}\) is sensed for the last time at \(\hat{t}_{3}\), the final coverage measure of this point is \(3\) although \(\eta(\gamma,\mathbf{p})=1\).
To address this problem, we can divide the Waterfall Space \(W\) into two sets, \(\mathbb{S}^{+}\) and \(\mathbb{S}^{-}\),
\[\mathbb{S}^{+} =\{\mathbf{y}\in W\,|\,det(J_{\mathbf{f}}(\mathbf{y}))>0\} \tag{12}\] \[\mathbb{S}^{-} =\{\mathbf{y}\in W\,|\,det(J_{\mathbf{f}}(\mathbf{y}))<0\} \tag{13}\]
We define two new positively oriented contours, \(\gamma^{+}\) and \(\gamma^{-}\) as the image by \(\mathbf{f}\) of the boundaries of these sets, as illustrated in Figure 10,
\[\gamma^{+} =\mathbf{f}(\partial\mathbb{S}^{+}) \tag{14}\] \[\gamma^{-} =\mathbf{f}(\partial\mathbb{S}^{-}) \tag{15}\]
For a regular value \(\mathbf{p}\in M\) we will have \(Ker\ (\mathbf{f}-\mathbf{p})\subset\mathbb{S}^{+}\cup\mathbb{S}^{-}\), furthermore we can say that
\[Ker\ (\mathbf{f}-\mathbf{p})=Ker(\mathbf{f}-\mathbf{p})_{\mathbb{S}^{+}}\cup Ker(\mathbf{f}-\mathbf{p}) _{\mathbb{S}^{-}} \tag{16}\]
and we can rearrange (6):
\[c_{m}(\mathbf{p}) =\#Ker\ (\mathbf{f}-\mathbf{p})_{\mathbb{S}^{+}}+\#Ker\ (\mathbf{f}-\mathbf{p})_{ \mathbb{S}^{-}} \tag{17}\] \[c_{m}(\mathbf{p}) =\sum_{\mathbf{w}\in f^{-1}_{\mathbb{S}^{+}}(\mathbf{p})}+1\ \ \ +\sum_{\mathbf{w}\in f^{-1}_{ \mathbb{S}^{-}}(\mathbf{p})}+1 \tag{18}\]
Considering the definitions of sets \(\mathbb{S}^{+}\) and \(\mathbb{S}^{-}\) in (12) and (13), respectively,
\[c_{m}(\mathbf{p})=\sum_{\mathbf{w}\in\mathbf{f}^{-1}_{\mathbb{S}^{+}}(\mathbf{p})}sign(det(J_{\mathbf{f}}(\mathbf{w})))\ -\sum_{\mathbf{w}\in\mathbf{f}^{-1}_{\mathbb{S}^{-}}(\mathbf{p})}sign(det(J_{\mathbf{f}}(\mathbf{w}))) \tag{19}\]

and, applying Lemma 1 to each term, the coverage measure decomposes over the contours \(\gamma^{+}\) and \(\gamma^{-}\) as

\[c_{m}(\mathbf{p})=\eta(\gamma^{+},\mathbf{p})-\eta(\gamma^{-},\mathbf{p}) \tag{20}\]
A generalization of the results stated in the remainder of this Section to all the points in the plane can be easily obtained by considering a decomposition of cycles \(\gamma\in[\gamma]\) into \(\gamma^{+}\) and \(\gamma^{-}\) as proposed in (20).
## IV Computing the Coverage Measure
We are interested in determining the coverage measure of all the points inside an area of interest. Thus, we developed an algorithm, presented in this Section, for computing the extended winding number function \(\overline{\eta}\) for a cycle \(\gamma:S_{1}\rightarrow\mathbb{R}^{2}\) with respect to all the points inside a subset of \(\mathbb{R}^{2}\). We also present its extension for dealing with an uncertain cycle \([\gamma]\).
### _Computing the Extended Winding Number of \(\gamma\)_
Let \(\mathbb{W}_{i}\) be a winding set associated with a cycle \(\gamma\), defined for a natural number \(i\), by definition
\[\mathbb{W}_{i}:=\{\mathbf{p}\in\mathbb{R}^{2}|\eta(\gamma,\mathbf{p})\geq i\} \tag{24}\]
There are, for example, two non-empty winding sets associated with the curve \(\gamma\) of Figure 5, \(\mathbb{W}_{1}\) and \(\mathbb{W}_{2}\) represented in Figure 12. As demonstrated in [16], the winding number \(\eta(\gamma,\mathbf{p})\) of any point \(\mathbf{p}\in\mathbb{R}^{2}\setminus\gamma\) can be calculated using the winding sets of \(\gamma\),
\[\eta(\gamma,\mathbf{p})=\sum_{i>0}\chi_{\mathbb{W}_{i}}(\mathbf{p}) \tag{25}\]
where \(\chi_{\mathbb{W}_{i}}\) is the characteristic function for the winding set \(\mathbb{W}_{i}\). Equations (24) and (25) are still valid if \(\eta\) is replaced by its extension \(\overline{\eta}\).
The algorithm starts by computing all the non-empty winding sets \(\mathbb{W}_{i}\), for \(i\in\mathbb{N}\), associated with the sensor's contour \(\gamma\), through a combinatorial approach. For that, we consider that a self-intersection or vertex of \(\gamma\) is determined by two parameters \(t_{0},t_{1}\in S_{1}\), \(t_{0}\neq t_{1}\) and that it is a point \(\mathbf{p}\) such that \(\mathbf{p}=\gamma(t_{0})=\gamma(t_{1})\). The multiplicity of such a self-intersection is the number, finite or infinite, of distinct \(t\in S_{1}\) such that \(\mathbf{p}=\gamma(t)\) minus one. Then, we make the following assumptions, similar to those of [17], so that the winding number of a point can be easily obtained using (25):
* \(\gamma\) has a finite number of self-intersections, each one of them with multiplicity one.
* in addition, we assume the two tangent vectors to \(\gamma\) at each vertex to be linearly independent.
Such a cycle divides \(\mathbb{R}^{2}\backslash\gamma\) into a finite number of connected open regions, one of which is not compact. Each one of these regions can be seen as a 2-cell of the CW-complex \(C(\gamma)\), constructed from the cycle \(\gamma\). To be fully formal, we would need to use the fact that \(\gamma\) determines a cell decomposition of the one-point compactification of the plane, homeomorphic to the 2-sphere \(S_{2}\), Figure 13. The 0-cells of \(C(\gamma)\) are the self-intersections of \(\gamma\), and the 1-cells are the parts of the curve separating the 2-cells, i.e. the connected components of \(\gamma\) minus its self-intersections.
Since all open 2-cells are homotopy equivalent to a point within that cell and considering the degree axioms presented in Definition 1, we can conclude that all the points within the same open 2-cell of \(C(\gamma)\) have the same winding number with respect to \(\gamma\). In this context, a correct and coherent numbering of the 2-cells is enough for determining the winding number value of all the points in the plane.
For this purpose, we can use a combinatorial rule proposed by Möbius in 1865 [18]. The rule says that two contiguous regions separated by a 1-cell are numbered with values that differ by exactly 1. The winding number of the region on the left is greater, considering the curve's orientation. This method leads to a unique numbering of the space, given that the winding number in the non-compact region, to which we will refer as \(A_{0}\), is known and equal to \(0\) for all of its points. This is true because, since
\(A_{0}\) is not bounded by \(\mathbf{f}(\partial W)\), differently from the other 2-cells of \(C(\gamma)\), we know that \(A_{0}\subseteq\mathbb{R}^{2}\backslash\mathbf{f}(W)\). This implies, from Definition 2, that for any \(\mathbf{p}\in A_{0}\), \(\eta(\gamma,\mathbf{p})=0\).
As a direct application of the Möbius rule, a method proposed by Alexander [17] allows a coherent numbering of the regions only through an analysis of the tangent vectors to the curve at its self-intersections. Let \(\mathbf{v}\) be a vertex of \(\gamma\) represented by the pair \((t_{0},t_{1})\). Considering the assumptions adopted for \(\gamma\), a self-intersection \(\mathbf{v}\) will divide the plane into four regions. There are only two rules for numbering these four regions, according to whether \(\dot{\gamma}(t_{1})\) goes from the right to the left or from the left to the right with respect to \(\dot{\gamma}(t_{0})\), as illustrated in Figure 14.
In Figures 15, 16 and 17 we consecutively apply the Alexander numbering rules to the example considered previously. We start by numbering the regions around \(\mathbf{v}_{0}\), Figure 15. We assume that \(A_{0}\) has a winding number value of \(0\) and that the later self-intersection, represented by the dashed line, crosses the previous one from left to right. The same is done around vertices \(\mathbf{v}_{1}\) and \(\mathbf{v}_{2}\) in Figures 16 and 17, respectively, resulting in a complete characterization of the plane in terms of winding number values.
Once a numbering is obtained for all the regions according to Alexander's rules, we can construct the winding sets \(W_{i}\) of \(\gamma\), for \(i\in\mathbb{N}\), as the closure of the union of the regions with a number greater than or equal to \(i\)[16]. Then, the winding number for a point can be easily computed using (25).
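As a sketch of this last step, once each winding set has been extracted as a polygon (a simplifying assumption for illustration; in general winding sets need not be simple polygons), (25) reduces to counting the winding sets containing the query point. Matplotlib's point-in-polygon test is used here purely for convenience.

```python
from matplotlib.path import Path

def eta_from_winding_sets(winding_sets, p):
    """Evaluate (25): the winding number of p is the number of winding sets
    that contain it; each set is given as a list of polygon vertices."""
    return sum(Path(poly).contains_point(p) for poly in winding_sets)

# Hypothetical nested winding sets W_1 and W_2, as in Figure 12.
W1 = [(0, 0), (4, 0), (4, 4), (0, 4)]
W2 = [(1, 1), (3, 1), (3, 3), (1, 3)]
print(eta_from_winding_sets([W1, W2], (2.0, 2.0)))   # 2
print(eta_from_winding_sets([W1, W2], (0.5, 2.0)))   # 1
```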
### _Computing the Extended Winding Number of \([\gamma]\)_
If the sensor's contour \(\gamma\) is uncertain, the winding sets associated with the mission will also be uncertain. An uncertain set can be represented as a thick set, the following definition was proposed in [19].
**Definition 4**.: _We denote \(\llbracket\mathbb{X}\rrbracket\in\mathbb{I}\mathscr{P}(\mathbb{R}^{n})\) a thick set of \(\mathbb{R}^{n}\) if there are two subsets of \(\mathbb{R}^{n}\) called the lower bound \(\mathbb{X}^{-}\) and the upper bound \(\mathbb{X}^{+}\) such that_
\[\llbracket\mathbb{X}\rrbracket =\llbracket\mathbb{X}^{-},\mathbb{X}^{+}\rrbracket \tag{26}\] \[=\{\mathbb{X}\in\mathscr{P}(\mathbb{R}^{n})\ |\ \mathbb{X}^{-}\subseteq\mathbb{X}\subseteq\mathbb{X}^{+}\}\]
Fig. 16: Numbering of regions according to Alexander around \(\mathbf{v}_{1}\).
Fig. 17: Numbering of regions according to Alexander around \(\mathbf{v}_{2}\).
Fig. 18: Representation of thick sets.
_A thick set partitions the environment into three zones: the clear zone \(\mathbb{X}^{-}\), the penumbra \(\mathbb{X}^{+}\backslash\mathbb{X}^{-}\) (both illustrated in Figure 18) and the dark zone \(\mathbb{R}^{n}\backslash\mathbb{X}^{+}\)._
Let \(\mathbb{W}_{i}^{\gamma}\), with \(i\in\mathbb{N}\), be a winding set associated with a cycle \(\gamma\). To the set \([\gamma]\) of all the possible sensor's contours we associate \([\![\mathbb{W}_{i}]\!]=[\mathbb{W}_{i}^{-},\mathbb{W}_{i}^{+}]\), such that,
\[\mathbb{W}_{i}^{-} =\bigcap_{\gamma\in[\gamma]}\mathbb{W}_{i}^{\gamma} \tag{27}\] \[\mathbb{W}_{i}^{+} =\bigcup_{\gamma\in[\gamma]}\mathbb{W}_{i}^{\gamma} \tag{28}\]
In the exploration context, the clear zone of \([\![\mathbb{W}_{i}]\!]\), represented by \(\mathbb{W}_{i}^{-}\), translates as a set of points that were certainly explored at least \(i\) times. Analogously, the dark zone \(\mathbb{R}^{2}\backslash\mathbb{W}_{i}^{+}\) is a set of points that have a coverage measure smaller than \(i\), independently of which of the functions in \([\mathbf{x}]\) is the ground truth. The penumbra \(\mathbb{W}_{i}^{+}\backslash\mathbb{W}_{i}^{-}\) is a set of points whose coverage measure is equal to \(i\) for some \(\gamma\in[\gamma]\).
We redefine the characteristic function to deal with thick sets on the plane; we have \([\chi]:\mathbb{R}^{2}\to\mathbb{IN}_{0}\) and
\[[\chi]_{[\![\mathbb{W}_{i}]\!]}(\mathbf{p})=\begin{cases}[1,1],&\text{if }\mathbf{p}\in\mathbb{W}_{i}^{-},\\ [0,1],&\text{if }\mathbf{p}\in\mathbb{W}_{i}^{+}\backslash\mathbb{W}_{i}^{-},\\ [0,0],&\text{otherwise}\end{cases} \tag{29}\]
Then, we have
\[[\![\overline{\eta}]\!]([\gamma],\mathbf{p})=\sum_{i>0}[\chi]_{[\![\mathbb{W}_{i}]\!]}(\mathbf{p}) \tag{30}\]
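A minimal sketch of (29)-(30), representing intervals as integer pairs and the bounds \(\mathbb{W}_{i}^{-},\mathbb{W}_{i}^{+}\) as user-supplied membership predicates (the actual implementation relies on the interval tools of Section IV-C):

```python
def thick_chi(p, in_W_minus, in_W_plus):
    """Interval characteristic function (29) of a thick set [W^-, W^+]."""
    if in_W_minus(p):
        return (1, 1)
    if in_W_plus(p):
        return (0, 1)   # penumbra: p may or may not be covered
    return (0, 0)

def uncertain_eta(p, thick_sets):
    """Uncertain winding number (30): interval sum of thick characteristics."""
    chis = [thick_chi(p, wm, wp) for wm, wp in thick_sets]
    return (sum(c[0] for c in chis), sum(c[1] for c in chis))

# Hypothetical thick set: a disc of uncertain radius between 1 and 2.
in_lo = lambda p: p[0]**2 + p[1]**2 <= 1.0
in_hi = lambda p: p[0]**2 + p[1]**2 <= 4.0
print(uncertain_eta((1.5, 0.0), [(in_lo, in_hi)]))  # (0, 1): penumbra point
```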
In Figure 19 we have an illustration of thick sets \([\![\mathbb{W}_{1}]\!]\) and \([\![\mathbb{W}_{2}]\!]\) for the example considered through out this paper and in Figure 20 the resultant coverage measure considering these sets.
This defines the notion of uncertain winding number (and uncertain coverage measure). Under some assumptions, given below, that are realistic for applications, we need only a slightly generalized Alexander rule to efficiently compute the uncertain coverage measure.
As in [20], we will suppose that \([\mathbf{x}]\) is given by two time-varying sets: an outer approximation of the set of the robot's pose, \([\mathbf{s}](t)\), at time \(t\), in the plane, and \([\mathbf{v}](t)\), an outer-approximation of the set of linear velocities of the robot, at time \(t\), in the plane. Hence:
\[\begin{array}{cccc}[\mathbf{s}]:&\mathbb{R}&\to&\mathbb{R}^{2}\\ [\mathbf{v}]:&\mathbb{R}&\to&\mathbb{R}^{2}\end{array}\]
Consider the following notion of uncertain self-intersection. These are points \(\mathbf{p}\) in the plane such that \(\mathbf{p}\in[\mathbf{s}](t_{1})\cap[\mathbf{s}](t_{2})\) for some \(t_{1}<t_{2}\). The set of pairs of such times \(t_{1}\), \(t_{2}\), for a given \(\mathbf{p}\), is denoted by \(T_{x}\). Supposing that for every uncertain self-intersection \(\mathbf{p}\), for all \((t_{1},t_{2})\in T_{x}\), for all \(v_{1}\in[\mathbf{v}](t_{1})\), \(v_{2}\in[\mathbf{v}](t_{2})\), \(v_{1}\) is not collinear with \(v_{2}\) (i.e. \(v_{1}\) and \(v_{2}\) are transverse to each other), we get the uncertain Alexander rules illustrated in Figure 21.
### _Implementation_
The method above was numerically implemented using the Codac library [21].1 We consider that we have as input of the algorithm a well-defined function or a tube describing the robot's pose \(\mathbf{x}\), speed \(\dot{\mathbf{x}}\) and acceleration \(\ddot{\mathbf{x}}\). From these inputs, the sensor's contour \(\gamma\) is obtained through a concatenation of \(\mathbf{x}=\mathbf{f}(0,[0,T])\) with \(\mathbf{x}_{aux1}=\mathbf{f}([0,L],T)\), \(\mathbf{x}_{R}=\mathbf{f}(L,[0,T])\) and \(\mathbf{x}_{aux2}=\mathbf{f}([0,L],0)\), as illustrated in Figure 3, and we have
Footnote 1: The code is available on GitHub github.com/marialuizacvianna/extended_winding
\[\gamma=\mathbf{x}*\mathbf{x}_{aux1}*\mathbf{x}_{R}^{-1}*\mathbf{x}_{aux2}^{-1}\]
where \(\mathbf{x}_{R}^{-1}(t)=\mathbf{x}_{R}(T-t)\) and \(\mathbf{x}_{aux2}^{-1}(t)=\mathbf{x}_{aux2}(T-t)\). We parameterize \(\gamma\) with \(\tau\in[0,1]\), which is not a time representation. The speed vector along \(\gamma\) can be computed using \(\dot{\mathbf{x}}\) and \(\ddot{\mathbf{x}}\).
The next step in the algorithm is to compute the set of time pairs \(\mathbb{T}\) that represent the self-intersections of \(\gamma\).
\[\mathbb{T}=\{(\tau_{1},\tau_{2})\in[0,1]^{2}|\tau_{1}<\tau_{2}\text{ and }\gamma(\tau_{1})=\gamma(\tau_{2})\}\]
This set can be obtained with the algorithm presented in [22] available in [21]. For the example considered throughout this
Fig. 20: Coverage measure considering the uncertain winding sets associated with \([\gamma]\).
paper, first presented in Figure 2, we obtain the following set of self-intersections
\[\mathbb{T}=\{(\tau_{1},\tau_{4}),(\tau_{2},\tau_{5}),(\tau_{6},\tau_{7}),(\tau_{ 3},\tau_{8})\}\]
where \(0\leq\tau_{1}<\tau_{2}<\ldots<\tau_{8}\leq 1\). These pairs correspond to the vertices illustrated in Figure 13: \(\mathbf{v}_{0}=\gamma(\tau_{3})=\gamma(\tau_{8})\), \(\mathbf{v}_{1}=\gamma(\tau_{6})=\gamma(\tau_{7})\), \(\mathbf{v}_{2}=\gamma(\tau_{1})=\gamma(\tau_{4})\) and \(\mathbf{v}_{3}=\gamma(\tau_{2})=\gamma(\tau_{5})\).
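On a polyline sampling of \(\gamma\), the set \(\mathbb{T}\) can be approximated by brute-force segment crossing tests, as sketched below; the guaranteed version used in the paper is the interval algorithm of [22], so this is only an illustration of the idea.

```python
def _orient(a, b, c):
    """Sign of the 2D cross product (b - a) x (c - a)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """Proper (transversal) crossing test between segments [p1,p2] and [q1,q2]."""
    return (_orient(p1, p2, q1) * _orient(p1, p2, q2) < 0 and
            _orient(q1, q2, p1) * _orient(q1, q2, p2) < 0)

def self_intersections(curve):
    """Approximate T for a closed polyline (curve[0] == curve[-1]): index pairs
    (i, j), i < j, of non-adjacent segments that cross each other."""
    n = len(curve) - 1
    return [(i, j) for i in range(n) for j in range(i + 2, n)
            if not (i == 0 and j == n - 1)  # skip the closing adjacency
            and segments_cross(curve[i], curve[i + 1], curve[j], curve[j + 1])]
```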
\[\mathbb{E}=\{a_{0},a_{1},a_{2},a_{3},a_{4},a_{5},a_{6},a_{7}\}\]
where \(\partial a_{i}=\gamma(\tau_{i+1})-\gamma(\tau_{i})\), for \(i=1,\ldots,\#\mathbb{E}-1\), and \(\partial a_{0}=\gamma(\tau_{1})-\gamma(\tau_{\#\mathbb{E}})\).
Determining whether a vector \(\mathbf{a}\) crosses another vector \(\mathbf{b}\) from the right to the left can be mathematically translated as the cross product \(\mathbf{a}\times\mathbf{b}\) being positive. In this case, to each vertex represented by a pair \((\tau_{i},\tau_{j})\in\mathbb{T}\) we associate an update value \(u\in\{-1,+1\}\) that determines whether \(\gamma_{j}\) crosses \(\partial(\dot{U}_{i})\) from the right to the left (\(u=-1\)) or from the left to the right (\(u=+1\)).
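As a minimal illustration of this crossing test, the following Python sketch (function and variable names are ours) derives the update value of a vertex from the two velocity vectors:

```python
def update_value(v_i, v_j):
    """Update value u at a self-intersection vertex.  The sign of the 2D
    cross product v_i x v_j tells from which side the second branch of the
    curve crosses the first: positive means right-to-left (u = -1), negative
    means left-to-right (u = +1).  The transversality assumption of this
    section guarantees that the product is nonzero."""
    cross = v_i[0] * v_j[1] - v_i[1] * v_j[0]
    if cross == 0.0:
        raise ValueError("tangential crossing: Alexander rule not applicable")
    return -1 if cross > 0.0 else +1
```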
We use the update value of each edge's initial vertex and the combinatorial method presented in this section to define a winding number value for the areas on its right and left sides. Finally, the winding sets can easily be obtained, knowing that \(\partial\mathbb{W}_{i}\) is a concatenation of the edges in \(\mathbb{E}\) for which the value of the area on their left side is greater than or equal to \(i\).
We choose to represent sets using interval arithmetic and we rely on interval analysis tools [23], such as separators and a Set Inversion Via Interval Analysis (SIVIA) algorithm [24], for classifying, in terms of their coverage measure, all the points inside an area of interest. The set inversion algorithm bisects the environment, up to a precision that is chosen by the user, such that the plane is divided into boxes that do not intersect \(\gamma^{+}\) and \(\gamma^{-}\). The advantage of this method is that it is known, from the properties of the topological degree, that all the points that belong to a set in the plane that does not intersect the considered cycles will have the same winding number value. Therefore, this method limits the number of computations that have to be done to determine the winding number for all the points inside an area. For boxes \([\mathbf{b}]\in\mathbb{IR}^{2}\) for which \([\mathbf{b}]\cap\gamma^{+}\neq\emptyset\) or \([\mathbf{b}]\cap\gamma^{-}\neq\emptyset\) is true, an uncertain winding number value will be computed. For that, we use the following adaptation of the characteristic function for thick sets to deal with sets of \(\mathbb{R}^{2}\) on the input: \([\chi]:\mathscr{D}(\mathbb{R}^{2})\to\mathbb{IN}_{0}\),
\[[\chi]_{[\mathbb{W}_{i}]}([\mathbf{b}])=\begin{cases}[1,1],&\text{if for all }\mathbf{p}\in[\mathbf{b}],\ \mathbf{p}\in\mathbb{W}_{i}^{-},\\ [0,0],&\text{if for all }\mathbf{p}\in[\mathbf{b}],\ \mathbf{p}\notin\mathbb{W}_{i}^{+},\\ [0,1],&\text{otherwise.}\end{cases} \tag{31}\]
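A simplified, non-rigorous Python sketch of this paving strategy is given below. The actual implementation relies on the separators and SIVIA routines of Codac [21; 24]; here the box/curve test is a crude vertex-inclusion check (a true implementation must test segment/box intersections rigorously), and all names are ours.

```python
import numpy as np

def winding_number(p, curve):
    """Winding number of a closed polyline `curve` (N x 2 vertex array)
    around a point p, via the sum of wrapped angle increments."""
    v = curve - p
    ang = np.arctan2(v[:, 1], v[:, 0])
    d = np.diff(np.append(ang, ang[0]))       # close the loop
    d = (d + np.pi) % (2 * np.pi) - np.pi     # wrap to (-pi, pi]
    return int(round(d.sum() / (2 * np.pi)))

def box_near_curve(box, curve):
    """Crude test: does some curve vertex fall inside the box?"""
    (x1, x2), (y1, y2) = box
    inx = (curve[:, 0] >= x1) & (curve[:, 0] <= x2)
    iny = (curve[:, 1] >= y1) & (curve[:, 1] <= y2)
    return bool(np.any(inx & iny))

def sivia(box, curve, eps, out):
    """Bisect `box` until it is away from the curve (then its winding number
    is constant and evaluated at the centre) or smaller than the precision
    eps (then it is kept as an uncertain, 'black' box)."""
    (x1, x2), (y1, y2) = box
    if not box_near_curve(box, curve):
        c = np.array([(x1 + x2) / 2, (y1 + y2) / 2])
        out.append((box, winding_number(c, curve)))
        return
    if max(x2 - x1, y2 - y1) < eps:
        out.append((box, None))               # uncertain box
        return
    if x2 - x1 >= y2 - y1:                    # bisect the larger side
        xm = (x1 + x2) / 2
        sivia(((x1, xm), (y1, y2)), curve, eps, out)
        sivia(((xm, x2), (y1, y2)), curve, eps, out)
    else:
        ym = (y1 + y2) / 2
        sivia(((x1, x2), (y1, ym)), curve, eps, out)
        sivia(((x1, x2), (ym, y2)), curve, eps, out)
```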
## V Experiments
We apply the method presented in this paper to a dataset acquired during a mission performed by the AUV Daurade (Figure 22) in November 2015. This robot was built by ECA Robotics and used by the Direction Générale de l'Armement - Techniques Navales (DGA - TN) and by the Service Hydrographique et Océanographique de la Marine (SHOM). The mission took place in the roadstead of Brest (Brittany, France) and consists of a 45-minute survey path.
Daurade explores using two side-scan sonars, one covering its right side and the other its left side. The visible area of each sensor can be individually modeled as a line-sweep sensor on the plane. Assuming a configuration in which there is no visibility gap and no overlap between the visibility ranges of the two sensors, the pair can be represented as a single line-sweep sensor.
Fig. 21: Uncertain Alexander numbering with \(w\in\mathbb{Z}\): (a): \([\mathbf{v}](t_{2})\) comes from the right; (b): \([\mathbf{v}](t_{2})\) comes from the left.
Fig. 22: The AUV Daurade.
The robot's pose underwater is estimated by the integration of data acquired by an Inertial Measurement Unit (IMU) coupled with a Doppler Velocity Logger (DVL) and a pressure sensor, for depth estimation. Initially, we assume that this estimation \(\tilde{\mathbf{x}}\) is exact, as illustrated in Figure 23, and that the robot maintains a constant depth during the mission, resulting in the sensor's contour \(\tilde{\gamma}\) presented in Figure 24. Figure 25 displays the separation of \(\tilde{\gamma}\) into \(\tilde{\gamma}^{+}\) and \(\tilde{\gamma}^{-}\).
The characterization of the explored area is done by calculating the winding numbers \(\eta(\tilde{\gamma}^{+},\mathbf{p})\) and \(\eta(\tilde{\gamma}^{-},\mathbf{p})\) for all \(\mathbf{p}\) inside the area of interest. The algorithm proposed in Section IV is used for this purpose. In Figure 26 we can see the resulting paving; uncertain boxes, surrounding the contours \(\tilde{\gamma}^{+}\) and \(\tilde{\gamma}^{-}\), are represented in black. The uncertain winding number value for each of these boxes can also be determined with the proposed algorithm; in Figure 27, we give an overview of the classification of these boxes for a part of the mission.

Fig. 23: Estimated robot’s trajectory \(\tilde{\mathbf{x}}\) without uncertainty. The robot is represented at its final pose at the end of the mission.
Then, if we take into consideration the uncertainty in the sensor measurements, propagated through integration during pose estimation, we obtain \([\mathbf{x}]\) (Figure 28). We represent the uncertain pose by a guaranteed envelope of the ground truth \(\mathbf{x}^{*}\) using a box-valued function called a tube in the interval-analysis literature. The sonar's contour \([\gamma]\) will also be uncertain and represented by a tube, as displayed in Figure 29.
Fig. 26: Result of the SIVIA algorithm for the classification of the explored area. Boxes in black have an uncertain coverage measure value.

Fig. 27: Coverage measure for boxes that intersect the sensor’s contour.

Fig. 28: The inclusion function \([\mathbf{x}]\).

Fig. 29: \([\gamma]\).

In the considered scenario, some self-intersections of \([\gamma]\) do not respect the conditions established by our algorithm, notably the non-collinearity condition, which ensures that the environment is divided into four regions around the self-intersection so that the Alexander rules can be applied for numbering. As a result, the problem at hand cannot be solved directly with the proposed method. We nevertheless apply our algorithm around one uncertain self-intersection in \([\gamma]\) that respects this limitation, in order to exemplify the extension of the Alexander algorithm to uncertain curves, as presented in Figure 21. The result is illustrated in Figure 30. Note that the method presented in this paper can still be used to characterize the whole environment in this situation: the mission must be divided into multiple sub-missions, along the time of exploration, each of which individually respects the required constraints.
## VI Conclusion
This article has extended the link between the topological degree and the line-sweep exploration problem, allowing for a characterization of the area explored by a mobile robot in a two-dimensional plane. An interval analysis-based algorithm for computing the winding number for all the points inside a set has also been proposed; its efficiency and scalability make it suitable for deployment on resource-constrained robotic platforms. A real-world experiment has shown that the proposed algorithm consistently produces reliable characterizations of the explored area, but it has also shown the limitations of the method that should be addressed in future work. Other future research directions may involve extending the algorithm to three-dimensional environments and to exploration sensors with a two-dimensional visible area. Furthermore, the algorithm's applicability in collaborative multi-robot systems and its integration with simultaneous localization and mapping (SLAM) techniques could be explored. For the latter, we could imagine a scenario where the coverage measure is used to reduce the exteroceptive data that has to be compared to find possible feature matchings, thereby reducing the complexity of SLAM algorithms. Finally, we will examine the link between uncertain topological degrees and methods based on persistent homology, as in e.g. [7].
## Acknowledgments
We acknowledge the support of the "Engineering of Complex Industrial Systems" Chair Ecole Polytechnique-ENSTA Paris-Telecom Paris, partially funded by DGA/AID, Naval Group, Thales and Dassault Aviation.
|
2309.13417 | A Review on Practical Challenges of Aerial Quantum Communication | The increasing demand for the realization of global-scale quantum
communication services necessitates critical investigation for a practical
quantum secure communication network that relies on full-time all-location
coverage. In this direction, the non-terrestrial quantum key distribution is
expected to play an important role in providing agility, maneuverability, relay
link, on-demand network, and last-mile coverage. In this work, we have
summarized the research and development that has happened until now in the
domain of quantum communication using non-terrestrial platforms with a specific
focus on the associated challenges and the relevant models. Further, to extend
the analysis beyond the existing know-how, a hybrid model involving the
features of Vasylyev et al. model and Liorni et al. model is introduced here.
The hybrid model entails us adapting a spherical beam to an elliptic beam
approximation and effectively capturing the characteristics of transmittance in
densely humid weather conditions and at low altitudes. Further, to understand
the potential impact of the weather conditions of a region on atmospheric
attenuation, as an example the average monthly visibility of Pune city was
analyzed for the years 2021 and 2022. In addition, a simulation of a generic
model is performed using a software-defined network paradigm where quantum
teleportation is simulated between distant parties using a swarm of drones in
NetSquid. | Umang Dubey, Prathamesh Bhole, Arindam Dutta, Dibya Prakash Behera, Vethonulu Losu, Guru Satya Dattatreya Pandeeti, Abhir Raj Metkar, Anindita Banerjee, Anirban Pathak | 2023-09-23T16:03:23Z | http://arxiv.org/abs/2309.13417v1 | # A Review on Practical Challenges of
###### Abstract
The increasing demand for the realization of global-scale quantum communication services necessitates a critical investigation of a practical quantum secure communication network that relies on full-time all-location coverage. In this direction, non-terrestrial quantum key distribution is expected to play an important role in providing agility, maneuverability, relay links, on-demand networks, and last-mile coverage. In this work, we have summarized the research and development that has happened until now in the domain of quantum communication using non-terrestrial platforms, with a specific focus on the associated challenges and the relevant models. Further, to extend the analysis beyond the existing know-how, a hybrid model involving the features of Vasylyev _et al._'s model and Liorni _et al._'s model is introduced here. The hybrid model entails adapting a spherical beam to an elliptic beam approximation and effectively captures the characteristics of transmittance in densely humid weather conditions and at low altitudes. Further, to understand the potential impact of the weather conditions of a region on atmospheric attenuation, as an example, the average monthly visibility of Pune city was analyzed for the years 2021 and 2022. In addition, a simulation of a generic model is performed using a software-defined network paradigm where quantum teleportation is simulated between distant parties using a swarm of drones in NetSquid.
Quantum Key Distribution · Modelling · Aerial Quantum Communication · Drone-based QKD · Acquisition-Pointing and Tracking (APT) · Atmospheric Turbulence · Quantum Software Defined Networking · Free-space QKD.
## 1 Introduction
Quantum communication offers a fundamentally secure way to establish long-distance communication channels, making it highly relevant for critical applications where traditional encryption methods may be vulnerable to future quantum attacks. Quantum communication has many facets, the two most important being secure quantum communication and teleportation. Both are unique in some sense: teleportation has no classical analog, and quantum cryptography can be unconditionally secure, a feature classical cryptography can never achieve.
Quantum key distribution (QKD) is one of the cornerstones of quantum cryptography. It is a method of exchanging symmetric keys among parties by leveraging the principles of quantum mechanics to ensure provable security against adversaries. Fiber and free space are the most commonly used transmission media for QKD. However, there are several challenges in establishing practical and secure networks. These challenges include device imperfections, such as detector noise and finite polarization extinction ratio, and signal loss in the transmission medium. In fiber-based QKD, the losses increase significantly with distance, making it unfeasible over larger geographical areas. Free-space QKD offers the advantage of extended coverage and flexibility but is susceptible to losses caused by atmospheric
turbulence, fog, and other environmental factors in the communication channel [1; 2]. Satellite-based QKD is considered a potential candidate for long-distance communication; however, along with the free-space propagation challenges, it faces a limited operational time window, non-agility, and higher infrastructural costs. These factors collectively impede achieving higher key rates in satellite-based QKD systems. However, to realize a practical quantum secure communication network that would ideally provide full-time all-location coverage, all the modes of transmission need to function in an integrated fashion. Here, the utilization of aerial platforms [3] may offer a highly flexible, cost-effective, and re-configurable approach for expanding the reach of quantum communications across time and space. In Fig. 1, we have illustrated the concept of aerial quantum communication, with a hierarchical quantum network operating in different atmospheric layers. The deployment of aerial quantum nodes such as drones, high-altitude platforms (HAPs), hot-air balloons, unmanned aerial vehicles (UAVs), and aircraft can provide temporary relays. They can also act as intermediate mobile nodes between terrestrial ground stations and satellites, and can resolve the last-mile quantum key exchange challenge for inner-city or field networks owing to their rapid deployment capabilities. Moreover, at higher altitudes, low-velocity aircraft can provide longer link durations and broader transmission coverage.
Present-day drones, or UAVs, encompass a wide spectrum of capabilities, spanning take-off weights ranging from a few grams to several tons. They can operate at cruising altitudes that vary from a few meters above the ground to altitudes exceeding 20 kilometers. Furthermore, their flight duration can extend up to 25 days. Given these recent advancements, it is natural to consider such UAVs for establishing mobile quantum networks (QNs), enabling on-demand and real-time coverage across diverse spatial and temporal scales. This will enable quantum communication [32] over distances of kilometers (local-area networks) to hundreds of kilometers (wide-area networks). This approach represents a flexible and economically viable means of expanding the reach of secure communication while delivering real-time coverage as needed.
Several works have been reported in this area, which include the air-to-ground QKD demonstration using the Dornier-228 aircraft by Nauerth _et al._[19], the downlink QKD demonstration using a hot-air balloon by Wang _et al._[20], the basis detection and compensation experiment using the Z-9 helicopter by Zhang _et al._[22], the free-space QKD based on a moving pick-up truck by Bourgoin _et al._[23], the uplink QKD demonstration using the Twin Otter research aircraft by Pugh _et al._[26], the drone-based QKD test using the DJI S1000+ octocopter by Hill _et al._[33], and drone-based entanglement distribution using UAVs by Liu _et al._[34; 32]. The work by Liu _et al._ laid the foundations for establishing re-configurable mobile QNs. Recently, drone-based QKD with an average secure key rate larger than 8 kHz, using the decoy-state BB84 protocol with polarization encoding, was demonstrated [29]. There have also been a few demonstrations of satellite QKD, including a B92 protocol implementation [25] using SOCRATES (Space Optical Communications Research Advanced Technology Satellite) and a 600 km DS-QKD implementation [21] using the QEYSSAT microsatellite. In Table 1, we have reported the developments in aerial quantum communication to date.
Considering that aerial QKD is emerging as a potential candidate for the efficient implementation of a practical secure quantum communication network, it is pertinent to address the implementation challenges and their impact on the performance of aerial QKD systems. Consequently, in Section 2, the technological challenges are presented in detail.
Figure 1: (Color online) Concept of aerial quantum communication [3]
\begin{table}
\begin{tabular}{l l l l l l l l} \hline
**Year** & **Distance** & **Secure** & \(\lambda\) & **Pulse repe-** & **QKD** & **QBER** & **Demonstration** \\ & (km) & **key rate** & (nm) & **-tition rate** & **protocol** & **QBER** & **Demonstration** \\ \hline
1989 & 30cm & - & - & 403 bits & - & 66 bits & On table at IBM [4] \\ \hline
1992 & 32cm & - & - & 217 bits & - & 2-4\% & Free air optical path [5] \\ \hline
1997 & 0.205 & 50 Hz & 772 & - & B92 & 1-6\% & Over indoor paths [6] \\ \hline
1998 & \(\sim 1\) & 3.5-45 KHz & 772 & 10 MHz & B92 & 1.5 \% (D) & Los Alamos (D) \\ & & & & & 2.1\% (N) & National Laboratory (N) [7] \\ \hline
2002 & 9.81 & 50.78 Kb (D) & 772 & - & BB84 & 5\% (D) & Los Alamos Ski Club, \\ & & 118.06 Kb (N) & & & 2.1\% (N) & The National Forest Service [8] \\ \hline
2002 & 23.4 & 1.5-2 Kbps & - & - & BB84 & 5\% & Tx-Zugspitze, South Germany [9] \\ & & & & & & Rx- Mountain of Karwendelspitzer \\ \hline
2004 & 0.73 & 1 Mbps & 845 & 250 ps & B92 & 1.1\% & Free-space [10] \\ \hline
2004 & 13 & 10 bps & 702 & - & BB84 & 5.83\% & Tx - Dashu Mountain \\ & & & & & & & Hefei of China (elevation- 281 m) \\ & & & & & & & Alice-West Campus of USTC \\ & & & & & & & Bob-Feixi of Hefei [11] \\ \hline
2006 & 144 & 417 bits & 710 & 249 MHz & BB84 & 4.8\% & La Palma and Tenerife [12] \\ \hline
2006 & 1.5 & 850 bps & 404 & - & BB84 for & 5.4\% & Free-space [13] \\ & & & & & pol. ent. p. & & \\ \hline
2006 & 0.48 & 50 Kbps & 850 & - & BB84 & 3-5\% & Free space, Munich [14] \\ \hline
2007 & 144 & 12.8, 42 bps & 850 & 10 MHz & DS BB84 & 6.48\% & La Palma and Tenerife [15] \\ \hline
2008 & 1.575 & 85 bps & 815 & - & BBM92 & 4.92\% & Free-space [16] \\ \hline
2008 & \(\sim 1.5\) & 300 bps & 407 & - & Modified E91 & \(\sim 3\%\) & Free-space [17] \\ & & & -810 & & & & \\ \hline
2010 & 1.305 & 2.7 Kbps & 404 & - & BBM92 & 2.48 & Free-space [18] \\ \hline
2013 & 20 & 7.9 bps & 850 & 10 MHz & BB84 & 4.8\% & Dornier 228 turboprop aircraft \\ & & & & & & & and the optical ground station [19] \\ \hline
2013 & \(\sim 96\) & 159.4 bps (MP) & 850 & 100 MHz & DS & 4.04\% & MP: Over a turntable \\ & & & 48 bps (FP) & & & & FP: Hot-air balloon [20] \\ \hline
2014 & 600 & 100 Kb & - & 76 MHz & DS & 4.3-5.51\% & QEYSSAT- 600 km \\ & & & & & & & altitude microsatellite [21] \\ \hline
2014 & 2.5-7.5 & - & 850 & 1 MHz & BB84 & - & Tx: Helicopter (100 kmph) \\ & & & & & & Rx: Top floor of a building \\ & & & & & & & in an airport [22] \\ \hline
2015 & \(\sim 0.650\) & 40 bps & 532, & 80 MHz & DS BB84 & 6.16\% & Pickup truck \\ & & & & & & traveling at 33 kmph \\ & & & 1550 & & & & angular speed [23] \\ \hline
2017 & 1200 & 1.1 Kbps & 850 & 100 MHz & DS BB84 & 1-3\% & Micius- 635 kg satellite [24] \\ \hline
2017 & 802 & \(\sim 10\)-100 bps & 800 & 10 MHz & B92 & \(<5\%\) & SOCRATES- 50 kg \\ & & & & & & microsatellite [25] \\ \hline
2017 & 3-10 & 868 Kb & 785 & 400 MHz & DS BB84 & 3-5\% & Twin Otter- research aircraft [26] \\ \hline
2017 & - & - & 650 & 500 KHz & DS BB84 & - & On table (towards DJI S1000+ \\ & & & & & & octocopter QKD) [27] \\ \hline
2021 & 0-0.04 & - & - & - & BB84 & \(\sim 50\%\) & Amov-lab’s Z410 drone \\ & & & & & & with T-engine 2216 \\ & & & & & & & and Pixhawk flight control QKD \\
2022 & 30 cm & 4 - 15.3 kbps & 850 & 100 MHz & BB84 & 2.4\% & Hand-held sender [28] \\ \hline
2023 & 0.2 & 8 KHz & 850 & 50 MHz & BB84 & 2.22-2.32\% & Drone-QKD [29] \\ \hline
2021- & \(10^{a}\) & - & 850 & 50 MHz & 3 states & 2.22-2.32\% & a. Drone-Drone: DJI S1000+ \\ & 2023 & & & & & & drone to Alta 8 Pro drone [27; 30] \\ & & & & & & & b. Drone-Car [30; 31] \\ & & & & & & & c. Car-Car [30; 31] \\ \hline \end{tabular}
\end{table}
Table 1: Developments towards aerial quantum communication around the world, where \(\lambda\): Wavelength, QBER: Quantum bit error rate, D: Day, N: Night, DS: Decoy state, pol. ent. p.: polarization-entangled photons, MP: Moving platform, FP: Floating platform, Tx: Transmitter, Rx: Receiver
In Section 3, we introduce a hybrid model for low-altitude communication that takes into account real-world scenarios. In Section 4, we discuss the link configurations, budgeting, and margin in detail, along with time synchronization. Section 5 presents the simulation of quantum teleportation using a swarm of drones based on quantum software-defined networking (QSDN) oriented architecture. Finally, the paper is concluded in Section 6.
## 2 Technological challenges
There are several challenges associated with the implementation of aerial quantum communication. One of the major challenges in achieving long-distance aerial quantum communication is the loss of signal in the transmission medium, which can arise from various physical causes. Before we describe them, we may note that in an optical fiber the losses increase exponentially with the length of the fiber and are characterized by the attenuation coefficient (\(\beta_{\mathrm{a}}\)), expressed in dB/km. It depends on the fiber material, manufacturing tolerances, and wavelength: about 2 dB/km at 800 nm, 0.35 dB/km at 1310 nm, and 0.2 dB/km at 1550 nm. Secure quantum communication is usually done through telecom-grade optical fiber using light of wavelength around 1550 nm, where the attenuation is minimum at \(\sim\)0.2 dB/km. It can be slightly reduced further by using ultra-low-loss fiber with a nominal attenuation coefficient of 0.158 dB/km, which can increase the distance for quantum key distribution to some extent. However, to perform secure quantum communication beyond a few hundred km, one is required to use a free-space route. We may also note that fiber-based optical communication using light of wavelength below 800 nm is impractical, as the attenuation due to Rayleigh scattering increases considerably. Here appears an interesting point: there exists a high-transmission window for free-space communication at around 770 nm, where the atmosphere is weakly dispersive and essentially non-birefringent. This provides a great advantage to free-space communication. However, free-space transmission has drawbacks too; in particular, its performance depends on the atmospheric conditions. For example, the transmission of the signal through a turbulent medium may lead to arrival time jitter, beam wander, beam pointing error, beam divergence, etc. In this section, we systematically discuss the technological challenges that arise due to these issues, with a specific focus on how to model the effect of atmospheric conditions. To begin with, we discuss the effect of atmospheric turbulence.
### Atmospheric turbulence
Air turbulence [35] in the atmosphere plays a significant role in free-space optical (FSO) communication as it can affect the operating laser beam, leading to beam divergence, beam wandering, scintillation, etc. Several efforts have been made to mathematically describe the effect of atmospheric turbulence on the FSO [36]. One such effort led to the development of energy cascade theory [37].
The energy cascade theory is a fundamental concept in the study of turbulence in the Earth's atmosphere. It explains how energy is transferred from large-scale turbulent motion to smaller and smaller scales. It states that the outer-scale eddies \(L_{o}\) and inner-scale eddies \(l_{o}\) form the bounds of an inertial sub-range. The eddies in the inertial range are statistically homogeneous and isotropic. Within this range, large eddies break into smaller eddies, transferring energy. This process carries on until the inner scale \(l_{o}\) is reached, after which energy dissipates through viscosity. In the 1940s, Andrey Kolmogorov [38] obtained an elegant expression for the wavenumber spectrum (now known as the Kolmogorov spectrum) in the turbulence inertial subrange. The Kolmogorov spectrum describes the refractive index fluctuations as
\[\phi_{n}(k)=0.033\,C_{n}^{2}\,k^{-11/3},\qquad\frac{1}{L_{o}}\ll k\ll\frac{1}{l_{o}} \tag{1}\]
where, \(k\) is the wavenumber, and \(C_{n}^{2}\) is the refractive index structure parameter.
The refractive index variations arise due to changes in temperature and pressure with varying altitude. The refractive index structure constant, \(C_{n}^{2}\), is a parameter used to characterize variations in the refractive index of air and thus the strength of air turbulence. Its values range from \(10^{-17}\,\mathrm{m}^{-2/3}\) to \(10^{-13}\,\mathrm{m}^{-2/3}\), describing weak to strong turbulence, respectively [39]. It serves as a valuable tool for assessing both the scintillation index and the Rytov variance.
Certain models offer a means to depict the impact of atmospheric turbulence on \(C_{n}^{2}\)[40]. Among these, the Hufnagel-Valley Boundary (HVB) model [41] is used for long-range propagation. The model incorporates various on-site conditions such as wind speed, isoplanatic angle, and altitude. Using the HVB model, \(C_{n}^{2}\) was plotted for different wind velocities, as shown in Fig. 2a; higher wind velocities yield higher \(C_{n}^{2}\) values, depicting a more turbulent atmosphere. Fried [42] proposed another model for determining \(C_{n}^{2}\), which is valid only for short-range propagation. For the Fried model, \(C_{n}^{2}\) was plotted using turbulence strength parameter \(K_{o}\) values for strong, moderately strong, and moderate conditions, as shown in Fig. 2b; \(C_{n}^{2}\) increases with increasing \(K_{o}\), indicating a more turbulent environment. Further, an alternative model used for describing the refractive index structure constant at low altitudes is the SLC (Submarine Laser Communication) model, whose profile is shown in Fig. 2c.
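As an illustration, the sketch below evaluates the standard Hufnagel–Valley profile in Python (the HVB variant used for Fig. 2a additionally incorporates site-specific boundary terms; the parameter values shown are the common HV-5/7 defaults, not those of our simulations):

```python
import numpy as np

def cn2_hv(h, v=21.0, A=1.7e-14):
    """Hufnagel-Valley profile of Cn^2 [m^(-2/3)] at altitude h [m],
    with rms wind speed v [m/s] and ground turbulence level A [m^(-2/3)].
    v = 21 m/s and A = 1.7e-14 reproduce the widely used HV-5/7 profile."""
    return (0.00594 * (v / 27.0) ** 2 * (1e-5 * h) ** 10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)
            + A * np.exp(-h / 100.0))

# e.g. the profile between 10 m and 20 km, as in Fig. 2a
h = np.logspace(1, np.log10(2e4), 200)
profile = cn2_hv(h)
```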
#### 2.1.1 Scintillation and beam wandering
Atmospheric turbulence affects the propagation of the optical beams leading to wavefront distortions. It can cause fluctuations in the intensity of the beam, such that we obtain speckled patterns on the beam wavefront at the receiver end. This phenomenon is known as scintillation. It occurs because the turbulent atmosphere causes different parts of the beam to experience varying refractive index gradients. Scintillation causes loss in signal-to-noise ratio and deep signal fades. Aperture averaging [44] is one of the techniques used to mitigate scintillation.
Beam wandering arises as a result of two distinct factors: atmospheric turbulence along the path of the beam and random errors in the transmitter's pointing mechanism. These two factors operate independently, and their effects accumulate over the course of propagation. When transmitting an optical signal through free space, one observes the random displacement of the instantaneous centroid of the signal, often referred to as the "hot spot" or point of maximum irradiance. This quivering, which is assumed to follow a Gaussian distribution with variance \(\sigma^{2}\), is commonly known as beam or centroid wandering. In essence, this wandering phenomenon is a consequence of both the pointing error, \(\sigma_{pe}^{2}\), stemming from Gaussian jitter and off-target tracking, and atmospheric turbulence, \(\sigma_{tb}^{2}\). These two effects are mutually independent, and their combined effect results in the total wandering variance \(\sigma^{2}=\sigma_{pe}^{2}+\sigma_{tb}^{2}\)[45]. The impact of \(\sigma_{pe}^{2}\) and \(\sigma_{tb}^{2}\) varies with the weather conditions, the wavelength used, the beam size and shape, etc. In Fig. 3, the variance of the beam centroid wandering resulting from turbulence (\(\sigma_{tb}^{2}\)), the pointing error (\(\sigma_{pe}^{2}\)), and the long-term beam waist (\(w_{lt}^{2}\)) are plotted for \(\lambda=800\) nm and an initial collimated-beam radius \(w_{0}=5\) cm. It is observed that \(w_{lt}^{2}\gg\sigma_{tb}^{2}\gg\sigma_{pe}^{2}\) for all distances, and that \(w_{lt}^{2}\), \(\sigma_{tb}^{2}\), and \(\sigma_{pe}^{2}\) grow logarithmically with increasing distance. The other parameters are the outer scale of turbulence \(L_{0}=1\) m and \(C_{n}^{2}=1.28\times 10^{-14}\,\mathrm{m}^{-2/3}\) (night-time operation).
#### 2.1.2 Atmospheric attenuation
Signal loss and link failure are caused by atmospheric attenuation due to absorption, scattering, and scintillation. All these effects vary with time and depend on the current local conditions, weather, and distance. The atmospheric attenuation \((\tau)\) in dB for distance \(L\) (km) and \(\beta_{\mathrm{a}}\) attenuation coefficient, can be given by:
\[\tau=4.3429\beta_{\mathrm{a}}L \tag{2}\]
The absorption loss is mainly due to carbon dioxide molecules and water particles, whereas the scattering loss is due to snow, fog, clouds, and rain present in the atmosphere. For conditions ranging from clear weather to dense fog, the scattering loss varies from 0.21 dB/km to 0.84 dB/km [46]. It can be characterized as follows:
**Attenuation coefficient due to fog and rain:** Attenuation due to scattering of the optical signal depends on the visibility range of the link, and the visibility varies with the weather conditions. The attenuation factors for weather conditions such as fog and rain are given by:
\[\beta_{\mathrm{fog}}=\left(\frac{3.91}{V}\right)\left(\frac{\lambda}{550} \right)^{-p} \tag{3}\]
Figure 2: Plot for structure parameter constant \(C_{n}^{2}\) with altitude (a) using HVB model with varying velocities (b) using Fried model for moderate, moderately strong, and strong conditions and (c) using SLC model.
\[\beta_{\rm rain}=\left(\frac{2.8}{V}\right) \tag{4}\]
where, \(V\) (km) is the visibility and \(p\) is the size distribution coefficient of scattering.
Attenuation for thick fog, light fog, and haze conditions can be modeled by the Kim [47] or Kruse [48] model. The Kim model is able to describe attenuation for visibility of less than 1 km. For thick fog conditions, where visibility is under 0.5 km, \(p=0\); thus, the attenuation is the same for all operating wavelengths. As visibility increases, the attenuation reduces overall, and higher wavelengths experience slightly less attenuation than lower ones. See Fig. 4a and Fig. 4b to visualize the effect of fog.
Size distribution, \(p\) are chosen depending on the visibility range as defined in the Kruse and Kim models. According to the Kim model,
\[p=\begin{cases}1.6&\text{when $V>50$ km}\\ 1.3&\text{when $6$ km}<V<50$ km}\\ 0.16\,V+0.34&\text{when $1$ km}<V<6$ km}\\ V-0.5&\text{when $0.5$ km}<V<1$ km}\\ 0&\text{when $V<0.5$ km}.\end{cases} \tag{5}\]
According to the Kruse model,
\[p=\begin{cases}1.6&\text{when $V>50$ km}\\ 1.3&\text{when $6$ km}<V<50$ km}\\ 0.585\ V^{1/3}&\text{when $V<6$ km}.\end{cases} \tag{6}\]
Figure 4: Specific attenuation vs visibility using wavelengths 850 nm, 950 nm and 1550 nm which are frequently used in FSO communication for (a) thick fog condition and (b) light fog and haze condition.
Figure 3: (Color online) Variance \(\sigma_{pe}^{2},\sigma_{tb}^{2}\) and \(w_{lt}^{2}\) for varying distances.
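For a quick numerical check, the sketch below combines Eqs. (2), (3), and (5); following Eq. (2), \(\beta_{\rm fog}\) is treated as a coefficient in km\({}^{-1}\) and converted to dB/km (function names are ours):

```python
def kim_p(V):
    """Size-distribution coefficient p of the Kim model, Eq. (5); V in km."""
    if V > 50:   return 1.6
    if V > 6:    return 1.3
    if V > 1:    return 0.16 * V + 0.34
    if V > 0.5:  return V - 0.5
    return 0.0

def fog_attenuation_db_per_km(V, lam_nm):
    """Specific attenuation [dB/km] for visibility V [km] at wavelength
    lam_nm [nm]: beta_fog of Eq. (3) scaled by the 4.3429 factor of Eq. (2)."""
    beta = (3.91 / V) * (lam_nm / 550.0) ** (-kim_p(V))
    return 4.3429 * beta

# e.g. haze with 2 km visibility at the three wavelengths of Fig. 4
atten = {lam: fog_attenuation_db_per_km(2.0, lam) for lam in (850, 950, 1550)}
```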
We have investigated the average visibility of Pune city over the last two years using data collected from the India Meteorological Department (IMD) (refer to Fig. 5). Pune city is chosen just as an example, as we plan to perform experimental aerial quantum communication in Pune. We observe that, owing to the changing weather conditions of any region, there are variations in the average visibility of the atmosphere. Therefore, the performance of any aerial quantum communication system depends on the date and time of operation. Additionally, the integration of weather monitoring systems and predictive algorithms can aid in optimizing system performance by adjusting parameters in response to changing weather conditions. Overall, understanding and mitigating the effects of weather and visibility is crucial for reliable aerial quantum communication.
**Atmospheric extinction**: An additional significant source of signal loss during the free-space transmission of an optical beam is atmospheric extinction. This phenomenon results from the combined impact of aerosol absorption and Mie/Rayleigh scattering. When we consider free-space communication at a constant altitude \(\overline{h}\), this phenomenon can be quantified using the straightforward Beer-Lambert equation, \(\eta_{\mathrm{atm}}\left(\overline{h}\right)=e^{-\alpha\left(\overline{h} \right)z}\), where, \(\alpha\left(\overline{h}\right)\) is the extinction factor which varies depending on both the altitude and the wavelength of the signal [49]. Neglecting refraction, the atmospheric transmissivity can be expressed as
\[\eta_{\mathrm{atm}}\left(\overline{h},\phi\right)=\exp\left\{-\int_{0}^{z \left(\overline{h},\phi\right)}dx\,\alpha\left[\overline{h}\left(x,\phi \right)\right]\right\}, \tag{7}\]
while taking into consideration a generic zenith angle (\(\phi\)).
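A short numerical sketch of Eq. (7), assuming a flat-Earth slant path (a reasonable simplification at drone altitudes) and a user-supplied extinction profile \(\alpha(h)\); the exponential profile in the example is purely illustrative:

```python
import numpy as np

def eta_atm(alpha, h_max, phi, n_steps=1000):
    """Atmospheric transmissivity of Eq. (7): alpha(h) is the altitude-
    dependent extinction factor [1/m], h_max the platform altitude [m] and
    phi the zenith angle [rad]; the path integral is evaluated with a
    trapezoidal rule along the slant path."""
    x = np.linspace(0.0, h_max / np.cos(phi), n_steps)   # path coordinate
    a = alpha(x * np.cos(phi))                           # altitude = x cos(phi)
    integral = 0.5 * np.sum((a[1:] + a[:-1]) * np.diff(x))
    return float(np.exp(-integral))

# illustrative: exponentially decaying extinction with a 6 km scale height
eta = eta_atm(lambda h: 5e-5 * np.exp(-h / 6000.0), h_max=220.0, phi=np.pi / 6)
```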
**Atmospheric transmittance:** Atmospheric transmittance is a measure of the amount of incoming electromagnetic radiation (such as visible light, infrared, or microwave radiation) that passes through the Earth's atmosphere without being absorbed, scattered, or otherwise attenuated. Different wavelengths of electromagnetic radiation are affected differently as they pass through Earth's atmosphere. The variation in transmittance with wavelength is primarily due to the absorption and scattering properties of the atmospheric constituents, like gas molecules, aerosols, etc., at different wavelengths.
In Fig. 6, we present a simulation of the atmospheric transmittance of a 1 km FSO link as a function of wavelength along the zenith for the downlink configuration, carried out using the MODTRAN software, developed by Spectral Sciences Inc. (SSI) and the Air Force Research Laboratory of the United States of America (USA), for an urban location with the tropical atmospheric model and 9 km visibility.
The results provide an indication for the identification of the optimum wavelengths necessary for the free-space link establishment like the APT coarse and fine-tracking laser beams, entangled pair distribution, and time synchronization.
#### 2.1.3 Beam divergence loss
One of the major sources of loss in establishing a point-to-point link, with accuracy for single mode fiber (SMF) coupling (where the SMFs typically have the mode field diameter of around 5 \(\mu m\)), is the diffraction-induced beam broadening.
An optical beam propagating through the atmosphere spreads out owing to diffraction, so a receiver with a narrow field of view (FOV) is unable to collect part of the transmitted power, resulting in beam divergence loss, also known as geometric loss.
Figure 5: (Color online) Comparison of average monthly visibility for Pune city for the years 2021 and 2022.
One may consider a Gaussian beam as a quasi-monochromatic optical mode source with wavelength \(\lambda\) and employ it for free-space quantum communication. If this beam travels a distance \(z\), due to diffraction its spot size \(w_{D}\) becomes:
\[w_{D}=w_{0}\sqrt{\left(1-\frac{z}{R_{0}}\right)^{2}+\left(\frac{z}{z_{R}} \right)^{2}} \tag{8}\]
where, the initial beam spot size is \(w_{0}\) (smaller than the aperture of the transmitter), radius of curvature is \(R_{0}\), and (\(z_{R}=\frac{\pi w_{0}^{2}}{\lambda}\)) is the Rayleigh length1. Only a fraction of the initial beam is detectable and this fraction is determined by the diffraction-induced transmissivity,
Footnote 1: For collimated Gaussian beam \((R_{0}=\infty)\), and consequently, the spot size can be considered as \(w_{D}=w_{0}\sqrt{1+\left(\frac{z}{z_{R}}\right)^{2}}\).
\[\eta_{D}(z)=1-e^{-\frac{2a_{r}^{2}}{w_{D}^{2}}} \tag{9}\]
which may be approximated as,
\[\eta_{D}\simeq\eta_{D}^{\text{far}}:=\frac{2a_{r}^{2}}{w_{D}^{2}},\qquad\eta_{D}^{\text{far}}\ll 1 \tag{10}\]
where \(a_{r}\) is the aperture of the receiving telescope and \(w_{D}\) spot size of the beam.
Employing the PLOB (Pirandola-Laurenza-Ottaviani-Banchi) bound [50] with this transmissivity, we can estimate an upper bound on the number of secret bits that can be distributed by a QKD protocol across a free-space communication channel,
\[\mathcal{U}\left(z\right)=\frac{2}{\ln 2}\frac{a_{r}^{2}}{w_{D}^{2}} \tag{11}\]
bits per use.
Hence, it is important to choose optimum transmitter and receiver optics aperture areas to obtain optimal beam diameters and low pointing errors. Using Eq. (12) below, we simulated the beam divergence loss \(L\) (dB) as a function of the diffraction-limited link distance within a local area network, for small transmitting and receiving optics aperture diameters (refer to Fig. 7):
\[L\text{(dB)}=-10\left[\left(2\log\frac{4}{\pi}\right)+\log\left(\frac{A_{t}A_ {r}}{\lambda^{2}z^{2}}\right)\right] \tag{12}\]
where, \(A_{t}\): aperture area of the transmitter optics, and \(A_{r}\): aperture area of the receiver optics.
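The following Python sketch collects Eqs. (8)–(12) for a collimated beam (\(R_{0}=\infty\)); function names are ours:

```python
import numpy as np

def spot_size(w0, z, lam):
    """Beam radius after distance z for a collimated Gaussian beam, Eq. (8)."""
    zR = np.pi * w0 ** 2 / lam                # Rayleigh length
    return w0 * np.sqrt(1.0 + (z / zR) ** 2)

def eta_diffraction(a_r, wD):
    """Diffraction-induced transmissivity through aperture a_r, Eq. (9)."""
    return 1.0 - np.exp(-2.0 * a_r ** 2 / wD ** 2)

def plob_bound(a_r, wD):
    """Far-field PLOB upper bound of Eq. (11), in bits per channel use."""
    return (2.0 / np.log(2.0)) * a_r ** 2 / wD ** 2

def divergence_loss_db(D_t, D_r, lam, z):
    """Geometric loss of Eq. (12) from the transmitter/receiver diameters."""
    A_t, A_r = np.pi * (D_t / 2) ** 2, np.pi * (D_r / 2) ** 2
    return -10.0 * (2.0 * np.log10(4.0 / np.pi)
                    + np.log10(A_t * A_r / (lam * z) ** 2))
```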
Similarly, the beam divergence loss as a function of the transmitter and receiver optics diameter at 500 m link distance is obtained (see Fig. 8). These results can aid in the identification of the proper transmitter and receiver optics aperture areas for the APT units to achieve longer link coverage, low pointing errors, and low diffraction-induced beam divergence loss.
Figure 6: Simulated atmospheric transmittance for zenith with different wavelengths.
It can be observed that transmitter and receiver optics diameters of up to a few centimeters, which yield Rayleigh lengths of up to a few hundred meters with low beam divergence loss, are sufficient for free-space communication within a local-area mobile network. Further, an increase in the transmitting optics aperture area effectively reduces the transmitter beamwidth, delivering the signal with higher intensity and hence reducing the beam divergence loss. However, it may lead to tight acquisition, pointing, and tracking requirements and will also increase the overall mass and cost of the payload.
Similarly, increasing the receiving aperture area scales up the received signal power and reduces the beam divergence loss. However, it also increases the amount of background noise collected by the receiver. This implies that the effective performance improvement does not always scale linearly with increasing transmitter and receiver optics aperture areas, and an optimum trade-off needs to be made [51]. Also, for a long-distance link, the effects of beam divergence loss can be reduced by exploiting several shorter link segments and using the optical relay method [34], which is feasible especially for drone-based platforms.
The overall transmissivity is the product of three optical transmissivities [45],
\[\eta=\eta_{D}\eta_{eff}\eta_{atm} \tag{13}\]
where \(\eta_{D}\) is the turbulence- or diffraction-induced transmissivity, \(\eta_{eff}\) is the receiver's efficiency, and \(\eta_{atm}\) is the atmospheric transmissivity. The overall transmissivity reduces with increasing altitude, as shown in Fig. 9.
Up to this point, we have delved into the significant and inevitable challenges faced in free-space quantum communication: atmospheric turbulence, scintillation, beam wandering, atmospheric attenuation, and beam divergence loss, all of which we have extensively discussed. In addressing these real-world effects, Vasylyev _et al._ introduced a model utilizing an elliptic beam approximation, which showed strong compatibility with actual experimental data in their influential papers [52; 53]. Liorni _et al._ extended this model for broader application in low Earth orbit (LEO) satellite-based quantum communication [54]. They factored in considerations such as the refractive index structure constant and the density of scattering particles, maintaining consistency with LEO satellite conditions, and evaluated their model under various weather scenarios. Now, our focus shifts to assessing the combined and realistic impact of these factors at lower altitudes, where communication can be facilitated using drones. To do this, we adapt their approach by incorporating the refractive index structure constant applicable to lower altitudes [40; 43] and introduce our hybrid methodology tailored for shorter altitude ranges.
## 3 A hybrid model for low altitude signal transmission
In this section, we present a hybrid model built on the model that exploits the properties of the Gaussian elliptical beam proposed by Vasylyev _et al._[52; 53]. Furthermore, we apply the generalized approach and incorporate day-time and night-time conditions, as introduced by Liorni _et al._ in their seminal paper [54]. Their approach influences the transmittance value significantly, as the transmittance relies on both the beam parameters \(\mathbf{V}\) and the radius of the receiving aperture \(a_{r}\). In order to enhance the readers' grasp of the elliptic beam approximation and its modified version, we provide a concise elucidation of the fundamental theory. A Gaussian beam is projected through a link that traverses both the atmosphere and a vacuum, originating from either an airborne transmitter (drone) or a ground station. This link is distinguished by its non-uniform characteristics. Typically, the varying intensity transmittance of this signal (the received beam) as it passes through a circular aperture of radius \(a_{r}\) in the receiving telescope is formulated as follows (see Refs. [53; 55] for details)
\[\eta=\int_{|\mathbf{\rho}|^{2}\leq a_{r}^{2}}\mathrm{d}^{2}\mathbf{\rho}\left|\mathrm{u}\left(\mathbf{\rho},z\right)\right|^{2} \tag{14}\]
In this context, the function \(u\left(\mathbf{\rho},z\right)\) represents the beam envelope at the receiver plane, which is situated at a distance \(z\) from the transmitter. The quantity \(\left|u\left(\mathbf{\rho},z\right)\right|^{2}\) signifies the normalized intensity concerning the entire \(\mathbf{\rho}\) plane, where \(\mathbf{\rho}\) denotes the position vector within the transverse plane. The vector parameter \(\mathbf{V}\) provides a comprehensive description of the beam's state at the receiver plane (see Fig. 1 in Ref. [52]) and it's described as
\[\mathbf{V}=\left(x_{0},y_{0},W_{1},W_{2},\theta\right), \tag{15}\]
where \(x_{0},y_{0}\), \(W_{1}\), \(W_{2}\) and \(\theta\) represent the coordinates of the beam centroid, the dimensions of the elliptical beam profile (characterized by its principal semi-axes), and the orientation angle of the elliptical beam, respectively. The transmittance is influenced by these beam parameters in conjunction with the radius of the receiving aperture (\(a_{r}\)).
In the context of an elliptical beam's interaction with a circular aperture characterized by a radius denoted as \(a_{r}\), the notion of transmittance is precisely described by Equation (14). The transmittance for this scenario can be articulated as follows [53]
\[\eta\left(x_{0},y_{0},W_{1},W_{2},\theta\right) = \frac{2\,\chi_{\mathrm{ext}}}{\pi W_{1}W_{2}}\int_{0}^{a_{r}}\rho\,\mathrm{d}\rho\int_{0}^{2\pi}\mathrm{d}\varphi\,\mathrm{e}^{-2\mathrm{A}\left(\rho\cos\varphi-\rho_{0}\right)^{2}}\,\mathrm{e}^{-2\mathrm{B}\rho^{2}\sin^{2}\varphi}\,\mathrm{e}^{-2\mathrm{C}\left(\rho\cos\varphi-\rho_{0}\right)\rho\sin\varphi} \tag{16}\]
Here, the symbol \(a_{r}\) signifies the aperture's radius, while \(\rho\) and \(\varphi\) are used to express the polar coordinates of the vector \(\mathbf{\rho}\), we may write \(x=\rho\cos\varphi\) and \(y=\rho\sin\varphi\), and \(x_{0}=\rho_{0}\cos\varphi_{0}\) and \(y_{0}=\rho_{0}\sin\varphi_{0}\), where \(\rho_{0}\) and \(\varphi_{0}\) denote the polar coordinates associated with the vector \(\mathbf{\rho}_{0}\). Additionally, the expressions of the constants are, \(\mathrm{A}=\left(\frac{\cos^{2}\left(\theta-\varphi_{0}\right)}{W_{1}^{2}}+ \frac{\sin^{2}\left(\theta-\varphi_{0}\right)}{W_{2}^{2}}\right),\,\mathrm{B}= \left(\frac{\sin^{2}\left(\theta-\varphi_{0}\right)}{W_{1}^{2}}+\frac{\cos^{2} \left(\theta-\varphi_{0}\right)}{W_{2}^{2}}\right),\) and \(\mathrm{C}=\left(\frac{1}{W_{1}^{2}}-\frac{1}{W_{2}^{2}}\right)\sin 2\left( \theta-\varphi_{0}\right).\) Here, \(\chi_{\mathrm{ext}}\) accounts for the influence of _atmospheric extinction_, which encompasses factors like back-scattering and absorption that occur within the atmosphere [56].
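Equation (16) can be evaluated numerically. The sketch below (our function names, using SciPy) reduces to Eq. (9) in the circular, centred limit (\(W_{1}=W_{2}\), \(\rho_{0}=0\)), which provides a convenient sanity check:

```python
import numpy as np
from scipy import integrate

def transmittance(a_r, rho0, phi0, W1, W2, theta, chi_ext=1.0):
    """Numerical evaluation of Eq. (16): elliptic Gaussian beam with
    semi-axes W1, W2 and orientation theta, centroid at polar position
    (rho0, phi0), through a circular aperture of radius a_r."""
    dt = theta - phi0
    A = np.cos(dt) ** 2 / W1 ** 2 + np.sin(dt) ** 2 / W2 ** 2
    B = np.sin(dt) ** 2 / W1 ** 2 + np.cos(dt) ** 2 / W2 ** 2
    C = (1.0 / W1 ** 2 - 1.0 / W2 ** 2) * np.sin(2.0 * dt)

    def integrand(phi, rho):                   # polar-coordinate integrand
        dx = rho * np.cos(phi) - rho0
        dy = rho * np.sin(phi)
        return rho * np.exp(-2.0 * (A * dx ** 2 + B * dy ** 2 + C * dx * dy))

    val, _ = integrate.dblquad(integrand, 0.0, a_r,
                               lambda _: 0.0, lambda _: 2.0 * np.pi)
    return 2.0 * chi_ext / (np.pi * W1 * W2) * val
```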
With this elliptic beam approximation, one can relate the atmospheric effects in a free-space link to the beam observed at the receiver's end. To make it more applicable to real-life free-space quantum communication, Liorni _et al._ proposed a generalized model [54]. Their model is generalized in the sense that it involves a non-uniform link between a drone and the ground, as described. To calculate the moments of the distributions related to the parameters of the elliptic Gaussian beam, we adopt the same Heaviside function as employed in Liorni's model. We proceed to assess the expressions for the first and second moments of the beam parameters (\(\mathbf{V}\)) by making adaptations to Equations (4) through (9) of Ref. [54], aligning them with the conditions specific to drone-based communication. We assume that the orientation angle \(\theta\) of the elliptical profile follows a uniform distribution within the interval \(\left[0,\frac{\pi}{2}\right]\). In the context of up-links, the mean value and variance of the beam's centroid position are identical in the \(x\) and \(y\) directions: \(\left\langle x_{0}\right\rangle=\left\langle y_{0}\right\rangle=0\) and \(\left\langle x_{0}^{2}\right\rangle=\left\langle y_{0}^{2}\right\rangle=0.419\,\sigma_{R}^{2}w_{D}^{2}\Omega^{-7/6}\), where \(\sigma_{R}^{2}=1.23\,C_{n}^{2}k^{7/6}z^{11/6}\) is the _Rytov variance_, a useful indicator of integrated turbulence strength for extended propagation, and \(\Omega=\frac{k\,w_{D}^{2}}{2z}\) is the Fresnel number, with \(k\) the optical wave number and \(w_{D}\) the beam spot size at the receiver. In the chosen reference frame, \(\left\langle x_{0}\right\rangle=\left\langle y_{0}\right\rangle=0\). The mean and (co)variance of \(W_{i}^{2}\) can be expressed as
\[\left\langle W_{i}^{2}\right\rangle = \frac{w_{D}^{2}}{\Omega^{2}}\left(1+\frac{\pi}{8}\,zn_{0}w_{D}^{2}+2.6\,\sigma_{R}^{2}\Omega^{\frac{5}{6}}\right),\] \[\left\langle\Delta W_{i}^{2}\Delta W_{j}^{2}\right\rangle = (2\delta_{ij}-0.8)\,\frac{w_{D}^{4}}{\Omega^{\frac{7}{6}}}\left(1+\frac{\pi}{8}\,zn_{0}w_{D}^{2}\right)\sigma_{R}^{2},\]
where \(n_{0}\) denotes the density of scattering particles2. Similar expressions hold for down-links: the elliptic beam centroid satisfies \(\left\langle x_{0}\right\rangle=\left\langle y_{0}\right\rangle=0\) and \(\left\langle x_{0}^{2}\right\rangle=\left\langle y_{0}^{2}\right\rangle=\alpha_{p}\,z\), and the semi-axes of the elliptic beam profile satisfy

Footnote 2: To estimate the value of \(n_{0}\), which primarily comprises water droplets, we utilize the atmospheric water vapor content profile. This profile serves as our basis for estimating the density of scattering particles [57; 58].

\[\left\langle W_{i}^{2}\right\rangle = \frac{w_{D}^{2}}{\Omega^{2}}\left(1+\frac{\pi}{24}\,zn_{0}w_{D}^{2}+1.6\,\sigma_{R}^{2}\Omega^{\frac{5}{6}}\right),\] \[\left\langle\Delta W_{i}^{2}\Delta W_{j}^{2}\right\rangle = (2\delta_{ij}-0.8)\,\frac{3}{8}\,\frac{w_{D}^{4}}{\Omega^{\frac{7}{6}}}\left(1+\frac{\pi}{24}\,zn_{0}w_{D}^{2}\right)\sigma_{R}^{2},\]
In this context, the symbol \(\alpha_{p}\approx 2\)\(\mu\)rad denotes the approximate angular pointing error. We then employ the probability distributions of the elliptic beam parameters (Eq. 15) to compute the probability distribution of the transmittance (PDT) via Eq. (16), through a Monte Carlo random-sampling procedure.
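A minimal sketch of this sampling procedure follows. For simplicity, it draws the squared semi-axes from a Gaussian with the moments given above (the original model employs log-normal variables), reuses the transmittance routine sketched after Eq. (16), and discards unphysical samples:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_pdt(n, a_r, var_x0, mean_W2, cov_W2, bins=100):
    """Monte Carlo PDT: draw n realisations of the beam-parameter vector V
    of Eq. (15) -- Gaussian centroid, (approximately) Gaussian squared
    semi-axes, uniform orientation angle -- and histogram the transmittances
    of Eq. (16).  `transmittance` is the routine sketched after Eq. (16)."""
    x0 = rng.normal(0.0, np.sqrt(var_x0), n)
    y0 = rng.normal(0.0, np.sqrt(var_x0), n)
    W2 = rng.multivariate_normal(mean_W2, cov_W2, n)   # (W1^2, W2^2) pairs
    theta = rng.uniform(0.0, np.pi / 2.0, n)
    etas = []
    for i in range(n):
        if np.any(W2[i] <= 0.0):                       # discard unphysical draws
            continue
        rho0, phi0 = np.hypot(x0[i], y0[i]), np.arctan2(y0[i], x0[i])
        etas.append(transmittance(a_r, rho0, phi0,
                                  np.sqrt(W2[i, 0]), np.sqrt(W2[i, 1]),
                                  theta[i]))
    return np.histogram(etas, bins=bins, range=(0.0, 1.0), density=True)
```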
#### 3.0.1 Performance analysis of simulation results
The proposed hybrid approach primarily relies on short-altitude communication, employing Gaussian beam-based quantum communication via drones. To validate the applicability and performance integrity of the proposed model in the context of FSO communication, we need to analyze the probability distribution of the transmittance (PDT) of this model. In our analysis, we appropriately employ both normal and uniform distributions [59] for beam parameters (\(\mathbf{V}\)) and incorporate specific optical values [32] to emulate our model (refer to Table 2).
To generate PDT plots, we draw a substantial number (\(10^{6}\)) of random 5-tuples of beam parameters and round the resulting transmittances to five decimal places, which is well suited for the PDT representation. We present the transmittance performance in various scenarios encompassing both up-link and down-link configurations as well as day and night conditions, at altitudes of \(30\) m and \(220\) m (refer to Fig. 10). Notably, for the down-link configuration, the transmittance probability exhibits similar trends in both day and night conditions (refer to Fig. 10 (a) and 10 (b)). At an altitude of \(30\) m, we observe peak transmittance probability values occurring for transmittance values of about \(0.25\) and \(0.5\). In this scenario, the probability distribution is relatively broad when compared to the \(220\) m altitude scenario. Conversely, at \(220\)
\begin{table}
\begin{tabular}{c c c} \hline Parameter & Value & Description \\ \hline \(w_{D}\) & 1.15 cm & Down-link / up-link \\ \(a_{r}\) & 2.64 cm & Down-link / up-link \\ \(\lambda\) & 810 nm & Wavelength of the signal light \\ \(\beta\) & 0.7 & Parameter in \(\chi_{\text{ext}}(\phi)\) \\ \(\alpha_{p}\) & \(2\times 10^{-6}\) rad & Pointing error \\ \(\overline{h}\) & \(18.5\,\text{m}-240\,\text{m}\) & Altitude of drone \\ \(n_{0}\) & 0.61 m\({}^{-3}\) & Night-time condition \\ \(n_{0}\) & 0.01 m\({}^{-3}\) & Day-time condition \\ \(C_{n}^{2}\) & \(\frac{4.008\times 10^{-13}}{h^{-0.64}}\) & Night-time condition \\ \(C_{n}^{2}\) & \(\frac{3.13\times 10^{-13}}{h}\) & Day-time condition \\ \hline \end{tabular}
\end{table}
Table 2: Parameters linked to the optical and technical attributes of the transmission link with weather conditions.
m altitude, the peak transmittance probability occurs only in the vicinity of a transmittance value of \(0.5\), with a sharply peaked distribution and higher magnitude, evident in both day and night conditions. In the up-link configuration, peak transmittance values are consistently located near a transmittance value of \(0.5\) for both day and night conditions (see Fig. 10 (c) and 10 (d)). The distribution nature is broader and slightly lower in value for the \(30\) m altitude compared to the \(220\) m altitude scenario. This observation is attributed to the lower losses incurred at low altitudes (\(30\) m), as there is relatively less interaction with the atmosphere. Conversely, at high altitudes (\(220\) m), the losses are substantial, resulting in a sharper distribution.
We have also generated plots illustrating the variation in transmittance with altitude (\(\overline{h}\)) and zenith angle (\(\phi\)), as shown in Fig. 11, for both up-link and down-link configurations, encompassing both day and night conditions. To generate these plots, we have utilized random sets of 5-tuples, each containing \(1000\) values drawn from an appropriate probability distribution. These random samples of beam parameters allowed us to simulate the transmittance values across various combinations of altitude and zenith angle. Notably, the curvature of the transmittance values across different combinations of altitude and zenith angle exhibits similar trends in all the cases. These findings align with the results obtained from the PDT analysis. It is worth mentioning that, due to the relatively low altitude of the drone-based FSO communication system, the variation in transmission remains nearly consistent across different environmental conditions. However, our hybrid approach can be extended to various values of \(C_{n}^{2}\) at higher altitudes, as detailed in Appendix A, to gain a deeper understanding of its applicability under such conditions.
Figure 10: (Color online) Plot of PDT variation with different altitude positions at the zenith position for our hybrid model: (a) PDT at day time condition under down-link configuration, (b) PDT at night time condition under down-link configuration, (c) PDT at day time condition under up-link configuration, (d) PDT at night time condition under up-link configuration.
## 4 Link configuration, budgeting and margin, and time synchronization
### Link configuration
For longer link distances, the key generation rate of an uplink configuration is assumed to be roughly one order of magnitude lower than that of the downlink [23; 60], while in the down-link scenario, pointing errors are notably relevant. In the up-link, pointing errors can be mitigated since ground stations can employ larger and more sophisticated optical systems. However, turbulence is concentrated near the Earth's surface, so for uplink transmission the turbulence-induced distortion at the beginning of the path significantly increases the beam wandering and divergence angle, resulting in a larger channel attenuation than in the downlink transmission.
A comparison of the atmospheric transmittance of a 1 km FSO link as a function of the angle with the zenith for the uplink and downlink configurations with different wavelengths was carried out using the MODTRAN software. Fig. 12 and Fig. 13 show the simulated atmospheric transmittance for an urban location with the tropical atmospheric model and 9 km visibility.
From the simulation results, we can observe that for shorter links the transmittance for the uplink and downlink configurations is comparable. Since aerial platforms can fly at much lower altitudes, the total link budget will show only minor deviations between the uplink and downlink in terms of geometric loss, atmospheric turbulence, and other types of attenuation.
Figure 11: (Color online) Variation of transmittance with altitude (\(\overline{h}\)) and zenith angle (\(\phi\)) for our hybrid model: (a) Transmittance at day time condition under down-link configuration, (b) Transmittance at night time condition under down-link configuration, (c) Transmittance at day time condition under up-link configuration, (d) Transmittance at night time condition under up-link configuration.
#### 4.1.1 Integrated acquisition, pointing, and tracking (APT)
For aerial quantum communication, distributing the photons simultaneously places stringent requirements on the dynamically established aerial-vehicle-to-ground-station links, which must keep the polarization and time series stable during the whole distribution process. Thus, there is a need to integrate all the elements for polarization compensation, adaptive optics, collimation, and tracking into an integrated APT unit and to perform two-stage tracking, viz. coarse and fine [29]. We have presented a high-level architecture of an APT system in Fig. 14.
An APT unit consists of a motorized three-axis (pitch, yaw, and roll) gimbal mount along with a telescope platform. The coarse pointing alignment of the transmitter/receiver telescope is enabled by moving the telescope platform with the gimbal mount using a proportional-integral-derivative (PID) error signal. This is calculated from the target image captured by a coaxial zoom camera. The target for this imaging identification is an uncollimated laser beam, typically in the NIR or IR wavelength range, on the corresponding receiver or transmitter side.
The telescope on each APT unit collimates light to a beam size optimal for reducing the beam divergence loss, as discussed in Section 2.1.3. A carbon-fiber base plate can be used for the telescope platform, where the composite structure design can be optimized for the best thermal stability. Typically, a 90-degree off-axis parabolic mirror (OAPM) with an aperture comparable to the desired beam width is used for collimation, while the beacon laser beams for the second-stage fine tracking pass through the central hole of the parabolic mirror. The beacon laser has a small aperture; however, as it propagates through the link it provides a broader FOV, which helps in the coarse tracking. Subsequently, fine tracking is performed using a fast-steering mirror (FSM) and a position-sensitive detector (PSD). The PSD is placed at the image position of the dichroic mirror (as shown in Fig. 14). It captures the position of the fine-tracking laser and generates error signals to give feedback to the FSM. Accordingly, the FSM aligns itself to reduce this error and achieve tracking accuracy within the 5 \(\mu\)m range.
The PSD is mounted at the image position of the transmitter or receiver fiber port relative to a dichroic mirror (DM). It monitors the focal position of the beacon light to generate the error signal and feed it back to the FSM. With proper feedback electronics, the transmitter and receiver units can be pointed at each other within the accuracy required for SMF coupling.
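The fine-tracking feedback just described can be summarized in a few lines. The sketch below closes the loop between a PSD error signal and one FSM axis with a discrete PI controller (the derivative term of a full PID is omitted here for robustness to sensor noise); the loop rate, gains, 30 Hz platform-jitter model, and noise level are illustrative assumptions, not measured parameters of the system.

```python
import numpy as np

kp, ki = 0.6, 40.0          # hypothetical PI gains
dt = 1e-3                   # 1 kHz control loop
integ = 0.0
fsm = 0.0                   # FSM correction along one axis [um at the PSD]

rng = np.random.default_rng(1)
residual = []
for step in range(3000):
    t = step * dt
    jitter = 5.0 * np.sin(2 * np.pi * 30.0 * t)     # platform jitter [um]
    err = jitter - fsm + rng.normal(0.0, 0.2)       # PSD error signal [um]
    integ += err * dt                               # accumulate for the I term
    fsm += kp * err + ki * integ * dt               # incremental FSM update
    residual.append(err)

print(f"open-loop RMS ~ {5.0/np.sqrt(2):.2f} um, "
      f"closed-loop RMS ~ {np.std(residual[1000:]):.2f} um")
```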
APT systems for aerial quantum communication face significant challenges due to platform-induced jitter and vibrations and the need for precise synchronization. Mechanical vibrations and jitter from aerial platforms can disrupt optical alignment, requiring real-time feedback-based compensation mechanisms such as fast-steering mirrors. Effective vibration isolation is also crucial, as environmental factors such as wind and atmospheric turbulence further impact stability. Moreover, there are SWaP (size, weight, and power) constraints, as the payload needs to be lightweight and power-efficient for aerial deployment. Overcoming these challenges demands advanced technology and robust testing to maintain a stable optical link while minimizing the system's physical footprint.
### Link budgeting
A link budget calculates and analyzes the overall performance of a communication link or system. Its main purposes are to determine what distances can be reached with given equipment and whether additional power is available for FSO links under given atmospheric conditions; such budgeting is standard practice in wireless communication.
QKD systems rely on optical communication link analysis to ensure that enough photons arrive at the receiver. The main factors that must be considered are the distance between the transmitter and the receiver, the operating wavelength, all the losses related to atmospheric conditions, geometric losses, channel turbulence, background noise, and optical losses.

The link budget calculates the minimum power or signal strength required for a communication link to function under specific conditions. In contrast, the link margin represents the additional power or signal strength above that minimum, which ensures reliability. The link margin is directly related to the link budget.
#### 4.2.1 Link margin
The link margin is the gap between the actual received power and the minimum required received signal level.
\[\text{Link Margin}=P_{\text{t}}-A_{\text{tx}}-20\log\left(\frac{\sqrt{2}\,L\,\theta_{\text{div}}}{D}\right)-A_{\text{rx}}-\alpha_{\text{fog}}L-S_{\text{r}} \tag{17}\]
where \(P_{\text{t}}\) is the transmitted power, \(A_{\text{tx}}\) and \(A_{\text{rx}}\) are the coupling losses at the transmitter and receiver, respectively, \(L\) is the range of the FSO link, \(\theta_{\text{div}}\) is the half-angle divergence, \(D\) is the aperture diameter of the receiver, \(\alpha_{\text{fog}}\) is the attenuation loss due to moisture, and \(S_{\text{r}}\) is the sensitivity of the receiver.
It is imperative that the link margin remains positive, and efforts should be directed toward maximizing it. If the link margin becomes negative, the FSO link will no longer be operational.
In Fig. 15, we have simulated the link margin as a function of link range with various aperture diameters of the receiver lens. It was observed that with increasing distances, the link margin decreases. However, as we increase the aperture diameter of the receiving optics, the link margin increases.
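Below is a minimal numerical sketch of Eq. (17). All parameter values (transmit power, coupling losses, divergence, fog attenuation, receiver sensitivity) are illustrative assumptions rather than the values used to produce Fig. 15; the point is only to reproduce the qualitative trend that the margin falls with range and grows with receiver aperture.

```python
import numpy as np

def link_margin_db(P_t, A_tx, A_rx, L, theta_div, D, alpha_fog, S_r):
    """Link margin per Eq. (17): L and D in metres, theta_div in rad,
    alpha_fog in dB/m, and powers/losses/sensitivity in dB units."""
    geometric = 20 * np.log10(np.sqrt(2) * L * theta_div / D)
    return P_t - A_tx - geometric - A_rx - alpha_fog * L - S_r

ranges = np.arange(200, 2001, 200)            # link range [m]
for D in (0.05, 0.10, 0.20):                  # receiver aperture diameters [m]
    m = link_margin_db(P_t=10.0, A_tx=3.0, A_rx=3.0, L=ranges,
                       theta_div=1e-3, D=D, alpha_fog=4.3e-3, S_r=-40.0)
    print(f"D = {D*100:.0f} cm -> margin at 2 km: {m[-1]:.1f} dB")
```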
### Time synchronization
Time synchronization is essential to provide a time reference that allows two distant users to generate correlated information simultaneously. Generally, components like lasers, modulators, and detectors can introduce jitter due to their finite response times and inherent noise. This jitter can be mitigated by the use of stable and precise reference clocks, the implementation of delay-compensation techniques, and the use of high-quality optical components with low jitter, all of which are necessary to ensure accurate synchronization. For aerial quantum communication, the distance between the transmitter and receiver changes continuously; hence, time synchronization is implemented in a particular manner.

Figure 14: Schematic of integrated acquisition, pointing, and tracking (APT) unit for aerial quantum communication, where the abbreviations used are as follows- C: collimator, F: optical fiber, QS: quantum signal, DM: dichroic mirror, PSD: position sensitive detector, FSM: fast-steering mirror, OAPM: off-axis parabolic mirror, CMOS: camera/sensor.
A fault-tolerant synchronization scheme based on de Bruijn sequences is suitable for timing and synchronization over high-loss space-to-ground communication channels. It provides an efficient sequence-position encoding, which achieves robustness to beacon corruption in the decoding process [61].
A fiber optic two-way quantum clock synchronization combined with microwave frequency transfer technology gives picosecond scale synchronization precision, which promises femtosecond precision over intercity optical fiber links in the future [62].
Qubit-based synchronization (Qubit4sync) with a cross-correlation scheme is a synchronization procedure that only needs the same photons encoding the quantum state exchanged in QKD protocol. This avoids additional hardware, makes it cheaper, and lowers failure probability due to hardware [63].
Qubit-based clock synchronization using the Bayesian probabilistic algorithm efficiently finds the clock offset without sacrificing the secure key. In comparison with other protocols, it is more robust to channel loss, noise, and clock drift [64].
In long-distance satellite-to-ground quantum communication, where independent reference clocks are employed, a GPS pulse-per-second (PPS) signal and an assistant pulsed laser are used for time synchronization [65].
In 2021, the Space Application Centre (SAC) of ISRO used a novel synchronization technique enabled with NavIC for a distance of 300 m to achieve a secure key rate of 300 kbps [66].
## 5 Simulation of quantum teleportation using entanglement swapping through a swarm-of-drones network
In this section, we present a use case in which we simulate quantum teleportation between two distant nodes using entanglement swapping through a swarm of drones. We performed the simulation using the Network Simulator for Quantum Information using Discrete events (NetSquid). NetSquid [67] is a software tool for the modeling and simulation of scalable quantum networks developed by QuTech. This QN simulation points towards a software-defined networking (SDN)-based architecture [68] to manage the distribution of end-to-end entangled pairs between two ground stations (GSs). The architecture is adaptable for quantum computing and QKD services.
Figure 15: (Color online) Link margin versus link range with various aperture diameters of the receiver lens.

In the simulation scheme presented in Fig. 16, a swarm of drones comprising \(n\) quantum repeaters (QR), designated \(D_{i}^{QR}\), is distributed between two end stations performing the quantum teleportation. The drones nearest to the end stations, Alice and Bob, can be referred to as \(D_{1}^{QR}\) and \(D_{n}^{QR}\), respectively. We consider that each QR drone has a quantum memory (QM), which can house two quantum particles entangled with the particles of the adjacent neighboring QR drones. When a QR drone performs a Bell state measurement (BSM) on its two quantum particles, the measurement results in entanglement swapping between the two neighboring QR drones' particles. The entire scheme is discussed in detail below:
* A swarm of \(n\) QR drones (\(D_{1}^{QR}\) to \(D_{n}^{QR}\)), is distributed between the two end stations performing quantum teleportation, Alice and Bob.
* Each QR drone (\(D_{i}^{QR}\)) possesses two particles, each entangled with the two subsequent neighboring QR drones' particles (\(D_{i-1}^{QR}\) and \(D_{i+1}^{QR}\)). The entangled pairs may be stored on the QR drones using quantum memories before the take-off or distributed in real-time (refer to Fig. 17).
* The end stations, say Alice and Bob share an entangled pair with the \(D_{1}^{QR}\) and \(D_{n}^{QR}\), respectively.
* Quantum entanglement swapping is executed at \(D_{1}^{QR}\) resulting in the entanglement between Alice and \(D_{2}^{QR}\).
* In this way, the entanglement swapping [69] is repeated successively along the rest of the QR drone chain, from \(D_{2}^{QR}\) to \(D_{n}^{QR}\). At the end, after \(n\) entanglement swapping operations, Alice's particle is entangled with Bob's particle.
* After the establishment of an entanglement pair between Alice and Bob, for the quantum teleportation Alice performs a complete measurement of the _von Neumann_ type on the joint system, consisting of her particle from the shared EPR pair and the _arbitrary unknown state_ (\(|\psi\rangle\)) particle whose information needs to be shared.
* She then sends the outcome of her measurement to Bob through the classical channel; Bob then applies the required unitary (rotation) operations on his EPR particle to recover \(|\psi\rangle\). Hence, the state is teleported from Alice's lab to Bob's lab.
The simulation of the above quantum teleportation scheme was carried out for different configurations on NetSquid. We calculated the fidelities of the resulting teleported states and performed a time analysis for the execution of the entire scheme. In the Node-to-Node configuration, quantum teleportation between Alice and Bob separated by a \(5\) km distance without any intermediate QR drone was carried out. In the End-to-End configuration, quantum teleportation over a \(50\) km distance was carried out using entanglement swapping as per the above scheme, through ten QR drones, each separated by \(5\) km, between Alice and Bob. The results are shown in Table 3.
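To build intuition for why the End-to-End fidelity in Table 3 drops so sharply, the toy calculation below models each drone-to-drone link as a Werner state and applies the standard result that ideal BSM-based swapping of two Werner states multiplies their Werner parameters. This ignores memory decoherence, gate noise, and timing, all of which NetSquid does model, so the numbers are only a qualitative sketch; the per-link fidelity is an assumption borrowed from the Node-to-Node row.

```python
def werner_p(F):
    """Werner parameter from Bell-state fidelity."""
    return (4 * F - 1) / 3

def fidelity(p):
    """Bell-state fidelity from Werner parameter."""
    return (3 * p + 1) / 4

F_link = 0.964          # assumed per-link fidelity (Node-to-Node value)
for n_repeaters in (0, 10):
    n_links = n_repeaters + 1
    p_end = werner_p(F_link) ** n_links    # swapping multiplies parameters
    print(f"{n_repeaters:2d} repeaters -> end-to-end fidelity {fidelity(p_end):.3f}")
```

That this toy value stays far above the 0.1516 observed End-to-End in Table 3 illustrates how much of the degradation comes from effects the sketch omits, such as memory decoherence during the classical-communication waits.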
Figure 16: Scheme for quantum teleportation using entanglement swapping via a swarm of drones [68].

Figure 17: Entanglement swapping [69].

| Parameters | Node-to-Node | End-to-End |
| --- | --- | --- |
| Fidelity | 0.964 | 0.1516 |
| Time (ns) | 5 | 236111 |

Table 3: Simulation of quantum teleportation using entanglement swapping on NetSquid for different configurations.

## 6 Conclusion

In this work, we have emphasized the necessity and importance of non-terrestrial platforms for future quantum communication, which will explore free-space mediums in an optimal way to provide end-to-end solutions. We have attempted to adequately address the challenges of aerial quantum communication. We have introduced a hybrid model that elaborates on the characteristics of transmittance with the variation of zenith angle in a densely humid medium and low-altitude signal transmission. Further, we have analyzed the average visibility of Pune city over the last two years as a feasibility study for implementing aerial quantum communication using drones. Finally, we have simulated quantum teleportation between two distant nodes via a swarm-of-drones quantum network utilizing QSDN. The SDN technology will have a significant role in near-future integrated quantum networks and services. Our work aims to stimulate further research and to explore the boundaries of this promising field.
## Acknowledgements
The authors acknowledge the support from R&D IT, MeitY, India.
We also thank Ms. Akshara Jayanand Kaginalkar, C-DAC, Pune for the availability of the meteorological data.
|
2309.07462 | Are Large Language Model-based Evaluators the Solution to Scaling Up
Multilingual Evaluation? | Large Language Models (LLMs) excel in various Natural Language Processing
(NLP) tasks, yet their evaluation, particularly in languages beyond the top
$20$, remains inadequate due to existing benchmarks and metrics limitations.
Employing LLMs as evaluators to rank or score other models' outputs emerges as
a viable solution, addressing the constraints tied to human annotators and
established benchmarks. In this study, we explore the potential of LLM-based
evaluators, specifically GPT-4 in enhancing multilingual evaluation by
calibrating them against $20$K human judgments across three text-generation
tasks, five metrics, and eight languages. Our analysis reveals a bias in
GPT4-based evaluators towards higher scores, underscoring the necessity of
calibration with native speaker judgments, especially in low-resource and
non-Latin script languages, to ensure accurate evaluation of LLM performance
across diverse languages. | Rishav Hada, Varun Gumma, Adrian de Wynter, Harshita Diddee, Mohamed Ahmed, Monojit Choudhury, Kalika Bali, Sunayana Sitaram | 2023-09-14T06:41:58Z | http://arxiv.org/abs/2309.07462v2 | # Are Large Language Model-based Evaluators the Solution to Scaling Up Multilingual Evaluation?
###### Abstract
Large Language Models (LLMs) have demonstrated impressive performance on Natural Language Processing (NLP) tasks, such as Question Answering, Summarization, and Classification. The use of LLMs as evaluators, that can rank or score the output of other models (usually LLMs) has become increasingly popular, due to the limitations of current evaluation techniques including the lack of appropriate benchmarks, metrics, cost, and access to human annotators. While LLMs are capable of handling approximately \(100\) languages, the majority of languages beyond the top \(20\) lack systematic evaluation across various tasks, metrics, and benchmarks. This creates an urgent need to scale up multilingual evaluation to ensure a precise understanding of LLM performance across diverse languages. LLM-based evaluators seem like the perfect solution to this problem, as they do not require human annotators, human-created references, or benchmarks and can theoretically be used to evaluate any language covered by the LLM. In this paper, we investigate whether LLM-based evaluators can help scale up multilingual evaluation. Specifically, we calibrate LLM-based evaluation against 20k human judgments of five metrics across three text-generation tasks in eight languages. Our findings indicate that LLM-based evaluators may exhibit bias towards higher scores and should be used with caution and should always be calibrated with a dataset of native speaker judgments, particularly in low-resource and non-Latin script languages.
+
Footnote †: Contact: [email protected]
## 1 Introduction
Large Language Models (LLMs) perform impressively on many tasks today, surpassing human-level performance on some tasks and domains (OpenAI, 2023; Touvron et al., 2023; Google et al., 2023). LLM performance evaluation on standard NLP benchmarks can help estimate how well an LLM is likely to perform in the real world. However, LLM benchmarking has limitations due to a number of factors, including the lack of evaluation benchmarks that represent real-world tasks, benchmark saturation, data contamination, and the low correlation between automated metrics and human judgment (Jacovi et al., 2023; Chang et al., 2023; Reiter, 2018; Liu and Liu, 2008). As a result, several evaluation approaches have been explored beyond benchmarking to estimate the capabilities of these models (Chang et al., 2023).
While LLMs exhibit strong performance in various tasks in English, their capabilities are restricted when it comes to other languages. As a result, the digital divide may worsen, preventing a significant portion of the global population from reaping the benefits of LLMs and potentially causing them to be disproportionately harmed by LLMs. Ahuja et al. (2023) conduct a comprehensive benchmarking of LLMs across \(16\) tasks and \(71\) languages and show that generative LLMs such as GPT3 (Brown et al., 2020; OpenAI, 2022), GPT4 (OpenAI, 2023) and BLOOMZ (Muennighoff et al., 2022) are worse than SOTA fine-tuned models such as TULRv6 (Patra et al., 2023) and XLM-R (Conneau et al., 2020) on many languages and tasks. They find that LLMs perform worse in languages that are transcribed in non-Latin scripts and under-resourced languages. In fact, performance on languages beyond the top \(50\) highest-resourced languages is largely unknown, due to the lack of language coverage in multilingual benchmarks (Ahuja et al., 2022) and the lack of other systematic evaluations beyond benchmarking covering a diverse set of languages. Certain language families, such as Indo-European, are over-represented in multilingual benchmarks with other language families such
as Niger-Congo and Sino-Tibetan having very little presence. There is a scarcity of benchmarks designed to assess tasks that simulate actual LLM usage in real-world scenarios. The metrics employed in these benchmarks might not consistently align with human evaluations and could be ill-suited for languages with rich morphology or complex writing systems as well as phenomena arising from language contact such as borrowing, code-mixing, and transliteration.
Clearly, evaluation by native speakers proficient in a language is the gold standard for getting an accurate picture of the performance of a model, particularly in complex tasks without well-defined automated metrics. However, budget constraints, turnaround time, and the lack of easy access to native speakers in some languages lead to challenges in scaling. This leads to a situation in which the performance of LLMs is unknown for most languages of the world, leading to an urgent need to scale up multilingual evaluation Ahuja et al. (2022) to ensure that LLMs perform well on many languages of the world.
A surprising property of generative LLMs is that they are not only able to perform tasks that they are trained for, such as text completion and generation, but can also be taught to perform other tasks, such as classification and sequence labeling, via prompting and in-context learning. This has led to the use of LLMs not just for generative tasks, but also for tasks such as sentiment analysis, reasoning Mao et al. (2023), and picking the less harmful alternative from a pair of LLM-bot responses Bai et al. (2022). The success of these LLMs in these tasks has led to the question of whether LLMs can replace human annotators, or help augment human evaluation Gilardi et al. (2023).
Considering the urgent need to assess LLMs in a broader range of languages to identify performance disparities, and acknowledging that obtaining access to native speakers can be challenging or costly, utilizing LLMs as multilingual evaluators appears to be an ideal solution. However, since LLMs have demonstrated inferior performance even in some high-resource languages and have not been evaluated extensively across languages on dimensions such as toxicity, fairness, and robustness (due to the absence of such benchmarks), it is prudent to proceed with caution. Failing to do so can lead to misleading results which may further widen the digital divide.
In this work, we study whether LLM-based evaluation can be the answer to scaling up multilingual evaluation. In other words, can LLMs serve as substitutes or supplements for human native speakers in delivering useful and accurate insights regarding LLM outputs in non-English languages, while considering diverse aspects of interest like linguistic acceptability, task accomplishment, and safety? Our main contributions are as follows:
* We present the first evaluation of LLMs as multilingual evaluators to examine whether LLMs can be used to scale up multilingual evaluation.
* We calibrate LLM judgments across three tasks, eight languages, and five dimensions by comparing them to over \(20\)K human judgments on the same tasks, languages, and dimensions.
* We evaluate a variety of prompting strategies for LLM-based evaluation in the multilingual setting.
* We provide a framework for evaluating LLM-evaluators in the multilingual setting that can generalize across tasks, metrics, and languages.
* We suggest best practices and provide recommendations for future work.
## 2 Related work
LLMs have recently become popular for evaluation and annotation. Broadly, there are two main uses of LLMs as evaluators: LLMs can be used as alternatives to metrics that compare human and machine-generated text, such as BLEU Papineni et al. (2002) and ROUGE Lin (2004). Word overlap-based metrics are limited, and LLM-based scorers have been shown to outperform them. GPTScore Fu et al. (2023) is a popular LLM-based framework that can be used to score model outputs based on human-created references along various dimensions. However, these scores still rely on having examples of human-created reference data.
The second use case of LLMs as evaluators is when the LLM is presented with the output of a system (usually an LLM, sometimes the same model) and asked to judge its quality or safety without any human output to compare against. The LLM is taught how to perform this evaluation with the help
of the task description, rubric, and sometimes, one or more examples in the prompt. This is the use case we focus on in this work.
Gilardi et al. (2023) prompt ChatGPT to annotate Tweets across various dimensions such as topic and stance and find that it outperforms crowdworkers. Shen et al. (2023) explore the use of GPT3.5 as an evaluator for abstractive summarization and find that although GPT is a useful evaluator, as the quality of summarization improves, the quality of evaluation degrades. Along similar lines, Wang et al. (2023) evaluate ChatGPT on various NLG tasks and find that it has a high correlation with human judgments. Kocmi and Federmann (2023) evaluate the effectiveness of LLMs on evaluation of translation quality and find that LLMs starting from GPT3.5 and above achieve SOTA performance on translation evaluation benchmarks. Fernandes et al. (2023) leverage LLMs for fine-grained annotation of errors in Machine Translation outputs. LLM-based evaluators have also been used to score and refine outputs they produce, as described in Madaan et al. (2023), ultimately producing outputs that are scored higher on human and automated metrics than the original outputs. Naismith et al. (2023) explore the use of LLM-based evaluators on scoring written discourse for coherence and find a strong correlation with human judgments. The success of LLM-based evaluators has led many to question whether LLM-based evaluation can replace or augment human evaluation (Chiang and Lee, 2023).
However, there have been studies showing that LLM-based evaluators can have some biases. Pangakis et al. (2023) highlight the need for validating LLM-based evaluators on a task-by-task basis. Liu et al. (2023) perform NLG evaluation using GPT-4 and find that although it correlates well with human judgments, it may potentially be biased towards preferring LLM-generated texts. Wang et al. (2023) point out that GPT4-based evaluators have positional bias and scores can be easily altered by changing the order of appearance. There are also several ethical issues with the use of LLMs as evaluators described in Chiang and Lee (2023). Zhang et al. (2023) suggest that wider and deeper LLMs are fairer evaluators, while Chan et al. (2023) introduce a framework for multiple evaluator agents to reach a consensus, mimicking the situation of having multiple annotators.
Although there has been some work measuring the calibration of LLM-based evaluators to human judgments, previous studies have focused on English, and ours is the first work (to the best of our knowledge) that addresses this problem in the multilingual context.
## 3 Experimental Setup
We perform experiments on a text generation application that is powered by GPT-4. We evaluate the following sub-tasks:
* **Open Prompt**: This takes in a short prompt and generates a document according to the instructions in the prompt. The document generated is \(2,048\) tokens; roughly corresponding to one page in English and Spanish, and slightly less in other languages.
* **Continue Writing**: This takes in two passages ("left" and "right") and generates content that makes a smooth transition between them. One of the two passages may be empty. The passage may be up to \(1,000\) tokens long.
* **Summarize**: This takes in a document of at least \(500\) words and generates a brief summary. It may take an optional user prompt specifying the output format (e.g., keypoints).
We cover the following languages: English, French, German, Spanish, Chinese, Japanese, Italian, Brazilian Portuguese, and Czech. We refer to Brazilian Portuguese (pt-br) as Brazilian in our figures and tables. Of these, the first six are classified as very high resource languages (Class 5, or "the winners"), while the last three are classified as Class 4 ("the underdogs") according to Joshi et al. (2020). We plan to extend our study to lower-resource languages in the future. We study the following dimensions of interest: linguistic acceptability, quality, task completion, and safety. We break these down into five metrics defined as follows:
* **Linguistic Acceptability (LA)**: This measures whether the text sounds right to a native speaker. The values of this metric are [0, 1, 2], with \(0\) corresponding to "not acceptable", \(1\) corresponding to "some errors, but acceptable" and \(2\) to "perfectly acceptable". We chose LA as opposed to grammaticality to ensure a comparable, native-speaker-led evaluation that did not require formal training in the language.
* **Output Content Quality (OCQ)**: Whether the general quality of the content is good or not, with values [0, 1, 2]. A score of \(0\) could indicate that the output is in the wrong language, is repetitive, or sounds like it has been scraped from the web, or translated. A score of 1 indicates that the output is okay in terms of grammar and word choice but still sounds awkward in the language. A score of \(2\) indicates that the text is of high quality.
* **Task Quality (TQ)**: This measures the ability of the model to follow the given instructions in the prompt. The values of this metric are [0, 1, 2], with \(0\) indicating that the model did not follow the instructions at all. Likewise, a score of \(1\) indicates that the model followed the instructions approximately well and \(2\) that it followed perfectly well. The difference between TQ and OCQ is that the latter focuses on whether the content is appealing to a user, while TQ emphasizes the ability of the model to follow the given instructions.
* **Problematic Content (PC)**: Whether there was any offensive or problematic content in the output. This is a binary metric, with \(0\) indicating that the output contains this type of content.
* **Hallucinations (H)**: This measures how well-grounded the model's output was to the input content, and/or whether the model output counterfactual information conflicted with the input content. It is a binary metric, with \(0\) indicating the presence of hallucinations.
### Human evaluation setup
We asked human judges to evaluate the output of LLM-based systems configured to perform the three tasks described earlier. Each entry was annotated by three annotators. They were contracted through an external annotator-services company at rates that depend on locale and experience level, ranging from $\(14\) USD/hr up to $\(30\) USD/hr. Each annotator was given \(250\) texts to judge. We used a subset of the annotated data for our experiments.
#### 3.1.1 Annotation guidelines
We provided annotators with the following information: general instructions about the task (including specific instructions from the prompt) and high-level descriptions of the metrics that we are seeking to evaluate, a description of the file that contained data to be evaluated, and the output format expected. Then we provided detailed descriptions of each metric including the range of values for each metric and examples in English. These examples were provided in the context of different tasks, as each metric could have slightly different interpretations for different tasks.
#### 3.1.2 Data statistics
Figure 1(a) contains the statistics of the human evaluation dataset for the three tasks across the languages we consider. We create a subset of this data for experimenting with prompting variations, shown in Figure 1(b). Our full dataset contains over \(7000\) data points, while the smaller subset contains over \(2500\) data points. Each of the data points in our dataset was annotated by 3 annotators.
### LLM-based evaluators
We use the GPT4-32K model1 as our LLM-based evaluator with a temperature of \(0\), except in our ablation experiments. The model was accessed through Azure.
Footnote 1: 2023-03-15-preview
#### 3.2.1 Prompts
Our evaluation prompts are constructed using the guidance toolkit2. guidance is a DSL that uses handlebars templating to enable the specification of prompts that interleave instructions and generation with data and logic. This makes it simpler to construct and validate complex prompts.
Footnote 2: [https://github.com/guidance-ai/guidance/tree/main](https://github.com/guidance-ai/guidance/tree/main)
Evaluation prompts were written to be clear, simple, and not tuned for the data or task. All prompts for evaluation were specified in English, as past work has shown that instructions in native languages can lead to worse performance (Ahuja et al., 2023).
In writing the evaluation prompts, we started with simple unstructured specifications (natural-language sentences with no formatting or styling) and found that this often led to errors in formatting the outputs correctly, or even in returning all the expected outputs. We found that adding styling and formatting, for example outputting JSON by providing the prompt with a JSON schema for the expected attributes, improved the reliability of the LLM outputs.
We tried to keep the task and metric description as close as possible to the text that was shown to human annotators for evaluations in the default prompting variation. Each prompt consists of system, user, and assistant components as shown in Figure 2 in a generic prompt schema. The metric and task description components of the prompt are shown in Figures 3 and 5.
### Prompting variations
First, we experiment with multiple variations of prompts based on how many metrics we evaluate in a single prompt and how many examples we provide in the prompt.
* **Zero-shot:** In this variation, we call GPT-4 once per metric, without any in-context examples.
* **Few-shot:** In this variation, we call GPT-4 once per metric, with a few in-context examples.
* **Compound Call:** In this variation, we call GPT-4 once for all the metrics in a single prompt.
For few-shot prompting, we provide examples in the prompt of human judgments for the same task and metric from a held-out dev set. We take the majority vote from the three human annotations per sample as the aggregate class for that sample to choose our few-shot examples. For each task, language, and metric we choose up to two samples per possible class for that metric. Therefore, we have a minimum of two and a maximum of six exemplars as few-shot examples.
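The selection rule above amounts to a simple stratified pass over the held-out dev set. The sketch below is one way to implement it; the record field names (`task`, `language`, `annotations`, `output`) are hypothetical and only stand in for whatever schema the dev set actually uses.

```python
from collections import Counter

def majority_vote(scores):
    """Aggregate the three annotator scores into one label."""
    return Counter(scores).most_common(1)[0][0]

def select_few_shot(dev_set, task, language, metric, per_class=2):
    """Keep up to `per_class` dev samples per possible class for the given
    (task, language, metric), i.e., at most 6 exemplars for a 3-class metric."""
    chosen, counts = [], Counter()
    for ex in dev_set:
        if (ex["task"], ex["language"]) != (task, language):
            continue
        label = majority_vote(ex["annotations"][metric])
        if counts[label] < per_class:
            chosen.append({"output": ex["output"], metric: label})
            counts[label] += 1
    return chosen

dev_set = [
    {"task": "Summarize", "language": "de", "output": "...",
     "annotations": {"LA": [2, 2, 1]}},
    {"task": "Summarize", "language": "de", "output": "...",
     "annotations": {"LA": [0, 0, 1]}},
]
print(select_few_shot(dev_set, "Summarize", "de", "LA"))
```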
### Calibration with human judgments
We analyze how well-calibrated the variants of the LLM-evaluator are to native speakers as well as the inter-annotator agreement between the three annotators who scored each data point.
* **Inter-annotator agreement across the three annotators:** We measure Inter-annotator agreement (IAA) between the three annotators, referred to as Annot1, Annot2, Annot3. We use Percentage Agreement (PA) to measure IAA. Percentage agreement simply computes the fraction of data points on which both parties match (a bare-bones sketch of this computation is given after this list). Specifically, we used the irrCAC library3 for this metric. Footnote 3: [https://github.com/afergadis/irrCAC](https://github.com/afergadis/irrCAC)
* **IAA (3 annotators) and GPT:** We measure IAA between the majority score of the three annotators and the LLM-evaluator. We refer to this as AnnotAgg, GPT4 and use PA to measure it.
* **Class distribution:** We analyze the class distribution of scores across tasks, metrics, and languages to check for potential biases in the dataset and LLM-evaluator.
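For concreteness, the sketch below computes the two PA quantities named in the list above: pairwise agreement between annotators, and agreement between the annotators' majority vote (AnnotAgg) and the LLM-evaluator. The paper uses the irrCAC library for this; the bare-bones version here only illustrates the underlying fraction-of-matches computation, with toy scores.

```python
import numpy as np

def pairwise_pa(a, b):
    """Fraction of data points on which two raters give the same score."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.mean(a == b))

def majority(rows):
    """Per-item majority vote over a (n_items, 3) array of annotator scores
    (three-way ties are broken by the smallest score here)."""
    out = []
    for r in rows:
        vals, counts = np.unique(r, return_counts=True)
        out.append(vals[np.argmax(counts)])
    return np.array(out)

annots = np.array([[2, 2, 2], [2, 1, 2], [0, 1, 2], [1, 1, 1]])  # toy scores
gpt = np.array([2, 2, 2, 1])

print("Annot1 vs Annot2 PA:", pairwise_pa(annots[:, 0], annots[:, 1]))
print("AnnotAgg vs GPT PA :", pairwise_pa(majority(annots), gpt))
```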
We perform experiments contrasting compound and single-call prompting on the full dataset and zero-shot vs. few-shot prompting on the smaller dataset. We analyze how well-calibrated our LLM-based evaluators are with respect to human judgments by examining PA and the class distribution of scores.

Figure 1: Dataset statistics across tasks and languages
### Ablation experiments
In addition, we do some ablation experiments to check for consistency, the effect of hyperparameters, and detailed metric descriptions. We perform these ablations on the smaller dataset.
* **Consistency check:** We prompt GPT-4 with the same prompt five times to check its consistency.
* **Sensitivity analysis:** We check the sensitivity of the Linguistic Acceptability metric evaluation by randomly shuffling \(10\)% of the words in all sentences and checking if the LA score provided by the model changes (a sketch of this perturbation is given at the end of this section).
* **Temperature variation:** We vary the temperature parameter to check its effect on LLM evaluation
* **More detailed instructions:** In our initial experiments, we provide simple instructions to the LLM-based evaluators, similar to the instructions provided to humans. In this variation, we provide much more detailed descriptions of the metrics, as shown in Figure 4 for linguistic acceptability4.

Figure 2: General Prompting Schema

"name": "hallucinations", "description": "Hallucination refers to the generation of text that is untrue, fabricated, inconsistent with the given input, deviates from generally accepted knowledge, or makes unverifiable claims.", "scoring": "1: No hallucinations in the text; 0: text has hallucinations"
"name": "linguistic_acceptability", "description": "Linguistic acceptability means does this sound right to a native speaker?, not does this stick to the rules of the grammar.", "scoring": "0: not acceptable; 1: some weird things but ok; 2: no errors found/acceptable."
"name": "problematic_content", "description": "Problematic content refers to the presence of harmful (bias, offensive, toxic for example) content in the output.", "scoring": "1: no problematic content in the text; 0: if there is problematic content in the text"
"name": "task quality", "description": "The quality of the output is related to the task. We are evaluating whether the model did what the task asked.", "scoring": "0: the model did not do what the task asked; 1: mostly did what the task asked, with some errors; 2: did what the task asked."
"name": "output content quality", "description": "Low-Quality Content means whether the discourse (text) is any good.", "scoring": "0: bad content --- If the text sounds repetitive (or is non-factual/inconsistent or it's not in the given language, or seems to have been web-scraped); 1: OK content, but some flaws found --- If it's ok (grammatically, lexically, vocab is good) but kind of goes around in circles; 2: good or above content."

Figure 3: Metric description for simple instructions

"name": "linguistic_acceptability",
"description": "Linguistic acceptability pertains to the degree to which a given language structure (e.g., phrase, sentence, discourse) aligns with the implicit norms and rules of a native speaker's linguistic intuition. In the study of language, it's distinct from 'grammaticality', which is a stricter and narrower concept based on the prescriptive rules of a language. Linguistic acceptability, on the other hand, captures broader native-speaker intuitions and encompasses factors like fluency, idiomacy, and appropriateness in context. In the context of language models, evaluating linguistic acceptability involves assessing the output of the model not just for its adherence to grammar rules, but for its overall fit within the natural, expected, and intuitive contours of fluent human language. The scoring rubric is described below, with a few possible reasons (which might not be exhaustive) for a given score.",
"scoring": {
"0": {
"(a)": "Sentences that lack clear syntactic structure.",
"(b)": "Usage of non-existent or incorrect words.",
"(c)": "Grossly inappropriate word choices for a given context."
},
"1": {
"(a)": "Overly verbose or stilted phrasing.",
"(b)": "Minor grammatical errors that do not impede understanding.",
"(c)": "Use of a word that's technically correct but not the most appropriate for context."
},
"2": {
"(a)": "Seamless integration of contextually relevant vocabulary.",
"(b)": "Effective use of idiomatic expressions without sounding forced.",
"(c)": "Sentences that reflect the natural rhythm, emphasis, and intonation of spoken language."
}
}

Figure 4: Metric description for complex instructions (Linguistic Acceptability)

"Open Prompt": "Given a short user-provided starting prompt and its concise completion (which is roughly a page long), your task is to evaluate the completion with respect to the starting prompt and the listed set of metrics. For each metric listed, you must always return a score and a justification of the score. Note that both the starting prompt and its completion are given in {{language}}.",
"Continue Writing": "Given two passages (passage a and passage b), one of which may be empty, and a third passage (passage c), which aims to provide a seamless transition between passage a and passage b, your task is to evaluate passage c with respect to the listed set of metrics. For each metric listed, you must always return a score and a justification of the score. Note that all three passages are given in {{language}}.",
"Summarize": "Given a passage and a brief summary of that passage which attempts to capture its essence, your task is to evaluate the summary with respect to the given passage and the listed set of metrics. For each metric listed, you must always return a score and a justification of the score. Note that both the passage and its summary are given in {{language}}."

Figure 5: Task description
Footnote 4: Other metrics are included in Appendix A.1
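As a concrete reading of the sensitivity check in the ablation list above, the sketch below perturbs a sentence by cyclically shifting the words at roughly 10% of its positions before re-scoring Linguistic Acceptability. The exact perturbation scheme (which positions are moved, and the minimum of two) is our assumption, since the text only specifies the 10% fraction.

```python
import random

def shuffle_fraction(sentence, frac=0.10, seed=0):
    """Cyclically shift the words at a random ~frac subset of positions,
    so that the selected positions actually change order."""
    rng = random.Random(seed)
    words = sentence.split()
    k = max(2, round(frac * len(words)))       # need at least 2 to reorder
    idx = sorted(rng.sample(range(len(words)), k))
    vals = [words[i] for i in idx]
    vals = vals[1:] + vals[:1]                 # cyclic shift of chosen words
    for i, v in zip(idx, vals):
        words[i] = v
    return " ".join(words)

original = "the quick brown fox jumps over the lazy dog near the river bank"
print(shuffle_fraction(original))
```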
## 4 Results
### Percentage Agreement
In this set of graphs, we look at the percentage agreement between the LLM-evaluator and the annotators. We also look at the agreement between the annotators. We aggregate the results by task, metric, and language.
Figures 6(a) and 6(b) show the percentage agreement between the aggregate of the human annotator scores and the LLM-evaluator for the full and small datasets. The figures show both joint (compound) and single prompting techniques for the full dataset and the few-shot prompting technique for the smaller dataset. We see that the PA between the annotators and GPT is lowest for Japanese and Czech compared to the PA between the human annotators, with the PA between annotators also being lower for Chinese.
Next, we look at PA grouped by metric in Figures 7(a) and 7(b) for the full and smaller datasets, with the same prompting variations as before. We find that the PA of the LLM-evaluator with the annotators is lower for the OCQ metric. We also find that the PA between annotators is relatively low for the TQ metric, while all the PA values are very high for the problematic content metric.
Finally, we look at PA aggregated by task in Figures 8(a) and 8(b). We find that PA is lower for the "Continue Writing" task, while the PA between GPT and the annotators is lower than the agreement between annotators for the "Open Prompt" and "Continue Writing" tasks.
Overall, we find that the LLM-evaluator prompted using the compound prompt has a lower agreement with human annotators than the single prompt variation. We also find that adding few-shot examples does not increase the PA in our experiments. For the remaining ablation experiments, we use the single prompt variation without few-shot examples.
### Class distribution
In this set of graphs, we seek to examine the distributions of the scores from native speakers and LLM-evaluator. There are three cases to consider for metrics that have three values: Full agreement between all three annotators in which all three annotators give the same score, partial agreement between the annotators where two of the three give the same score and no agreement, where all three annotators give different scores. In metrics that have binary values, we only have full or partial agreement. We group annotations into these classes and analyze responses across these classes.
We present results for metrics that have three values (LA, OCQ, and TQ), with \(0\) corresponding to the lowest score and \(2\) corresponding to the highest score. In Figures 9(a) and 9(b), we find that the LLM-evaluator provides a score of \(2\) in most cases, particularly in cases where human annotators disagree. This is even more evident in the case of non-English languages where there is partial agreement or no agreement between the annotators (around \(15\)% of the time on average).

Next, we look at the same graphs for languages that are either lower-resourced or not written in the Latin script. In Figures 10(c) and 10(d) we find that the LLM-evaluator almost never provides scores of \(0\) and \(1\) in the \(26\)% of cases where annotators disagree, and we find similar results for Japanese in Figures 10(e) and 10(f) and Czech in Figures 10(g) and 10(h). Overall, we find that LLM-based evaluators give a score of \(2\) in most cases. While this is consistent with human evaluations in a large part of the dataset, the LLM-based evaluator continues to assign a score of \(2\) even when humans disagree or provide lower scores.
#### 4.2.1 Consistency check
We use a temperature of \(0\) in the consistency check experiments and find that we receive the same score and justification in each of the five tries. This indicates that the LLM-based evaluator shows high consistency.
### Sensitivity to perturbations
As described earlier, we perturb the word order of sentences and check the sensitivity of the Linguistic Acceptability metric. Figure 11 shows the distribution of cases per language per task where the LLM-based evaluator changed its evaluation from a higher score to a lower score. We can observe that the evaluator shows the most sensitivity to inputs for the Summarization task for all languages except Japanese. For Insert, Chinese and Japanese show very little sensitivity. For Start, Chinese and Japanese show no sensitivity to the perturbations. One possible explanation for this could be that the evaluator is genuinely less sensitive to these languages. Alternatively, it might be attributed to the flexible word order characteristics of Chinese and Japanese.

Figure 6: Percentage Agreement (PA) by language

Figure 7: Percentage Agreement (PA) by metric

Figure 8: Percentage Agreement (PA) by task

Figure 9: Class distribution per language (En, Es, Fr, De, It). Results are aggregated over all tasks and metrics with 3 classes (LA, OCQ, TQ).

Figure 10: Class distribution per language (Pt(Br), Zh, Ja, Cz). Results are aggregated over all tasks and metrics with \(3\) classes (LA, OCQ, TQ).
### Temperature variation
Figures 12(a), 12(b) and 12(c) show the PA values for temperatures of \(0\), \(0.3\), \(0.7\) and \(1.0\), aggregated across each language, task, and metric respectively. We observe that PA reduces as we increase the temperature, indicating that a temperature of \(0\) should be used for LLM-based evaluators.
### More detailed instructions
One of the challenges with LLM evaluation is sensitivity to prompting instructions, which can greatly affect the performance of the LLM on tasks, including evaluation. Since we observe that the LLM-evaluator tends to be biased toward producing higher scores, we experiment with adding more detailed instructions to the prompt. The detailed instructions for all metrics can be found in the Appendix and were generated by querying GPT-4 to produce these instructions by providing it the instructions given to annotators and manually modifying them.
Figures 13(a), 13(b) and 13(c) compare the PA of the LLM-evaluators with detailed instructions vs. the simpler instructions described earlier. Interestingly, even though PA drops slightly for all metrics with the detailed instructions, we find that the LLM-based evaluator may be slightly less biased towards producing high scores with these instructions, as shown in Figures 13(a) and 13(b). However, more investigation is needed to determine whether detailed instructions or a different prompting strategy can eliminate the bias toward high scores.
## 5 Discussion and Limitations
Overall, our results indicate that GPT-based evaluators have relatively high consistency for non-English languages when set to a temperature of 0. They also display a fair sensitivity to input variations, especially in aspects like linguistic acceptability. While LLM-based evaluators show a high Percentage Agreement, there is a noticeable bias towards positive scores, particularly when human opinions differ. It remains uncertain what score an LLM-based evaluator should provide when humans cannot reach a consensus, but consistently high scores in such situations might create a misleading impression of good performance in more challenging evaluations. We find that lower PA and bias towards higher scores are particularly evident in non-Latin script languages such as Chinese and Japanese, and lower-resource languages such as Czech, which is consistent with prior work on the performance of LLMs on various tasks Ahuja et al. (2023).
We experiment with several prompting strategies for LLM-based evaluators and find that evaluating a single metric at a time produces better results than evaluating all metrics in one go, which comes at the cost of having to make multiple calls to the LLM. We also find that providing few-shot examples does not help improve performance. We also provide more detailed instructions to the LLM-evaluator but find that it does not eliminate the problem of bias toward higher scores. Future work in this direction includes exploring better prompting approaches including automatically tuning prompts to a held-out set. In this work, we only use evaluators based on GPT-4. An interesting future direction is the use of smaller models for evaluation or models trained with better coverage of non-English data.
In this work, we utilize a dataset comprising human assessments of a text generation system executing various tasks in eight languages. As we do not regulate the quality of the system's output, most of the generated texts receive positive ratings from human evaluators. Consequently, it remains unclear whether the high Percentage Agreement stems from the inclination of the LLM-evaluator to assign high scores or not. In future work, we aim to replicate this study using a dataset with a more balanced distribution of human judgments, achieved by controlling the output quality. We also intend to make this dataset available to the research community for calibrating LLM-based evaluators. An important research direction is the creation of datasets with good language coverage, multiple annotators per data point, and clear annotation instructions, covering a variety of dimensions to calibrate LLM-based evaluators. Exploring the development of various evaluator personas to represent diverse perspectives of human evaluators and achieve consensus is another research direction that needs further investigation.

Figure 11: Percentage of samples where GPT evaluation changed from a higher score to a lower score per language per task. Note: We do not have Chinese and Czech for Summarize in the smaller dataset.

Figure 12: Percentage Agreement (PA) for different cases and temperature variations

Figure 13: Percentage Agreement (PA) for single metric call with simple instructions vs detailed instructions
Our results in this paper show that LLM-based evaluators should be calibrated with human evaluation in the multilingual setting, particularly on low-resource and non-Latin script languages. We also show that certain metrics corresponding to output quality and task completion may be challenging for LLM-based evaluators. Hence, we advocate for a cautious approach in using LLM-based evaluators for non-English languages and suggest that all LLM-based multilingual evaluations should be calibrated with a set of human-labeled judgments in each language before deployment.
## 6 Conclusion
In this paper, we highlight the urgent problem of scaling up multilingual evaluation and explore whether LLM-based evaluators can be a potential solution. We introduce the first assessment of LLMs as multilingual evaluators and compare their performance against human judgments across eight languages. We experiment with various prompting strategies for LLM-based evaluation, including single and joint calls and providing few-shot examples, and conduct ablation studies to test for sensitivity and consistency. While we find that LLM-based evaluators show high consistency with human evaluation when annotators agree and rate outputs as positive, LLM-based evaluators may be biased towards giving a higher rating for cases that annotators do not agree on. Our work indicates that LLM-based evaluators need to be used cautiously in the multilingual setting, particularly on languages on which LLMs are known to perform poorly. Future work in this direction includes the creation of high-quality datasets for calibrating LLM-based evaluators in multiple languages. The use of LLM-based evaluation raises ethical concerns that warrant consideration before implementing such solutions, particularly in a multilingual context. Languages with insufficient benchmarks and resources may experience a disproportionate impact, as they could solely rely on LLMs for evaluation, potentially leading to unintended consequences. A hybrid solution with LLM-based evaluators and native speakers in-the-loop is a potential way forward to scale up multilingual evaluation and ensure that no language is left unevaluated.
|
2305.19589 | SLABERT Talk Pretty One Day: Modeling Second Language Acquisition with
BERT | Second language acquisition (SLA) research has extensively studied
cross-linguistic transfer, the influence of linguistic structure of a speaker's
native language [L1] on the successful acquisition of a foreign language [L2].
Effects of such transfer can be positive (facilitating acquisition) or negative
(impeding acquisition). We find that NLP literature has not given enough
attention to the phenomenon of negative transfer. To understand patterns of
both positive and negative transfer between L1 and L2, we model sequential
second language acquisition in LMs. Further, we build a Multilingual Age
Ordered CHILDES (MAO-CHILDES) -- a dataset consisting of 5 typologically
diverse languages, i.e., German, French, Polish, Indonesian, and Japanese -- to
understand the degree to which native Child-Directed Speech (CDS) [L1] can help
or conflict with English language acquisition [L2]. To examine the impact of
native CDS, we use the TILT-based cross lingual transfer learning approach
established by Papadimitriou and Jurafsky (2020) and find that, as in human
SLA, language family distance predicts more negative transfer. Additionally, we
find that conversational speech data shows greater facilitation for language
acquisition than scripted speech data. Our findings call for further research
using our novel Transformer-based SLA models and we would like to encourage it
by releasing our code, data, and models. | Aditya Yadavalli, Alekhya Yadavalli, Vera Tobin | 2023-05-31T06:22:07Z | http://arxiv.org/abs/2305.19589v1 | # SLABERT Talk Pretty One Day: Modeling Second Language Acquisition with BERT
###### Abstract
Second language acquisition (SLA) research has extensively studied cross-linguistic transfer, the influence of linguistic structure of a speaker's native language [L1] on the successful acquisition of a foreign language [L2]. The interaction of the linguistic structure of a speaker's L1 with the successful acquisition of L2 results in what are termed as _transfer effects_. Effects of such transfer can be positive (facilitating acquisition) or negative (impeding acquisition). We find that NLP literature has not given enough attention to the phenomenon of _negative transfer_. To understand patterns of both positive and negative transfer between L1 and L2, we model sequential second language acquisition in LMs. Further, we build a Multilingual Age Ordered CHILDES (MAO-CHILDES)--a dataset consisting of 5 typologically diverse languages, i.e., German, French, Polish, Indonesian, and Japanese--to understand the degree to which native Child-Directed Speech (CDS) [L1] can help or conflict with English language acquisition [L2]. To examine the impact of native CDS, we use the TILT-based cross-lingual transfer learning approach established by Papadimitriou and Jurafsky (2020) and find that, as in human SLA, language family distance predicts more negative transfer. Additionally, we find that conversational speech data shows greater facilitation for language acquisition than scripted speech data. Our findings call for further research using our novel Transformer-based SLA models and we would like to encourage it by releasing our code, data, and models.
## 1 Introduction
Cross-linguistic transfer can be described as the influence of native language [L1] properties on a speaker's linguistic performance in a new, foreign language [L2]. The interaction of the linguistic structure of a speaker's L1 with the successful acquisition of L2 results in what are termed _transfer effects_. Transfer effects appear in various aspects of linguistic performance, including vocabulary, pronunciation, and grammar (Jarvis and Pavlenko, 2007). Cross-linguistic transfer can be positive or negative in nature: positive transfer refers to the facilitating effects of one language in acquiring another (e.g., of Spanish vocabulary in acquiring French), while _negative transfer_ refers to interference between the learner's native [L1] and target [L2] languages, producing errors. The greater the differences between the two languages, the greater the negative effects.
While cross-lingual transfer has received considerable attention in NLP research (Wu and Dredze, 2019; Wu et al., 2019; Conneau et al., 2017, 2018; Artetxe et al., 2018; Ruder et al., 2017), most of this research has concentrated on practical implications such as the degree to which the right tokenizer can optimize cross-lingual transfer, and has not looked at the kind of sequential transfer relationships that arise in human second language acquisition. Meanwhile, approaches like the Test for Inductive Bias via Language Model Transfer (TILT) (Papadimitriou and Jurafsky, 2020) focus on positive transfer with divergent pairs of training sets, such as MIDI music and Spanish, to shed light on which kinds of data induce generalizable structural features that linguistic and non-linguistic data share. Patterns of both positive and negative transfer between a given L1 and L2, however, can be a valuable source of information about general processes of second language acquisition and typological relationships between the languages in question (Berzak et al., 2014).
Most cross-lingual models do not mimic how humans acquire language, and modeling the differences between first and second language acquisition is a particularly under-explored area. To engage with questions about second language acquisition using LMs, we model sequential second language acquisition in order to look more closely
at both positive and negative transfer effects that may occur during the acquisition of L2.
Using Child-Directed Speech (CDS) to create L1 training sets that are naturalistic, ecologically valid, and fine-tuned for language acquisition, we model the kind of cross-linguistic transfer effects that cause linguistic structure of the native L1 to influence L2 language acquisition in our novel Second Language Acquisition BERT (SLABERT) framework. The resulting models, when tested on the BLiMP (Benchmark of Linguistic Minimal Pairs for English) grammar test suite (Warstadt et al., 2020), show that L1 may not only facilitate L2 learning, but can also interfere. To the extent that interference is considered in NLP research, it is often understood simply as a failure of positive transfer in model training. We suggest, instead, that these results should be analyzed in terms of distinctive patterns of both negative and positive transfer, which can reveal not just the existence of generalizable features across datasets, but also finer-grained information about structural features of these languages and their accessibility to second language learners.
## 2 Related Work
Our work is closely related to and in many ways builds on the work done by Huebner et al. (2021). They proposed that Child-Directed Speech has greater potential than other kinds of linguistic data to provide the structure necessary for language acquisition, and released BabyBERTa, a smaller sized RoBERTa (Liu et al., 2019) model designed to investigate the language acquisition ability of Transformer-based Language Models (TLM) when given the same amount of data as children aged 1-6 get from their surroundings. They also released Zorro, a grammar test suite, that is compatible with the small vocabulary of child-directed input.
Child-directed speech (CDS) refers to the special register adopted by some adults, especially parents, when talking to young children (Saxton, 2009). CDS typically features higher fundamental pitch, exaggerated intonation, slower speech, and longer pauses than Adult-Directed Speech (ADS) (Clark, 2016). Utterances in CDS are usually well-formed grammatically, but are syntactically simpler than ADS, often comprising single word utterances or short declaratives. Adults often repeat words, phrases, and whole utterances in CDS (Küntay and Slobin, 2002; Snow, 1972) and make fewer errors (Broen, 1972) than they do in ADS. CDS also tends to use a smaller and simplified vocabulary, especially with very young children (Hayes and Ahrens, 1988). While the universality and necessity of CDS for language acquisition is a matter of debate (Pinker, 1995; Hornstein et al., 2005; Haggan, 2002), it is likely that the features of CDS are universally beneficial in language acquisition (Saxton, 2009). NLP literature suggests that there are certain benefits when models are trained on CDS (Gelderloos et al., 2020). Studies from other fields suggest that the pitch contours, repetitiveness, fluency, and rhythms of CDS make it easier for children to segment speech, acquire constructions, and understand language (Cristia, 2011; Thiessen et al., 2005; Nelson et al., 1986; Ma et al., 2011; Soderstrom et al., 2008; Kirchhoff and Schimmel, 2003). Many of these distinctive qualities of CDS seem tailor-made for human language acquisition, which is why we use CDS data as L1 in our SLABERT models.
Several recent studies confirm that the distinctive distributional features of CDS influence the grammatical and lexical categories that children acquire. For instance, Mintz (2003) found that "frequent frames" in CDS (commonly recurring co-occurrence patterns of words in sentences) yield very accurate grammatical category information for both adults and children. Similarly, Veneziano and Parisse (2010) found that patterns of frequent use and, importantly, reinforcement in CDS-specific conversational exchanges were most predictive of the constructions children learn. Together, these findings suggest that both token distribution and the distinctive conversational structure of CDS provide useful reinforcement for acquisition. Therefore, when training our L1 model, we pay attention to qualities of the training input such as its conversational structure.
In second language acquisition (SLA) research, patterns of negative transfer are a topic of much interest and have been considered a source of information both about what happens in second language learning and what it can reveal about the typological relationships between L1 and L2. For instance, Dulay and Burt (1974) show that closely analyzing data from children learning a second language reveals that some errors are due to L1 interference (_negative transfer_), while others arise from developmental cognitive strategies similar to those made during L1 acquisition (_developmental errors_).
Berzak et al. (2014) show a strong correlation between language similarities derived from the structure of English as Second Language (ESL) texts and equivalent similarities obtained directly from the typological features of the native languages. This finding was then leveraged to recover native language typological similarity from ESL texts and perform prediction of typological features in an unsupervised fashion with respect to the target languages, showing that structural transfer in ESL texts can serve as valuable data about typological facts.
The phenomenon of cross-linguistic transfer has received considerable attention in NLP research in the context of multilingual Language Models (Wu and Dredze, 2019; Wu et al., 2019; Conneau et al., 2017, 2018; Artetxe et al., 2018; Ruder et al., 2017). Our investigation is particularly inspired by Papadimitriou and Jurafsky (2020)'s Test for Inductive Bias via Language Model Transfer (TILT). This is a transfer mechanism where the model is first pre-trained on L1 training data. Next, a part of the model is frozen and the model is fine-tuned on L2. Finally, the resulting model is tested on a test set of L2. We follow a similar approach for our model's second language acquisition.
## 3 Data
### Why Child-Directed Speech
We wanted L1 training sets that are both realistic and fine-tuned to teach language to developmental (first language) learners. We also wanted to reproduce the findings of Huebner et al. (2021) which suggest that Child-Directed Speech as training data has superior structure-teaching abilities for models compared to scripted adult-directed language.
The BabyBERTa studies (Huebner et al., 2021) found that their LM required less data than RoBERTa to achieve similar (or greater) linguistic/syntactic expertise (as tested by Zorro), and suggested that CDS is better than Wikipedia text for teaching linguistic structure to models. Given these findings and widespread support in cognitive science and linguistics for the facilitative nature of CDS in child language learning, we choose to use CDS data from five different languages as our L1s to examine our hypothesis that the preexisting linguistic structure of L1 interacts differentially with the acquisition of L2 (English).
Additionally, building on the Huebner et al. (2021) efforts to find superior training data for LMs in general, we explore the possibility that comparing conversational CDS with scripted ADS is a less fair comparison than comparing the quality of conversational CDS with that of conversational ADS as training input for LMs.
#### 3.1.1 Why CHILDES
Our focus in training the Child-Directed Speech model is on replicating for the LM, as closely as possible, the primary linguistic input of young children. While young children are exposed to passive Adult-Directed Speech, speech that is directed at them and intended to communicate with them plays a more central role in the child's linguistic experience (Soderstrom, 2007). For this reason, we use a language database of naturalistic speech directed at children. The CHILDES (MacWhinney, 2000) database, a component of the larger TalkBank corpus, is a vast repository of transcriptions of spontaneous interactions and conversations between children of varying ages and adults.1 The database comprises more than 130 corpora from over 40 different languages and includes speech directed at children from ages of 6 months to 7 years. The large selection of languages permits us the necessary flexibility in choosing different languages for our L1 data (see Section 3.1.2 for more on Language Selection). The range of child ages allows us to train our models with increasingly complex linguistic input, emulating the linguistic experience of a growing child.
Footnote 1: [https://talkbank.org](https://talkbank.org)
#### 3.1.2 Language Selection
Our focus is on cross-linguistic transfer of language structure; therefore, we use a simple selection criterion and choose five languages at varying distances from English according to their language family: German, French, Polish, Indonesian, and Japanese. We hypothesize that languages that are structurally similar to English should perform better (show more positive transfer and less negative transfer). German, French, and Polish, like English, are all Indo-European languages. However, each of these languages belongs to a unique genus: German and English are Germanic languages, French is a Romance language, and Polish is a Slavic language. While English and French do not share the same genus, there is much overlap between the two languages due to the substantial influence of French on English stretching back to the time of the Norman Conquest. Japanese belongs to the Japanese language family and Indonesian to the Austronesian language family.
#### 3.1.3 Using the AO-CHILDES corpus
The AO-CHILDES (AO: age-ordered) corpus was created by Huebner and Willits (2021) from American English transcripts in the CHILDES database. To curate the American English collection, we followed the same cleaning criteria as Huebner and Willits (2021): only transcripts involving children 0 to 6 years of age were procured, from which child (non-adult) utterances and empty utterances were omitted. The initial CHILDES transcriptions were converted from CHAT transcription format to csv format files using childes-db (Sanchez et al., 2019) to conduct the data cleaning processes. The resulting dataset, which contains 2,000,352 sentences, 27,723 unique words, and 4,960,141 total word tokens, forms the American English input. This cleaning process was repeated for the corpora of German, French, Polish, Japanese, and Indonesian to create the dataset for each language (see Table 1 for the language statistics).
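A minimal sketch of this filtering step is given below. It assumes the childes-db export is a CSV with columns named `speaker_role`, `gloss`, and `target_child_age` in days; these column names and the age unit are illustrative placeholders, not the exact childes-db schema.

```python
# Hedged sketch of AO-CHILDES-style cleaning; column names and the age
# unit (days) are assumptions, not the exact childes-db schema.
import pandas as pd

utterances = pd.read_csv("childes_english.csv")

# Keep transcripts for children aged 0-6 years, drop child-produced and
# empty utterances, then order by age so input complexity grows over training.
mask = (
    (utterances["target_child_age"] <= 6 * 365)
    & (utterances["speaker_role"] != "Target_Child")
    & (utterances["gloss"].fillna("").str.strip() != "")
)
cleaned = utterances[mask].sort_values("target_child_age")
cleaned["gloss"].to_csv("ao_childes_english.txt", index=False, header=False)
```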
#### 3.1.4 Mao-Childes
For the sake of simplicity, we refer to the corpus resulting from the collective datasets of the six languages as MAO-CHILDES (MAO is short for Multilingual Age-Ordered), signaling that the transcripts it contains span a selection of different languages and are ordered by age of child (see Table 1).
Data in MAO-CHILDES is not uniformly distributed across languages, as seen in Table 1. First, Polish is represented by significantly less data than every other language. Second, Indonesian has a lower number of unique tokens compared to other languages. The Indonesian data is also collected from conversations with only 9 children, a much smaller sample size compared to the other languages, which have sample sizes in the hundreds if not thousands. Third, the average sentence length of the Asian languages--Indonesian and Japanese--is shorter than that of any of the other languages. We anticipate that these variations in the data, caused both by resource availability and by natural linguistic characteristics of the languages, will affect the performance of the cross-lingual models.
### Adult-Directed Speech corpus
The Adult-Directed Speech (ADS) corpus comprises conversational speech data and scripted speech data. We build on the BabyBERTa efforts to find superior training data for LMs (in general) by experimenting with conversational ADS and comparing its training utility with that of conversational CDS. This investigation is aimed at narrowing down the true source, child-directed language or conversational language, of the reduced data size requirements of BabyBERTa.
To create our conversational ADS corpus, we use the sample COCA SPOKEN corpus.2 COCA (Corpus of Contemporary American English) is one of the most widely used corpora of English for its rich representation of texts from a wide range of genres, dialects, and time periods. The SPOKEN genre comprises transcriptions of spontaneous conversations between adults. To clean this sample corpus, we followed a three-step process:
Footnote 2: [https://www.corpusdata.org](https://www.corpusdata.org)
* All spoken disfluencies such as pauses, laughter, and filler utterances encoded in the spoken transcripts were cleaned.
* All meta tags that mention the names of the speakers were removed.
* Finally, the data was sampled manually to check that the corpus was clean.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline
**Language** & **Vocabulary** & **Total tokens** & **Avg. Sentence Length** & **No. of Children** & **Utterances** \\ \hline
American English & 27,723 & 4,960,141 & 5.54832 & 1117 & 893,989 \\
French & 22,809 & 2,473,989 & 5.74531 & 535 & 487,156 \\
German & 59,048 & 4,795,075 & 5.65909 & 134 & 951,559 \\
Indonesian & 21,478 & 2,122,374 & 3.97058 & 9 & 572,581 \\
Polish & 31,462 & 493,298 & 5.84276 & 128 & 84,578 \\
Japanese & 44,789 & 2,397,386 & 4.17552 & 136 & 588,456 \\
Wikipedia-4 & 84,231 & 1,907,706 & 23.8456 & - & 80,000 \\
English ADS & 55,673 & 905,378 & 13.1901 & - & 74,252 \\ \hline \hline
\end{tabular}
\end{table}
Table 1: MAO-CHILDES corpus statistics: the number of unique tokens, total tokens, average sentence length, total number of children, and number of utterances for each language dataset.
After cleaning, we were left with 74,252 utterances. We use this cleaned corpus to train our conversational Adult-Directed Speech (ADS) model.
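A minimal sketch of this three-step cleaning follows; the disfluency and speaker-tag patterns below are illustrative guesses rather than COCA's exact markup conventions.

```python
# Hedged sketch of the COCA SPOKEN cleaning; the tag patterns are
# illustrative, not COCA's actual markup.
import re

def clean_utterance(line: str) -> str:
    line = re.sub(r"<[^>]*>", " ", line)                  # speaker/meta tags
    line = re.sub(r"@\w+|\[\w+\]|\(PAUSE\)", " ", line)   # disfluency codes
    return re.sub(r"\s+", " ", line).strip()              # normalize whitespace

with open("coca_spoken_sample.txt") as src, open("ads_clean.txt", "w") as out:
    for raw in src:
        cleaned = clean_utterance(raw)
        if cleaned:                # final manual sampling checks the result
            out.write(cleaned + "\n")
```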
To replicate the findings of the BabyBERTa study, we also train a model on scripted ADS. To create our scripted ADS corpus, we randomly sample 80,000 sentences from Wikipedia-3 (Huebner et al., 2021), which we term Wikipedia-4, so that the data sizes of conversational ADS and scripted ADS are approximately equal, allowing a fair comparison. All the information about the data we used is in Table 1.
## 4 Experimental Setup
We use BabyBERTa (Huebner et al., 2021) to run all our experiments. BabyBERTa is a smaller-sized RoBERTa (Liu et al., 2019) tuned to perform well on data of the size of AO-CHILDES. However, we make additional changes to the vocabulary size of the model, as we found this to improve the model's results. The implementation details of the model can be found in Appendix A.1.
We follow the TILT approach, originally introduced by Papadimitriou and Jurafsky (2020) to test structure acquisition in LSTM-based (Hochreiter and Schmidhuber, 1997) LMs. Their general approach is followed in the current study with a few notable changes (see Figure 1). Our approach comprises two stages: (1) train the model on L1 (CDS language); (2) freeze all parameters except the word embeddings at the transfer stage of the experiment, and fine-tune the model on L2 (English ADS). Finally, the resulting model is tested on a test set of L2, for which we use the Benchmark of Linguistic Minimal Pairs (BLiMP) (Warstadt et al., 2020), a challenge set for evaluating the linguistic knowledge of the model on major grammatical phenomena in English. Our study deviates from the Papadimitriou and Jurafsky (2020) approach in three ways: (1) instead of LSTM-based LMs, we use Transformer-based LMs (Vaswani et al., 2017); (2) they freeze all layers except the word embeddings and the linear layers between the LSTM layers, whereas for simplicity we freeze all parameters except the word embeddings; (3) while they report their findings based on LM perplexity scores, we use the BLiMP test suite to report how L1 structure (particularly, syntax and semantics) affects L2 acquisition in our Transformer-based LMs. A minimal sketch of this transfer stage is given below.
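The sketch assumes a Hugging Face `RobertaForMaskedLM` checkpoint; the checkpoint path is a placeholder, and it is only an illustration of the freezing recipe rather than the exact training script.

```python
# Hedged sketch of the TILT-style transfer stage: freeze everything except
# the word embeddings before fine-tuning on L2. The checkpoint path is a
# placeholder; output embeddings tied to the input matrix stay trainable.
from transformers import RobertaForMaskedLM

model = RobertaForMaskedLM.from_pretrained("path/to/l1_pretrained")

for param in model.parameters():
    param.requires_grad = False
for param in model.roberta.embeddings.word_embeddings.parameters():
    param.requires_grad = True
# ...then fine-tune on the English ADS corpus with the usual MLM objective.
```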
There are two experiments for which we follow a different procedure than what is explained above:
* In the case of the random-baseline experiment, we freeze all of the model except the embeddings and train it on conversational English ADS. The corresponding tokenizer is also trained on conversational English ADS. This experiment is run in order to have the right benchmark to compare against: this method prevents the model from picking up any grammatical structure from the training data, while allowing it to acquire English vocabulary.
* In the case of the scripted ADS and conversational ADS experiments, we do not employ TILT-based cross-lingual transfer. We train the model from scratch on scripted ADS and conversational ADS, respectively.
**Testing:** We use the BLiMP grammar test suite to evaluate the linguistic knowledge of our model. BLiMP consists of 67 paradigms categorized into 12 major grammatical phenomena in English. Each of these 67 datasets comprises 1,000 minimal pairs, i.e., pairs of minimally different sentences, one of which is grammatically acceptable and the other not (refer to Warstadt et al. (2020) for a detailed description of the test suite).

Figure 1: Diagram illustrating our experimental process for each L1, as listed in Table 1. Training occurs in two stages and each model is finally tested on the BLiMP test suite.
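One common way to realize this comparison with a masked LM is pseudo-log-likelihood scoring: mask each token in turn, sum its log-probability, and count the pair as correct when the acceptable sentence scores higher. The sketch below is one such recipe, assuming a Hugging Face-style masked LM and tokenizer; it is illustrative and not necessarily the exact scoring used here.

```python
# Hedged sketch of minimal-pair scoring via pseudo-log-likelihood;
# one common recipe, not necessarily this paper's exact implementation.
import torch

def pseudo_log_likelihood(model, tokenizer, sentence):
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"]
    total = 0.0
    for i in range(1, ids.size(1) - 1):  # skip BOS/EOS special tokens
        masked = ids.clone()
        masked[0, i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(input_ids=masked).logits
        total += logits[0, i].log_softmax(-1)[ids[0, i]].item()
    return total

def pair_is_correct(model, tokenizer, acceptable, unacceptable):
    return pseudo_log_likelihood(model, tokenizer, acceptable) > \
        pseudo_log_likelihood(model, tokenizer, unacceptable)
```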
## 5 Results and Discussion
### Results
The proportion of the BLiMP minimal pairs in which the model assigns a higher probability to the acceptable sentence gives the accuracy of the model. A total of 9 models are compared in their performance using the accuracy scores obtained on 12 different grammatical tests from the BLiMP test suite. We report the results for all models in Figure 2 (see Appendix A.2 for detailed results). The model trained on conversational English ADS achieves the highest accuracy and the one trained on Indonesian CDS achieves the lowest. Despite the conversational English ADS corpus being at least 10x smaller than the CDS corpora, it performs the best in 9 out of 12 grammatical phenomena from the BLiMP test suite. CDS demonstrates higher accuracy only in anaphor agreement, irregular forms, and quantifiers. Overall, English CDS performs 5.13 points behind English ADS. These results show that (conversational) Adult-Directed Speech makes for superior training data for models as compared to (conversational) Child-Directed Speech. From Figure 2, we note a few other significant trends:
First, the results indicate that conversational speech data forms superior training data for language models in general, as compared to conventional scripted data. Table 2 compares the performance of models trained on different types of training input in the same language (English): scripted ADS (Wikipedia-4), conversational ADS, and conversational CDS. Among the three, the performance of the model trained on conversational ADS is highest, followed by conversational CDS, and lastly scripted ADS. Important to note here is that, corroborating the findings of the BabyBERTa study, conversational CDS still outperforms scripted ADS (Wikipedia-4), but it falls behind conversational ADS. These results suggest that conversational speech data are a more effective training source for models than scripted data (more on this in Section 5.2).
Second, the results show a negative correlation between the distance of the CDS language from English and the performance of the model, i.e., as the typological distance between L1 and L2 increases, the performance of the model decreases. We term this the Language Effect. This finding supports our hypothesis that, given the relation between transfer errors and typological distance between L1 and L2 (Ringbom, 2006), the increasing structural dissimilarities between the L1 (CDS language) and the L2 (always English ADS) should adversely impact the performance of the model (more on this in Section 5.3).
Third, the results show that CDS performs worse than ADS in several grammatical phenomena (9 out of 12). Considering the simplistic and facilitating structure and, more importantly, the ecologically valid nature of CDS, these results engender some interesting hypotheses which we discuss briefly in Section 5.4.

Figure 2: Performance of the models on various grammatical phenomena from the BLiMP test suite
Fourth, we see several results in which individual models perform poorly on individual tests in ways that are not cleanly predicted by general trends. We believe these results reflect patterns of negative transfer, in which L1-specific structures actively interfere with the acquisition of structures in L2 (more on this in Section 5.5).
### Conversational vs. Scripted Data
The conventional training data for LMs is scripted adult-directed speech, perhaps owing to its easily accessible nature compared to other forms of data, such as conversational ADS or any form of CDS. However, our findings demonstrate that conversational data yields better model performance than scripted data (see Table 2). The best accuracy scores are produced by conversational ADS on 67% of the phenomena, by conversational CDS on 25% of the phenomena, and by scripted ADS on 8% of the phenomena. Conversational data may make for a better training input for language acquisition given the higher level of interactive components in its composition, which is an essential feature of language acquisition in children. Much of the previous research has looked at what conversational language does for the people who are directly contributing to the conversation in question. For instance, there is a general tendency for speakers to reproduce grammatical elements of their interlocutor's previous utterances [1, 13]. These behaviors both enhance interactive alignment [1] and ease cognitive load for utterance planning [1, 13]. Studies of children's conversational behavior [12, 14] show, similarly, that children use their interlocutors' immediately preceding utterances as resources for producing and reinforcing construction types they are in the process of acquiring. Our findings suggest that the resulting distributional patterns of "dialogic syntax" [1] in the conversational record leave a trace that can make conversational data especially informative for model training.
### Language Effect
We selected five languages at varying distances from English according to their language family and examined how structural dissimilarities with increasing distance from English impact the performance of the model. Figure 3 shows the increase in the difference between the performance of the model trained on English ADS and CDS of the various languages. Our results show a negative correlation between the distance of the CDS language from English and the performance of the model, i.e., as the typological distance between L1 and L2 increases, the performance of the model decreases. Based on prior work on transfer errors and typological distance (Ringbom, 2006), this decrease in performance could be the result of negative transfer effects, which tend to increase with the typological distance between L1 and L2. Among all CDS languages, English CDS performs closest to English ADS (5.13 points behind ADS), suggesting that even within the same language the linguistic differences between ADS and CDS affect model performance (see Table 2). This should be borne in mind as comparisons between the other CDS languages and English ADS are made. German shows the next best performance (6.71 points behind English ADS), followed by French (7.27 points behind ADS), Polish (7.57 points behind ADS), Japanese (8.17 points behind ADS), and lastly Indonesian (8.69 points behind ADS). These results confirm our hypothesis that L1s that are structurally closer to L2 (English ADS) perform better, owing to a greater degree of positive transfer effects.
For human language learners, transfer works both ways: sometimes knowledge of parallel structures in the native language facilitates performance in the new language. Other times, there is interference from the native language, resulting in errors. The SLABERT models, similarly, show evidence of both positive and negative transfer. As with human second-language learners, some of the errors we see in SLABERT performance suggest the effect of negative transfer from the native [L1] language, while others can be characterized as developmental, in that they are similar to the kinds of errors that even native human speakers will make on their way to learning the target constructions.

Figure 3: Mean multilingual CDS performance compared to ADS
### CDS & Sources of Errors in Language Learning
Our results show that CDS performs worse than ADS in a majority (9 out of 12) of the grammatical phenomena from the BLiMP test suite (see Figure 2). We discuss some theoretical explanations for these results.
**Negation and NPIs:** Child language acquisition research strongly suggests that mastering the full range of negative licensing and anti-licensing contexts takes a long time. Across languages, detailed acquisition studies find that children do use NPIs with licensing expressions consistently by age 3 or 4 (Tieu, 2013; Lin et al., 2015), but only with a limited range of negative licensers. Moreover, Schwab et al. (2021) showed that even 11- and 12-year-olds, whose language input by that age is entirely ADS, are still in the process of learning some polarity-sensitive expressions. Thus, CDS input alone may not be sufficient for learning the licensing conditions for NPIs. Previous NLP literature also suggests that negation is particularly challenging for language models to learn (Kassner and Schütze, 2019; Ettinger, 2019). Given this, and acquisition studies that have shown that learning licensing conditions for NPIs goes hand-in-hand with learning negation (van der Wal, 1996), we expected our model trained on CDS to make _developmental errors_ on tests related to NPIs. As discussed in Section 5.5, as a Slavic language, Polish also has distinctive constraints on the appearance of NPIs that are the result of competition with grammatical constraints not present in English. In this case, NPI performance is likely subject to both _developmental_ errors and _negative transfer_.
**Longer Distance Dependencies:** Short and simple sentences are characteristic of CDS. However, it is likely that such utterances do not make ideal training input for LMs to learn long-distance dependencies (LDDs). Consequently, we expect all models trained on CDS data to be negatively impacted on tests that demand long-distance dependency understanding. Island effects, the phenomenon that showed the widest difference in performance compared to the ADS-trained model (-21.3 points), is one such phenomenon in the BLiMP test suite, requiring long-distance dependency understanding to perform well (Sprouse and Hornstein, 2013). Ellipsis and filler-gap structures also depend on LDDs and also suffer significant decreases in scores compared to ADS (-10.8 and -6.5 points, respectively). This also applies to the binding and control/raising phenomena (-2.8 and -3.6, respectively); however, the island effects, ellipsis, and filler-gap tests are particularly affected by the model's lack of LDD understanding.
**Phenomena That Confuse Humans:** Warstadt et al. (2020) report human performance scores, which we use to gain an understanding of how our model performs on tests compared to humans. From the reported human performance scores, we observe that not all of the grammatical phenomena in the BLiMP test suite are equally transparent to humans. Human performance on 8 out of 12 phenomena is below 90 points and 3 of those are below 85 points. The lowest is a mean score of 81 for tests on argument structure, where the CDS-trained and ADS-trained models are also seen struggling (rather more seriously) with mean scores of 55.1 and 56.1, respectively.
\begin{table}
\begin{tabular}{l|c c c} \hline \hline
**Phenomenon** & **Wikipedia-4** & **Conversational ADS** & **Conversational CDS** \\ \hline
Anaphor Agreement & 51.4 & 60.6 & 62.9 \\
Argument Structure & 54.5 & 56.1 & 55.1 \\
Binding & 60.7 & 61.6 & 58.9 \\
Control/Raising & 48.8 & 59.1 & 55.6 \\
Determiner Noun Agreement & 65.2 & 70.9 & 67.8 \\
Ellipsis & 68.6 & 66.2 & 57.5 \\
Filler Gap & 62.4 & 67.3 & 62.6 \\
Irregular Forms & 61.8 & 68.2 & 70.9 \\
Island Effects & 51.8 & 72.7 & 51.3 \\
NPI Licensing & 53.7 & 62.6 & 51.9 \\
Quantifiers & 58.5 & 62.4 & 71.7 \\
Subject Verb Agreement & 54.9 & 57.7 & 53.8 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Performance of the model on the BLiMP test suite when trained on different types of input data.
For control/raising, similarly, human performance has a mean score of 84 points while the CDS-trained and ADS-trained models have mean scores of 55.6 and 59.1, respectively. We expect CDS to perform poorly on these tests, which are challenging even for people.
### Negative Transfer
There are tests where the performance of CDS-trained models would be expected to be better given the nature of the phenomena and the characteristics of CDS utterances. However, CDS underperforms compared to ADS even on tests we might expect to be in its wheelhouse. In particular, determiner-noun agreement and subject-verb agreement are the kinds of phenomena that should be easy for the model to learn even from shorter utterances and with a relatively small vocabulary size, since they are matters of simple, regular morphology. The results, therefore, are interesting. We hypothesize that one reason we do not see good transfer boosts from other-language CDS on these is that patterns of morphology are very language-specific.
Looking broadly at the performance of non-English CDS models, we suggest that these results reflect negative cross-linguistic transfer. For example, the distribution of negative polarity items in Polish and many other Slavic languages displays what has been termed the "Bagel problem" (Pereltsvaig, 2006): because of conflicts with the demands of strict negative concord (in which negation requires that multiple elements of an expression all appear in their negative forms), in Slavic languages there are NPIs that never appear in what would otherwise be the canonical context of negative polarity licensing, i.e., direct negation (Hoeksema, 2012). In this way, language-specific paradigmatic patterns supersede the general correlational relationship between NPIs and their licensing contexts, producing an opportunity for _negative transfer_ and L1 interference effects.
## 6 Conclusion
In this paper, we explore how second language acquisition research and models of second language acquisition can contribute to questions in NLP about the learnability of grammar. Drawing on previous research on the unique role of child-directed speech (CDS) in language acquisition, we investigate the potential of spontaneously generated CDS to form a special source from which LMs can acquire the structure necessary for first language acquisition. To test sequential second language acquisition in LMs, we introduce SLABERT. The results from our experiments suggest that while positive transfer is far more common than negative transfer, negative transfer occurs in LMs just as it occurs in English as Second Language (ESL) learners. We believe these novel findings call for further research on this front, and suggest that models like SLABERT can provide useful data for testing questions about both language acquisition and typological relationships through patterns of cross-linguistic transfer. To support this, we release our code, novel MAO-CHILDES corpus, and models.
## 7 Limitations
Given that many special properties of Child-Directed Speech are not present in text, we would have liked to work on a multimodal dataset, where both visual and speech information would be present. More specifically, we would have liked to test the effect of the following:
* Grounding the language models in vision to test the effect of joint attention (Rowe, 2012; Akhtar and Gernsbacher, 2007). Joint attention refers to the phenomenon where the caregiver and the child coordinate attention to each other and to a third object or event.
* Child-Directed Speech is known to have special prosodic properties such as higher variability in pitch (Fernald et al., 1989; McRoberts and Best, 1997; Papousek et al., 1991), lengthening of vowels and pauses (Albin and Echols, 1996; Ratner, 1986; Fernald et al., 1989), and context-specific intonational contours (Katz et al., 1996; Papousek et al., 1991; Stern et al., 1982). These properties have been suggested by many researchers to serve as a mechanism for getting the infant's attention (Cruttenden, 1994; Ferguson, 1977; Fernald, 1989). This attentive role may be considered beneficial for language development in children (Garnica, 1977). As our models only take text as input, we were unable to test the relationship between these properties and language acquisition in neural-network-based models.
* Caregivers give a lot of feedback when young children are first producing and acquiring language (Soderstrom, 2007). Our current mainstream language models are not interactive. Therefore, it is difficult to incorporate the feedback loop and test its effect on models' language acquisition.
As it is, our findings suggest that many of the most important facilitative features of Child-Directed Speech are relevant to precisely those formal and conceptual aspects of language acquisition that are not captured by text-based language models.
In this paper, we have tested the effect of native CDS in L2 acquisition with 5 typologically diverse languages. However, there is enormous scope to test the effect of the same with many more different languages, which may lead to more pointed implications and conclusions than the findings offered here.
## 8 Ethics Statement
We use publicly available CHILDES data to build our corpora (MAO-CHILDES). Please read more about their terms of use before using the data.3 We use the dataset extracted from the CHILDES database only for research purposes and not for commercial reasons. We will release the dataset upon publication under the same license as CHILDES, which is compatible with the license of the CHILDES database (MacWhinney, 2000). The results of this study are reported from a single run as part of measures taken to avoid computational waste. We do not foresee any harmful uses of this work.
Footnote 3: [https://talkbank.org](https://talkbank.org)
## Acknowledgements
We would like to acknowledge Philip Huebner for clearing our queries regarding the BabyBERTa code-base. We would also like to thank Saujas Vaduguru for helping us improve our initial drafts. We also thank the anonymous reviewers for their feedback on our work. This work made use of the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Case Western Reserve University.
|
2308.16646 | Hydrodynamic limit and Newtonian limit from the relativistic Boltzmann
equation to the classical Euler equations | The hydrodynamic limit and Newtonian limit are important in the relativistic
kinetic theory. We justify rigorously the validity of the two independent
limits from the special relativistic Boltzmann equation to the classical Euler
equations without assuming any dependence between the Knudsen number
$\varepsilon$ and the light speed $\mathfrak{c}$. The convergence rates are
also obtained. This is achieved by Hilbert expansion of relativistic Boltzmann
equation. New difficulties arise when tackling the uniform in $\mathfrak{c}$ and
$\varepsilon$ estimates for the Hilbert expansion, which have been overcome by
establishing some uniform-in-$\mathfrak{c}$ estimate for relativistic Boltzmann
operators. | Yong Wang, Changguo Xiao | 2023-08-31T11:36:31Z | http://arxiv.org/abs/2308.16646v1 | Hydrodynamic limit and Newtonian limit from the relativistic Boltzmann equation to the classical Euler equations
###### Abstract.
The hydrodynamic limit and the Newtonian limit are both important in relativistic kinetic theory. We rigorously justify the validity of these two independent limits from the special relativistic Boltzmann equation to the classical Euler equations without assuming any dependence between the Knudsen number \(\varepsilon\) and the light speed \(\mathfrak{c}\). The convergence rates are also obtained. This is achieved by a Hilbert expansion of the relativistic Boltzmann equation. New difficulties arise when tackling the uniform in \(\mathfrak{c}\) and \(\varepsilon\) estimates for the Hilbert expansion, and these have been overcome by establishing uniform-in-\(\mathfrak{c}\) estimates for the relativistic Boltzmann operators.
Key words and phrases: relativistic Boltzmann equation; relativistic Euler equations; hydrodynamic limit; Newtonian limit; Hilbert expansion

2010 Mathematics Subject Classification: 82C40; 35Q20; 35Q75; 76P05; 76Y05

* Corresponding author: [email protected]
###### Contents
* 1 Introduction
* 2 Preliminaries
* 3 The Newtonian limit of the relativistic Euler equations
* 4 Uniform-in-\(\mathfrak{c}\) estimates on the linearized collision operators
* 5 Uniform-in-\(\mathfrak{c}\) estimates on the linear part of Hilbert expansion
* 6 Uniform in \(\mathfrak{c}\) and \(\varepsilon\) estimates on the remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\)
* 7 Appendix: Derivation of the orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\)
## 1. Introduction
### The relativistic Boltzmann equation
We consider the special relativistic Boltzmann equation
\[p^{\mu}\partial_{\mu}F=\frac{1}{\varepsilon}\mathcal{C}(F,F), \tag{1.1}\]
which describes the dynamics of single-species relativistic particles. The dimensionless parameter \(\varepsilon\) is the Knudsen number, which is proportional to the mean free path. The unknown \(F(t,x,p)\geq 0\) is a distribution function for relativistic particles with position \(x=(x_{1},x_{2},x_{3})\in\Omega\) and particle momentum \(p=(p^{1},p^{2},p^{3})\in\mathbb{R}^{3}\) at time \(t>0\). The collision term \(\mathcal{C}(h_{1},h_{2})\) is defined by
\[\mathcal{C}(h_{1},h_{2})=\frac{1}{2}\int_{\mathbb{R}^{3}}\frac{dq}{q^{0}}\int _{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dq^{ \prime}}{q^{\prime 0}}W\left(p,q\mid p^{\prime},q^{\prime}\right)\left[h_{1}\left(p^{ \prime}\right)h_{2}\left(q^{\prime}\right)-h_{1}(p)h_{2}(q)\right],\]
where the transition rate \(W\left(p,q\mid p^{\prime},q^{\prime}\right)\) has the form
\[W\left(p,q\mid p^{\prime},q^{\prime}\right)=s\varsigma(g,\vartheta)\delta(p^{ 0}+q^{0}-p^{\prime 0}-q^{\prime 0})\delta^{(3)}(p+q-p^{\prime}-q^{\prime}). \tag{1.2}\]
The streaming term of the relativistic Boltzmann equation (1.1) is given by
\[p^{\mu}\partial_{\mu}=\frac{p^{0}}{\mathfrak{c}}\partial_{t}+p\cdot\nabla_{x},\]
where \(\mathfrak{c}\) denotes the speed of light and \(p^{0}\) denotes the energy of a relativistic particle with
\[p^{0}=\sqrt{m_{0}^{2}\mathfrak{c}^{2}+|p|^{2}}.\]
Here \(m_{0}\) denotes the rest mass of particle. Now we can rewrite (1.1) as
\[\partial_{t}F+\hat{p}\cdot\nabla_{x}F=\frac{1}{\varepsilon}Q(F,F), \tag{1.3}\]
where \(\hat{p}\) denotes the normalized particle velocity
\[\hat{p}:=\mathfrak{c}\frac{p}{p^{0}}=\frac{\mathfrak{c}p}{\sqrt{m_{0}^{2} \mathfrak{c}^{2}+|p|^{2}}}.\]
The collision term \(Q(h_{1},h_{2})\) in (1.3) has the form
\[Q(h_{1},h_{2})=\frac{\mathfrak{c}}{2}\frac{1}{p^{0}}\int_{\mathbb{R}^{3}} \frac{dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}\int_{ \mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}W\left(p,q\mid p^{\prime},q^{ \prime}\right)\left[h_{1}\left(p^{\prime}\right)h_{2}\left(q^{\prime}\right)- h_{1}(p)h_{2}(q)\right].\]
We denote the energy-momentum 4-vector as \(p^{\mu}=(p^{0},p^{1},p^{2},p^{3})\). The energy-momentum 4-vector with the lower index is written as a product in the Minkowski metric \(p_{\mu}=g_{\mu\nu}p^{\nu}\), where the Minkowski metric is given by \(g_{\mu\nu}=\text{diag}(-1,1,1,1)\). The inner product of energy-momentum 4-vectors \(p^{\mu}\) and \(q_{\mu}\) is defined via the Minkowski metric
\[p^{\mu}q_{\mu}=p^{\mu}g_{\mu\nu}q^{\nu}=-p^{0}q^{0}+\sum_{i=1}^{3}p^{i}q^{i}.\]
Then it is clear that
\[p^{\mu}p_{\mu}=-m_{0}^{2}\mathfrak{c}^{2}.\]
We note that the inner product of energy-momentum 4-vectors is Lorentz invariant.
The quantity \(s\) is the square of the energy in the _center of momentum system_, \(p+q=0\), and is given as
\[s=s(p,q)=-\left(p^{\mu}+q^{\mu}\right)\left(p_{\mu}+q_{\mu}\right)=2\left(p^{ 0}q^{0}-p\cdot q+m_{0}^{2}\mathfrak{c}^{2}\right)\geq 4m_{0}^{2}\mathfrak{c}^ {2}.\]
The relative momentum \(g\) in (1.2) is defined as
\[g=g(p,q)=\sqrt{\left(p^{\mu}-q^{\mu}\right)\left(p_{\mu}-q_{\mu}\right)}= \sqrt{2\left(p^{0}q^{0}-p\cdot q-m_{0}^{2}\mathfrak{c}^{2}\right)}\geq 0.\]
It is direct to know that
\[s=g^{2}+4m_{0}^{2}\mathfrak{c}^{2}.\]
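Indeed, subtracting the expression for \(g^{2}\) from the one for \(s\) gives

\[s-g^{2}=2\left(p^{0}q^{0}-p\cdot q+m_{0}^{2}\mathfrak{c}^{2}\right)-2\left(p^{0}q^{0}-p\cdot q-m_{0}^{2}\mathfrak{c}^{2}\right)=4m_{0}^{2}\mathfrak{c}^{2}.\]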
The post-collision momentum pair \((p^{\prime\mu},q^{\prime\mu})\) and the pre-collision momentum pair \((p^{\mu},q^{\mu})\) satisfy the relation
\[p^{\mu}+q^{\mu}=p^{\prime\mu}+q^{\prime\mu}. \tag{1.4}\]
One may also write (1.4) as
\[p^{0}+q^{0} =p^{\prime 0}+q^{\prime 0}, \tag{1.5}\] \[p+q =p^{\prime}+q^{\prime}, \tag{1.6}\]
where (1.5) represents the principle of conservation of energy and (1.6) represents the conservation of momentum after a binary collision.
Using Lorentz transformations in [22, 47], in the _center of momentum system_, \(Q(F,F)\) can be written as
\[Q(F,F)= \int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}\varsigma(g,\vartheta) \Big{[}F(p^{\prime})F(q^{\prime})-F(p)F(q)\Big{]}d\omega dq\]
\[:= Q^{+}(F,F)-Q^{-}(F,F), \tag{1.7}\]
where \(v_{\phi}=v_{\phi}(p,q)\) is the Møller velocity
\[v_{\phi}(p,q):=\frac{\mathfrak{c}}{2}\sqrt{\left|\frac{p}{p^{0}}-\frac{q}{q^{0} }\right|^{2}-\left|\frac{p}{p^{0}}\times\frac{q}{q^{0}}\right|^{2}}=\frac{ \mathfrak{c}}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}.\]
The pre-post collisional momentum in (1.7) satisfies
\[\begin{cases}p^{\prime}=\frac{1}{2}(p+q)+\frac{1}{2}g\Big{(}\omega+(\gamma_{0} -1)(p+q)\frac{(p+q)\cdot\omega}{|p+q|^{2}}\Big{)},\\ q^{\prime}=\frac{1}{2}(p+q)-\frac{1}{2}g\Big{(}\omega+(\gamma_{0}-1)(p+q) \frac{(p+q)\cdot\omega}{|p+q|^{2}}\Big{)},\end{cases}\]
where \(\gamma_{0}:=(p^{0}+q^{0})/\sqrt{s}\). The pre-post collisional energy is given by
\[\begin{cases}p^{\prime 0}=\frac{1}{2}(p^{0}+q^{0})+\frac{1}{2}\frac{g}{ \sqrt{s}}(p+q)\cdot\omega,\\ q^{\prime 0}=\frac{1}{2}(p^{0}+q^{0})-\frac{1}{2}\frac{g}{\sqrt{s}}(p+q) \cdot\omega.\end{cases}\]
The scattering angle \(\vartheta\) is defined by
\[\cos\vartheta:=\frac{(p^{\mu}-q^{\mu})(p^{\prime}_{\mu}-q^{\prime}_{\mu})}{g^ {2}}.\]
The angle is well defined under (1.4) and we refer to [17, Lemma 3.15.3].
The function \(\varsigma(g,\vartheta)\) in (1.2) is called the differential cross-section or scattering kernel. The relativistic differential cross section \(\varsigma(g,\vartheta)\) measures the interactions between relativistic particles. Throughout the present paper, we consider the "hard ball" particles
\[\varsigma(g,\vartheta)=\text{constant}.\]
Without loss of generality, we take \(\varsigma(g,\vartheta)=1\) for simplicity. The Newtonian limit in this situation, as \(\mathfrak{c}\to\infty\), is the Newtonian hard-sphere Boltzmann collision operator [48].
### Hilbert expansion
In the present paper, we are concerned with both the hydrodynamic limit and Newtonian limit from the relativistic Boltzmann equation to the classical Euler equations. To achieve this, we perform a Hilbert expansion for the relativistic Boltzmann equation (1.3) with small Knudsen number \(\varepsilon\). To emphasize the dependence on \(\varepsilon\) and \(\mathfrak{c}\) for relativistic Boltzmann solutions, we denote the solutions of (1.3) as \(F^{\varepsilon,\mathfrak{c}}\) and decompose \(F^{\varepsilon,\mathfrak{c}}\) as the sum
\[F^{\varepsilon,\mathfrak{c}}=\sum_{n=0}^{2k-1}\varepsilon^{n}F^{\mathfrak{c}}_ {n}+\varepsilon^{k}F^{\varepsilon,\mathfrak{c}}_{R},\quad k\geq 3, \tag{1.8}\]
where \(F^{\mathfrak{c}}_{0},F^{\mathfrak{c}}_{1},\ldots,F^{\mathfrak{c}}_{2k-1}\) in (1.8) will depend upon \(\mathfrak{c}\) but be independent of \(\varepsilon\). Also, \(F^{\varepsilon,\mathfrak{c}}_{R}\) is called the remainder term which will depend upon \(\varepsilon\) and \(\mathfrak{c}\). For \(\mathfrak{c}=1\), Speck-Strain[46] have already established the Hilbert expansion for the relativistic Boltzmann equation. Since we shall consider both the hydrodynamic limit \(\varepsilon\to 0\) and Newtonian limit \(\mathfrak{c}\to\infty\) of the relativistic Boltzmann equation, it is crucial to derive the uniform-in-\(\mathfrak{c}\) estimates on \(F^{\mathfrak{c}}_{n}\) (\(n=0,1,\cdots,2k-1\)) and uniform in \(\mathfrak{c}\) and \(\varepsilon\) estimates on \(F^{\varepsilon,\mathfrak{c}}_{R}\).
To determine the coefficients \(F^{\mathfrak{c}}_{0}(t,x,p),\cdots\), \(F^{\mathfrak{c}}_{2k-1}(t,x,p)\), we begin by plugging the expansion (1.8) into (1.3) to obtain
\[\partial_{t}\Big{(}\sum_{n=0}^{2k-1}\varepsilon^{n}F^{\mathfrak{c }}_{n}+\varepsilon^{k}F^{\varepsilon,\mathfrak{c}}_{R}\Big{)}+\hat{p}\cdot \nabla_{x}\Big{(}\sum_{n=0}^{2k-1}\varepsilon^{n}F^{\mathfrak{c}}_{n}+ \varepsilon^{k}F^{\varepsilon,\mathfrak{c}}_{R}\Big{)}\] \[=\frac{1}{\varepsilon}Q_{\mathfrak{c}}\Big{(}\sum_{n=0}^{2k-1} \varepsilon^{n}F^{\mathfrak{c}}_{n}+\varepsilon^{k}F^{\varepsilon,\mathfrak{c} }_{R},\sum_{n=0}^{2k-1}\varepsilon^{n}F^{\mathfrak{c}}_{n}+\varepsilon^{k}F ^{\varepsilon,\mathfrak{c}}_{R}\Big{)}. \tag{1.9}\]
Comparing the order of \(\varepsilon\) in (1.9), one has
\[0 =Q_{\mathfrak{c}}\left(F_{0}^{\mathfrak{c}},F_{0}^{\mathfrak{c}} \right),\] \[\partial_{t}F_{0}^{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}F_{0}^{ \mathfrak{c}} =Q_{\mathfrak{c}}\left(F_{0}^{\mathfrak{c}},F_{1}^{\mathfrak{c}} \right)+Q_{\mathfrak{c}}\left(F_{1}^{\mathfrak{c}},F_{0}^{\mathfrak{c}} \right),\] \[\partial_{t}F_{1}^{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}F_{1}^{ \mathfrak{c}} =Q_{\mathfrak{c}}\left(F_{0}^{\mathfrak{c}},F_{2}^{\mathfrak{c}} \right)+Q_{\mathfrak{c}}\left(F_{2}^{\mathfrak{c}},F_{0}^{\mathfrak{c}} \right)+Q_{\mathfrak{c}}\left(F_{1}^{\mathfrak{c}},F_{1}^{\mathfrak{c}} \right),\] \[\cdots\cdots\cdots\] \[\partial_{t}F_{n}^{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}F_{n}^{ \mathfrak{c}} =\sum_{\begin{subarray}{c}i+j=n+1\\ i,j\geq 0\end{subarray}}Q_{\mathfrak{c}}\left(F_{i}^{\mathfrak{c}},F_{j}^{ \mathfrak{c}}\right), \tag{1.10}\] \[\cdots\cdots\cdots\] \[\partial_{t}F_{2k-1}^{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}F_{2k- 1}^{\mathfrak{c}} =\sum_{\begin{subarray}{c}i+j=2k\\ i,j\geq 1\end{subarray}}Q_{\mathfrak{c}}\left(F_{i}^{\mathfrak{c}},F_{j}^{ \mathfrak{c}}\right).\]
The remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\) satisfies the equation
\[\partial_{t}F_{R}^{\varepsilon,\mathfrak{c}}+\hat{p}\cdot\nabla _{x}F_{R}^{\varepsilon,\mathfrak{c}}-\frac{1}{\varepsilon}\left\{Q_{\mathfrak{ c}}\left(F_{0}^{\mathfrak{c}},F_{R}^{\varepsilon,\mathfrak{c}}\right)+Q_{ \mathfrak{c}}\left(F_{R}^{\varepsilon,\mathfrak{c}},F_{0}^{\mathfrak{c}} \right)\right\}\] \[=\varepsilon^{k-1}Q_{\mathfrak{c}}\left(F_{R}^{\varepsilon, \mathfrak{c}},F_{R}^{\varepsilon,\mathfrak{c}}\right)+\sum_{i=1}^{2k-1} \varepsilon^{i-1}\left\{Q_{\mathfrak{c}}\left(F_{i}^{\mathfrak{c}},F_{R}^{ \varepsilon,\mathfrak{c}}\right)+Q_{\mathfrak{c}}\left(F_{R}^{\varepsilon, \mathfrak{c}},F_{i}^{\mathfrak{c}}\right)\right\}+\varepsilon^{k}A, \tag{1.11}\]
where
\[A:=\sum_{\begin{subarray}{c}i+j\geq 2k+1\\ 2\leq i,j\leq 2k-1\end{subarray}}\varepsilon^{i+j-1-2k}Q_{\mathfrak{c}} \left(F_{i}^{\mathfrak{c}},F_{j}^{\mathfrak{c}}\right).\]
From [22, Chap.2], the first equation of (1.10) implies that \(F_{0}^{\mathfrak{c}}\) is a local Maxwellian of the form \(\mathbf{M}_{\mathfrak{c}}(n_{0},u,T_{0};p)\), i.e.,
\[F_{0}^{\mathfrak{c}}(t,x,p)=\mathbf{M}_{\mathfrak{c}}(n_{0},u,T_{0};p)=\frac{ n_{0}\gamma}{4\pi\mathfrak{c}^{3}K_{2}(\gamma)}\exp\Big{\{}\frac{u^{\mu}p_{\mu}}{T_ {0}}\Big{\}}, \tag{1.12}\]
where \(\gamma\) is a dimensionless variable defined as
\[\gamma=\frac{m_{0}\mathfrak{c}^{2}}{k_{B}T_{0}}\]
and \(T_{0}(t,x)>0\) represents the temperature, \(n_{0}(t,x)>0\) is the proper number density, \((u^{0},u)\) is the four-velocity. \(K_{j}(\gamma)\)\((j=0,1,2,\cdots)\) are the modified Bessel functions of the second kind defined in (2.1).
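For orientation, in the rest frame \(u^{\mu}=(\mathfrak{c},0,0,0)\) and with \(m_{0}=k_{B}=1\), the large-\(\gamma\) asymptotics \(K_{2}(\gamma)\sim\sqrt{\pi/(2\gamma)}\,e^{-\gamma}\) show, at least formally, that \(\mathbf{M}_{\mathfrak{c}}\) recovers the classical Maxwellian as \(\mathfrak{c}\to\infty\):

\[\mathbf{M}_{\mathfrak{c}}=\frac{n_{0}\gamma}{4\pi\mathfrak{c}^{3}K_{2}(\gamma)}\exp\Big{\{}\frac{-\mathfrak{c}\sqrt{\mathfrak{c}^{2}+|p|^{2}}}{T_{0}}\Big{\}}\longrightarrow\frac{n_{0}}{(2\pi T_{0})^{3/2}}\exp\Big{\{}\frac{-|p|^{2}}{2T_{0}}\Big{\}},\]

since \(\mathfrak{c}\sqrt{\mathfrak{c}^{2}+|p|^{2}}=\mathfrak{c}^{2}+\frac{|p|^{2}}{2}+O(\mathfrak{c}^{-2})\) for fixed \(p\) and the prefactor behaves like \(e^{\gamma}(2\pi T_{0})^{-3/2}\).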
### The relativistic Euler equations and classical Euler equations
Similar to [10, 23], for \(\alpha\), \(\beta\in\{0,1,2,3\}\), we define the first momentum as
\[I^{\alpha}[\mathbf{M}_{\mathfrak{c}}]:=\int_{\mathbb{R}^{3}}\frac{p^{\alpha}}{ p^{0}}\mathbf{M}_{\mathfrak{c}}dp\]
and the second momentum as
\[T^{\alpha\beta}[\mathbf{M}_{\mathfrak{c}}]:=\int_{\mathbb{R}^{3}}\frac{p^{ \alpha}p^{\beta}}{p^{0}}\mathbf{M}_{\mathfrak{c}}dp.\]
It has been shown in [46, Proposition 3.3] that
\[I^{\alpha}[\mathbf{M}_{\mathfrak{c}}] =\frac{n_{0}u^{\alpha}}{\mathfrak{c}}, \tag{1.13}\] \[T^{\alpha\beta}[\mathbf{M}_{\mathfrak{c}}] =\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u^{\alpha}u^{\beta}+\frac{P_{ 0}g^{\alpha\beta}}{\mathfrak{c}}, \tag{1.14}\]
where \(e_{0}(t,x)>0\) is the proper energy density and \(P_{0}(t,x)>0\) is the pressure.
Projecting the second equation in (1.10) onto \(1\), \(p\), \(p^{0}\), which are five collision invariants for the relativistic Boltzmann collision operator \(Q_{\mathfrak{c}}(\cdot,\cdot)\), and using (1.13)-(1.14), one obtains that \((n_{0},u,T_{0})\) satisfies the relativistic Euler equations:
\[\begin{cases}\frac{1}{\mathfrak{c}}\partial_{t}\left(n_{0}u^{0} \right)+\nabla_{x}\cdot(n_{0}u)=0,\\ \frac{1}{\mathfrak{c}}\partial_{t}\left[\left(e_{0}+P_{0}\right)u^{0}u\right] +\nabla_{x}\cdot\left[\left(e_{0}+P_{0}\right)u\otimes u\right]+\mathfrak{c} ^{2}\nabla_{x}P_{0}=0,\\ \frac{1}{\mathfrak{c}}\partial_{t}\left[\left(e_{0}+P_{0}\right)\left(u^{0} \right)^{2}-\mathfrak{c}^{2}P_{0}\right]+\nabla_{x}\cdot\left[\left(e_{0}+P_{ 0}\right)u^{0}u\right]=0.\end{cases} \tag{1.15}\]
The fluid variables \(n_{0}\), \(T_{0}\), \(S\), \(P_{0}\), \(e_{0}\) in (1.15) satisfy the following relations
\[P_{0} =k_{B}n_{0}T_{0}=m_{0}\mathfrak{c}^{2}\frac{n_{0}}{\gamma}, \tag{1.16}\] \[e_{0} =m_{0}\mathfrak{c}^{2}n_{0}\frac{K_{1}(\gamma)}{K_{2}(\gamma)}+ 3P_{0}=m_{0}\mathfrak{c}^{2}n_{0}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-P_{0},\] (1.17) \[n_{0} =4\pi e^{4}m_{0}^{3}\mathfrak{c}^{3}\exp\left(\frac{-S}{k_{B}} \right)\frac{K_{2}(\gamma)}{\gamma}\exp\left(\gamma\frac{K_{1}(\gamma)}{K_{2} (\gamma)}\right), \tag{1.18}\]
where \(k_{B}>0\) is Boltzmann's constant, \(S(t,x)>0\) is the entropy per particle which is defined by (1.18) and \(K_{i}(\gamma)\) is the modified Bessel function of the second kind defined later.
Denote
\[V:=\begin{pmatrix}P_{0}\\ u\\ S\end{pmatrix}.\]
We assume that (1.15) is supplemented with initial data
\[V|_{t=0}=V_{0}. \tag{1.19}\]
The existence of local smooth solutions of the relativistic Euler equations (1.15) with initial condition (1.19) can be established by standard hyperbolic symmetrized method and it holds that
\[\|V-\overline{V}\|_{H^{N_{0}}}\lesssim 1, \tag{1.20}\]
where \(\overline{V}:=(\overline{P},0,\overline{S})\) is a constant background with \(\overline{P}>0\) and \(\overline{S}>0\). We point out that the estimate in (1.20) is uniform-in-\(\mathfrak{c}\), which is important for us, see Lemma 3.1 for details.
From [8, 46], we have the following two properties :
_Property 1_: The map \(\mathbf{\Phi}:(n_{0},T_{0})\mapsto(P_{0},S)\) is an auto-diffeomorphism of the region \((0,\infty)\times(0,\infty)\), where the map is defined by (1.16)-(1.18).
_Property 2_: Under the equations of state (1.16)-(1.18), there hold
1. There exists a smooth function \(\mathcal{H}\) such that \(P_{0}\) can be expressed in terms of \(e_{0}\) and \(S\) as \(P_{0}=\mathcal{H}(e_{0},S)\).
2. The relativistic Euler equations are hyperbolic.
3. The relativistic Euler equations are causal (the speed of sound \(a:=\mathfrak{c}\sqrt{\frac{\partial P_{0}}{\partial e_{0}}}\Big{|}_{S}\) is real and less than the speed of light). Actually, it holds that \(0<a<\frac{\mathfrak{c}}{\sqrt{3}}\).
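The upper bound is attained only formally in the ultrarelativistic regime: for the radiative equation of state \(P_{0}=e_{0}/3\) one would have

\[a=\mathfrak{c}\sqrt{\frac{\partial P_{0}}{\partial e_{0}}\Big{|}_{S}}=\frac{\mathfrak{c}}{\sqrt{3}}.\]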
From [46, Proposition 3.4], the following Gibbs relation (see [11]) holds
\[T_{0}dS=d\Big{(}\frac{e_{0}}{n_{0}}\Big{)}+P_{0}d\Big{(}\frac{1}{n_{0}}\Big{)},\]
which is equivalent to
\[\frac{\partial e_{0}}{\partial n_{0}}\Big{|}_{S}=\frac{e_{0}+P_{0}}{n_{0}}, \quad\frac{\partial e_{0}}{\partial S}\Big{|}_{n_{0}}=n_{0}T_{0}.\]
For simplicity of presentation, in the rest of this paper, we always assume that
\[k_{B}=1,\quad m_{0}=1.\]
Formally, when \(\mathfrak{c}\) tends to infinity, the relativistic Euler equations (1.15) reduces to
\[\begin{cases}\partial_{t}\rho+\nabla_{x}\cdot(\rho\mathfrak{u})=0,\\ \partial_{t}(\rho\mathfrak{u})+\nabla_{x}\cdot(\rho\mathfrak{u}\otimes \mathfrak{u})+\nabla_{x}\mathcal{P}=0,\\ \partial_{t}\Big{(}\rho\Big{(}\frac{1}{2}|\mathfrak{u}|^{2}+\mathcal{E}\Big{)} \Big{)}+\nabla_{x}\cdot\Big{(}\Big{(}\rho\Big{(}\frac{1}{2}|\mathfrak{u}|^{2}+ \mathcal{E}\Big{)}+\mathcal{P}\Big{)}\mathfrak{u}\Big{)}=0,\end{cases} \tag{1.21}\]
which is the classical compressible Euler equations. Here \(\rho(t,x)>0\) denotes the density of the fluid, \(\mathfrak{u}(t,x)\) is velocity, \(\mathcal{P}(t,x)\) is pressure, \(\mathcal{E}(t,x)>0\) is internal energy per unit mass. The fluid variables \(\rho\), \(\theta\), \(\eta\), \(\mathcal{P}\) and \(\mathcal{E}\) satisfy the following relations
\[\mathcal{P}=\rho\theta,\quad\eta=-\ln(A_{0}\rho\theta^{-\frac{3}{2}}),\quad \mathcal{E}=\frac{3}{2}\theta, \tag{1.22}\]
where \(A_{0}=(2\pi)^{-\frac{3}{2}}e^{-\frac{5}{2}}\) and \(\theta(t,x)\) is the temperature of the fluid, \(\eta(t,x)\) is the physical entropy. It is clear that
\[\theta d\eta=d\mathcal{E}-\frac{\mathcal{P}}{\rho^{2}}d\rho.\]
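Indeed, (1.22) gives \(d\mathcal{E}=\frac{3}{2}d\theta\) and \(d\eta=\frac{3}{2}\frac{d\theta}{\theta}-\frac{d\rho}{\rho}\), so that

\[\theta d\eta=\frac{3}{2}d\theta-\frac{\theta}{\rho}d\rho=d\mathcal{E}-\frac{\mathcal{P}}{\rho^{2}}d\rho.\]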
Denote
\[W:=\begin{pmatrix}\mathcal{P}\\ \mathfrak{u}\\ \eta\end{pmatrix},\]
then the classical Euler equations (1.21) can be written as a symmetric hyperbolic system
\[\mathbf{D}_{0}\partial_{t}W+\sum_{j=1}^{3}\mathbf{D}_{j}\partial_{j}W=0, \tag{1.23}\]
where
\[\mathbf{D}_{0}=\begin{pmatrix}1&0&0\\ 0&\sigma^{2}\rho^{2}\mathbf{I}&0\\ 0&0&1\end{pmatrix},\quad\mathbf{D}_{j}=\begin{pmatrix}\mathfrak{u}_{j}&\sigma ^{2}\rho\mathbf{e}_{j}^{t}&0\\ \sigma^{2}\rho\mathbf{e}_{j}&\sigma^{2}\rho^{2}\mathfrak{u}_{j}\mathbf{I}&0 \\ 0&0&\mathfrak{u}_{j}\end{pmatrix}.\]
The quantity \(\sigma=\sqrt{\frac{\partial\mathcal{P}}{\partial\rho}}\Big{|}_{\eta}>0\) is the sound speed of the classical Euler equations.
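Under the relations (1.22), holding \(\eta\) fixed forces \(\rho\theta^{-\frac{3}{2}}\) to be constant, so \(\theta\propto\rho^{2/3}\) and \(\mathcal{P}=\rho\theta\propto\rho^{5/3}\) along isentropes, hence

\[\sigma=\sqrt{\frac{\partial\mathcal{P}}{\partial\rho}\Big{|}_{\eta}}=\sqrt{\frac{5\theta}{3}}.\]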
For simplicity, we supplement (1.23) with the same initial data and constant background as in the relativistic Euler case, that is,
\[W|_{t=0}=V_{0},\quad\overline{W}=\overline{V},\]
where \(\overline{W}:=(\overline{\mathcal{P}},0,\overline{\eta})\). It is a classical result from the theory of symmetric hyperbolic systems that (1.23) admits a local smooth solution with smooth initial data, see Lemma 3.2 for details.
### A brief history of the hydrodynamic and Newtonian limits for (relativistic) Boltzmann equation
For the hydrodynamic limit of the non-relativistic Boltzmann equation, there have been extensive studies on this subject and we only mention a few works. In the founding work of Maxwell [41] and Boltzmann [5], it is shown that the Boltzmann equation is closely related to the fluid dynamical systems for both compressible and incompressible flows. Hilbert, and independently Enskog and Chapman, developed formal small-parameter expansion methods, called the Hilbert expansion and the Chapman-Enskog expansion respectively, and established the connection between the Boltzmann equation and the compressible (incompressible) Euler equations, the compressible (incompressible) Navier-Stokes (Fourier) systems, the acoustic system, etc. It is an important and challenging problem to rigorously justify these formal approximations. In fact, the purpose of Hilbert's sixth problem [31] is to establish the laws of motion of continua from more microscopic physical models, such as Boltzmann theory, from a rigorous mathematical standpoint. For the hydrodynamic limit from the Boltzmann equation to the compressible Euler equations, Caflisch [6] rigorously justified the validity of the limit by employing the truncated Hilbert expansion method; see also [36, 42, 49], and [25, 27] for an application of the \(L^{2}\)-\(L^{\infty}\) approach. For the hydrodynamic limit to the incompressible Navier-Stokes system, see [1, 2, 4, 9, 12, 13, 14, 20, 26, 30, 32, 34, 39, 45, 52] and the references cited therein. For the compressible Euler limit and acoustic limit of the Boltzmann equation with specular reflection boundary conditions, we refer the reader to the recent work of Guo-Huang-Wang [28]. For other works connected to the hydrodynamic limit, we refer to [3, 37, 19, 29] and the review articles [21, 40, 50].
Although there have been satisfactory results on the hydrodynamic limit of the non-relativistic Boltzmann equation, much less is known on the hydrodynamic limit and/or Newtonian limit of the relativistic Boltzmann equation despite its importance. For the Newtonian limit of relativistic particles, Calogero [7] established the existence of local-in-time relativistic Boltzmann solutions in a periodic box, and then proved that such solutions converge, in a suitable norm, to the Newtonian Boltzmann solutions as \(\mathfrak{c}\to\infty\). Later, for the case near vacuum, Strain [48] proved the existence of a unique global-in-time mild solution and justified the Newtonian limit on arbitrary time intervals \([0,T]\). For the hydrodynamic limit of the relativistic Boltzmann equation, Speck-Strain [46] demonstrated the hydrodynamic limit from the relativistic Boltzmann equation to the relativistic Euler equations for local-in-time smooth solutions. It is shown in [23] that solutions of the relativistic Vlasov-Maxwell-Boltzmann system converge to solutions of the relativistic Euler-Maxwell system globally in time, as the Knudsen number \(\varepsilon\to 0\).
In the present paper, we are concerned with both the hydrodynamic limit \(\varepsilon\to 0\) and Newtonian limit \(\mathfrak{c}\to\infty\) from the relativistic Boltzmann equation to the classical Euler equations. This is achieved by employing the Hilbert expansion method and uniform in \(\mathfrak{c}\) and \(\varepsilon\) estimates on the Hilbert expansion.
### Main results
We consider the perturbation around the local Maxwellian \(\mathbf{M}_{\mathfrak{c}}\):
\[F(t,x,p)=\mathbf{M}_{\mathfrak{c}}(t,x,p)+\sqrt{\mathbf{M}_{\mathfrak{c}}(t,x,p)}f(t,x,p). \tag{1.24}\]
We define the linearized collision operator \(\mathbf{L}_{\mathfrak{c}}f\) and the nonlinear collision operator \(\Gamma_{\mathfrak{c}}\left(f_{1},f_{2}\right)\):
\[\mathbf{L}_{\mathfrak{c}}f:=-\frac{1}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}[Q_{ \mathfrak{c}}(\sqrt{\mathbf{M}_{\mathfrak{c}}}f,\mathbf{M}_{\mathfrak{c}})+Q_ {\mathfrak{c}}(\mathbf{M}_{\mathfrak{c}},\sqrt{\mathbf{M}_{\mathfrak{c}}}f)]= \nu_{\mathfrak{c}}f-\mathbf{K}_{\mathfrak{c}}f,\]
\[\Gamma_{\mathfrak{c}}\left(f_{1},f_{2}\right):=\frac{1}{\sqrt{\mathbf{M}_{ \mathfrak{c}}}}Q_{\mathfrak{c}}\left(\sqrt{\mathbf{M}_{\mathfrak{c}}}f_{1}, \sqrt{\mathbf{M}_{\mathfrak{c}}}f_{2}\right),\]
where the collision frequency \(\nu_{\mathfrak{c}}=\nu_{\mathfrak{c}}(t,x,p)\) is defined as
\[\nu_{\mathfrak{c}}(p):=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}(p,q)\mathbf{M}_{\mathfrak{c}}(q)d\omega dq=\frac{\mathfrak{c}}{2}\frac{1}{p^{0}}\int_{\mathbb{R}^{3}}\frac{dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}W(p,q\mid p^{\prime},q^{\prime})\mathbf{M}_{\mathfrak{c}}(q) \tag{1.25}\]
and \(\mathbf{K}_{\mathfrak{c}}f\) takes the following form:
\[\mathbf{K}_{\mathfrak{c}}f:=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^ {2}}v_{\phi}(p,q)\sqrt{\mathbf{M}_{\mathfrak{c}}(q)}\left[\sqrt{\mathbf{M}_{ \mathfrak{c}}(q^{\prime})}f\left(p^{\prime}\right)+\sqrt{\mathbf{M}_{ \mathfrak{c}}(p^{\prime})}f\left(q^{\prime}\right)\right]d\omega dq\] \[\qquad\qquad-\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}(p,q)\sqrt{\mathbf{M}_{\mathfrak{c}}(p)\mathbf{M}_{\mathfrak{c}}(q)}f(q)d \omega dq\] \[=\mathbf{K}_{\mathfrak{c}2}f-\mathbf{K}_{\mathfrak{c}1}f.\]
We introduce the global Maxwellian \(J_{\mathfrak{c}}(p)\) as
\[J_{\mathfrak{c}}(p):=\frac{n_{M}\gamma_{M}}{4\pi\mathfrak{c}^{3}K_{2}(\gamma_{M})}\exp\Big{(}\frac{-\mathfrak{c}p^{0}}{T_{M}}\Big{)}, \tag{1.26}\]
where \(n_{M}\), \(T_{M}\) are positive constants and \(\gamma_{M}=\frac{\mathfrak{c}^{2}}{T_{M}}\). For each \(\ell\geq 0\), we also define the weight function \(w_{\ell}\) as
\[w_{\ell}:=w_{\ell}(p)=(1+|p|^{2})^{\frac{\ell}{2}}. \tag{1.27}\]
We then define a corresponding weighted \(L^{\infty}\) norm by
\[\|h\|_{\infty,\ell}=\|w_{\ell}h\|_{L^{\infty}}.\]
Our first result is the Hilbert expansion of the relativistic Boltzmann equation with uniform-in-\(\mathfrak{c}\) estimates. Notice that \(k\) and \(N_{0}\) are defined in (1.8) and Lemma 3.1.
**Theorem 1.1**.: _Assume \(k=3\), \(N_{0}\geq 10\). Let \((n_{0}(t,x),u(t,x),T_{0}(t,x))\) be a smooth solution to the relativistic Euler equations (1.15) with initial data (1.19) and constant background \(\overline{V}\) for \((t,x)\in[0,T]\times\mathbb{R}^{3}\). Suppose that \(\mathbf{M}_{\mathfrak{c}}(t,x,p)\) is the local relativistic Maxwellian in (1.12) and there exist constants \(C>0\), \(n_{M}>0\), \(T_{M}>0\), and \(\alpha\in(\frac{1}{2},1)\) such that_
\[\frac{J_{\mathfrak{c}}(p)}{C}\leq\mathbf{M}_{\mathfrak{c}}(t,x,p)\leq CJ_{ \mathfrak{c}}^{\alpha}(p). \tag{1.28}\]
_Suppose that initially_
\[F^{\varepsilon,\mathfrak{c}}(0,x,p)=\mathbf{M}_{\mathfrak{c}}(0,x,p)+\sum_{n= 1}^{2k-1}\varepsilon^{n}F_{n}^{\mathfrak{c}}(0,x,p)+\varepsilon^{k}F_{R}^{ \varepsilon,\mathfrak{c}}(0,x,p)\geq 0\]
_with_
\[\varepsilon^{\frac{3}{2}}\Big{\|}\frac{F_{R}^{\varepsilon,\mathfrak{c}}(0)}{ \sqrt{J_{\mathfrak{c}}}}\Big{\|}_{\infty,\ell}+\Big{\|}\frac{F_{R}^{ \varepsilon,\mathfrak{c}}(0)}{\sqrt{\mathbf{M}_{\mathfrak{c}}(0)}}\Big{\|}_{ 2}\leq C<\infty.\]
_Then there exist two independent positive constants \(\varepsilon_{0}\in(0,1]\) and \(\mathfrak{c}_{0}\gg 1\) such that, for each \(0<\varepsilon\leq\varepsilon_{0}\) and \(\mathfrak{c}\geq\mathfrak{c}_{0}\), the relativistic Boltzmann equation (1.3) admits a unique classical solution \(F^{\varepsilon,\mathfrak{c}}\) for \((t,x,p)\in[0,T]\times\mathbb{R}^{3}\times\mathbb{R}^{3}\) in the following form of expansion_
\[F^{\varepsilon,\mathfrak{c}}(t,x,p)=\mathbf{M}_{\mathfrak{c}}(t,x,p)+\sum_{n= 1}^{2k-1}\varepsilon^{n}F_{n}^{\mathfrak{c}}(t,x,p)+\varepsilon^{k}F_{R}^{ \varepsilon,\mathfrak{c}}(t,x,p)\geq 0,\]
_where the functions \(F_{n}^{\mathfrak{c}}\) \((n=1,\cdots,2k-1)\) are constructed in Proposition 5.1._
_Furthermore, there exists a constant \(C_{T}>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0}]\) and for any \(\ell\geq 9\), the following estimate holds:_
\[\varepsilon^{\frac{3}{2}}\sup_{0\leq t\leq T}\left\|\frac{F_{R}^{ \varepsilon,\mathfrak{c}}(t)}{\sqrt{J_{\mathfrak{c}}}}\right\|_{\infty,\ell}+ \sup_{0\leq t\leq T}\left\|\frac{F_{R}^{\varepsilon,\mathfrak{c}}(t)}{\sqrt{ \mathbf{M}_{\mathfrak{c}}(t)}}\right\|_{2}\] \[\leq C_{T}\left\{\varepsilon^{\frac{3}{2}}\Big{\|}\frac{F_{R}^{ \varepsilon,\mathfrak{c}}(0)}{\sqrt{J_{\mathfrak{c}}}}\Big{\|}_{\infty,\ell}+ \Big{\|}\frac{F_{R}^{\varepsilon,\mathfrak{c}}(0)}{\sqrt{\mathbf{M}_{ \mathfrak{c}}(0)}}\Big{\|}_{2}+1\right\}.\]
_Moreover, we have that_
\[\sup_{0\leq t\leq T}\left\|\frac{F^{\varepsilon,\mathfrak{c}}(t)-\mathbf{M}_{ \mathfrak{c}}(t)}{\sqrt{J_{\mathfrak{c}}}}\right\|_{\infty}+\sup_{0\leq t\leq T }\left\|\frac{F^{\varepsilon,\mathfrak{c}}(t)-\mathbf{M}_{\mathfrak{c}}(t)}{ \sqrt{\mathbf{M}_{\mathfrak{c}}(t)}}\right\|_{2}\leq C_{T}\varepsilon, \tag{1.29}\]
_where the constants \(C\) and \(C_{T}>0\) are independent of \(\varepsilon\) and \(\mathfrak{c}\)._
**Remark 1.2**.: It follows from (1.29) that we have established the uniform-in-\(\mathfrak{c}\) hydrodynamic limit from the relativistic Boltzmann equation to the relativistic Euler equations.
**Remark 1.3**.: When \(\frac{|u|}{\mathfrak{c}}\) is suitably small, it has been shown in [46, Lemma 1.1] that there exist positive constants \(C>0\), \(n_{M}>0\), \(T_{M}>0\), and \(\alpha\in(\frac{1}{2},1)\), which are independent of \(\mathfrak{c}\), such that (1.28) holds.
**Remark 1.4**.: The uniform-in-\(\mathfrak{c}\) estimates for the relativistic Boltzmann collision operators developed here can also be applied to the Newtonian limit from the relativistic Boltzmann equation to the Newtonian Boltzmann equation. This will be considered in a forthcoming paper.
With the uniform-in-\(\mathfrak{c}\) estimates in Theorem 1.1, one can further obtain both the hydrodynamic limit \(\varepsilon\to 0\) and the Newtonian limit \(\mathfrak{c}\to\infty\) at the same time.
**Theorem 1.5**.: _Assume that all conditions in Theorem 1.1 are satisfied. Suppose that \((\rho(t,x),\mathfrak{u}(t,x),\theta(t,x))\) is a smooth solution to the classical Euler equations (1.21) with the same initial data and constant background as in the relativistic Euler case. Let \(\mu\) be the local Maxwellian of the classical Boltzmann equation, i.e.,_
\[\mu(t,x,p)=\frac{\rho}{(2\pi\theta)^{\frac{3}{2}}}e^{-\frac{|p-\mathfrak{u}|^{2}}{2\theta}}. \tag{1.30}\]
_Then there exist independent positive constants \(\varepsilon_{0}\in(0,1]\) and \(\mathfrak{c}_{0}\gg 1\) such that for all \(0<\varepsilon\leq\varepsilon_{0}\) and \(\mathfrak{c}\geq\mathfrak{c}_{0}\), the following estimate holds:_
\[\sup_{0\leq t\leq T}\left\|\big{(}F^{\varepsilon,\mathfrak{c}}-\mu\big{)}(t)e ^{\delta_{0}|p|}\right\|_{\infty}\leq C_{T}\varepsilon+C_{T}\mathfrak{c}^{- \frac{3}{2}}, \tag{1.31}\]
_where all the positive constants \(\varepsilon_{0}\), \(\mathfrak{c}_{0}\), \(C_{T}\) and \(\delta_{0}\) are independent of \(\varepsilon\) and \(\mathfrak{c}\)._
**Remark 1.6**.: (1.31) indicates that we have established both the hydrodynamic limit and the Newtonian limit from the relativistic Boltzmann equation to the classical Euler equations. We point out that the two limits \(\varepsilon\to 0\) and \(\mathfrak{c}\to\infty\) can be taken independently at the same time without assuming any dependence between \(\varepsilon\) and \(\mathfrak{c}\).
**Remark 1.7**.: It is worth noting that we make no effort to obtain the best convergence rates, which are not our main focus here. Actually, for the Newtonian limit, one can obtain the convergence rate \(\frac{1}{\mathfrak{c}^{2-\epsilon}}\) for any given small \(\epsilon>0\).
**Remark 1.8**.: Due to the effect of special relativity, we can only obtain the particle velocity weight \(e^{\delta_{0}|p|}\) in (1.31).
### Main difficulties and strategy of the proof
We make some comments on the main ideas of the proof and explain the main difficulties and techniques involved in the process.
It is noted that, for the relativistic Boltzmann equation (1.3), one cannot transform the solution \(F(t,x,p)\) into \(F(t,x,\mathfrak{p})\) with the change of variables \(p=\mathfrak{c}\,\mathfrak{p}\). We take the global Maxwellian \(\mathbf{M}_{\mathfrak{c}}(1,0,1;p)\) as an example. In fact, \(\mathbf{M}_{\mathfrak{c}}(1,0,1;p)\cong e^{\mathfrak{c}^{2}-\mathfrak{c}p^{0}}\). It is clear that
\[e^{\mathfrak{c}^{2}-\mathfrak{c}p^{0}}=e^{-\frac{\mathfrak{c}^{2}|\mathfrak{p}|^{2}}{1+\sqrt{1+|\mathfrak{p}|^{2}}}},\]
which is actually still a function of \(\mathfrak{p}\) and \(\mathfrak{c}\). On the other hand, for the normalized particle velocity \(\hat{p}\), it holds that
\[\hat{p}=\mathfrak{c}\frac{p}{p^{0}}=\frac{\mathfrak{c}p}{\sqrt{|p|^{2}+\mathfrak{c}^{2}}}=\frac{\mathfrak{c}\,\mathfrak{p}}{\sqrt{1+|\mathfrak{p}|^{2}}},\]
which is also a function of \(\mathfrak{p}\) and \(\mathfrak{c}\). Hence the collision term \(Q_{\mathfrak{c}}(F,F)\) cannot be transformed into a new form depending only on \(\mathfrak{p}\). Thus the roles of the light speed \(\mathfrak{c}\) and the Knudsen number \(\varepsilon\) are totally different, and it is important to establish the uniform-in-\(\mathfrak{c}\) estimates for the relativistic Boltzmann collision operators in the present paper.
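To make this concrete, the following Python snippet (a numerical illustration of ours, not part of the analysis; the value of \(\mathfrak{p}\) is a placeholder) evaluates the Maxwellian exponent \(\mathfrak{c}^{2}-\mathfrak{c}p^{0}\) and the normalized velocity \(\hat{p}\) at a fixed rescaled momentum \(\mathfrak{p}\) for several values of \(\mathfrak{c}\); both quantities still depend on \(\mathfrak{c}\), so no change of variables in \(p\) alone removes the light speed.

```python
import numpy as np

def exponent(P, c):
    # c^2 - c*p^0 with p = c*P and p^0 = c*sqrt(1 + |P|^2)
    return -c**2 * (P @ P) / (1.0 + np.sqrt(1.0 + P @ P))

def p_hat(P, c):
    # normalized velocity c*p/p^0 = c*P / sqrt(1 + |P|^2)
    return c * P / np.sqrt(1.0 + P @ P)

P = np.array([0.3, -0.1, 0.2])  # fixed rescaled momentum (placeholder values)
for c in (10.0, 100.0, 1000.0):
    print(c, exponent(P, c), p_hat(P, c))
# Both outputs grow with c at fixed P, in contrast to the Knudsen number,
# which can be scaled out of (t, x).
```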
To justify both the hydrodynamic limit and Newtonian limit from the relativistic Boltzmann equation to the classical Euler equations, we utilize the Hilbert expansion for the relativistic Boltzmann equation with respect to the small Knudsen number. The key point is to establish the uniform-in-\(\mathfrak{c}\) estimates for the Hilbert expansion.
Firstly, we prove the existence of smooth solutions to the relativistic Euler equations with uniform-in-\(\mathfrak{c}\) estimates, see Lemma 3.1. Then, by applying the energy method for symmetric hyperbolic systems, we establish the Newtonian limit from the relativistic Euler equations (1.15) to the classical Euler equations (1.21) with convergence rate \(\mathfrak{c}^{-2}\), see section 3.3 for details of the proof.
Secondly, we aim to establish the uniform-in-\(\mathfrak{c}\) bounds for the Hilbert expansion \(F_{n}^{\mathfrak{c}}\) (\(n\geq 1\)) as well as the remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\). As explained above, since the collision operators necessarily depend on the speed of light \(\mathfrak{c}\), the main difficulty lies in the uniform-in-\(\mathfrak{c}\) estimates on the collision operators \(Q_{\mathfrak{c}}(\cdot,\cdot)\), \(\mathbf{L}_{\mathfrak{c}}\) and \(\mathbf{L}_{\mathfrak{c}}^{-1}\).
For the relativistic Boltzmann equation, due to the complexity of the local relativistic Maxwellian \(\mathbf{M}_{\mathfrak{c}}\), the expression of the kernel \(k_{\mathfrak{c}}(p,q)\) (see (2.9)) of \(\mathbf{K}_{\mathfrak{c}}\) is very complicated and it is not an easy task to obtain the uniform-in-\(\mathfrak{c}\) estimate for \(k_{\mathfrak{c}}(p,q)\). By applying the Lorentz transformation and dividing the integration region into three parts: \(\{|\bar{p}-\bar{q}|\geq\mathfrak{c}^{\frac{1}{3}}\}\), \(\{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{3}},\,\&\,|p|\leq\mathfrak{c}\}\) and \(\{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{3}},\,\&\,|p|\geq\mathfrak{c}\}\), one can get
\[\int_{\mathbb{R}^{3}}|k_{\mathfrak{c}}(p,q)|dq\lesssim\begin{cases}(1+|p|)^{-1 },&|p|\leq\mathfrak{c},\\ \mathfrak{c}^{-1},&|p|\geq\mathfrak{c},\end{cases}\]
see Lemmas 4.3-4.5 for details. Similarly, we can also prove
\[\nu_{\mathfrak{c}}(p)\cong\begin{cases}1+|p|,&|p|\leq\mathfrak{c},\\ \mathfrak{c},&|p|\geq\mathfrak{c},\end{cases}\]
see Lemma 4.6 for details.
Let \(k(p,q)\) be the kernel of the classical Boltzmann equation of hard sphere (see (4.93)). Observe that \(k_{\mathfrak{c}}(p,q)\) and \(k(p,q)\) depend on the relativistic Euler solutions and classical Euler solutions, respectively. By tedious calculations and the Newtonian limit of the relativistic Euler equations (see Proposition 3.8), we can establish the following
\[\int_{\mathbb{R}^{3}}|k_{\mathfrak{c}}(p,q)-k(p,q)|dq\lesssim\mathfrak{c}^{- \frac{3}{8}}\to 0\quad\text{as}\quad\mathfrak{c}\to\infty, \tag{1.32}\]
see Lemmas 4.8-4.9 for details. Since the orthonormal basis \(\{\chi_{\alpha}^{\mathfrak{c}}\}_{\alpha=0}^{4}\) of the null space \(\mathcal{N}_{\mathfrak{c}}\) itself depends on \(\mathfrak{c}\), we also need to prove that
\[\lim_{\mathfrak{c}\to\infty}\chi_{\alpha}^{\mathfrak{c}}=\chi_{\alpha}, \tag{1.33}\]
where \(\{\chi_{\alpha}\}_{\alpha=0}^{4}\) is the corresponding orthonormal basis of the null space \(\mathcal{N}\) for the classical Boltzmann equation, see Lemma 4.12 for details. With the help of (1.32)-(1.33) and a contradiction argument, one can finally obtain the following uniform-in-\(\mathfrak{c}\) coercivity estimate for \(\mathbf{L}_{\mathfrak{c}}\)
\[\langle\mathbf{L}_{\mathfrak{c}}g,g\rangle\geq\zeta_{0}\|(\mathbf{I}-\mathbf{P}_{\mathfrak{c}})g\|_{\nu_{\mathfrak{c}}}^{2},\quad g\in L_{\nu_{\mathfrak{c}}}^{2}(\mathbb{R}^{3}).\]
Here we emphasize that \(\zeta_{0}>0\) is a positive constant independent of \(\mathfrak{c}\). With the uniform-in-\(\mathfrak{c}\) coercivity estimate for \(\mathbf{L}_{\mathfrak{c}}\), one can derive the uniform-in-\(\mathfrak{c}\) exponential decay for \(\mathbf{L}_{\mathfrak{c}}^{-1}\) by similar arguments as in [33], see section 4.2 for details.
Utilizing the above uniform-in-\(\mathfrak{c}\) estimates, we can establish the uniform bounds on the Hilbert expansions \(F_{n}^{\mathfrak{c}}(t,x,p)\) (\(n\geq 1\)), see Proposition 5.1 for details. Based on the estimates on \(F_{n}^{\mathfrak{c}}(t,x,p)\) (\(n\geq 1\)), we use the \(L^{2}-L^{\infty}\) framework in [24, 25, 46] to control the remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\) uniformly in \(\mathfrak{c}\) and \(\varepsilon\), see Lemmas 6.3-6.4 for details. Hence we establish the Hilbert expansion of the relativistic Boltzmann equation with uniform-in-\(\mathfrak{c}\) estimates, see Theorem 1.1.
Finally, by combining the Hilbert expansion in Theorem 1.1 and the Newtonian limit of relativistic Euler equations in Proposition 3.8, we can justify both the hydrodynamic limit and Newtonian limit of the relativistic Boltzmann equation to the classical Euler equations, see Theorem 1.5 for details.
### Organization of the paper
In section 2, we present some results about Bessel functions and give explicit expressions for the kernel of the linearized relativistic collision operator. Section 3 is dedicated to the existence of local-in-time solutions of the relativistic Euler equations and the Newtonian limit of the relativistic Euler equations. In section 4, we develop a series of uniform-in-\(\mathfrak{c}\) estimates to obtain the key coercivity estimate on the linearized operator \(\mathbf{L}_{\mathfrak{c}}\) as well as \(\mathbf{L}_{\mathfrak{c}}^{-1}\), which allow us to establish the uniform-in-\(\mathfrak{c}\) bounds on the Hilbert expansion \(F_{n}^{\mathfrak{c}}\) in section 5. In section 6, we use the \(L^{2}-L^{\infty}\) method to derive the uniform in \(\mathfrak{c}\) and \(\varepsilon\) estimates on the remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\) and prove the main theorems, Theorems 1.1 and 1.5. The appendix is devoted to presenting the orthonormal basis of the null space \(\mathcal{N}_{\mathfrak{c}}\) of \(\mathbf{L}_{\mathfrak{c}}\).
### Notations
Throughout this paper, \(C\) denotes a generic positive constant which is independent of \(\mathfrak{c}\) and \(\varepsilon\), while \(C_{a},C_{b},\ldots\) denote generic positive constants depending on \(a,b,\ldots\), respectively; all of them may vary from line to line. \(A\lesssim B\) means that there exists a constant \(C>0\), independent of \(\mathfrak{c}\) and \(\varepsilon\), such that \(A\leq CB\). \(A\cong B\) means that both \(A\lesssim B\) and \(B\lesssim A\) hold. \(\|\cdot\|_{2}\) denotes the standard \(L^{2}\left(\mathbb{R}_{x}^{3}\right)\)-, \(L^{2}\left(\mathbb{R}_{p}^{3}\right)\)- or \(L^{2}\left(\mathbb{R}_{x}^{3}\times\mathbb{R}_{p}^{3}\right)\)-norm, and \(\|\cdot\|_{\infty}\) denotes the corresponding \(L^{\infty}\)-norm. We also introduce the weighted \(L^{\infty}\) norm \(\|\cdot\|_{\infty,\ell}=\|w_{\ell}\cdot\|_{\infty}\). We denote by \(\langle\cdot,\cdot\rangle\) the \(L^{2}\left(\mathbb{R}_{x}^{3}\right)\), \(L^{2}\left(\mathbb{R}_{p}^{3}\right)\) or \(L^{2}\left(\mathbb{R}_{x}^{3}\times\mathbb{R}_{p}^{3}\right)\) inner product. Moreover, we denote \(\|\cdot\|_{\nu_{\mathfrak{c}}}:=\|\sqrt{\nu_{\mathfrak{c}}}\cdot\|_{2}\).
## 2. Preliminaries
We define the modified Bessel function of the second kind (see [10, (3.19)])
\[K_{j}(z)=\left(\frac{z}{2}\right)^{j}\frac{\Gamma(\frac{1}{2})}{\Gamma(j+ \frac{1}{2})}\int_{1}^{\infty}e^{-zt}\left(t^{2}-1\right)^{j-\frac{1}{2}}dt, \quad j\geq 0,\ z>0. \tag{2.1}\]
We will frequently use the following properties for \(K_{j}(z)\).
**Lemma 2.1**.: ([43, 51]) _It holds that_
\[K_{j+1}(z)=\frac{2j}{z}K_{j}(z)+K_{j-1}(z),\quad j\geq 1,\]
_and_
\[\frac{d}{dz}\left(\frac{K_{j}(z)}{z^{j}}\right)=-\left(\frac{K_{j+1}(z)}{z^{j }}\right),\quad j\geq 0.\]
_The asymptotic expansion for \(K_{j}(z)\) takes the form_
\[K_{j}(z)=\sqrt{\frac{\pi}{2z}}\frac{1}{e^{z}}\left[\sum_{m=0}^{n-1}A_{j,m}z^{ -m}+\gamma_{j,n}(z)z^{-n}\right],\quad j\geq 0,\ n\geq 1,\]
_where the following additional identities and inequalities also hold:_
\[A_{j,0} =1,\] \[A_{j,m} =\frac{1}{m!8^{m}}(4j^{2}-1)(4j^{2}-3^{2})\cdots(4j^{2}-(2m-1)^{ 2}),\quad j\geq 0,\ m\geq 1,\] \[|\gamma_{j,n}(z)| \leq 2|A_{j,n}|\exp\left([j^{2}-\frac{1}{4}]z^{-1}\right),\quad j \geq 0,\ n\geq 1,\] \[K_{j}(z) <K_{j+1}(z),\quad j\geq 0.\]
_Furthermore, for \(j\leq n+\frac{1}{2}\), one has a more exact estimate_
\[|\gamma_{j,n}(z)|\leq|A_{j,n}|.\]
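As a quick sanity check of Lemma 2.1 (ours, purely illustrative; \(z=50\) and the truncation order are placeholder choices), one can compare \(K_{j}(z)\), the recurrence and the truncated asymptotic expansion numerically with mpmath:

```python
from math import factorial
from mpmath import mp, besselk, sqrt, pi, exp, mpf

mp.dps = 30  # working precision
z = mpf(50)

# Recurrence K_{j+1}(z) = (2j/z) K_j(z) + K_{j-1}(z).
for j in (1, 2, 3):
    lhs = besselk(j + 1, z)
    rhs = 2 * j / z * besselk(j, z) + besselk(j - 1, z)
    assert abs(lhs - rhs) / lhs < mpf(10) ** (-25)

def A(j, m):
    # coefficients A_{j,m} of the asymptotic expansion
    if m == 0:
        return mpf(1)
    val = mpf(1)
    for i in range(1, m + 1):
        val *= 4 * j**2 - (2 * i - 1) ** 2
    return val / (factorial(m) * 8**m)

def K_asym(j, z, n):
    # truncated expansion sqrt(pi/(2z)) e^{-z} sum_{m<n} A_{j,m} z^{-m}
    return sqrt(pi / (2 * z)) * exp(-z) * sum(A(j, m) * z**(-m) for m in range(n))

for j in (2, 3):
    # relative error should be of size O(z^{-4}) for n = 4
    print(j, abs(besselk(j, z) - K_asym(j, z, 4)) / besselk(j, z))
```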
We next deduce the kernel of the linearized relativistic collision operator. Recall that
\[\mathbf{K}_{\mathfrak{c}}f=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}(p,q)\sqrt{\mathbf{M}_{\mathfrak{c}}(q)}\left[\sqrt{\mathbf{M}_{\mathfrak{c}}(q ^{\prime})}f\left(p^{\prime}\right)+\sqrt{\mathbf{M}_{\mathfrak{c}}(p^{\prime })}f\left(q^{\prime}\right)\right]d\omega dq\]
\[-\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}(p,q)\sqrt{\mathbf{M}_{ \mathfrak{c}}(p)\mathbf{M}_{\mathfrak{c}}(q)}f(q)d\omega dq\] \[=\frac{\mathfrak{c}}{2}\frac{1}{p^{0}}\int_{\mathbb{R}^{3}}\frac{ dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}W(p,q \mid p^{\prime},q^{\prime})\sqrt{\mathbf{M}_{\mathfrak{c}}(q)\mathbf{M}_{ \mathfrak{c}}(q^{\prime})}f(p^{\prime})\] \[\qquad+\frac{\mathfrak{c}}{2}\frac{1}{p^{0}}\int_{\mathbb{R}^{3}} \frac{dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{ \mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}W(p,q\mid p^{\prime},q^{ \prime})\sqrt{\mathbf{M}_{\mathfrak{c}}(q)\mathbf{M}_{\mathfrak{c}}(p^{ \prime})}f(q^{\prime})\] \[\qquad-\frac{\mathfrak{c}}{2}\frac{1}{p^{0}}\int_{\mathbb{R}^{3}} \frac{dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{ \mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}W(p,q\mid p^{\prime},q^{ \prime})\sqrt{\mathbf{M}_{\mathfrak{c}}(p)\mathbf{M}_{\mathfrak{c}}(q)}f(q)\] \[:=\mathbf{K}_{\mathfrak{c}2}f-\mathbf{K}_{\mathfrak{c}1}f.\]
Then it is clear that the kernel of \(\mathbf{K}_{\mathfrak{c}1}\) takes the form
\[k_{\mathfrak{c}1}(p,q)=\int_{\mathbb{S}^{2}}v_{\phi}(p,q)\sqrt{\mathbf{M}_{ \mathfrak{c}}(p)\mathbf{M}_{\mathfrak{c}}(q)}d\omega=\frac{\pi\mathfrak{c}g \sqrt{s}}{p^{0}q^{0}}\sqrt{\mathbf{M}_{\mathfrak{c}}(p)\mathbf{M}_{\mathfrak{c }}(q)}. \tag{2.2}\]
By similar arguments as in [47], we can deduce that each term of \(\mathbf{K}_{\mathfrak{c}2}f\) is equal to
\[\frac{\mathfrak{c}}{2}\frac{1}{p^{0}}\int_{\mathbb{R}^{3}}\frac{dq}{q^{0}}f(q )\Big{\{}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{ \mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}\bar{s}\delta^{(4)}(p^{\mu}+p^{ \prime\mu}-q^{\mu}-q^{\prime\mu})\sqrt{\mathbf{M}_{\mathfrak{c}}(p^{\prime}) \mathbf{M}_{\mathfrak{c}}(q^{\prime})}\Big{\}},\]
which yields that the kernel of \(\mathbf{K}_{\mathfrak{c}2}\) is
\[k_{\mathfrak{c}2}(p,q)=\frac{\mathfrak{c}}{p^{0}q^{0}}\int_{\mathbb{R}^{3}} \frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}} \bar{s}\delta^{(4)}(p^{\mu}+p^{\prime\mu}-q^{\mu}-q^{\prime\mu})\sqrt{ \mathbf{M}_{\mathfrak{c}}(p^{\prime})\mathbf{M}_{\mathfrak{c}}(q^{\prime})}, \tag{2.3}\]
where
\[\bar{s}=\bar{g}^{2}+4\mathfrak{c}^{2},\quad\bar{g}^{2}=g^{2}-\frac{1}{2}(p^{ \mu}+q^{\mu})(p^{\prime}_{\mu}+q^{\prime}_{\mu}-p_{\mu}-q_{\mu}).\]
We introduce the Lorentz transformation \(\bar{\Lambda}\)
\[\bar{\Lambda}=\left(\bar{\Lambda}^{\mu}_{\nu}\right)=\left(\begin{array}{ cccc}\tilde{r}&\frac{\tilde{r}v_{1}}{\mathfrak{c}}&\frac{\tilde{r}v_{2}}{ \mathfrak{c}}&\frac{\tilde{r}v_{3}}{\mathfrak{c}}\\ \frac{\tilde{r}v_{1}}{\mathfrak{c}}&1+(\tilde{r}-1)\frac{v_{1}^{2}}{|v|^{2}}& (\tilde{r}-1)\frac{v_{1}v_{2}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{1}v_{3}}{|v|^{2 }}\\ \frac{\tilde{r}v_{2}}{\mathfrak{c}}&(\tilde{r}-1)\frac{v_{1}v_{2}}{|v|^{2}}& 1+(\tilde{r}-1)\frac{v_{2}^{2}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{2}v_{3}}{|v|^{2 }}\\ \frac{\tilde{r}v_{3}}{\mathfrak{c}}&(\tilde{r}-1)\frac{v_{1}v_{3}}{|v|^{2}}& (\tilde{r}-1)\frac{v_{2}v_{3}}{|v|^{2}}&1+(\tilde{r}-1)\frac{v_{3}^{2}}{|v|^{2 }}\end{array}\right) \tag{2.4}\]
and its inverse transformation
\[\bar{\Lambda}^{-1}=\left(\begin{array}{cccc}\tilde{r}&-\frac{\tilde{r}v_{1}}{\mathfrak{c}}&-\frac{\tilde{r}v_{2}}{\mathfrak{c}}&-\frac{\tilde{r}v_{3}}{\mathfrak{c}}\\ -\frac{\tilde{r}v_{1}}{\mathfrak{c}}&1+(\tilde{r}-1)\frac{v_{1}^{2}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{1}v_{2}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{1}v_{3}}{|v|^{2}}\\ -\frac{\tilde{r}v_{2}}{\mathfrak{c}}&(\tilde{r}-1)\frac{v_{1}v_{2}}{|v|^{2}}&1+(\tilde{r}-1)\frac{v_{2}^{2}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{2}v_{3}}{|v|^{2}}\\ -\frac{\tilde{r}v_{3}}{\mathfrak{c}}&(\tilde{r}-1)\frac{v_{1}v_{3}}{|v|^{2}}&(\tilde{r}-1)\frac{v_{2}v_{3}}{|v|^{2}}&1+(\tilde{r}-1)\frac{v_{3}^{2}}{|v|^{2}}\end{array}\right),\]
where \(\tilde{r}=\frac{u^{0}}{\mathfrak{c}},v_{i}=\frac{\mathfrak{c}u_{i}}{u^{0}}\). A direct calculation shows that
\[\bar{\Lambda}^{-1}(u^{0},u^{1},u^{2},u^{3})^{t}=(\mathfrak{c},0,0,0)^{t}.\]
Assume \(\bar{\Lambda}\bar{P}=P\), then one has
\[\bar{P}=\bar{\Lambda}^{-1}P=\left(\begin{matrix}\frac{u^{0}p^{0}-u\cdot p}{ \mathfrak{c}}\\ -\frac{u_{1}p^{0}}{\mathfrak{c}}+p_{1}+\left(\frac{u^{0}}{\mathfrak{c}}-1\right) \frac{u_{1}}{|u|^{2}}u\cdot p\\ -\frac{u_{2}p^{0}}{\mathfrak{c}}+p_{2}+\left(\frac{u^{0}}{\mathfrak{c}}-1\right) \frac{u_{2}}{|u|^{2}}u\cdot p\\ -\frac{u_{3}p^{0}}{\mathfrak{c}}+p_{3}+\left(\frac{u^{0}}{\mathfrak{c}}-1\right) \frac{u_{3}}{|u|^{2}}u\cdot p\end{matrix}\right). \tag{2.5}\]
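The following snippet (a numerical spot check of ours; the values of \(\mathfrak{c}\), \(u\) and \(p\) are placeholders) confirms that \(\bar{\Lambda}^{-1}\) sends \((u^{0},u)\) to \((\mathfrak{c},0,0,0)\) and preserves the mass shell \((p^{0})^{2}-|p|^{2}=\mathfrak{c}^{2}\):

```python
import numpy as np

c = 100.0                       # placeholder light speed
u = np.array([3.0, -1.0, 2.0])  # placeholder bulk velocity
u0 = np.sqrt(c**2 + u @ u)

r = u0 / c
v = c * u / u0

Linv = np.zeros((4, 4))
Linv[0, 0] = r
Linv[0, 1:] = -r * v / c
Linv[1:, 0] = -r * v / c
Linv[1:, 1:] = np.eye(3) + (r - 1.0) * np.outer(v, v) / (v @ v)

print(Linv @ np.array([u0, *u]))         # ~ (c, 0, 0, 0)

p = np.array([0.7, 0.2, -0.5])
p0 = np.sqrt(c**2 + p @ p)
Pbar = Linv @ np.array([p0, *p])
print(Pbar[0]**2 - Pbar[1:] @ Pbar[1:])  # ~ c^2: the mass shell is preserved
```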
Using Lorentz transformation \(\bar{\Lambda}\), we can express \(k_{\mathfrak{c}2}(p,q)\) as
\[k_{\mathfrak{c}2}(p,q)=\frac{\mathfrak{c}c_{0}}{p^{0}q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}\tilde{s}\delta^{(4)}(\bar{p}^{\mu}+p^{\prime\mu}-\bar{q}^{\mu}-q^{\prime\mu})e^{-\frac{\mathfrak{c}(p^{\prime 0}+q^{\prime 0})}{2T_{0}}},\]
where \(\tilde{s}=-(\bar{p}^{\mu}+p^{\prime\mu})(\bar{p}_{\mu}+p^{\prime}_{\mu})\) and
\[c_{0}:=\frac{n_{0}\gamma}{4\pi\mathfrak{c}^{3}K_{2}(\gamma)}.\]
By similar arguments as in [47], we can write \(k_{\mathfrak{c}2}(p,q)\) as
\[k_{\mathfrak{c}2}(p,q)=\frac{\mathfrak{c}c_{0}\pi s^{\frac{3}{2}}}{4gp^{0}q^{0}}\int_{0}^{\infty}\frac{y(1+\sqrt{y^{2}+1})}{\sqrt{y^{2}+1}}e^{-\bar{\boldsymbol{\ell}}\sqrt{y^{2}+1}}I_{0}(\bar{\boldsymbol{j}}y)dy, \tag{2.6}\]
where
\[I_{0}(r)=\frac{1}{2\pi}\int_{0}^{2\pi}e^{r\cos\Theta}d\Theta\]
and
\[\bar{\boldsymbol{\ell}}=\frac{\boldsymbol{\ell}}{T_{0}},\quad \bar{\boldsymbol{j}}=\frac{\boldsymbol{j}}{T_{0}},\quad\boldsymbol{\ell}= \frac{\mathfrak{c}}{2}(\bar{p}^{0}+\bar{q}^{0}),\quad\boldsymbol{j}= \mathfrak{c}\frac{|\bar{p}\times\bar{q}|}{g}.\]
Using the fact that for any \(R>r\geq 0\),
\[\int_{0}^{\infty}\frac{e^{-R\sqrt{1+y^{2}}}yI_{0}(ry)}{\sqrt{1+y^{ 2}}}dy =\frac{e^{-\sqrt{R^{2}-r^{2}}}}{\sqrt{R^{2}-r^{2}}},\] \[\int_{0}^{\infty}e^{-R\sqrt{1+y^{2}}}yI_{0}(ry)dy =\frac{R}{R^{2}-r^{2}}\left\{1+\frac{1}{\sqrt{R^{2}-r^{2}}} \right\}e^{-\sqrt{R^{2}-r^{2}}},\]
one can express \(k_{\mathfrak{c}2}(p,q)\) as
\[k_{\mathfrak{c}2}(p,q)=\frac{\mathfrak{c}c_{0}\pi s^{\frac{3}{2}}}{4gp^{0}q^{0 }}\left[J_{1}(\bar{\boldsymbol{\ell}},\bar{\boldsymbol{j}})+J_{2}(\bar{ \boldsymbol{\ell}},\bar{\boldsymbol{j}})\right], \tag{2.7}\]
where
\[J_{1}(\bar{\boldsymbol{\ell}},\bar{\boldsymbol{j}})=\frac{\bar{\boldsymbol{ \ell}}}{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}}\left[1+\frac{1} {\sqrt{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}}}\right]e^{- \sqrt{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}}},\quad J_{2}( \bar{\boldsymbol{\ell}},\bar{\boldsymbol{j}})=\frac{1}{\sqrt{\bar{\boldsymbol {\ell}}^{2}-\bar{\boldsymbol{j}}^{2}}}e^{-\sqrt{\bar{\boldsymbol{\ell}}^{2}- \bar{\boldsymbol{j}}^{2}}}. \tag{2.8}\]
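The two closed-form integrals above can be confirmed numerically; the following sketch (ours) uses the exponentially scaled Bessel function \(e^{-x}I_{0}(x)\) from scipy to avoid overflow, with placeholder values \(R=6\), \(r=2.5\) standing in for \(\bar{\boldsymbol{\ell}}\) and \(\bar{\boldsymbol{j}}\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import i0e   # i0e(x) = exp(-x) * I_0(x), overflow-safe

R, r = 6.0, 2.5                 # any R > r >= 0
w = np.sqrt(R**2 - r**2)

f1 = lambda y: np.exp(-R * np.sqrt(1 + y**2) + r * y) * y * i0e(r * y) / np.sqrt(1 + y**2)
f2 = lambda y: np.exp(-R * np.sqrt(1 + y**2) + r * y) * y * i0e(r * y)

I1, _ = quad(f1, 0, np.inf)
I2, _ = quad(f2, 0, np.inf)

print(I1, np.exp(-w) / w)                       # both ~ e^{-w}/w
print(I2, R / w**2 * (1 + 1 / w) * np.exp(-w))  # both ~ (R/w^2)(1 + 1/w) e^{-w}
```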
For later use, we denote the kernel of \(\mathbf{K}_{\mathfrak{c}}\) as
\[k_{\mathfrak{c}}(p,q):=k_{\mathfrak{c}2}(p,q)-k_{\mathfrak{c}1}(p,q). \tag{2.9}\]
It is well-known that \(\mathbf{L}_{\mathfrak{c}}\) is a self-adjoint non-negative definite operator on \(L_{p}^{2}\) with the null space
\[\mathcal{N}_{\mathfrak{c}}=\mathrm{span}\left\{\sqrt{\mathbf{M}_{\mathfrak{c }}},\ p_{i}\sqrt{\mathbf{M}_{\mathfrak{c}}}\ (i=1,2,3),\ p^{0}\sqrt{\mathbf{M}_{\mathfrak{c}}}\right\}.\]
Let \(\mathbf{P}_{\mathfrak{c}}\) be the orthogonal projection from \(L_{p}^{2}\) onto \(\mathcal{N}_{\mathfrak{c}}\). For given \(f\), we denote the macroscopic part \(\mathbf{P}_{\mathfrak{c}}f\) as
\[\mathbf{P}_{\mathfrak{c}}f=\left\{a_{f}+b_{f}\cdot p+c_{f}p^{0}\right\}\sqrt{ \mathbf{M}_{\mathfrak{c}}},\]
and further denote \(\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}f\) to be the microscopic part of \(f\). For the orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\), see the appendix.
## 3. The Newtonian limit of the relativistic Euler equations
### Reformulation of the relativistic Euler equations
By a delicate computation, the relativistic Euler equations (1.15) can be rewritten as the following symmetric hyperbolic system
\[\mathbf{B}_{0}\partial_{t}V+\sum_{j=1}^{3}\mathbf{B}_{j}\partial_{j}V=0, \tag{3.1}\]
where
\[\mathbf{B}_{0}=\begin{pmatrix}1&n_{0}\frac{\partial P_{0}}{\partial n_{0}}\frac{u^{t}}{(u^{0})^{2}}&0\\ n_{0}\frac{\partial P_{0}}{\partial n_{0}}\frac{u}{(u^{0})^{2}}&\frac{1}{\mathfrak{c}^{2}}n_{0}\frac{\partial P_{0}}{\partial n_{0}}(e_{0}+P_{0})(\mathbf{I}-\frac{u\otimes u}{(u^{0})^{2}})&0\\ 0&0&\frac{1}{\mathfrak{c}}u^{0}\end{pmatrix}\]
and
\[\mathbf{B}_{j}=\begin{pmatrix}\frac{\mathfrak{c}}{u^{0}}u_{j}&\frac{\mathfrak{c}}{u^{0}}n_{0}\frac{\partial P_{0}}{\partial n_{0}}\mathbf{e}_{j}^{t}&0\\ \frac{\mathfrak{c}}{u^{0}}n_{0}\frac{\partial P_{0}}{\partial n_{0}}\mathbf{e}_{j}&\frac{1}{\mathfrak{c}u^{0}}n_{0}\frac{\partial P_{0}}{\partial n_{0}}(e_{0}+P_{0})u_{j}\big{(}\mathbf{I}-\frac{u\otimes u}{(u^{0})^{2}}\big{)}&0\\ 0&0&u_{j}\end{pmatrix}.\]
It is clear that \(\mathbf{B}_{0}\) and \(\mathbf{B}_{j}\) (\(j=1,2,3\)) are symmetric. Recall that
\[\frac{\partial e_{0}}{\partial n_{0}}\Big{|}_{S}=\frac{e_{0}+P_{0}}{n_{0}}, \quad\frac{\partial e_{0}}{\partial S}\Big{|}_{n_{0}}=n_{0}T_{0},\]
then one has
\[n_{0}\frac{\partial P_{0}}{\partial n_{0}}\Big{|}_{S}=n_{0}\frac{\partial P_{0}}{\partial e_{0}}\Big{|}_{S}\cdot\frac{\partial e_{0}}{\partial n_{0}}\Big{|}_{S}=\frac{a^{2}}{\mathfrak{c}^{2}}(e_{0}+P_{0}),\]
where \(a^{2}=\mathfrak{c}^{2}\frac{\partial P_{0}}{\partial e_{0}}|_{S}\) is the square of sound speed. Using the fact that \(a\in\big{(}0,\frac{\mathfrak{c}}{\sqrt{3}}\big{)}\) (see [8, 46]), one can show that \(\mathbf{B}_{0}\) is a positive definite matrix.
Denoting
\[\zeta_{0}:=\frac{a}{\mathfrak{c}^{2}}(e_{0}+P_{0})=an_{0}\frac{K_{3}(\gamma)} {K_{2}(\gamma)}>0,\]
we can rewrite \(\mathbf{B}_{0}\) as
\[\mathbf{B}_{0}=\begin{pmatrix}1&a\zeta_{0}\frac{u^{t}}{(u^{0})^{2}}&0\\ a\zeta_{0}\frac{u}{(u^{0})^{2}}&\zeta_{0}^{2}(\mathbf{I}-\frac{u\otimes u}{(u^ {0})^{2}})&0\\ 0&0&\frac{u^{0}}{\mathfrak{c}}\end{pmatrix}.\]
### Local smooth solutions to the relativistic Euler and classical Euler
Assume that
\[\eta_{1}(V)\leq\eta_{2}(V)\leq\eta_{3}(V)\leq\eta_{4}(V)\leq\eta_{5}(V)\]
are the five eigenvalues of \(\mathbf{B}_{0}\). Since \(\mathbf{B}_{0}\) is positive definite, it follows that \(\eta_{i}(V)>0\) for all admissible \(V\), \(i=1,2,\cdots,5\). By Vieta's formulas, one has
\[\sum_{i=1}^{5}\eta_{i}(V)=\sum_{i=1}^{5}(\mathbf{B}_{0})_{ii}=1+\frac{u^{0}}{ \mathfrak{c}}+\zeta_{0}^{2}\Big{(}2+\frac{\mathfrak{c}^{2}}{(u^{0})^{2}}\Big{)} \tag{3.2}\]
and
\[\Pi_{i=1}^{5}\eta_{i}(V)=\det\mathbf{B}_{0}=\frac{\zeta_{0}^{6}}{\mathfrak{c}( u^{0})^{3}}\Big{[}\mathfrak{c}^{4}+|u|^{2}(\mathfrak{c}^{2}-a^{2})\Big{]}. \tag{3.3}\]
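Both identities (3.2)-(3.3) can be verified directly with the \(\zeta_{0}\)-form of \(\mathbf{B}_{0}\) given below in this subsection; the following numerical check (ours, with placeholder values of \(a\), \(\zeta_{0}\), \(u\) and \(\mathfrak{c}\)) illustrates this:

```python
import numpy as np

c = 50.0
u = np.array([2.0, 1.0, -3.0])
u0 = np.sqrt(c**2 + u @ u)
a, zeta0 = 4.0, 1.7   # placeholders; only 0 < a < c/sqrt(3) matters

B0 = np.zeros((5, 5))
B0[0, 0] = 1.0
B0[0, 1:4] = a * zeta0 * u / u0**2
B0[1:4, 0] = a * zeta0 * u / u0**2
B0[1:4, 1:4] = zeta0**2 * (np.eye(3) - np.outer(u, u) / u0**2)
B0[4, 4] = u0 / c

print(np.trace(B0), 1 + u0 / c + zeta0**2 * (2 + c**2 / u0**2))   # (3.2)
print(np.linalg.det(B0),
      zeta0**6 / (c * u0**3) * (c**4 + (u @ u) * (c**2 - a**2)))  # (3.3)
```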
Since all the elements of \(\mathbf{B}_{0}\) are smooth functions of \(V\), it follows that \(\eta_{i}(V)\) (\(i=1,\cdots,5\)) are continuous functions of \(V\). Therefore, for any compact subset \(\mathcal{V}\subset\mathbb{R}^{+}\times\mathbb{R}^{3}\times\mathbb{R}^{+}\) and suitably large \(\mathfrak{c}\), the right-hand sides of (3.2) and (3.3) are bounded from below and above by positive constants independent of \(\mathfrak{c}\). Thus there exists a positive constant \(\beta>0\), independent of \(\mathfrak{c}\), such that
\[\beta\mathbf{I}_{5}\leq\mathbf{B}_{0}(V)\leq\beta^{-1}\mathbf{I}_{5},\quad V\in \mathcal{V} \tag{3.4}\]
holds in the sense of quadratic forms.
**Lemma 3.1** (Local existence for the relativistic Euler equations).: _Considering the relativistic Euler equations (3.1) with a complete equation of state (1.16)-(1.18) in some open domain \(\mathcal{V}\subset\left\{(P_{0},u,S)\in\mathbb{R}^{+}\times\mathbb{R}^{3} \times\mathbb{R}^{+}\right\}\), we assume that \(\overline{V}=(\overline{P},0,\overline{S})\in\mathcal{V}\) with \(\overline{P}>0\), \(\overline{S}>0\) being given constants which are independent of the light speed \(\mathfrak{c}\). Suppose that_
\[V_{0}\in\overline{V}+H^{N_{0}}\left(\mathbb{R}^{3}\right),\]
_with \(N_{0}\geq 3\) and \(V_{0}\in\mathcal{V}_{1}\subset\subset\mathcal{V}\). Then there exist a local existence time \(T_{1}>0\) which is independent of \(\mathfrak{c}\), and a unique classical solution \(V\in C^{1}\left([0,T_{1}]\times\mathbb{R}^{3}\right)\) of the Cauchy problem associated with (3.1) and the initial data \(V(0)=V_{0}\) such that \(V-\overline{V}\) belongs to \(C\left([0,T_{1}];H^{N_{0}}\right)\cap C^{1}\left([0,T_{1}];H^{N_{0}-1}\right)\) and the following estimate holds_
\[\|V-\overline{V}\|_{C\left([0,T_{1}];H^{N_{0}}\right)\cap C^{1}\left([0,T_{1}] ;H^{N_{0}-1}\right)}\leq C_{1},\]
_where \(C_{1}\) depends on \(\|V_{0}-\overline{V}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\)._
Proof.: The proof is very similar to the one of [16, Theorem 10.1]. The only difference lies in showing that \(T_{1}\) and the upper bound for the solution are independent of \(\mathfrak{c}\). The fact that \(T_{1}\) is independent of \(\mathfrak{c}\) follows from the fact that \(\beta\) in (3.4) is independent of \(\mathfrak{c}\). In addition, from the specific expressions for the elements of \(\mathbf{B}_{\alpha}\) (\(\alpha=0,1,2,3\)), we can easily derive that
\[\|\nabla_{x}\mathbf{B}_{0}(V)\|_{H^{N_{0}-1}}+\sum_{j=1}^{3}\|\mathbf{B}_{j}(V )\|_{H^{N_{0}}}\leq C\|V-\overline{V}\|_{H^{N_{0}}},\]
where \(C\) depends on \(\|V-\overline{V}\|_{L^{\infty}}\) and is independent of \(\mathfrak{c}\). The remaining arguments are very similar to the ones in [16, Theorem 10.1] and we omit the details here for brevity. Therefore the proof is completed.
For later use, we present the local result for the classical Euler equations (1.23), see [15, 16, 35, 38] for instance.
**Lemma 3.2**.: _[_16_]_ _Considering the classical Euler equations (1.23) with equation of state (1.22) in some open domain \(\mathcal{W}\subset\left\{(\mathcal{P},\mathfrak{u},\eta)\in\mathbb{R}^{+} \times\mathbb{R}^{3}\times\mathbb{R}^{+}\right\}\), we assume that \(\overline{W}=(\overline{\mathcal{P}},0,\overline{\eta})\in\mathcal{W}\) with \(\overline{\mathcal{P}}>0\), \(\overline{\eta}>0\) being given constants. Suppose that_
\[W_{0}\in\overline{W}+H^{N_{0}}\left(\mathbb{R}^{3}\right),\quad\overline{W} \in\mathcal{W}\]
_with \(N_{0}\geq 3\) and \(W_{0}\in\mathcal{W}_{1}\subset\subset\mathcal{W}\). Then there exist a local existing time \(T_{2}>0\) and a unique classical solution \(W\in C^{1}\left([0,T_{2}]\times\mathbb{R}^{3}\right)\) of the Cauchy problem associated with (1.23) and the initial data \(W(0)=W_{0}\) such that \(W-\overline{W}\) belongs to \(C\left([0,T_{2}];H^{N_{0}}\right)\cap C^{1}\left([0,T_{2}];H^{N_{0}-1}\right)\) and the following estimate holds_
\[\|W-\overline{W}\|_{C\left([0,T_{2}];H^{N_{0}}\right)\cap C^{1}\left([0,T_{2}] ;H^{N_{0}-1}\right)}\leq C_{2},\]
_where \(C_{2}\) depends on \(\|W_{0}-\overline{W}\|_{H^{N_{0}}}\). Furthermore, the lifespan \(T_{2}\) has the following lower bound_
\[T_{2}\geq C_{3}\Big{(}\|W_{0}-\overline{W}\|_{H^{N_{0}}}\Big{)}^{-1},\]
_where \(C_{3}\) is independent of \(\mathfrak{c}\) and \(\|W_{0}-\overline{W}\|_{H^{N_{0}}}\)._
### Newtonian limit from the relativistic Euler to the classical Euler
In this subsection, we focus on the Newtonian limit of the relativistic Euler equations. It follows from (1.23) and (3.1) that
\[\mathbf{D}_{0}\partial_{t}(W-V)+\sum_{j=1}^{3}\mathbf{D}_{j}\partial_{j}(W-V)= \Upsilon,\quad(W-V)\Big{|}_{t=0}=0, \tag{3.5}\]
where
\[\Upsilon=(\mathbf{B}_{0}-\mathbf{D}_{0})\partial_{t}V+\sum_{j=1}^{3}(\mathbf{B }_{j}-\mathbf{D}_{j})\partial_{j}V.\]
**Lemma 3.3**.: _There hold_
\[\sigma^{2}=\frac{\partial\mathcal{P}}{\partial\rho}\Big{|}_{\eta}=\frac{5}{3}\theta \tag{3.6}\]
_and_
\[a^{2}=\mathfrak{c}^{2}\frac{\partial P_{0}}{\partial e_{0}}\Big{|}_{S}=\frac{ 5}{3}T_{0}+O(\mathfrak{c}^{-2}). \tag{3.7}\]
Proof.: For (3.6), it follows from (1.22) that
\[\mathcal{P}=\rho\theta=A_{0}^{\frac{2}{3}}\rho^{\frac{5}{3}}e^{\frac{2}{3}\eta },\quad A_{0}=(2\pi)^{-\frac{3}{2}}e^{-\frac{5}{2}}, \tag{3.8}\]
which implies that
\[\frac{\partial\mathcal{P}}{\partial\rho}\Big{|}_{\eta}=\frac{5}{3}A_{0}^{\frac{2}{3}}\rho^{\frac{2}{3}}e^{\frac{2}{3}\eta}=\frac{5\mathcal{P}}{3\rho}=\frac{5}{3}\theta.\]
For (3.7), it follows from (1.16)-(1.18) that
\[P_{0} =4\pi e^{4}e^{-S}\mathfrak{c}^{5}\frac{K_{2}(\gamma)}{\gamma^{2} }\exp\left(\gamma\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\right),\] \[e_{0} =P_{0}\Big{(}\gamma\frac{K_{1}(\gamma)}{K_{2}(\gamma)}+3\Big{)}.\]
It is clear that
\[\frac{\partial P_{0}}{\partial\gamma}\Big{|}_{S}=\frac{\partial P_{0}}{ \partial e_{0}}\Big{|}_{S}\cdot\frac{\partial e_{0}}{\partial\gamma}\Big{|}_{S}.\]
It follows from [46, (3.32)] that
\[\Big{(}\frac{\partial P_{0}}{\partial e_{0}}\Big{|}_{S}\Big{)}^{-1}=\frac{ \left.\frac{\partial e_{0}}{\partial\gamma}\right|_{S}}{\frac{\partial P_{0}} {\partial\gamma}\Big{|}_{S}}=\gamma\frac{K_{1}(\gamma)}{K_{2}(\gamma)}+3+ \frac{\gamma\left(\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\right)^{2}+4\frac{K_{1} (\gamma)}{K_{2}(\gamma)}-\gamma}{\gamma\left(\frac{K_{1}(\gamma)}{K_{2}( \gamma)}\right)^{2}+3\frac{K_{1}(\gamma)}{K_{2}(\gamma)}-\gamma-\frac{4}{ \gamma}}.\]
Using the asymptotic expansions of \(K_{2}(\gamma)\) and \(K_{3}(\gamma)\) in Lemma 2.1, one has
\[\frac{K_{3}^{2}(\gamma)}{K_{2}^{2}(\gamma)}-1 =\frac{\frac{5}{\gamma}+\frac{115}{4\gamma^{2}}+\frac{2205}{32 \gamma^{3}}+\frac{10395}{128\gamma^{4}}+O(\gamma^{-5})}{1+\frac{15}{4\gamma}+ \frac{165}{32\gamma^{2}}+\frac{315}{128\gamma^{3}}+O(\gamma^{-4})}\] \[=\frac{5}{\gamma}+\frac{10}{\gamma^{2}}+\frac{45}{8\gamma^{3}}- \frac{15}{4\gamma^{4}}+O(\gamma^{-5}) \tag{3.9}\]
and
\[\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1=\frac{\frac{5}{2\gamma}+\frac{105}{16 \gamma^{2}}+\frac{945}{256\gamma^{3}}+O(\gamma^{-4})}{1+\frac{15}{8\gamma}+ \frac{105}{128\gamma^{2}}+O(\gamma^{-3})}=\frac{5}{2\gamma}+\frac{15}{8\gamma ^{2}}-\frac{15}{8\gamma^{3}}+O(\gamma^{-4}). \tag{3.10}\]
Then one has
\[\left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\right)^{2}-\frac{5}{ \gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\] \[=\Big{[}\left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\right)^{2}-1- \frac{5}{\gamma}\Big{]}-\frac{5}{\gamma}\Big{(}\frac{K_{3}(\gamma)}{K_{2}( \gamma)}-1\Big{)}\] \[=\frac{10}{\gamma^{2}}+\frac{45}{8\gamma^{3}}-\frac{15}{4\gamma^ {4}}+O(\gamma^{-5})-\frac{5}{\gamma}\Big{(}\frac{5}{2\gamma}+\frac{15}{8\gamma ^{2}}-\frac{15}{8\gamma^{3}}+O(\gamma^{-4})\Big{)}\] \[=-\frac{5}{2\gamma^{2}}-\frac{15}{4\gamma^{3}}+\frac{45}{8\gamma^ {4}}+O(\gamma^{-5}). \tag{3.11}\]
Applying \(\frac{K_{1}(\gamma)}{K_{2}(\gamma)}=\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac {4}{\gamma}\), we have
\[\left(\gamma\frac{\partial P_{0}}{\partial e_{0}}\Big{|}_{S} \right)^{-1} =\frac{K_{1}(\gamma)}{K_{2}(\gamma)}+\frac{3}{\gamma}+\frac{ \left(\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\right)^{2}+\frac{4}{\gamma}\frac{K_ {1}(\gamma)}{K_{2}(\gamma)}-1}{\left(\frac{K_{1}(\gamma)}{K_{2}(\gamma)} \right)^{2}+\frac{3}{\gamma}\frac{K_{1}(\gamma)}{K_{2}(\gamma)}-1-\frac{4}{ \gamma^{2}}}\cdot\frac{1}{\gamma}\] \[=\frac{K_{3}(\gamma)}{K_{2}(\gamma)}+\frac{\frac{K_{3}(\gamma)}{ K_{2}(\gamma)}}{\left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\right)^{2}-\frac{5}{ \gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1}\cdot\frac{1}{\gamma^{2}}\] \[=1+O(\gamma^{-1})+\frac{1+O(\gamma^{-1})}{-\frac{5}{2}+O(\gamma^ {-1})}=\frac{3}{5}+O(\gamma^{-1}),\]
which implies that
\[a^{2}=T_{0}\gamma\frac{\partial P_{0}}{\partial e_{0}}\Big{|}_{S}=T_{0}\Big{(} \frac{5}{3}+O(\gamma^{-1})\Big{)}=\frac{5}{3}T_{0}+O(\mathfrak{c}^{-2}).\]
Therefore the proof is completed.
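The convergence \(a^{2}\to\frac{5}{3}T_{0}\) can also be observed numerically from the exact Bessel-ratio expression in the proof; the snippet below (ours, not part of the proof) evaluates \(a^{2}/T_{0}\) with \(h=K_{3}(\gamma)/K_{2}(\gamma)\) and checks that \(\big{(}a^{2}/T_{0}-\frac{5}{3}\big{)}\gamma\) stays bounded, consistent with the \(O(\mathfrak{c}^{-2})\) remainder:

```python
from mpmath import mp, besselk, mpf

mp.dps = 30

def a2_over_T0(gamma):
    h = besselk(3, gamma) / besselk(2, gamma)
    # (gamma * dP0/de0|_S)^{-1} = h + h / (h^2 - 5h/gamma - 1) / gamma^2
    inv = h + h / (h**2 - 5 * h / gamma - 1) / gamma**2
    return 1 / inv

for gamma in (10, 100, 1000, 10000):
    val = a2_over_T0(mpf(gamma))
    # (val - 5/3) * gamma stays bounded, i.e. a^2 = (5/3) T_0 + O(1/gamma)
    print(gamma, val, (val - mpf(5) / 3) * gamma)
```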
**Lemma 3.4**.: _It holds that_
\[|n_{0}-\rho|+|T_{0}-\theta|\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}}, \tag{3.12}\]
_where the constant \(C\) depends on \(\sup_{0\leq t\leq T}\|V(t)-\overline{V}\|_{H^{N_{0}}}\) and \(\sup_{0\leq t\leq T}\|W(t)-\overline{W}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\)._
Proof.: It follows from (3.8) that
\[\rho=(2\pi\mathcal{P})^{\frac{3}{5}}e^{1-\frac{2}{5}\eta}. \tag{3.13}\]
Since \(K_{2}(\gamma)=\sqrt{\frac{\pi}{2\gamma}}e^{-\gamma}(1+O(\gamma^{-1}))\), it follows from (1.18) that
\[n_{0} =4\pi e^{4-S}\mathfrak{c}^{3}\frac{K_{2}(\gamma)}{\gamma}\exp \left(\gamma\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\right)\] \[=(2\pi)^{\frac{3}{2}}\Big{(}\frac{P_{0}}{n_{0}}\Big{)}^{\frac{3}{ 2}}e^{-S}\exp\left(\gamma\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\gamma\right)(1+ O(\gamma^{-1})),\]
which yields immediately that
\[n_{0} =(2\pi)^{\frac{3}{5}}P_{0}^{\frac{3}{5}}e^{-\frac{2}{5}S}\exp \left(\frac{2}{5}\gamma\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{2}{5}\gamma \right)(1+O(\gamma^{-1}))\] \[=(2\pi)^{\frac{3}{5}}P_{0}^{\frac{3}{5}}e^{1-\frac{2}{5}S}\exp \left(O(\gamma^{-1})\right)(1+O(\gamma^{-1}))\] \[=(2\pi P_{0})^{\frac{3}{5}}e^{1-\frac{2}{5}S}+O(\gamma^{-1}). \tag{3.14}\]
Using (3.13)-(3.14), one has
\[|n_{0}-\rho|\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}}. \tag{3.15}\]
For the estimate of \(|T_{0}-\theta|\) in (3.12), a direct calculation shows that
\[T_{0}-\theta=\frac{P_{0}}{n_{0}}-\frac{\mathcal{P}}{\rho}=\frac{1}{n_{0}}(P_{0 }-\mathcal{P})+\frac{P_{0}}{n_{0}\rho}(\rho-n_{0}),\]
which, together with (3.15), yields that
\[|T_{0}-\theta|\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}}.\]
Therefore the proof is completed.
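The leading-order relation (3.14) can be illustrated numerically: fixing placeholder values of \(S\) and \(T_{0}\), one computes \(n_{0}\) from its exact Bessel-function expression and compares it with \((2\pi P_{0})^{\frac{3}{5}}e^{1-\frac{2}{5}S}\) (our check, not part of the proof):

```python
from mpmath import mp, besselk, exp, pi, mpf

mp.dps = 30

S, T0 = mpf(1), mpf(2)   # placeholder entropy and temperature
for c in (10, 100, 1000):
    c = mpf(c)
    g = c**2 / T0        # gamma = c^2 / T_0
    n0 = (4 * pi * exp(-S) * c**3 * besselk(2, g) / g
          * exp(g * besselk(3, g) / besselk(2, g)))
    P0 = n0 * T0
    approx = (2 * pi * P0) ** (mpf(3) / 5) * exp(1 - 2 * S / 5)
    print(c, n0 / approx)   # ratio tends to 1 as c grows
```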
Since \(n_{0}=n_{0}(P_{0},S)\) and \(\rho=\rho(\mathcal{P},\eta)\), to consider the Newtonian limit of the relativistic Euler equations, we still need to control the following functions
\[\frac{\partial n_{0}}{\partial P_{0}}\Big{|}_{S}-\frac{\partial\rho}{\partial \mathcal{P}}\Big{|}_{\eta},\quad\frac{\partial n_{0}}{\partial S}\Big{|}_{P_ {0}}-\frac{\partial\rho}{\partial\eta}\Big{|}_{\mathcal{P}}.\]
For simplicity of notations, we replace \(\frac{\partial n_{0}}{\partial P_{0}}\Big{|}_{S}\) with \(\frac{\partial n_{0}}{\partial P_{0}}\) and the remaining notations can be understood in the same way.
**Lemma 3.5**.: _It holds that_
\[\Big{|}\frac{\partial n_{0}}{\partial P_{0}}-\frac{\partial\rho}{\partial \mathcal{P}}\Big{|}+\Big{|}\frac{\partial n_{0}}{\partial S}-\frac{\partial \rho}{\partial\eta}\Big{|}+\Big{|}\frac{\partial T_{0}}{\partial P_{0}}- \frac{\partial\theta}{\partial\mathcal{P}}\Big{|}+\Big{|}\frac{\partial T_{0 }}{\partial S}-\frac{\partial\theta}{\partial\eta}\Big{|}\leq C|W-V|+\frac{C} {\mathfrak{c}^{2}}, \tag{3.16}\]
_where the constant \(C\) depends on \(\sup_{0\leq t\leq T}\|V(t)-\overline{V}\|_{H^{N_{0}}}\) and \(\sup_{0\leq t\leq T}\|W(t)-\overline{W}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\)._
Proof.: Using (3.13), one has
\[\frac{\partial\rho}{\partial\mathcal{P}}=\frac{3\rho}{5\mathcal{P}}=\frac{3} {5\theta},\quad\frac{\partial\rho}{\partial\eta}=-\frac{2\mathcal{P}}{5\theta }=-\frac{2}{5}\rho. \tag{3.17}\]
Since \(\gamma=\frac{\mathfrak{c}^{2}}{T_{0}}=\mathfrak{c}^{2}\frac{n_{0}}{P_{0}}\) and
\[n_{0}=4\pi e^{-S}\mathfrak{c}^{3}\frac{K_{2}(\gamma)}{\gamma}\exp\left( \gamma\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\right),\]
it holds that
\[\frac{\partial n_{0}}{\partial P_{0}} =4\pi e^{-S}\mathfrak{c}^{3}\frac{d}{d\gamma}\Big{[}\frac{K_{2}( \gamma)}{\gamma}\exp\left(\gamma\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\right) \Big{]}\cdot\frac{\partial\gamma}{\partial P_{0}}\] \[=4\pi e^{-S}\mathfrak{c}^{3}\Big{[}\frac{K_{2}^{\prime}(\gamma) \gamma-K_{2}(\gamma)}{\gamma^{2}}+\frac{K_{2}(\gamma)}{\gamma}\Big{(}\frac{K_ {3}(\gamma)}{K_{2}(\gamma)}+\gamma\frac{K_{3}^{\prime}(\gamma)K_{2}(\gamma)-K _{3}(\gamma)K_{2}^{\prime}(\gamma)}{K_{2}^{2}(\gamma)}\Big{)}\Big{]}\] \[\quad\times\exp\Big{(}\gamma\frac{K_{3}(\gamma)}{K_{2}(\gamma)} \Big{)}\cdot\frac{\mathfrak{c}^{2}}{P_{0}}\Big{(}\frac{\partial n_{0}}{ \partial P_{0}}-\frac{n_{0}}{P_{0}}\Big{)}\] \[=\gamma^{2}\Big{(}\frac{K_{3}^{2}(\gamma)}{K_{2}^{2}(\gamma)}- \frac{5}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1+\frac{1}{\gamma^{2}} \Big{)}\Big{(}\frac{\partial n_{0}}{\partial P_{0}}-\frac{n_{0}}{P_{0}}\Big{)}\] \[=\varphi(\gamma)\Big{(}\frac{\partial n_{0}}{\partial P_{0}}- \frac{n_{0}}{P_{0}}\Big{)}, \tag{3.18}\]
where we have denoted
\[\varphi(\gamma):=\gamma^{2}\Big{(}\frac{K_{3}^{2}(\gamma)}{K_{2}^{2}(\gamma)}- \frac{5}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1+\frac{1}{\gamma^{2}} \Big{)}.\]
It follows from (3.11) that
\[\varphi(\gamma)=-\frac{3}{2}+O(\gamma^{-1}). \tag{3.19}\]
Substituting (3.19) into (3.18), one gets
\[\frac{\partial n_{0}}{\partial P_{0}}=\frac{3}{5T_{0}}+O(\gamma^{-1}). \tag{3.20}\]
Similarly, one has
\[\frac{\partial n_{0}}{\partial S} =-n_{0}+4\pi e^{-S}\mathfrak{c}^{3}\frac{d}{d\gamma}\Big{[}\frac {K_{2}(\gamma)}{\gamma}\exp\left(\gamma\frac{K_{3}(\gamma)}{K_{2}(\gamma)} \right)\Big{]}\cdot\frac{\partial\gamma}{\partial S}\] \[=-n_{0}+\varphi(\gamma)\frac{\partial n_{0}}{\partial S}=-n_{0}+ \Big{(}-\frac{3}{2}+O(\gamma^{-1})\Big{)}\frac{\partial n_{0}}{\partial S},\]
which yields that
\[\frac{\partial n_{0}}{\partial S}=-\frac{2}{5}n_{0}+O(\gamma^{-1}). \tag{3.21}\]
It follows from (3.17), (3.20) and (3.21) that
\[\Big{|}\frac{\partial n_{0}}{\partial P_{0}}-\frac{\partial\rho}{\partial\mathcal{P}}\Big{|}+\Big{|}\frac{\partial n_{0}}{\partial S}-\frac{\partial\rho}{\partial\eta}\Big{|}\leq C|T_{0}-\theta|+C|n_{0}-\rho|+\frac{C}{\mathfrak{c}^{2}}\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}}. \tag{3.22}\]
Next, we consider \(\big{|}\frac{\partial T_{0}}{\partial P_{0}}-\frac{\partial\theta}{\partial \mathcal{P}}\big{|}\) and \(\big{|}\frac{\partial T_{0}}{\partial S}-\frac{\partial\theta}{\partial\eta} \big{|}\) in (3.16). Noting \(T_{0}=\frac{P_{0}}{n_{0}}\) and \(\theta=\frac{\mathcal{P}}{\rho}\), we have
\[\frac{\partial T_{0}}{\partial P_{0}}=\frac{1}{n_{0}}-\frac{T_{0}}{n_{0}} \frac{\partial n_{0}}{\partial P_{0}},\quad\frac{\partial\theta}{\partial \mathcal{P}}=\frac{1}{\rho}-\frac{\theta}{\rho}\frac{\partial\rho}{\partial \mathcal{P}} \tag{3.23}\]
and
\[\frac{\partial T_{0}}{\partial S}=-\frac{T_{0}}{n_{0}}\frac{\partial n_{0}}{ \partial S},\quad\frac{\partial\theta}{\partial\eta}=-\frac{\theta}{\rho} \frac{\partial\rho}{\partial\eta}. \tag{3.24}\]
Hence it is clear that
\[\Big{|}\frac{\partial T_{0}}{\partial P_{0}}-\frac{\partial\theta}{\partial \mathcal{P}}\Big{|}\leq C\Big{|}n_{0}-\rho\Big{|}+C\Big{|}T_{0}-\theta\Big{|} +C\Big{|}\frac{\partial n_{0}}{\partial P_{0}}-\frac{\partial\rho}{\partial \mathcal{P}}\Big{|}\]
and
\[\Big{|}\frac{\partial T_{0}}{\partial S}-\frac{\partial\theta}{\partial\eta} \Big{|}\leq C\Big{|}n_{0}-\rho\Big{|}+C\Big{|}T_{0}-\theta\Big{|}+C\Big{|} \frac{\partial n_{0}}{\partial S}-\frac{\partial\rho}{\partial\eta}\Big{|},\]
which, together with (3.22), yield (3.16). Therefore the proof is completed.
**Lemma 3.6**.: _There hold_
\[\Big{|}\frac{\partial^{2}n_{0}}{\partial P_{0}^{2}}-\frac{\partial^{2}\rho}{ \partial\mathcal{P}^{2}}\Big{|}+\Big{|}\frac{\partial^{2}n_{0}}{\partial S^{2} }-\frac{\partial^{2}\rho}{\partial\eta^{2}}\Big{|}+\Big{|}\frac{\partial^{2}n _{0}}{\partial P_{0}\partial S}-\frac{\partial^{2}\rho}{\partial\mathcal{P} \partial\eta}\Big{|}\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}} \tag{3.25}\]
_and_
\[\Big{|}\frac{\partial^{2}T_{0}}{\partial P_{0}^{2}}-\frac{\partial^{2}\theta}{ \partial\mathcal{P}^{2}}\Big{|}+\Big{|}\frac{\partial^{2}T_{0}}{\partial S^{2} }-\frac{\partial^{2}\theta}{\partial\eta^{2}}\Big{|}+\Big{|}\frac{\partial^{2} T_{0}}{\partial P_{0}\partial S}-\frac{\partial^{2}\theta}{\partial\mathcal{P} \partial\eta}\Big{|}\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}}, \tag{3.26}\]
_where the constant \(C\) depends on \(\sup_{0\leq t\leq T}\|V(t)-\overline{V}\|_{H^{N_{0}}}\) and \(\sup_{0\leq t\leq T}\|W(t)-\overline{W}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\)._
Proof.: It follows from (3.17) that
\[\frac{\partial^{2}\rho}{\partial\mathcal{P}^{2}}=-\frac{6}{25}\frac{1}{ \mathcal{P}\theta},\quad\frac{\partial^{2}\rho}{\partial\mathcal{P}\partial \eta}=-\frac{6}{25}\frac{1}{\theta},\quad\frac{\partial^{2}\rho}{\partial \eta^{2}}=\frac{4}{25}\rho. \tag{3.27}\]
Using (3.18), one has
\[\frac{\partial^{2}n_{0}}{\partial P_{0}^{2}}=\varphi^{\prime}(\gamma)\frac{ \mathfrak{c}^{2}}{P_{0}}\Big{(}\frac{\partial n_{0}}{\partial P_{0}}-\frac{n_{0} }{P_{0}}\Big{)}\Big{(}\frac{\partial n_{0}}{\partial P_{0}}-\frac{n_{0}}{P_{0}} \Big{)}+\varphi(\gamma)\Big{(}\frac{\partial^{2}n_{0}}{\partial P_{0}^{2}}- \frac{1}{P_{0}}\frac{\partial n_{0}}{\partial P_{0}}+\frac{n_{0}}{P_{0}^{2}} \Big{)}\]
\[=\gamma\varphi^{\prime}(\gamma)\frac{1}{n_{0}\varphi^{2}(\gamma)}\Big{(}\frac{ \partial n_{0}}{\partial P_{0}}\Big{)}^{2}+\varphi(\gamma)\Big{(}\frac{\partial^ {2}n_{0}}{\partial P_{0}^{2}}-\frac{1}{P_{0}}\frac{\partial n_{0}}{\partial P_ {0}}+\frac{1}{P_{0}T_{0}}\Big{)}. \tag{3.28}\]
Noting
\[\Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{)}^{\prime}=\frac{K_{3}^{2}( \gamma)}{K_{2}^{2}(\gamma)}-\frac{5}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma) }-1, \tag{3.29}\]
one has
\[\varphi^{\prime}(\gamma) =\frac{d}{d\gamma}\Big{\{}\gamma^{2}\Big{(}\frac{K_{3}^{2}(\gamma )}{K_{2}^{2}(\gamma)}-\frac{5}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1+ \frac{1}{\gamma^{2}}\Big{)}\Big{\}}\] \[=2\gamma\Big{[}\frac{K_{3}^{2}(\gamma)}{K_{2}^{2}(\gamma)}-1 \Big{]}+2\gamma^{2}\Big{[}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\Big{]}\Big{(} \frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{)}^{\prime}\] \[\qquad+(2\gamma^{2}-5\gamma)\Big{(}\frac{K_{3}(\gamma)}{K_{2}( \gamma)}\Big{)}^{\prime}-5\frac{K_{3}(\gamma)}{K_{2}(\gamma)}:=\sum_{j=1}^{4} \mathcal{R}_{j}. \tag{3.30}\]
Applying (3.9), (3.10), (3.11) and (3.29), one can obtain
\[\mathcal{R}_{1} =2\gamma\Big{[}\frac{K_{3}^{2}(\gamma)}{K_{2}^{2}(\gamma)}-1 \Big{]}=2\gamma\Big{(}\frac{5}{\gamma}+\frac{10}{\gamma^{2}}+\frac{45}{8 \gamma^{3}}+O(\gamma^{-4})\Big{)}=10+\frac{20}{\gamma}+\frac{45}{4\gamma^{2}} +O(\gamma^{-3}), \tag{3.31}\] \[\mathcal{R}_{2} =2\gamma^{2}\Big{[}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\Big{]} \Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{)}^{\prime}\] \[=2\gamma^{2}\Big{(}\frac{5}{2\gamma}+\frac{15}{8\gamma^{2}}+O( \gamma^{-3})\Big{)}\Big{(}-\frac{5}{2\gamma^{2}}-\frac{15}{4\gamma^{3}}+O( \gamma^{-4})\Big{)}=-\frac{25}{2\gamma}-\frac{225}{8\gamma^{2}}+O(\gamma^{-3}),\] (3.32) \[\mathcal{R}_{3} =(2\gamma^{2}-5\gamma)\Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)} \Big{)}^{\prime}=(2\gamma^{2}-5\gamma)\Big{(}-\frac{5}{2\gamma^{2}}-\frac{15} {4\gamma^{3}}+\frac{45}{8\gamma^{4}}+O(\gamma^{-5})\Big{)}\] \[=-5+\frac{5}{\gamma}+\frac{30}{\gamma^{2}}+O(\gamma^{-3}),\] (3.33) \[\mathcal{R}_{4} =-5\Big{[}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\Big{]}-5=-5\Big{(} \frac{5}{2\gamma}+\frac{15}{8\gamma^{2}}+O(\gamma^{-3})\Big{)}-5\] \[=-5-\frac{25}{2\gamma}-\frac{75}{8\gamma^{2}}+O(\gamma^{-3}). \tag{3.34}\]
Hence it follows from (3.30)-(3.34) that
\[\varphi^{\prime}(\gamma)=\frac{15}{4\gamma^{2}}+O(\gamma^{-3}),\]
which implies that
\[\gamma\varphi^{\prime}(\gamma)\frac{1}{\varphi^{2}(\gamma)}=O(\gamma^{-1}). \tag{3.35}\]
Since \(\frac{\partial n_{0}}{\partial P_{0}}=\frac{3}{5T_{0}}+O(\gamma^{-1})\) and \(\varphi(\gamma)=-\frac{3}{2}+O(\gamma^{-1})\), it follows from (3.28) and (3.35) that
\[(1-\varphi(\gamma))\frac{\partial^{2}n_{0}}{\partial P_{0}^{2}}=(-\frac{3}{2}+ O(\gamma^{-1}))\Big{\{}-\frac{1}{P_{0}}\Big{(}\frac{3}{5T_{0}}+O(\gamma^{-1}) \Big{)}+\frac{1}{P_{0}T_{0}}\Big{\}}+O(\gamma^{-1}),\]
which implies that
\[\frac{\partial^{2}n_{0}}{\partial P_{0}^{2}}=-\frac{6}{25}\frac{1}{P_{0}T_{0}} +O(\gamma^{-1}). \tag{3.36}\]
Similarly, we can obtain that
\[\frac{\partial^{2}n_{0}}{\partial P_{0}\partial S}=-\frac{6}{25}\frac{1}{T_{0}} +O(\gamma^{-1}),\quad\frac{\partial^{2}n_{0}}{\partial S^{2}}=\frac{4}{25}n_{0} +O(\gamma^{-1}). \tag{3.37}\]
Hence we conclude (3.25) from (3.27), (3.36)-(3.37) and Lemma 3.4.
Using (3.23) and (3.24), one has
\[\frac{\partial^{2}T_{0}}{\partial S^{2}} =\Big{(}-\frac{1}{n_{0}}\frac{\partial T_{0}}{\partial S}+\frac{T_ {0}}{n_{0}^{2}}\frac{\partial n_{0}}{\partial S}\Big{)}\frac{\partial n_{0}}{ \partial S}-\frac{T_{0}}{n_{0}}\frac{\partial^{2}n_{0}}{\partial S^{2}}, \tag{3.38}\] \[\frac{\partial^{2}T_{0}}{\partial S\partial P_{0}} =\Big{(}-\frac{1}{n_{0}}\frac{\partial T_{0}}{\partial P_{0}}+ \frac{T_{0}}{n_{0}^{2}}\frac{\partial n_{0}}{\partial P_{0}}\Big{)}\frac{ \partial n_{0}}{\partial S}-\frac{T_{0}}{n_{0}}\frac{\partial^{2}n_{0}}{ \partial S\partial P_{0}},\] (3.39) \[\frac{\partial^{2}T_{0}}{\partial P_{0}^{2}} =\Big{(}-\frac{1}{n_{0}^{2}}\frac{\partial n_{0}}{\partial P_{0}} -\frac{1}{n_{0}}\frac{\partial T_{0}}{\partial P_{0}}+\frac{T_{0}}{n_{0}^{2}} \frac{\partial n_{0}}{\partial P_{0}}\Big{)}\frac{\partial n_{0}}{\partial P_ {0}}-\frac{T_{0}}{n_{0}}\frac{\partial^{2}n_{0}}{\partial P_{0}^{2}}, \tag{3.40}\]
and
\[\frac{\partial^{2}\theta}{\partial\eta^{2}} =\Big{(}-\frac{1}{\rho}\frac{\partial\theta}{\partial\eta}+\frac{\theta}{\rho^{2}}\frac{\partial\rho}{\partial\eta}\Big{)}\frac{\partial\rho}{\partial\eta}-\frac{\theta}{\rho}\frac{\partial^{2}\rho}{\partial\eta^{2}}, \tag{3.41}\] \[\frac{\partial^{2}\theta}{\partial\eta\partial\mathcal{P}} =\Big{(}-\frac{1}{\rho}\frac{\partial\theta}{\partial\mathcal{P}}+\frac{\theta}{\rho^{2}}\frac{\partial\rho}{\partial\mathcal{P}}\Big{)}\frac{\partial\rho}{\partial\eta}-\frac{\theta}{\rho}\frac{\partial^{2}\rho}{\partial\eta\partial\mathcal{P}},\] (3.42) \[\frac{\partial^{2}\theta}{\partial\mathcal{P}^{2}} =\Big{(}-\frac{1}{\rho^{2}}\frac{\partial\rho}{\partial\mathcal{P}}-\frac{1}{\rho}\frac{\partial\theta}{\partial\mathcal{P}}+\frac{\theta}{\rho^{2}}\frac{\partial\rho}{\partial\mathcal{P}}\Big{)}\frac{\partial\rho}{\partial\mathcal{P}}-\frac{\theta}{\rho}\frac{\partial^{2}\rho}{\partial\mathcal{P}^{2}}. \tag{3.43}\]
Thus (3.26) follows from (3.38)-(3.43), Lemmas 3.4-3.5 and (3.25). Therefore the proof is completed.
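The expansions \(\varphi(\gamma)=-\frac{3}{2}+O(\gamma^{-1})\) and \(\varphi^{\prime}(\gamma)=\frac{15}{4\gamma^{2}}+O(\gamma^{-3})\) used in the last two proofs can be checked numerically (our illustration, not part of the argument):

```python
from mpmath import mp, besselk, diff, mpf

mp.dps = 40

def phi(g):
    h = besselk(3, g) / besselk(2, g)
    return g**2 * (h**2 - 5 * h / g - 1 + 1 / g**2)

for g in (20, 200, 2000):
    g = mpf(g)
    print(g,
          (phi(g) + mpf(3) / 2) * g,  # bounded: phi = -3/2 + O(1/gamma)
          diff(phi, g) * g**2)        # tends to 15/4
```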
By similar arguments as in Lemmas 3.5-3.6, we can obtain the following lemma whose proof is omitted for brevity of presentation.
**Lemma 3.7**.: _There hold_
\[|\partial_{i}(a^{2}-\sigma^{2})|\leq C|W-V|+C|\partial_{i}(W-V)|+\frac{C}{ \mathfrak{c}^{2}},\quad i=1,2,3,\]
_and_
\[|\partial_{ij}(a^{2}-\sigma^{2})|\leq C|W-V|+C|\nabla_{x}(W-V)|+C|\partial_{ ij}(W-V)|+\frac{C}{\mathfrak{c}^{2}},\quad i,j=1,2,3,\]
_where the constant \(C\) depends on \(\sup_{0\leq t\leq T}\|V(t)-\overline{V}\|_{H^{N_{0}}}\) and \(\sup_{0\leq t\leq T}\|W(t)-\overline{W}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\)._
We are now in a position to show the Newtonian limit from the relativistic Euler equations to the classical Euler equations.
**Proposition 3.8**.: Assume \(\overline{V}=\overline{W}\). Suppose that \(V=(P_{0},u,S)\) is the unique smooth solution in Lemma 3.1 and \(W=(\mathcal{P},\mathfrak{u},\eta)\) is the unique smooth solution in Lemma 3.2 with the same initial data. Let \(T=\min\{T_{1},T_{2}\}\), then it holds that
\[\sup_{0\leq t\leq T}\|(W-V)(t)\|_{L^{\infty}}\leq\frac{C}{\mathfrak{c}^{2}},\]
where the constant \(C\) depends on \(\sup_{0\leq t\leq T}\|V(t)-\overline{V}\|_{H^{N_{0}}}\) and \(\sup_{0\leq t\leq T}\|W(t)-\overline{W}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\).
Proof.: Using Lemmas 3.3-3.4, we have
\[|\mathbf{B}_{\alpha}-\mathbf{D}_{\alpha}|\leq C|W-V|+\frac{C}{\mathfrak{c}^{2}},\quad\alpha=0,1,2,3.\]
Denote \(\mathcal{U}(t):=\langle\mathbf{D}_{0}(W-V),W-V\rangle(t)\). It follows from (3.5) that
\[\frac{d}{dt}\mathcal{U}(t) =\frac{d}{dt}\langle\mathbf{D}_{0}(W-V),W-V\rangle\] \[=\langle(\partial_{t}\mathbf{D}_{0}+\sum_{j=1}^{3}\partial_{j} \mathbf{D}_{j})(W-V),W-V\rangle+2\langle\Upsilon,W-V\rangle\]
\[\leq C\|W-V\|_{2}^{2}+\frac{C}{\mathfrak{c}^{2}}\|W-V\|_{2}\] \[\leq C\mathcal{U}(t)+\frac{C}{\mathfrak{c}^{2}}\sqrt{\mathcal{U}(t )}.\]
Applying Gronwall's inequality, one obtains
\[\sup_{0\leq t\leq T}\mathcal{U}(t)\leq\frac{C}{\mathfrak{c}^{4}},\]
where the constant \(C\) depends on \(T\), \(\sup_{0\leq t\leq T}\|V(t)-\overline{V}\|_{H^{N_{0}}}\) and \(\sup_{0\leq t\leq T}\|W(t)-\overline{W}\|_{H^{N_{0}}}\) and is independent of \(\mathfrak{c}\). Hence we get
\[\sup_{0\leq t\leq T}\|(W-V)(t)\|_{2}\leq\frac{C}{\mathfrak{c}^{2}}. \tag{3.44}\]
Similarly, by using Lemmas 3.5-3.7 and the energy method, we can obtain
\[\sup_{0\leq t\leq T}\|\nabla_{x}(W-V)(t)\|_{2}+\sup_{0\leq t\leq T}\|\nabla_{ x}^{2}(W-V)(t)\|_{2}\leq\frac{C}{\mathfrak{c}^{2}},\]
which, together with (3.44), yields that
\[\sup_{0\leq t\leq T}\|(W-V)(t)\|_{\infty}\leq C\sup_{0\leq t\leq T}\|(W-V)(t) \|_{H^{2}}\leq\frac{C}{\mathfrak{c}^{2}}.\]
Therefore the proof is completed.
Based on Lemmas 3.1-3.2 and the diffeomorphisms \((n_{0},T_{0})\leftrightarrow(P_{0},S)\), \((\rho,\theta)\leftrightarrow(\mathcal{P},\eta)\) of \((0,\infty)\times(0,\infty)\), there exist positive constants \(\bar{C}_{0}\), \(\bar{c}_{j}\) (\(j=1,2,3,4\)) which are independent of \(\mathfrak{c}\), such that for any \((t,x)\in[0,T]\times\mathbb{R}^{3}\), there holds
\[|u(t,x)|\leq\bar{C}_{0},\quad 0<4\bar{c}_{1}\leq T_{0}(t,x)\leq\frac{1}{4 \bar{c}_{1}},\quad 0<4\bar{c}_{2}\leq\theta(t,x)\leq\frac{1}{4\bar{c}_{2}} \tag{3.45}\]
and
\[0<\bar{c}_{3}\leq n_{0}(t,x)\leq\frac{1}{\bar{c}_{3}},\quad 0<\bar{c}_{4} \leq\rho(t,x)\leq\frac{1}{\bar{c}_{4}}. \tag{3.46}\]
## 4. Uniform-in-\(\mathfrak{c}\) estimates on the linearized collision operators
We first present a useful lemma which is very similar to [18, Lemma 3.1]. Since the proof is similar, we omit the details here for brevity.
**Lemma 4.1**.: ([18]) _Denote_
\[\boldsymbol{\ell}_{1}:=\mathfrak{c}\frac{p^{0}+q^{0}}{2},\quad \boldsymbol{j}_{1}:=\mathfrak{c}\frac{|p\times q|}{g},\]
_then there hold_
\[(i) \frac{\sqrt{|p\times q|^{2}+\mathfrak{c}^{2}|p-q|^{2}}}{\sqrt{p^ {0}q^{0}}}\leq g\leq|p-q|\ \ \text{and}\ \ g^{2}<s\leq 4p^{0}q^{0}.\] \[(ii) v_{\phi}=\frac{\mathfrak{c}}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\leq \min\Big{\{}\mathfrak{c},\frac{|p-q|}{2}\Big{\}}.\] \[(iii) \boldsymbol{\ell}_{1}^{2}-\boldsymbol{j}_{1}^{2}=\frac{s\mathfrak{ c}^{2}}{4g^{2}}|p-q|^{2}=\frac{\mathfrak{c}^{2}g^{2}+4\mathfrak{c}^{4}}{4g^{2}}|p-q|^{2} \geq\mathfrak{c}^{4}+\frac{\mathfrak{c}^{2}}{4}|p-q|^{2},\quad p\neq q.\] \[(iv) \lim_{\mathfrak{c}\to\infty}\frac{g}{|p-q|}=\lim_{\mathfrak{c}\to \infty}\frac{s}{4\mathfrak{c}^{2}}=\lim_{\mathfrak{c}\to\infty}\frac{\boldsymbol {\ell}_{1}}{\mathfrak{c}^{2}}=\lim_{\mathfrak{c}\to\infty}\frac{\boldsymbol{ \ell}_{1}^{2}-\boldsymbol{j}_{1}^{2}}{\mathfrak{c}^{4}}=1,\quad p\neq q.\]
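Items (i)-(iii) are exact algebraic identities and inequalities in \(p\), \(q\) and \(\mathfrak{c}\); the following randomized check (ours, assuming the standard relations \(g^{2}=|p-q|^{2}-(p^{0}-q^{0})^{2}\) and \(s=g^{2}+4\mathfrak{c}^{2}\), which are consistent with (iii)) verifies them in floating point:

```python
import numpy as np

rng = np.random.default_rng(1)
c = 10.0
tol = 1e-9

for _ in range(1000):
    p, q = rng.normal(size=3) * 8, rng.normal(size=3) * 8
    p0, q0 = np.sqrt(c**2 + p @ p), np.sqrt(c**2 + q @ q)
    g2 = (p - q) @ (p - q) - (p0 - q0)**2
    g = np.sqrt(g2)
    s = g2 + 4 * c**2
    d = np.linalg.norm(p - q)
    cross = np.linalg.norm(np.cross(p, q))

    lower = np.sqrt(cross**2 + c**2 * d**2) / np.sqrt(p0 * q0)
    assert lower <= g + tol and g <= d + tol           # (i)
    assert g2 < s <= 4 * p0 * q0 + tol                 # (i)
    v_phi = c / 4 * g * np.sqrt(s) / (p0 * q0)
    assert v_phi <= min(c, d / 2) + tol                # (ii)
    l1, j1 = c * (p0 + q0) / 2, c * cross / g
    lhs = l1**2 - j1**2
    assert abs(lhs - s * c**2 / (4 * g2) * d**2) < 1e-6 * lhs   # (iii)
```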
**Lemma 4.2**.: _Recall \(\bar{C}_{0}\) in (3.45) and \(\bar{p}\) in (2.5). For \(\mathfrak{c}\) suitably large, there hold_
\[\frac{1}{2}|p-q|\leq|\bar{p}-\bar{q}|\leq\frac{3}{2}|p-q|,\quad p\in \mathbb{R}^{3}, \tag{4.1}\] \[\frac{1}{2}|p^{0}|\leq|\bar{p}^{0}|\leq\frac{3}{2}|p^{0}|,\quad p \in\mathbb{R}^{3},\] (4.2) \[\frac{|p|}{2}-\bar{C}_{0}\leq|\bar{p}|\leq\frac{3|p|}{2}+\bar{C} _{0},\quad p\in\mathbb{R}^{3},\] (4.3) \[\frac{1}{2}\leq\det\Big{(}\frac{\partial\bar{p}}{\partial p} \Big{)}\leq\frac{3}{2},\quad p\in\mathbb{R}^{3}. \tag{4.4}\]
Proof.: It follows from (2.5) that
\[\bar{p}-\bar{q}=p-q+\big{(}\frac{u^{0}}{\mathfrak{c}}-1\big{)}\frac{u\cdot(p -q)}{|u|^{2}}u-\frac{p^{0}-q^{0}}{\mathfrak{c}}u \tag{4.5}\]
and
\[\bar{p}^{0}=\frac{u^{0}}{\mathfrak{c}}p^{0}-\frac{u\cdot p}{ \mathfrak{c}}=p^{0}+\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}p^{0}-\frac{u \cdot p}{\mathfrak{c}}. \tag{4.6}\]
For \(\frac{\bar{C}_{0}}{\mathfrak{c}}\leq\frac{1}{4}\), it holds that
\[\Big{|}\big{(}\frac{u^{0}}{\mathfrak{c}}-1\big{)}\frac{u\cdot(p-q)}{|u|^{2}}u \Big{|}+\Big{|}\frac{p^{0}-q^{0}}{\mathfrak{c}}u\Big{|}\leq\frac{|u|^{2}}{ \mathfrak{c}(u^{0}+\mathfrak{c})}|p-q|+\frac{|u|}{\mathfrak{c}}|p-q|\leq \frac{1}{2}|p-q|\]
and
\[\Big{|}\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}p^{0}-\frac{u\cdot p}{ \mathfrak{c}}\Big{|}\leq\frac{|u|^{2}}{\mathfrak{c}(u^{0}+\mathfrak{c})}p^{0 }+\frac{|u|}{\mathfrak{c}}p^{0}\leq\frac{1}{2}p^{0},\]
which, together with (4.5) and (4.6), yield (4.1) and (4.2).
Observing
\[\bar{p}_{i}=p_{i}+\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}\frac{u_{i}}{|u|^ {2}}\sum_{j=1}^{3}u_{j}p_{j}-\frac{u_{i}}{\mathfrak{c}}p^{0}, \tag{4.7}\]
we have
\[\Big{|}\big{(}\frac{u^{0}}{\mathfrak{c}}-1\big{)}\frac{u\cdot p}{|u|^{2}}u \Big{|}+\Big{|}\frac{p^{0}}{\mathfrak{c}}u\Big{|}\leq\frac{|u|^{2}}{\mathfrak{ c}(u^{0}+\mathfrak{c})}|p|+\frac{|u|}{\mathfrak{c}}|p|+|u|\leq\frac{|p|}{2}+ \bar{C}_{0},\]
which, together with (4.7), implies (4.3).
It follows from (4.7) that
\[\frac{\partial\bar{p}_{i}}{\partial p_{j}}=\delta_{ij}+\Big{(}\frac{u^{0}}{ \mathfrak{c}}-1\Big{)}\frac{u_{i}}{|u|^{2}}u_{j}-\frac{u_{i}}{\mathfrak{c}} \frac{p_{j}}{p^{0}}. \tag{4.8}\]
For \(\mathfrak{c}\) suitably large, it is clear that
\[\Big{|}\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}\frac{u_{i}}{|u|^{2}}u_{j}- \frac{u_{i}}{\mathfrak{c}}\frac{p_{j}}{p^{0}}\Big{|}\leq\frac{|u|^{2}}{ \mathfrak{c}(u^{0}+\mathfrak{c})}+\frac{|u|}{\mathfrak{c}}\leq\frac{1}{16},\]
which, together with (4.8), implies (4.4). Therefore the proof is completed.
**Lemma 4.3**.: _Recall \(\bar{c}_{1}\) in (3.45). Then there hold_
\[k_{\mathfrak{c}1}(p,q)\lesssim|\bar{p}-\bar{q}|e^{-2\bar{c}_{1}|\bar{p}|-2\bar {c}_{1}|\bar{q}|}\lesssim|p-q|e^{-\bar{c}_{1}|p|-\bar{c}_{1}|q|} \tag{4.9}\]
_and_
\[k_{\mathfrak{c}2}(p,q)\lesssim\Big{[}\frac{1}{\mathfrak{c}}+\frac{1}{|\bar{p}- \bar{q}|}\Big{]}e^{-2\bar{c}_{1}|\bar{p}-\bar{q}|}\lesssim\Big{[}\frac{1}{ \mathfrak{c}}+\frac{1}{|p-q|}\Big{]}e^{-\bar{c}_{1}|p-q|}. \tag{4.10}\]
_Moreover, it holds that_
\[k_{\epsilon 2}(p,q)\lesssim\frac{1}{|\bar{p}-\bar{q}|}e^{-\bar{c}_{1}|\bar{p}- \bar{q}|}\lesssim\frac{1}{|p-q|}e^{-\frac{c_{1}}{2}|p-q|}. \tag{4.11}\]
Proof.: For any \(p\in\mathbb{R}^{3}\), it is clear that
\[\mathfrak{c}^{2}-\mathfrak{c}p^{0}=\mathfrak{c}^{2}(1-\sqrt{1+\frac{|p|^{2}}{ \mathfrak{c}^{2}}})=-\frac{|p|^{2}}{1+\sqrt{1+\frac{|p|^{2}}{\mathfrak{c}^{2}} }},\]
which yields
\[-\frac{|p|^{2}}{2}\leq\mathfrak{c}^{2}-\mathfrak{c}p^{0}\leq-\frac{|p|^{2}}{1+ \sqrt{1+|p|^{2}}}=-\sqrt{1+|p|^{2}}+1\leq-|p|+1. \tag{4.12}\]
It follows from (2.2), Lemmas 4.1-4.2 and (4.12) that
\[k_{\epsilon 1}(p,q) \lesssim|p-q|\exp\Big{(}\frac{\mathfrak{c}^{2}+u^{\mu}p_{\mu}}{2 T_{0}}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}+u^{\mu}q_{\mu}}{2T_{0}}\Big{)}\] \[=|p-q|\exp\Big{(}\frac{\mathfrak{c}^{2}-\mathfrak{c}\bar{p}^{0} }{2T_{0}}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}-\mathfrak{c}\bar{q}^{0}}{2 T_{0}}\Big{)}\] \[\lesssim|p-q|\exp\Big{(}-\frac{|\bar{p}|+|\bar{q}|}{2T_{0}}\Big{)} \lesssim|p-q|\exp\Big{(}-\frac{|p|+|q|}{4T_{0}}\Big{)}\]
and (4.9) follows. For (4.10), observing \(J_{2}(\bar{\boldsymbol{\ell}},\bar{\boldsymbol{j}})\leq J_{1}(\bar{\boldsymbol {\ell}},\bar{\boldsymbol{j}})\), we have from (2.7)-(2.8) that
\[k_{\epsilon 2}(p,q)\lesssim\mathfrak{c}\frac{s^{3/2}}{gp^{0}q^{0}}e ^{\frac{\bar{\boldsymbol{\ell}}^{2}}{T_{0}}}J_{1}(\bar{\boldsymbol{\ell}}, \bar{\boldsymbol{j}}) \lesssim\mathfrak{c}\frac{s^{3/2}}{gp^{0}q^{0}}\frac{\bar{ \boldsymbol{\ell}}}{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}} \left[1+\frac{1}{\sqrt{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}} }\right]e^{\frac{\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}-\bar{\boldsymbol{j} }^{2}}}{T_{0}}}\] \[\lesssim\mathfrak{c}\frac{s^{3/2}}{gp^{0}q^{0}}\frac{\bar{ \boldsymbol{\ell}}}{\bar{\boldsymbol{\ell}}^{2}-\bar{\boldsymbol{j}}^{2}}e^{ \frac{\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}-\bar{\boldsymbol{j}}^{2}}}{ T_{0}}}.\]
It follows from Lemma 4.1 that
\[\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}-\boldsymbol{j}^{2}} \leq\mathfrak{c}^{2}-\sqrt{\mathfrak{c}^{4}+\frac{\mathfrak{c}^{ 2}}{4}|\bar{p}-\bar{q}|^{2}}\leq-\frac{\mathfrak{c}^{2}}{4}\frac{|\bar{p}-\bar {q}|^{2}}{\mathfrak{c}^{2}+\sqrt{\mathfrak{c}^{4}+\frac{\mathfrak{c}^{2}}{4} |\bar{p}-\bar{q}|^{2}}}\] \[=-\frac{1}{4}\frac{|\bar{p}-\bar{q}|^{2}}{1+\sqrt{1+\frac{1}{4 \mathfrak{c}^{2}}|\bar{p}-\bar{q}|^{2}}}\leq-\frac{1}{4}\frac{|\bar{p}-\bar{ q}|^{2}}{1+\sqrt{1+\frac{1}{4}|\bar{p}-\bar{q}|^{2}}}\] \[=-\sqrt{1+\frac{1}{4}|\bar{p}-\bar{q}|^{2}}+1\leq-\frac{|\bar{p}- \bar{q}|}{2}+1,\]
then we have
\[k_{\epsilon 2}(p,q)\lesssim \mathfrak{c}\frac{s^{3/2}}{gp^{0}q^{0}}\frac{\mathfrak{c}(\bar{p }^{0}+\bar{q}^{0})}{2T_{0}}\frac{1}{\frac{s\mathfrak{c}^{2}|\bar{p}-\bar{q}|^ {2}}{4g^{2}T_{0}^{2}}}e^{-\frac{|\bar{p}-\bar{q}|}{2T_{0}}}\lesssim\frac{s^{1 /2}(\bar{p}^{0}+\bar{q}^{0})}{p^{0}q^{0}}\frac{g}{|\bar{p}-\bar{q}|^{2}}e^{- \frac{|\bar{p}-\bar{q}|}{2T_{0}}}\] \[\lesssim \frac{\sqrt{g^{2}+4\mathfrak{c}^{2}}(\bar{p}^{0}+\bar{q}^{0})}{ |\bar{p}-\bar{q}|}e^{-\frac{|\bar{p}-\bar{q}|}{2T_{0}}}\lesssim\frac{(|\bar{p}- \bar{q}|+\mathfrak{c})(\bar{p}^{0}+\bar{q}^{0})}{p^{0}q^{0}}\frac{1}{|\bar{p}- \bar{q}|}e^{-\frac{|\bar{p}-\bar{q}|}{2T_{0}}}\] \[\lesssim \Big{[}\frac{1}{p^{0}}+\frac{1}{q^{0}}+\frac{1}{|p-q|}\Big{(} \frac{\mathfrak{c}}{p^{0}}+\frac{\mathfrak{c}}{q^{0}}\Big{)}\Big{]}e^{-\frac{ |p-\bar{q}|}{4T_{0}}}\lesssim\Big{[}\frac{1}{\mathfrak{c}}+\frac{1}{|p-q|} \Big{]}e^{-\bar{c}_{1}|p-q|}, \tag{4.13}\]
where we used the fact that both \(s\) and \(g\) are Lorentz invariant, i.e.,
\[s(p,q)=s(\bar{p},\bar{q}),\quad g(p,q)=g(\bar{p},\bar{q}).\]
Moreover, it follows from the fourth inequality of (4.13) that
\[k_{\mathfrak{c}2}(p,q) \lesssim\frac{(|\bar{p}-\bar{q}|+\mathfrak{c})(\bar{p}^{0}+\bar{q}^ {0})}{p^{0}q^{0}}\frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{|p-q|}{270}}\lesssim(| \bar{p}-\bar{q}|+1)\frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{|p-q|}{270}}\] \[\lesssim\frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{|\bar{p}-\bar{q}|}{4 \bar{\bar{\bar{\bar{\bar{\bar{\bar{\bar{\bar{\bar{\bar{\bar{\bar{\bar{\bar{\bar{ \bar{\bar{ \bar{ \bar{\bar{\bar{\bar{
\[=\frac{1}{p^{0}q^{0}+\mathfrak{c}^{2}}\Big{\{}-\frac{\mathfrak{c}^{2} (|p|^{2}+|q|^{2})+|p|^{2}|q|^{2}}{\mathfrak{c}^{2}+p^{0}q^{0}}(|p|^{2}+|q|^{2}) +2|p|^{2}|q|^{2}\Big{\}}\] \[=-\frac{(q^{0}|p|^{2}-p^{0}|q|^{2})^{2}}{(p^{0}q^{0}+\mathfrak{c}^ {2})^{2}},\]
which, together with Lemma 4.1, yields that
\[\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}-\boldsymbol{j}^{2}} =\frac{\mathfrak{c}^{4}-(\boldsymbol{\ell}^{2}-\boldsymbol{j}^{2} )}{\mathfrak{c}^{2}+\sqrt{\boldsymbol{\ell}^{2}-\boldsymbol{j}^{2}}}=\frac{ \mathfrak{c}^{4}-\frac{\mathfrak{s}\mathfrak{c}^{2}}{4g^{2}}|\bar{p}-\bar{q} |^{2}}{\mathfrak{c}^{2}+\frac{\sqrt{\mathfrak{s}}\mathfrak{c}}{2g}|\bar{p}- \bar{q}|}=\frac{\mathfrak{c}^{2}-\frac{s}{4g^{2}}|\bar{p}-\bar{q}|^{2}}{1+ \sqrt{\frac{\sqrt{\mathfrak{s}}}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{ g}}\] \[=\frac{\mathfrak{c}^{2}-\frac{g^{2}+4\mathfrak{c}^{2}}{4g^{2}}| \bar{p}-\bar{q}|^{2}}{1+\sqrt{\frac{\mathfrak{s}}{4\mathfrak{c}^{2}}}\frac{| \bar{p}-\bar{q}|}{g}}=\frac{1}{1+\sqrt{\frac{\mathfrak{s}}{4\mathfrak{c}^{2}} }\frac{|\bar{p}-\bar{q}|}{g}}\Big{\{}-\frac{1}{4}|\bar{p}-\bar{q}|^{2}+\frac{ \mathfrak{c}^{2}}{g^{2}}(g^{2}-|\bar{p}-\bar{q}|^{2})\Big{\}}\] \[=-\frac{1}{4}\frac{|\bar{p}-\bar{q}|^{2}}{1+\sqrt{\frac{\mathfrak{ s}}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g}}-\frac{1}{1+\sqrt{\frac{ \mathfrak{s}}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g}}\frac{\mathfrak{ c}^{2}}{g^{2}}\frac{(\bar{q}^{0}|\bar{p}|^{2}-\bar{p}^{0}|\bar{q}|^{2})^{2}}{(\bar{p}^{0} \bar{q}^{0}+\mathfrak{c}^{2})^{2}}. \tag{4.19}\]
Hence, it follows from (4.13) and (4.19) that
\[\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}k_{\mathfrak{ c}2}(p,q)dq\] \[\lesssim\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}} \frac{|\bar{p}-\bar{q}|+1}{|\bar{p}-\bar{q}|}e^{-\bar{c}_{1}|\bar{p}-\bar{q}|}e ^{\frac{1}{2\bar{r}_{0}}(\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}- \boldsymbol{j}^{2}})}dq\] \[\lesssim\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}} \frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{\mathfrak{c}_{1}}{2}|\bar{p}-\bar{q}|}e ^{M(\bar{p},\bar{q})}\exp\Big{(}-\frac{1}{8\bar{r}_{0}}\frac{|\bar{p}-\bar{q}| ^{2}}{1+\sqrt{\frac{\mathfrak{s}}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{ g}}\Big{)}d\bar{q}\] \[\lesssim\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}} \frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{\mathfrak{c}_{1}}{2}|\bar{p}-\bar{q}|}e ^{M(\bar{p},\bar{q})}d\bar{q}, \tag{4.20}\]
where we made a change of variables \(q\to\bar{q}\) and
\[M(\bar{p},\bar{q}):=-\frac{1}{1+\sqrt{\frac{\mathfrak{s}}{4 \mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g}}\frac{\mathfrak{c}^{2}}{g^{2}} \frac{(\bar{q}^{0}|\bar{p}|^{2}-\bar{p}^{0}|\bar{q}|^{2})^{2}}{(\bar{p}^{0} \bar{q}^{0}+\mathfrak{c}^{2})^{2}}\frac{1}{2\bar{T}_{0}}. \tag{4.21}\]
Noting \(|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\leq\mathfrak{c}\), one has \(|\bar{p}|\lesssim\mathfrak{c}\) and so
\[\bar{p}^{0}=\sqrt{\mathfrak{c}^{2}+|\bar{p}|^{2}}\lesssim\mathfrak{c},\quad \bar{q}^{0}=\sqrt{\mathfrak{c}^{2}+|\bar{q}|^{2}}\leq\sqrt{\mathfrak{c}^{2}+2| \bar{p}-\bar{q}|^{2}+2|\bar{p}|^{2}}\lesssim\mathfrak{c}, \tag{4.22}\]
which yields that
\[(1+\sqrt{\frac{s}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g}) g^{2}(\bar{p}^{0}\bar{q}^{0}+\mathfrak{c}^{2})^{2} \leq(1+\sqrt{1+\frac{g^{2}}{4\mathfrak{c}^{2}}})|\bar{p}-\bar{q}|^{2}(\bar {p}^{0}\bar{q}^{0}+\mathfrak{c}^{2})^{2}\] \[\lesssim\mathfrak{c}^{4}|\bar{p}-\bar{q}|^{2}.\]
A direct calculation shows that
\[\bar{p}^{0}|\bar{q}|^{2}-\bar{q}^{0}|\bar{p}|^{2} =\mathfrak{c}(|\bar{q}|^{2}-|\bar{p}|^{2})+(\bar{p}^{0}-\mathfrak{ c})|\bar{q}|^{2}-(\bar{q}^{0}-\mathfrak{c})|\bar{p}|^{2}\] \[=\mathfrak{c}(|\bar{q}|^{2}-|\bar{p}|^{2})+\frac{|\bar{p}|^{2}| \bar{q}|^{2}}{\bar{p}^{0}+\mathfrak{c}}-\frac{|\bar{p}|^{2}|\bar{q}|^{2}}{\bar{q} ^{0}+\mathfrak{c}}\] \[=\mathfrak{c}(|\bar{q}|^{2}-|\bar{p}|^{2})+\frac{|\bar{p}|^{2}| \bar{q}|^{2}}{(\bar{p}^{0}+\mathfrak{c})(\bar{q}^{0}+\mathfrak{c})}(\bar{q}^{0}- \bar{p}^{0})\] \[=\mathfrak{c}(|\bar{q}|^{2}-|\bar{p}|^{2})+\frac{|\bar{p}|^{2}| \bar{q}|^{2}(|\bar{q}|^{2}-|\bar{p}|^{2})}{(\bar{p}^{0}+\mathfrak{c})(\bar{q}^{0}+ \mathfrak{c})(\bar{p}^{0}+\mathfrak{c}^{2})^{2}}. \tag{4.23}\]
Thus, in view of (3.45) and (4.22)-(4.23), there exists a positive constant \(\alpha_{0}\) which is independent of \(\mathfrak{c}\) such that
\[M(\bar{p},\bar{q}) \leq-\alpha_{0}\frac{1}{\mathfrak{c}^{2}}\frac{1}{|\bar{p}-\bar{q} |^{2}}\left(\mathfrak{c}(|\bar{q}|^{2}-|\bar{p}|^{2})+\frac{|\bar{p}|^{2}|\bar{ q}|^{2}(|\bar{q}|^{2}-|\bar{p}|^{2})}{(\bar{p}^{0}+\mathfrak{c})(\bar{q}^{0}+ \mathfrak{c})(\bar{p}^{0}+\bar{q}^{0})}\right)^{2}\] \[\leq-\alpha_{0}\frac{(|\bar{q}|^{2}-|\bar{p}|^{2})^{2}}{|\bar{p}- \bar{q}|^{2}}. \tag{4.24}\]
Combining (4.20) and (4.24), one has that
\[\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}k_{\mathfrak{c}2}(p,q) dq\lesssim\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|\bar{p}- \bar{q}|}e^{-\frac{\mathfrak{c}_{1}}{2}|\bar{p}-\bar{q}|}e^{-\alpha_{0}\frac {(|\bar{q}|^{2}-|\bar{p}|^{2})^{2}}{|\bar{p}-\bar{q}|^{2}}}d\bar{q}.\]
By taking similar arguments as in [17, Lemma 3.3.1] (see also Case 3 below), we obtain
\[\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}k_{\mathfrak{c}2}(p,q) dq\lesssim\frac{1}{1+|\bar{p}|}\lesssim\frac{1}{1+|p|},\quad\text{for}\ |p|\leq\mathfrak{c}. \tag{4.25}\]
_Case 3: \(|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\geq\mathfrak{c}\)._ It is clear that \(|\bar{p}|\gtrsim\mathfrak{c}\). Noting
\[|\bar{q}|\leq|\bar{q}-\bar{p}|+|\bar{p}|\lesssim|\bar{p}|,\quad|\bar{q}|\geq| \bar{p}|-|\bar{p}-\bar{q}|\gtrsim|\bar{p}|,\]
then we have
\[|\bar{p}|\cong|\bar{q}|,\quad\bar{p}^{0}\cong\bar{q}^{0}.\]
Hence it is clear that
\[(1+\sqrt{\frac{s}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{q}|}{g })g^{2}(\bar{p}^{0}\bar{q}^{0}+\mathfrak{c}^{2})^{2} \leq(1+\sqrt{1+\frac{g^{2}}{4\mathfrak{c}^{2}}})|\bar{p}-\bar{q} |^{2}(\bar{p}^{0}\bar{q}^{0}+\mathfrak{c}^{2})^{2}\] \[\lesssim|\bar{p}-\bar{q}|^{2}(\mathfrak{c}^{2}+|\bar{p}|^{2})^{2}. \tag{4.26}\]
For \(|\bar{p}|\gtrsim\mathfrak{c}\), it holds that
\[\mathfrak{c}+\frac{|\bar{p}|^{2}|\bar{q}|^{2}}{(\bar{p}^{0}+ \mathfrak{c})(\bar{q}^{0}+\mathfrak{c})(\bar{p}^{0}+\bar{q}^{0})}\cong \mathfrak{c}+\frac{|\bar{p}|^{4}}{(\mathfrak{c}^{0})^{3}}\cong\mathfrak{c}+ \frac{|\bar{p}|^{4}}{(\mathfrak{c}^{2}+|\bar{p}|^{2})^{\frac{3}{2}}}\cong \mathfrak{c}+|\bar{p}|,\]
which, together with (4.23), yields that
\[(\bar{p}^{0}|\bar{q}|^{2}-\bar{q}^{0}|\bar{p}|^{2})^{2}\cong(|\bar{q}|^{2}-| \bar{p}|^{2})^{2}(\mathfrak{c}^{2}+|\bar{p}|^{2}). \tag{4.27}\]
Combining (4.21), (4.26) and (4.27), for some positive constant \(\alpha_{1}\) which is independent of \(\mathfrak{c}\), we have
\[M(\bar{p},\bar{q})\leq-\alpha_{1}\frac{\mathfrak{c}^{2}}{\mathfrak{c}^{2}+| \bar{p}|^{2}}\frac{(|\bar{q}|^{2}-|\bar{p}|^{2})^{2}}{|\bar{p}-\bar{q}|^{2}}. \tag{4.28}\]
Hence, for \(|p|\geq\mathfrak{c}\), it follows from (4.20) and (4.28) that
\[\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}k_{\mathfrak{ c}2}(p,q)dq \lesssim\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|\bar{p}- \bar{q}|}e^{-\frac{\mathfrak{c}_{1}}{2}|\bar{p}-\bar{q}|}e^{M(\bar{p},\bar{q})} d\bar{q}\] \[\lesssim\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}} \frac{1}{|\bar{p}-\bar{q}|}e^{-\frac{\mathfrak{c}_{1}}{2}|\bar{p}-\bar{q}|}e^{- \alpha_{1}\frac{\mathfrak{c}^{2}}{\mathfrak{c}^{2}+|\bar{p}|^{2}}\frac{(|\bar{ q}|^{2}-|\bar{p}|^{2})^{2}}{|\bar{p}-\bar{q}|^{2}}}d\bar{q}.\]
Following the arguments as in [17, Lemma 3.3.1], we can make a change of variables
\[|\bar{p}-\bar{q}|=r,\quad(\bar{q}-\bar{p})\cdot\bar{p}=|\bar{p}|r\cos\theta, \quad 0\leq r<\infty,\ 0\leq\theta\leq\pi,\]
which yields that
\[|\bar{q}|^{2}=|\bar{q}-\bar{p}|^{2}+|\bar{p}|^{2}+2(\bar{q}-\bar{p})\cdot\bar{ p}=r^{2}+|\bar{p}|^{2}+2r|\bar{p}|\cos\theta.\]
Denoting \(\alpha_{2}^{2}:=\alpha_{1}\frac{\mathfrak{c}^{2}}{\mathfrak{c}^{2}+|\bar{p}|^{2}}\) and \(u=\alpha_{2}(r+2|\bar{p}|\cos\theta)\), one has
\[\int_{|\bar{p}-\bar{q}|\leq\mathfrak{c}^{\frac{1}{8}}}k_{\mathfrak{ c}2}(p,q)dq \lesssim\int_{0}^{\infty}re^{-\frac{\mathfrak{c}_{1}}{2}r}dr\int_{ 0}^{\pi}e^{-\alpha_{2}^{2}(r+2|\bar{p}|\cos\theta)^{2}}\sin\theta d\theta\] \[\lesssim\frac{1}{\alpha_{2}|\bar{p}|}\int_{-\infty}^{\infty}e^{-u ^{2}}du\lesssim\frac{\sqrt{\mathfrak{c}^{2}+|\bar{p}|^{2}}}{\mathfrak{c}|\bar{ p}|}\] \[\lesssim\frac{1}{\mathfrak{c}},\]
which, together with (4.17), (4.18), (4.25), yields (4.16). Therefore the proof is completed.
By similar arguments as in Lemma 4.4, one can also obtain
**Lemma 4.5**.: _There hold_
\[\int_{\mathbb{R}^{3}}k_{\mathfrak{c}1}^{2}(p,q)\Big{(}\frac{w_{\ell}(p)}{w_{ \ell}(q)}\Big{)}^{2}dq\lesssim\frac{1}{1+|p|}\]
_and_
\[\int_{\mathbb{R}^{3}}k_{\mathfrak{c}2}^{2}(p,q)\Big{(}\frac{w_{\ell}(p)}{w_{ \ell}(q)}\Big{)}^{2}dq\lesssim\begin{cases}\frac{1}{1+|p|},\quad|p| \leq\mathfrak{c},\\ \frac{1}{\mathfrak{c}},\quad|p|\geq\mathfrak{c}.\end{cases}\]
Recall \(k_{\mathfrak{c}}(p,q)=k_{\mathfrak{c}2}(p,q)-k_{\mathfrak{c}1}(p,q)\) in (2.9) and denote
\[k_{ew}(p,q):=k_{\mathfrak{c}}(p,q)\frac{w_{\ell}(p)}{w_{\ell}(q)}.\]
By similar arguments as in Lemma 4.4, one can also obtain
\[\int_{\mathbb{R}^{3}}k_{ew}(p,q)e^{\frac{\mathfrak{c}_{1}}{4}|p-q|}dq \lesssim\begin{cases}\frac{1}{1+|p|},\quad\text{for }|p|\leq\mathfrak{c},\\ \frac{1}{\mathfrak{c}},\quad\text{for }|p|\geq\mathfrak{c}.\end{cases}\]
Next we estimate the collision frequency \(\nu_{\mathfrak{c}}(p)\).
**Lemma 4.6**.: _It holds that_
\[\nu_{\mathfrak{c}}(p)\cong\begin{cases}1+|p|,\quad|p|\leq\mathfrak{c},\\ \mathfrak{c},\quad|p|\geq\mathfrak{c}.\end{cases} \tag{4.29}\]
Proof.: Recall
\[\nu_{\mathfrak{c}}(p)=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}\frac{ \mathfrak{c}}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d \omega dq.\]
Since the proof is complicated, we split it into four cases.
_Case 1:_\(|q|\geq\mathfrak{c}^{\frac{1}{8}}\). Using Lemma 4.1 and (4.14), one has
\[\int_{|q|\geq\mathfrak{c}^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{\mathfrak{ c}}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d\omega dq\lesssim \int_{|q|\geq\mathfrak{c}^{\frac{1}{8}}}\mathfrak{c}e^{-2\bar{c}_{1}|q|}dq \lesssim e^{-\bar{c}_{1}\mathfrak{c}^{\frac{1}{8}}}. \tag{4.30}\]
_Case 2:_\(|q|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\leq\mathfrak{c}^{\frac{2}{8}}\). It holds that
\[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac {\mathfrak{c}g\sqrt{s}}{4p^{0}q^{0}}\frac{n_{0}\gamma}{4\pi\mathfrak{c}^{3}K_ {2}(\gamma)}\exp\Big{(}\frac{u^{\mu}q_{\mu}}{T_{0}}\Big{)}d\omega dq\] \[=\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\int_{\mathbb{S}^{2}} \frac{\mathfrak{c}g\sqrt{s}}{4p^{0}q^{0}}\frac{n_{0}}{(2\pi T_{0})^{\frac{1}{ 8}}}(1+O(\gamma^{-1}))\exp\Big{(}\frac{\mathfrak{c}^{2}-\mathfrak{c}\mathfrak{ q}^{0}}{T_{0}}\Big{)}d\omega dq\]
\[=\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\int_{|q|\leq\epsilon^{\frac{1}{8}}} \int_{\mathbb{S}^{2}}\frac{\varsigma g\sqrt{s}}{4p^{0}q^{0}}\exp\Big{(}\frac{ \mathfrak{c}^{2}-\varsigma\bar{q}^{0}}{T_{0}}\Big{)}d\omega dq\cdot O(\gamma^{-1})\] \[\qquad+\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\int_{|q|\leq \epsilon^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\Big{(}\frac{\varsigma g\sqrt{s}}{4 p^{0}q^{0}}-\frac{|p-q|}{2}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}-\varsigma\bar{q}^{0}}{T_ {0}}\Big{)}d\omega dq\] \[\qquad+\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\int_{|q|\leq \epsilon^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{|p-q|}{2}\exp\Big{(}\frac{ \mathfrak{c}^{2}-\varsigma\bar{q}^{0}}{T_{0}}\Big{)}d\omega dq\] \[:=\mathcal{H}_{1}+\mathcal{H}_{2}+\mathcal{H}_{3}. \tag{4.31}\]
It is clear that
\[|\mathcal{H}_{1}|\lesssim\frac{1}{\mathfrak{c}^{2}}\int_{|q|\leq \epsilon^{\frac{1}{8}}}|p-q|e^{-2\bar{c}_{1}|q|}dq\cong\frac{1+|p|}{\mathfrak{ c}^{2}}\lesssim\mathfrak{c}^{-\frac{13}{8}}. \tag{4.32}\]
Using Lemma 4.2, (3.45) and (4.12), we have
\[\mathcal{H}_{3}\gtrsim\int_{|\bar{q}|\leq\frac{1}{2}\mathfrak{c}^{ \frac{1}{8}}}|\bar{p}-\bar{q}|\exp\Big{(}-\frac{|\bar{q}|^{2}}{8\bar{c}_{1}} \Big{)}d\bar{q}\cong 1+|\bar{p}|\gtrsim 1+|p|. \tag{4.33}\]
For \(\mathcal{H}_{2}\), notice that
\[g^{2} =2p^{0}q^{0}-2p\cdot q-2\mathfrak{c}^{2}=|p-q|^{2}+2p^{0}q^{0}-2 \mathfrak{c}^{2}-|p|^{2}-|q|^{2}\] \[=|p-q|^{2}+\frac{4(|p|^{2}+\mathfrak{c}^{2})(|q|^{2}+\mathfrak{c }^{2})-(2\mathfrak{c}^{2}+|p|^{2}+|q|^{2})^{2}}{2p^{0}q^{0}+(2\mathfrak{c}^{2} +|p|^{2}+|q|^{2})}\] \[=|p-q|^{2}-\frac{(|p|^{2}-|q|^{2})^{2}}{2p^{0}q^{0}+(2\mathfrak{ c}^{2}+|p|^{2}+|q|^{2})}, \tag{4.34}\]
then one has
\[\frac{\mathfrak{c}g\sqrt{s}}{4p^{0}q^{0}}-\frac{|p-q|}{2} =\frac{1}{4p^{0}q^{0}}\{\varsigma g\sqrt{s}-2p^{0}q^{0}|p-q|\}\] \[=\frac{\mathfrak{c}^{2}g^{2}(g^{2}+4\mathfrak{c}^{2})-4|p-q|^{2}( |p|^{2}+\mathfrak{c}^{2})(|q|^{2}+\mathfrak{c}^{2})}{4p^{0}q^{0}(\varsigma g \sqrt{s}+2p^{0}q^{0}|p-q|)}\] \[=\frac{4\mathfrak{c}^{4}(g^{2}-|p-q|^{2})+\mathfrak{c}^{2}g^{4}- 4|p-q|^{2}\{|p|^{2}|q|^{2}+\mathfrak{c}^{2}(|p|^{2}+|q|^{2})\}}{4p^{0}q^{0}( \varsigma g\sqrt{s}+2p^{0}q^{0}|p-q|)}\] \[\lesssim O(\mathfrak{c}^{-\frac{\tau}{8}}),\]
which implies that
\[|\mathcal{H}_{2}|\lesssim\int_{|q|\leq\epsilon^{\frac{1}{8}}} \mathfrak{c}^{-\frac{\tau}{8}}e^{-2\bar{c}_{1}|q|}dq\lesssim\mathfrak{c}^{- \frac{\tau}{8}}. \tag{4.35}\]
It follows from (4.31)-(4.33) and (4.35) that
\[\int_{|q|\leq\epsilon^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{ \varsigma}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d\omega dq \cong 1+|p|. \tag{4.36}\]
_Case 3:_\(|q|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(\mathfrak{c}\geq|p|\geq\mathfrak{c}^{\frac{3}{8}}\). It follows from Lemma 4.1 that
\[g\geq\frac{\mathfrak{c}|p-q|}{\sqrt{p^{0}q^{0}}}\gtrsim\frac{ \mathfrak{c}|p|}{\mathfrak{c}}=|p|\]
and
\[g\leq|p-q|\lesssim|p|,\]
which yields that \(g\cong|p|\). Thus we have
\[\int_{|q|\leq\epsilon^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{ \varsigma}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d\omega dq \cong\int_{|q|\leq\epsilon^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}|p|\exp\Big{(} \frac{\mathfrak{c}^{2}-\varsigma\bar{q}^{0}}{T_{0}}\Big{)}d\omega dq\cong 1+|p|. \tag{4.37}\]
_Case 4:_\(|q|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\geq\mathfrak{c}\). It is obvious that
\[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{\mathfrak{c}} {4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d\omega dq\lesssim \int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\mathfrak{c}e^{-2\varepsilon_{1}|q|}dq \lesssim\mathfrak{c}. \tag{4.38}\]
On the other hand, since \(|p|\geq\mathfrak{c}\), one has
\[g\geq\frac{\mathfrak{c}|p-q|}{\sqrt{p^{0}q^{0}}}\gtrsim\frac{\mathfrak{c}|p|}{(| p|^{2}+\mathfrak{c}^{2})^{\frac{1}{4}}\sqrt{\mathfrak{c}}}\gtrsim\sqrt{ \mathfrak{c}|p|}.\]
Thus we have
\[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{\mathfrak{c }}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d\omega dq\gtrsim \int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{\sqrt{\mathfrak{c}|p|}\sqrt{ \mathfrak{c}^{2}+\mathfrak{c}|p|}}{p^{0}}\exp\Big{(}\frac{\mathfrak{c}^{2}- \mathfrak{c}\mathfrak{q}^{0}}{T_{0}}\Big{)}dq\gtrsim\mathfrak{c}. \tag{4.39}\]
It follows from (4.38) and (4.39) that
\[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\int_{\mathbb{S}^{2}}\frac{\mathfrak{c }}{4}\frac{g\sqrt{s}}{p^{0}q^{0}}\mathbf{M}_{\mathfrak{c}}(q)d\omega dq\cong \mathfrak{c}. \tag{4.40}\]
Combining (4.30), (4.36), (4.37) and (4.40), we conclude (4.29). Therefore the proof is completed.
**Remark 4.7**.: By similar arguments as in Lemma 4.6, we can obtain
\[\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}\mathbf{M}_{\mathfrak{c}}^{ \alpha}(q)d\omega dq\cong\nu_{\mathfrak{c}}(p),\quad\text{for }\alpha>0. \tag{4.41}\]
### Uniform-in-\(\mathfrak{c}\) coercivity estimate on \(\mathbf{L}_{\mathfrak{c}}\)
In this subsection, we shall derive a uniform-in-\(\mathfrak{c}\) coercivity estimate for the linearized relativistic collision operator \(\mathbf{L}_{\mathfrak{c}}\). For later use, we denote
\[k_{1}(p,q) :=2\pi|p-q|\frac{\rho}{(2\pi\theta)^{\frac{3}{2}}}e^{-\frac{|p-u |^{2}}{4\theta}-\frac{|q-u|^{2}}{4\theta}}, \tag{4.42}\] \[k_{2}(p,q) :=\frac{2}{|p-q|}\frac{\rho}{\sqrt{2\pi\theta}}e^{-\frac{|p-q|^{ 2}}{8\theta}-\frac{(|p-u|^{2}-|q-u|^{2})^{2}}{8\theta|p-q|^{2}}}, \tag{4.43}\]
which are indeed the corresponding kernels of Newtonian Boltzmann equation.
**Lemma 4.8**.: _It holds that_
\[\int_{\mathbb{R}^{3}}|k_{\mathfrak{c}1}(p,q)-k_{1}(p,q)|dq\lesssim\mathfrak{c }^{-\frac{3}{2}},\quad p\in\mathbb{R}^{3}. \tag{4.44}\]
Proof.: We remark that throughout the proof, we make no attempt to be optimal in our estimates. We split the proof into three cases.
_Case 1._\(|p|\geq\mathfrak{c}^{\frac{1}{8}}\). It follows from (3.45), (4.42) and Lemma 4.2 that
\[\int_{\mathbb{R}^{3}}|k_{\mathfrak{c}1}(p,q)-k_{1}(p,q)|dq\] \[\lesssim\int_{\mathbb{R}^{3}}|p-q|e^{-\bar{c}_{1}|p|-\bar{c}_{1}|q |}dq+\int_{\mathbb{R}^{3}}|p-q|e^{-\frac{|p|^{2}}{8\pi}-\frac{|q|^{2}}{8 \theta}}dq\] \[\lesssim e^{-\frac{c_{1}}{2}\mathfrak{c}^{\frac{1}{8}}}+e^{- \frac{c_{2}}{4}\mathfrak{c}^{\frac{1}{4}}}\lesssim\mathfrak{c}^{-\frac{3}{2}}. \tag{4.45}\]
_Case 2._\(|p|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|q|\geq\mathfrak{c}^{\frac{1}{8}}\). Similar to (4.45), one has
\[\int_{|q|\geq\mathfrak{c}^{\frac{1}{8}}}|k_{\mathfrak{c}1}(p,q)-k_ {1}(p,q)|dq \lesssim\int_{|q|\geq\mathfrak{c}^{\frac{1}{8}}}|p-q|e^{-\bar{c}_{1 }|p|-\bar{c}_{1}|q|}dq+\int_{|q|\geq\mathfrak{c}^{\frac{1}{8}}}|p-q|e^{-\frac {|p|^{2}}{8\theta}-\frac{|q|^{2}}{8\theta}}dq\] \[\lesssim e^{-\frac{c_{1}}{2}\mathfrak{c}^{\frac{1}{8}}}+e^{- \frac{c_{2}}{4}\mathfrak{c}^{\frac{1}{4}}}\lesssim\mathfrak{c}^{-\frac{3}{2}}. \tag{4.46}\]
_Case 3._\(|p|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|q|\leq\mathfrak{c}^{\frac{1}{8}}\). Recall that
\[k_{\mathfrak{c}1}(p,q) =\frac{\pi\mathfrak{c}g\sqrt{s}}{p^{0}q^{0}}\frac{n_{0}}{4\pi \mathfrak{c}T_{0}K_{2}(\gamma)}\exp\Big{(}\frac{u^{\mu}p_{\mu}}{2T_{0}}\Big{)} \exp\Big{(}\frac{u^{\mu}q_{\mu}}{2T_{0}}\Big{)}\] \[=\frac{\pi\mathfrak{c}g\sqrt{s}}{p^{0}q^{0}}\frac{n_{0}}{(2\pi T_ {0})^{\frac{3}{2}}}(1+O(\gamma^{-1}))\exp\Big{(}\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{p}^{0}}{2T_{0}}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{q}^{0}}{2T_{0}}\Big{)}\] \[=\frac{\pi\mathfrak{c}g\sqrt{s}}{p^{0}q^{0}}\frac{n_{0}}{(2\pi T_ {0})^{\frac{3}{2}}}(1+O(\gamma^{-1}))\exp\Big{(}\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{p}^{0}}{2T_{0}}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{q}^{0}}{2T_{0}}\Big{)}.\]
Then we have
\[|k_{\mathfrak{c}1}(p,q)-k_{1}(p,q)|\] \[\leq\frac{\pi\mathfrak{c}g\sqrt{s}}{p^{0}q^{0}}\frac{n_{0}}{(2\pi T _{0})^{\frac{3}{2}}}\exp\Big{(}\frac{\mathfrak{c}^{2}-\mathfrak{c}\bar{p}^{0} }{2T_{0}}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}-\mathfrak{c}\bar{q}^{0}}{2T_ {0}}\Big{)}\cdot O(\gamma^{-1})\] \[\quad\quad+\Big{|}\frac{\pi\mathfrak{c}g\sqrt{s}}{p^{0}q^{0}}-2 \pi|p-q|\Big{|}\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\exp\Big{(}\frac{ \mathfrak{c}^{2}-\mathfrak{c}\bar{p}^{0}}{2T_{0}}\Big{)}\exp\Big{(}\frac{ \mathfrak{c}^{2}-\mathfrak{c}\bar{q}^{0}}{2T_{0}}\Big{)}\] \[\quad\quad+2\pi|p-q|\Big{|}\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2 }}}-\frac{\rho}{(2\pi\theta)^{\frac{3}{2}}}\Big{|}\exp\Big{(}\frac{\mathfrak{ c}^{2}-\mathfrak{c}\bar{p}^{0}}{2T_{0}}\Big{)}\exp\Big{(}\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{q}^{0}}{2T_{0}}\Big{)}\] \[\quad\quad+2\pi|p-q|\frac{\rho}{(2\pi\theta)^{\frac{3}{2}}}e^{- \frac{|p-u|^{2}}{4\theta}-\frac{|q-u|^{2}}{4\theta}}\Big{|}\exp\Big{(}\frac{|p- u|^{2}}{4\theta}+\frac{|q-u|^{2}}{4\theta}+\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{p}^{0}}{2T_{0}}+\frac{\mathfrak{c}^{2}-\mathfrak{c}\bar{q}^{0 }}{2T_{0}}\Big{)}-1\Big{|}\] \[:=\mathcal{D}_{1}+\mathcal{D}_{2}+\mathcal{D}_{3}+\mathcal{D}_{4}. \tag{4.47}\]
It is clear that
\[|\mathcal{D}_{1}|\lesssim\frac{|p-q|}{\mathfrak{c}^{2}}e^{-\bar{c}_{1}|p|- \bar{c}_{1}|q|},\]
which implies that
\[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{D}_{1}(q,p)|dq \lesssim\mathfrak{c}^{-2}. \tag{4.48}\]
For \(\mathcal{D}_{2}\), we notice that
\[\frac{\mathfrak{c}\sqrt{s}}{2p^{0}q^{0}}-1=\frac{\mathfrak{c}\sqrt{s}-2p^{0}q ^{0}}{2p^{0}q^{0}}=\frac{\mathfrak{c}^{2}g^{2}-4\mathfrak{c}^{2}(|p|^{2}+|q|^ {2})-4|p|^{2}|q|^{2}}{2p^{0}q^{0}(\mathfrak{c}\sqrt{s}+2p^{0}q^{0})}\lesssim O (\mathfrak{c}^{-\frac{3}{2}}). \tag{4.49}\]
It follows from (4.34) that
\[g^{2}-|p-q|^{2}=-\frac{(|p|^{2}-|q|^{2})^{2}}{2p^{0}q^{0}+(2\mathfrak{c}^{2}+| p|^{2}+|q|^{2})}\lesssim O(\mathfrak{c}^{-\frac{3}{2}}), \tag{4.50}\]
which yields that
\[|g-|p-q||=\frac{|g^{2}-|p-q|^{2}|}{g+|p-q|}\lesssim\frac{O( \mathfrak{c}^{-\frac{3}{2}})}{g+|p-q|}\lesssim\frac{O(\mathfrak{c}^{-\frac{3}{ 2}})}{|p-q|}. \tag{4.51}\]
Using (4.49) and (4.51), one has
\[\Big{|}\frac{\mathfrak{c}}{2}\frac{g\sqrt{s}}{p^{0}q^{0}}-|p-q| \Big{|} \leq|g(\frac{\mathfrak{c}\sqrt{s}}{2p^{0}q^{0}}-1)|+|g-|p-q||\] \[\lesssim(g+\frac{1}{|p-q|})\mathfrak{c}^{-\frac{3}{2}}\lesssim(|p- q|+\frac{1}{|p-q|})\mathfrak{c}^{-\frac{3}{2}},\]
which implies that
\[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{D}_{2}(q,p)|dq \lesssim\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}\Big{|}\frac{\mathfrak{c}}{2} \frac{g\sqrt{s}}{2p^{0}q^{0}}-|p-q||e^{-\bar{c}_{1}|p|-\bar{c}_{1}|q|}dq\]
\[\lesssim\mathfrak{c}^{-\frac{3}{2}}\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}} \Big{(}|p-q|+\frac{1}{|p-q|}\Big{)}e^{-\bar{c}_{1}|p|-\bar{c}_{1}|q|}\lesssim \mathfrak{c}^{-\frac{3}{2}}. \tag{4.52}\]
For \(\mathcal{D}_{3}\), it follows from Proposition 3.8 that
\[\Big{|}\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}-\frac{\rho}{(2\pi \theta)^{\frac{3}{2}}}\Big{|}\lesssim|T_{0}-\theta|+|n_{0}-\rho|\lesssim \mathfrak{c}^{-2}, \tag{4.53}\]
which yields that
\[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{D}_{3}(q,p)|dq \lesssim\mathfrak{c}^{-2}\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|p-q|e^{-\bar{ c}_{1}|p|-\bar{c}_{1}|q|}\lesssim\mathfrak{c}^{-2}. \tag{4.54}\]
For \(\mathcal{D}_{4}\), a direct calculation shows that
\[\frac{|p-\mathfrak{u}|^{2}}{4\theta}+\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{p}^{0}}{2T_{0}} =\frac{|p-\mathfrak{u}|^{2}}{4\theta T_{0}}(T_{0}-\theta)+\frac{1 }{4T_{0}}(|p-\mathfrak{u}|^{2}+2\mathfrak{c}^{2}-2\mathfrak{c}\bar{p}^{0})\] \[=\frac{|p-\mathfrak{u}|^{2}}{4\theta T_{0}}(T_{0}-\theta)+\frac{1 }{4T_{0}}\Big{[}\frac{|p|^{4}}{(p^{0}+\mathfrak{c})^{2}}+2p\cdot(u-\mathfrak{ u})+(|\mathfrak{u}|^{2}-|u|^{2})\Big{]}\] \[\qquad+\frac{1}{4T_{0}}\Big{[}\frac{|u|^{4}}{(u^{0}+\mathfrak{c} )^{2}}-2\frac{|p|^{2}|u|^{2}}{(u^{0}+\mathfrak{c})(p^{0}+\mathfrak{c})}\Big{]}, \tag{4.55}\]
which implies that
\[\Big{|}\frac{|p-\mathfrak{u}|^{2}}{4\theta}+\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{p}^{0}}{2T_{0}}\Big{|}\lesssim\mathfrak{c}^{-\frac{3}{2}}. \tag{4.56}\]
Similarly, one has
\[\Big{|}\frac{|q-\mathfrak{u}|^{2}}{4\theta}+\frac{\mathfrak{c}^{2}- \mathfrak{c}\bar{q}^{0}}{2T_{0}}\Big{|}\lesssim\mathfrak{c}^{-\frac{3}{2}}.\]
Thus we have
\[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{D}_{4}(q,p)|dq \lesssim\mathfrak{c}^{-\frac{3}{2}}\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|p- q|e^{-\frac{|p|^{2}}{8\theta}-\frac{|q|^{2}}{8\theta}}dq\lesssim\mathfrak{c}^{- \frac{3}{2}}. \tag{4.57}\]
Combining (4.47), (4.48), (4.52), (4.54) and (4.57), we have that
\[\int_{|q|\leq\mathfrak{c}^{\frac{1}{8}}}|k_{\mathfrak{c}1}(p,q)-k_{1}(p,q)|dq \lesssim\mathfrak{c}^{-\frac{3}{2}},\quad|p|\leq\mathfrak{c}^{\frac{1}{8}}. \tag{4.58}\]
Hence, we conclude (4.44) from (4.45), (4.46) and (4.58). Therefore the proof is completed.
**Lemma 4.9**.: _It holds that_
\[\int_{\mathbb{R}^{3}}|k_{\mathfrak{c}2}(p,q)-k_{2}(p,q)|dq\lesssim\mathfrak{ c}^{-\frac{3}{8}},\quad p\in\mathbb{R}^{3}.\]
Proof.: Since the proof is complicated, we split the proof into three cases.
_Case 1._\(|p-q|\geq\mathfrak{c}^{\frac{1}{8}}\). It follows from (3.45), (4.11) and (4.43) that
\[\int_{|p-q|\geq\mathfrak{c}^{\frac{1}{8}}}|k_{\mathfrak{c}2}(p,q)- k_{2}(p,q)|dq \lesssim\int_{|p-q|\geq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|p-q|}e^{- \frac{c_{1}}{2}|p-q|}dq+\int_{|p-q|\geq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|p- q|}e^{-\frac{|p-q|^{2}}{8\theta}}dq\] \[\lesssim e^{-\frac{c_{1}}{4}\mathfrak{c}^{\frac{1}{8}}}+e^{- \frac{c_{2}}{4}\mathfrak{c}^{\frac{1}{4}}}\lesssim\mathfrak{c}^{-\frac{3}{8}}. \tag{4.59}\]
_Case 2._\(|p-q|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\geq\mathfrak{c}^{\frac{3}{8}}\). By Lemma 4.4 and similar arguments for \(k_{2}(p,q)\) as in [17, Lemma 3.3.1], one has
\[\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}|k_{\mathfrak{c}2}(p,q)-k_{2}(p,q)|dq \lesssim\frac{1}{\mathfrak{c}}+\frac{1}{1+|p|}+\frac{1}{1+|p-\mathfrak{u}|} \lesssim\mathfrak{c}^{-\frac{3}{8}}. \tag{4.60}\]
_Case 3._\(|p-q|\leq\mathfrak{c}^{\frac{1}{8}}\) and \(|p|\leq\mathfrak{c}^{\frac{3}{8}}\). In this case, we have \(|q|\lesssim\mathfrak{c}^{\frac{3}{8}}\). Recall that
\[k_{\mathfrak{c}2}(p,q)=\frac{\mathfrak{c}\pi s^{\frac{3}{2}}}{4gp^{0}q^{0}} \frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}(1+O(\gamma^{-1}))\frac{\overline{ \boldsymbol{\ell}}\sqrt{\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol {j}}^{2}}+\overline{\boldsymbol{\ell}}+(\overline{\boldsymbol{\ell}}^{2}- \overline{\boldsymbol{j}}^{2})}{(\overline{\boldsymbol{\ell}}^{2}-\overline{ \boldsymbol{j}}^{2})^{\frac{3}{2}}}e^{\frac{\mathfrak{c}^{2}-\sqrt{ \boldsymbol{\ell}^{2}-\overline{\boldsymbol{j}}^{2}}}{T_{0}}}.\]
Then one has
\[|k_{\mathfrak{c}2}(p,q)-k_{2}(p,q)|\] \[\leq\frac{\mathfrak{c}\pi s^{\frac{3}{2}}}{4gp^{0}q^{0}}\frac{n_ {0}}{(2\pi T_{0})^{\frac{3}{2}}}\frac{\overline{\boldsymbol{\ell}}\sqrt{ \overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2}}+\overline{ \boldsymbol{\ell}}+(\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{ j}}^{2})}{(\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2})^{ \frac{3}{2}}}e^{\frac{\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}- \overline{\boldsymbol{j}}^{2}}}{T_{0}}}\cdot O(\gamma^{-1})\] \[\qquad+\frac{\mathfrak{c}\pi s^{\frac{3}{2}}}{4gp^{0}q^{0}} \Big{|}\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}-\frac{\rho}{(2\pi\theta)^{ \frac{3}{2}}}\Big{|}\frac{\overline{\boldsymbol{\ell}}\sqrt{\overline{ \boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2}}+\overline{\boldsymbol{ \ell}}+(\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2})}{( \overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2})^{\frac{3}{2} }}e^{\frac{\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}-\overline{ \boldsymbol{j}}^{2}}}{T_{0}}}\] \[\qquad+\frac{4\pi\rho}{(2\pi\theta)^{\frac{3}{2}}}\Big{|}\frac{ \mathfrak{c}s^{\frac{3}{2}}}{16gp^{0}q^{0}}\frac{\overline{\boldsymbol{\ell}} \sqrt{\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2}}+ \overline{\boldsymbol{\ell}}+(\overline{\boldsymbol{\ell}}^{2}-\overline{ \boldsymbol{j}}^{2})}{(\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{ j}}^{2})^{\frac{3}{2}}}-\frac{\theta}{|p-q|}\Big{|}e^{\frac{\mathfrak{c}^{2}- \sqrt{\boldsymbol{\ell}^{2}-\overline{\boldsymbol{\ell}}^{2}}}{T_{0}}}\] \[\qquad+\frac{2}{|p-q|}\frac{\rho}{\sqrt{2\pi\theta}}e^{-\frac{| p-q|^{2}}{8\theta}-\frac{(|p-q|^{2}-|q-|q-|q|^{2})}{8\theta|p-q|^{2}}}\Big{|}e^{ \frac{\mathfrak{c}^{2}-\sqrt{\boldsymbol{\ell}^{2}-\overline{\boldsymbol{j}}^{ 2}}}{T_{0}}+\frac{|p-q|^{2}-|q-|q|^{2})^{2}}{8\theta|p-q|^{2}}}-1\Big{|}\] \[:=\mathcal{E}_{1}+\mathcal{E}_{2}+\mathcal{E}_{3}+\mathcal{E}_{4}. \tag{4.61}\]
It follows from (4.11) that
\[\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{E}_{1}|dq\lesssim\frac{1} {\mathfrak{c}^{2}}\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|p-q|}e^{- \frac{\mathfrak{c}_{1}}{2}|p-q|}dq\lesssim\frac{1}{\mathfrak{c}^{2}}. \tag{4.62}\]
By (4.53), one has
\[\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{E}_{2}|dq\lesssim\frac{1} {\mathfrak{c}^{2}}\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|p-q|}e^{ -\frac{\mathfrak{c}_{1}}{2}|p-q|}dq\lesssim\frac{1}{\mathfrak{c}^{2}}. \tag{4.63}\]
We next focus on \(\mathcal{E}_{3}\). It holds that
\[\frac{\mathfrak{c}s^{\frac{3}{2}}}{16gp^{0}q^{0}}\frac{\overline{ \boldsymbol{\ell}}\sqrt{\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol {j}}^{2}}+\overline{\boldsymbol{\ell}}+(\overline{\boldsymbol{\ell}}^{2}- \overline{\boldsymbol{j}}^{2})}{(\overline{\boldsymbol{\ell}}^{2}-\overline{ \boldsymbol{j}}^{2})^{\frac{3}{2}}}-\frac{\theta}{|p-q|}\] \[=\frac{1}{2\mathfrak{c}^{2}}\frac{g^{2}T_{0}^{3}}{p^{0}q^{0}| \bar{p}-\bar{q}|^{3}}\Big{(}\overline{\boldsymbol{\ell}}\sqrt{\overline{ \boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2}}+\overline{\boldsymbol{ \ell}}+(\overline{\boldsymbol{\ell}}^{2}-\overline{\boldsymbol{j}}^{2})\Big{)}- \frac{\theta}{|p-q|}\] \[=\frac{1}{2\mathfrak{c}^{2}}\frac{g^{2}}{p^{0}q^{0}|\bar{p}-\bar{q} |^{3}}\Big{(}\frac{\mathfrak{c}^{2}\sqrt{s}(\bar{p}^{0}+\bar{q}^{0})}{4g}| \bar{p}-\bar{q}|T_{0}+\frac{\mathfrak{c}^{2}s}{4g^{2}}|\bar{p}-\bar{q}|^{2}T_{ 0}+\frac{\mathfrak{c}}{2}(\bar{p}^{0}+\bar{q}^{0})T_{0}^{2}\Big{)}-\frac{ \theta}{|p-q|}\] \[=\Big{\{}\frac{1}{2}\frac{g^{2}}{\bar{p}^{0}\bar{q}^{0}|\bar{p}- \bar{q}|^{3}}\Big{(}\frac{\sqrt{s}(\bar{p}^{0}+\bar{q}^{0})}{4g}|\bar{p}- \bar{q}|+\frac{s}{4g^{2}}|\bar{p}-\bar{q}|^{2}\Big{)}\theta-\frac{\theta}{| \bar{p}-\bar{q}|}\Big{\}}+\frac{g^{2}(\bar{p}^{0}+\bar{q}^{0})}{4\mathfrak{c} p^{0}q^{0}|\bar{p}-\bar{q}|^{3}}T_{0}^{2}\] \[\qquad+\frac{1}{2}\frac{g^{2}}{p^{0}q^{0}|\bar{p}-\bar{q}|^{3}} \Big{(}\frac{\sqrt{s}(\bar{p}^{0}+\bar{q}^{0})}{4g}|\bar{p}-\bar{q}|+\frac{s}{4g ^{2}}|\bar{p}-\bar{q}|^{2}\Big{)}(T_{0}-\theta)+\Big{(}\frac{\theta}{|\bar{p}- \bar{q}|}-\frac{\theta}{|p-q|}\Big{)}\] \[:=\mathcal{E}_{31}+\mathcal{E}_{32}+\mathcal{E}_{33}+\mathcal{E}_{ 34}+\mathcal{E}_{35}. \tag{4.64}\]
A direct calculation shows that
\[|\mathcal{E}_{32}|+|\mathcal{E}_{33}|+|\mathcal{E}_{34}|+|\mathcal{E}_{35}| \lesssim\frac{1}{|p-q|}\mathfrak{c}^{-\frac{13}{8}}. \tag{4.65}\]
For \(\mathcal{E}_{31}\), one has
\[\frac{\mathcal{E}_{31}}{\theta}|\bar{p}-\bar{q}| =\frac{1}{2}\frac{g^{2}}{\bar{p}^{0}\bar{q}^{0}|\bar{p}-\bar{q}|^{ 2}}\Big{(}\frac{\sqrt{s}(\bar{p}^{0}+\bar{q}^{0})}{4g}|\bar{p}-\bar{q}|+\frac{s }{4g^{2}}|\bar{p}-\bar{q}|^{2}\Big{)}-1\] \[=\frac{1}{2}\Big{(}\frac{\sqrt{s}g(\bar{p}^{0}+\bar{q}^{0})}{4 \bar{p}^{0}\bar{q}^{0}|\bar{p}-\bar{q}|}-1\Big{)}+\frac{1}{2}\Big{(}\frac{s}{4 \bar{p}^{0}\bar{q}^{0}}-1\Big{)}\] \[:=\mathcal{E}_{311}+\mathcal{E}_{312}.\]
For \(\mathcal{E}_{312}\), we notice that
\[\frac{s}{4\bar{p}^{0}\bar{q}^{0}}-1 =\frac{s-4\bar{p}^{0}\bar{q}^{0}}{4\bar{p}^{0}\bar{q}^{0}}=\frac {(g^{2}+4\mathfrak{c}^{2})^{2}-16(\mathfrak{c}^{2}+|\bar{p}|^{2})(\mathfrak{c }^{2}+|\bar{q}|^{2})}{4\bar{p}^{0}\bar{q}^{0}(s+4\bar{p}^{0}\bar{q}^{0})}\] \[=\frac{g^{4}+8g^{2}\mathfrak{c}^{2}-16\mathfrak{c}^{2}(|\bar{p}|^ {2}+|\bar{q}|^{2})-16|\bar{p}|^{2}|\bar{q}|^{2}}{4\bar{p}^{0}\bar{q}^{0}(s+4 \bar{p}^{0}\bar{q}^{0})}\] \[\lesssim O(\mathfrak{c}^{-\frac{5}{4}}).\]
For \(\mathcal{E}_{311}\), it is clear that
\[\frac{\sqrt{s}g(\bar{p}^{0}+\bar{q}^{0})}{4\bar{p}^{0}\bar{q}^{0} |\bar{p}-\bar{q}|}-1 =\frac{\sqrt{s}g(\bar{p}^{0}+\bar{q}^{0})-4\bar{p}^{0}\bar{q}^{0} |\bar{p}-\bar{q}|}{4\bar{p}^{0}\bar{q}^{0}|\bar{p}-\bar{q}|}\] \[=\frac{\sqrt{s}g-2\bar{q}^{0}|\bar{p}-\bar{q}|}{4\bar{q}^{0}|\bar {p}-\bar{q}|}+\frac{\sqrt{s}g-2\bar{p}^{0}|\bar{p}-\bar{q}|}{4p^{0}|\bar{p}- \bar{q}|}.\]
Due to (4.50), one has
\[\frac{\sqrt{s}g-2\bar{q}^{0}|\bar{p}-\bar{q}|}{4\bar{q}^{0}|\bar {p}-\bar{q}|}= \frac{(g^{2}+4\mathfrak{c}^{2})g^{2}-4(|\bar{q}|^{2}+\mathfrak{ c}^{2})|\bar{p}-\bar{q}|^{2}}{4\bar{q}^{0}|\bar{p}-\bar{q}|(\sqrt{s}g+2\bar{q}^{0} |\bar{p}-\bar{q}|)}\] \[= \frac{4\mathfrak{c}^{2}(g^{2}-|\bar{p}-\bar{q}|^{2})+g^{4}-4|\bar {q}|^{2}|\bar{p}-\bar{q}|^{2}}{4\bar{q}^{0}|\bar{p}-\bar{q}|(\sqrt{s}g+2\bar{ q}^{0}|\bar{p}-\bar{q}|)}\] \[= -\frac{4\mathfrak{c}^{2}(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{4\bar{ q}^{0}|\bar{p}-\bar{q}|(\sqrt{s}g+2\bar{q}^{0}|\bar{p}-\bar{q}|)(2\bar{p}^{0} \bar{q}^{0}+(2\mathfrak{c}^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[+\frac{g^{4}-4|\bar{q}|^{2}|\bar{p}-\bar{q}|^{2}}{4\bar{q}^{0}| \bar{p}-\bar{q}|(\sqrt{s}g+2\bar{q}^{0}|\bar{p}-\bar{q}|)}\] \[\lesssim O(\mathfrak{c}^{-\frac{5}{4}})\]
and
\[\frac{\sqrt{s}g-2\bar{p}^{0}|\bar{p}-\bar{q}|}{4\bar{p}^{0}|\bar{ p}-\bar{q}|}\lesssim O(\mathfrak{c}^{-\frac{5}{4}}).\]
Thus we can obtain
\[|\mathcal{E}_{31}|\lesssim\frac{1}{|p-q|}\mathfrak{c}^{-\frac{5}{4 }}. \tag{4.66}\]
Combining (4.65) and (4.66), one obtains, for \(|p|\leq\mathfrak{c}^{\frac{3}{8}}\), that
\[\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}|\mathcal{E}_{3}|dq\lesssim\mathfrak{ c}^{-\frac{5}{4}}\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|p-q|}e^{-\frac{c_{ 1}}{2}|p-q|}dq\lesssim\mathfrak{c}^{-\frac{5}{4}}. \tag{4.67}\]
Next, we consider \(\mathcal{E}_{4}\). It follows from (4.19) and (4.34) that
\[\frac{\mathfrak{c}^{2}-\sqrt{\bar{\mathcal{L}}^{2}-\bar{\mathcal{J }}^{2}}}{T_{0}}+\frac{|p-q|^{2}}{8\theta}+\frac{(|p-\mathfrak{u}|^{2}-|q- \mathfrak{u}|^{2})^{2}}{8\theta|p-q|^{2}}\] \[=\frac{1}{T_{0}}\Big{[}\mathfrak{c}^{2}-\sqrt{\bar{\mathcal{L}}^ {2}-\bar{\mathcal{J}}^{2}}+\frac{|\bar{p}-\bar{q}|^{2}}{8}+\frac{(|\bar{p}|^{2}- |\bar{q}|^{2})^{2}}{8|\bar{p}-\bar{q}|^{2}}\Big{]}+\Big{[}\frac{|p-q|^{2}}{8 \theta}-\frac{|\bar{p}-\bar{q}|^{2}}{8T_{0}}\Big{]}\]
\[=\frac{|\bar{p}-\bar{q}|^{2}}{8}\frac{g^{2}|\bar{p}-\bar{q}|^{2}+4 \mathfrak{c}^{2}(|\bar{p}-\bar{q}|^{2}-g^{2})}{(\sqrt{s}|\bar{p}-\bar{q}|+2 \mathfrak{c}g)^{2}}\] \[=\frac{|\bar{p}-\bar{q}|^{2}}{8}\Big{\{}\frac{g^{2}|\bar{p}-\bar{ q}|^{2}}{(\sqrt{s}|\bar{p}-\bar{q}|^{2}+2\mathfrak{c}g)^{2}}+\frac{4\mathfrak{c}^{2}}{( \sqrt{s}|\bar{p}-\bar{q}|+2\mathfrak{c}g)^{2}}\frac{(|\bar{p}|^{2}-|\bar{q}|^{2 })^{2}}{2\bar{p}^{0}\bar{q}^{0}+(2\mathfrak{c}^{2}+|\bar{p}|^{2}+|\bar{q}|^{2 })}\Big{\}}\] \[\lesssim O(\mathfrak{c}^{-1}). \tag{4.69}\]
For \(\mathcal{G}_{11}\), it can be written as
\[\frac{(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{8|\bar{p}-\bar{q}|^{2}} \Big{\{}1-\frac{1}{1+\sqrt{\frac{\pi}{4\mathfrak{c}^{2}}}\frac{|\bar{p}-\bar{ q}|^{2}}{g}}\frac{8|\bar{p}-\bar{q}|^{2}}{g^{2}}\frac{\mathfrak{c}^{2}}{2\bar{p}^{0 }\bar{q}^{0}+(2\mathfrak{c}^{2}+|\bar{p}|^{2}+|\bar{q}|^{2})}\Big{\}}\]
\[=\frac{(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{8|\bar{p}-\bar{q}|^{2}} \frac{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2 \varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))-16\varsigma^{3}|\bar{p}-\bar{q}|^{2} }{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2 \varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[=\frac{(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{8|\bar{p}-\bar{q}|^{2}} \frac{(4\varsigma^{3}g^{2}-4\varsigma^{3}|\bar{p}-\bar{q}|^{2})+(4\varsigma g^{2 }\bar{p}^{0}\bar{q}^{0}-4\varsigma^{3}|\bar{p}-\bar{q}|^{2})+(2\sqrt{s}|\bar{p }-\bar{q}|g\varsigma^{2}-4\varsigma^{3}|\bar{p}-\bar{q}|^{2})}{(2\varsigma g^{2 }+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p} |^{2}+|\bar{q}|^{2}))}\] \[\quad+\frac{(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{8|\bar{p}-\bar{q}|^ {2}}\frac{(2\sqrt{s}\bar{p}^{0}\bar{q}^{0}|\bar{p}-\bar{q}|g-4\varsigma^{3}|\bar {p}-\bar{q}|^{2})+(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(|\bar{p}|^{2}+| \bar{q}|^{2})}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{ q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[:=\frac{(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{8|\bar{p}-\bar{q}|^{2}} (\mathcal{G}_{111}+\mathcal{G}_{112}+\mathcal{G}_{113}+\mathcal{G}_{114}+ \mathcal{G}_{115}). \tag{4.70}\]
We have from (4.34) that
\[\mathcal{G}_{111} =\frac{4\varsigma^{3}g^{2}-4\varsigma^{3}|\bar{p}-\bar{q}|^{2}}{(2 \varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2 \varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[=\frac{-4\varsigma^{3}}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q }|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))} \frac{(|\bar{p}|^{2}-|\bar{q}|^{2})^{2}}{2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{ 2}+|\bar{p}|^{2}+|\bar{q}|^{2})}\] \[\lesssim O(\varsigma^{-\frac{5}{4}}), \tag{4.71}\]
where we have used \(g^{2}\bar{p}^{0}\bar{q}^{0}\geq\varsigma^{2}|\bar{p}-\bar{q}|^{2}\). Similarly, one has
\[\mathcal{G}_{112} =\frac{4\varsigma g^{2}\bar{p}^{0}\bar{q}^{0}-4\varsigma^{3}|\bar {p}-\bar{q}|^{2}}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0} \bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[=\frac{-4\varsigma\bar{p}^{0}\bar{q}^{0}(|\bar{p}-\bar{q}|^{2}-g^ {2})+4\varsigma|\bar{p}-\bar{q}|^{2}(\bar{p}^{0}\bar{q}^{0}-\varsigma^{2})}{(2 \varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma ^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[\lesssim O(\varsigma^{-\frac{5}{4}}), \tag{4.72}\] \[\mathcal{G}_{113} =\frac{2\sqrt{s}|\bar{p}-\bar{q}|g\varsigma^{2}-4\varsigma^{3}|\bar {p}-\bar{q}|^{2}}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(2\bar{p}^{0} \bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[=\frac{-2\varsigma^{2}|\bar{p}-\bar{q}|}{(2\varsigma g^{2}+\sqrt{s }|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+| \bar{q}|^{2}))}(2\varsigma|\bar{p}-\bar{q}|-\sqrt{s}g)\] \[=\frac{-2\varsigma^{2}|\bar{p}-\bar{q}|}{(2\varsigma g^{2}+\sqrt{s }|\bar{p}-\bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+| \bar{q}|^{2}))}\frac{4\varsigma^{2}(|\bar{p}-\bar{q}|^{2}-g^{2})-g^{4}}{2 \varsigma|\bar{p}-\bar{q}|+\sqrt{s}g}\] \[\lesssim O(\varsigma^{-\frac{5}{4}}),\] (4.73) \[\mathcal{G}_{114} =\frac{2\sqrt{s}\bar{p}^{0}\bar{q}^{0}|\bar{p}-\bar{q}|g-4 \varsigma^{3}|\bar{p}-\bar{q}|^{2}}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}| g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[=\frac{-2|\bar{p}-\bar{q}|}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}- \bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))} (2\varsigma^{3}|\bar{p}-\bar{q}|-\sqrt{s}\bar{p}^{0}\bar{q}^{0}g)\] \[=\frac{-2|\bar{p}-\bar{q}|}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}- \bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))} (2\varsigma^{3}|\bar{p}-\bar{q}|-\sqrt{s}\bar{p}^{0}\bar{q}^{0}g)\] \[=\frac{-2|\bar{p}-\bar{q}|}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}- \bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))} \frac{4\varsigma^{6}|\bar{p}-\bar{q}|^{2}-s(\bar{p}^{0})^{2}(\bar{q}^{0})^{2}g^{ 2}}{2\varsigma^{3}|\bar{p}-\bar{q}|+\sqrt{s}\bar{p}^{0}\bar{q}^{0}g}\] \[=\frac{-2|\bar{p}-\bar{q}|}{(2\varsigma g^{2}+\sqrt{s}|\bar{p}- \bar{q}|g)(2\bar{p}^{0}\bar{q}^{0}+(2\varsigma^{2}+|\bar{p}|^{2}+|\bar{q}|^{2}))}\] \[\qquad\times\Big{\{}\frac{4\varsigma^{6}(|\bar{p}-\bar{q}|^{2}-g^ {2})-g^{4}(|\bar{p}|^{2}+\varsigma^{2})(|\bar{q}|^{2}+\varsigma^{2})}{2 \varsigma^{3}|\bar{p}-\bar{q}|+\sqrt{s}\bar{p}^{0}\bar{q}^{0}g}-\frac{4 \varsigma^{2}g^{2}(|\bar{p}|^{2}\varsigma^{2}+|\bar{q}|^{2}\varsigma^{2}+|\bar{p}|^ {2}|\bar{q}|^{2})}{2\varsigma^{3}|\bar{p}-\bar{q}|+\sqrt{s}\bar{p}^{0}\bar{q}^{0}g} \Big{\}}\] \[\lesssim O(\varsigma^{-\frac{5}{4}}) \tag{4.74}\]
and
\[\mathcal{G}_{115} =\frac{(2\varsigma g^{2}+\sqrt{s}|\bar{p}-\bar{q}|g)(|\bar{p}|^{2}+| \bar{q}|^{2})}{(2\varsigma g^{2}+\sqrt
Combining (4.70)-(4.75), we have
\[|\mathcal{G}_{11}|\lesssim\mathfrak{c}^{-\frac{1}{2}},\]
which, together with (4.68)-(4.69), yields that
\[\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{2}}}|\mathcal{E}_{4}|dq\lesssim\mathfrak{ c}^{-\frac{1}{2}}\int_{|p-q|\leq\mathfrak{c}^{\frac{1}{8}}}\frac{1}{|p-q|}e^{- \frac{|p-q|^{2}-|q|^{2})^{2}}{8\sigma\beta}-\frac{(|p|^{2}-|q|^{2})^{2}}{8\sigma \beta|p-q|^{2}}}dq\lesssim\mathfrak{c}^{-\frac{1}{2}}. \tag{4.76}\]
Combining (4.59)-(4.63), (4.67) and (4.76), one has
\[\int_{\mathbb{R}^{3}}|k_{\mathfrak{c}2}(p,q)-k_{2}(p,q)|dq\lesssim\mathfrak{c} ^{-\frac{3}{8}},\quad p\in\mathbb{R}.\]
Therefore the proof is completed.
Denote \(\overline{\mathbf{M}}_{\mathfrak{c}}\) as the local Maxwellian in the rest frame where \((u^{0},u^{1},u^{2},u^{3})^{t}=(\mathfrak{c},0,0,0)^{t}\):
\[\overline{\mathbf{M}}_{\mathfrak{c}}(t,x,p):=\frac{n_{0}\gamma}{4\pi\mathfrak{ c}^{3}K_{2}(\gamma)}\exp\Big{\{}\frac{-\mathfrak{c}p^{0}}{T_{0}}\Big{\}}.\]
Define the third momentum
\[T^{\alpha\beta\gamma}[\mathbf{M}_{\mathfrak{c}}]:=\int_{\mathbb{R}^{3}}\frac {p^{\alpha}p^{\beta}p^{\gamma}}{p^{0}}\mathbf{M}_{\mathfrak{c}}dp,\quad \overline{T}^{\alpha\beta\gamma}:=\int_{\mathbb{R}^{3}}\frac{p^{\alpha}p^{ \beta}p^{\gamma}}{p^{0}}\overline{\mathbf{M}}_{\mathfrak{c}}dp.\]
We first give the expression of \(\overline{T}^{\alpha\beta\gamma}\) which can be proved directly and we omit the details here for brevity.
**Lemma 4.10**.: _Let \(i,j,k\in\{1,2,3\}\). For the third momentum \(\overline{T}^{\alpha\beta\gamma}\) which corresponds to \(T^{\alpha\beta\gamma}[\mathbf{M}_{\mathfrak{c}}]\) in the rest frame, there hold_
\[\overline{T}^{000} =\frac{n_{0}\mathfrak{c}^{2}\left[3K_{3}(\gamma)+\gamma K_{2}( \gamma)\right]}{\gamma K_{2}(\gamma)},\] \[\overline{T}^{0ii} =\overline{T}^{ii0}=\overline{T}^{i0i}=\frac{n_{0}\mathfrak{c}^{ 2}K_{3}(\gamma)}{\gamma K_{2}(\gamma)},\] \[\overline{T}^{\alpha\beta\gamma} =0,\quad\text{ if }(\alpha,\beta,\gamma)\neq(0,0,0),(0,i,i),(i,i,0),(i,0,i).\]
Recalling the Lorentz transformation in (2.4) and observing
\[T^{\alpha\beta\gamma}[\mathbf{M}_{\mathfrak{c}}]=\Lambda_{\alpha^{\prime}}^{ \alpha}\bar{\Lambda}_{\beta^{\prime}}^{\beta}\bar{\Lambda}_{\gamma^{\prime}}^ {\gamma}\overline{T}^{\alpha^{\prime}\beta^{\prime}\gamma^{\prime}},\]
we can obtain the expression of \(T^{\alpha\beta\gamma}[\mathbf{M}_{\mathfrak{c}}]\) from Lemma 4.10.
**Lemma 4.11**.: _For \(i,j,k\in\{1,2,3\}\), there hold_
\[T^{000}[\mathbf{M}_{\mathfrak{c}}]= \frac{n_{0}}{\varsigma\gamma K_{2}(\gamma)}\left[\left(3K_{3}( \gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{3}+3K_{3}(\gamma)u^{0} |u|^{2}\right],\] \[T^{00i}[\mathbf{M}_{\mathfrak{c}}]= \frac{n_{0}}{\varsigma\gamma K_{2}(\gamma)}\left[\left(5K_{3}( \gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}u_{i}+K_{3}(\gamma)|u |^{2}u_{i}\right],\] \[T^{0ij}[\mathbf{M}_{\mathfrak{c}}]= \frac{n_{0}}{\varsigma\gamma K_{2}(\gamma)}\left[\left(6K_{3}( \gamma)+\gamma K_{2}(\gamma)\right)u^{0}u_{i}u_{j}+\mathfrak{c}^{2}K_{3}(\gamma )u^{0}\delta_{ij}\right],\] \[T^{ijk}[\mathbf{M}_{\mathfrak{c}}]= \frac{n_{0}}{\varsigma\gamma K_{2}(\gamma)}\Big{[}\left(6K_{3}( \gamma)+\gamma K_{2}(\gamma)\right)u_{i}u_{j}u_{k}+\mathfrak{c}^{2}K_{3}( \gamma)\left(u_{i}\delta_{jk}+u_{j}\delta_{ik}+u_{k}\delta_{ij}\right)\Big{]}.\]
Since we have not found a direct reference which gives the orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\), so we present details of calculation in the appendix for completeness though it is somehow routine. Indeed, the orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\) for the relativistic Boltzmann equation has the form
\[\chi_{0}^{\mathfrak{c}}=\mathfrak{a}_{0}\sqrt{\mathbf{M}_{\mathfrak{c}}},\quad \chi_{j}^{\mathfrak{c}}=\frac{p_{j}-\mathfrak{a}_{j}}{\mathfrak{b}_{j}}\sqrt{ \mathbf{M}_{\mathfrak{c}}}\ (j=1,2,3),\quad\chi_{4}^{\mathfrak{c}}=\frac{p^{0}/\mathfrak{c}+\sum_{i=1}^{3} \lambda_{i}p_{i}+\mathfrak{c}}{\zeta}\sqrt{\mathbf{M}_{\mathfrak{c}}}, \tag{4.77}\]
where \(\mathfrak{a}_{\alpha}\) (\(\alpha=0,1,2,3\)), \(\mathfrak{b}_{j}\) (\(j=1,2,3\)), \(\lambda_{i}\) (\(i=1,2,3\)) and \(\mathfrak{e}\) are all given in the appendix.
In the following lemma, we shall show that, as \(\mathfrak{c}\to\infty\), the relativistic orthonormal basis in (4.77) converges to the following Newtonian orthonormal basis
\[\chi_{0}=\frac{1}{\sqrt{\rho}}\sqrt{\mu},\quad\chi_{j}=\frac{p_{j}-\mathfrak{ u}_{j}}{\sqrt{\rho\theta}}\sqrt{\mu}\ (j=1,2,3),\quad\chi_{4}=\frac{1}{\sqrt{6\rho}}\Big{(}\frac{|p-\mathfrak{u}|^{2} }{\theta}-3\Big{)}\sqrt{\mu}, \tag{4.78}\]
where \(\mu(t,x,p)\) is defined by (1.30).
**Lemma 4.12**.: _For any fixed \(p\in\mathbb{R}^{3}\), it holds that_
\[\lim_{\mathfrak{c}\to\infty}\chi_{\alpha}^{\mathfrak{c}}=\chi_{\alpha},\quad \alpha=0,1,\cdots,4.\]
Proof.: In view of Proposition 3.8, one has
\[\lim_{\mathfrak{c}\to\infty}\mathbf{M}_{\mathfrak{c}}(p)=\mu(p),\quad\lim_{ \mathfrak{c}\to\infty}I^{0}=\lim_{\mathfrak{c}\to\infty}\frac{n_{0}u^{0}}{ \mathfrak{c}}=\rho.\]
Then we have
\[\lim_{\mathfrak{c}\to\infty}\mathfrak{a}_{0}=\lim_{\mathfrak{c}\to\infty} \frac{1}{\sqrt{I^{0}}}=\frac{1}{\sqrt{\rho}},\]
which implies that \(\lim_{\mathfrak{c}\to\infty}\chi_{0}^{\mathfrak{c}}=\chi_{0}\).
For \(j=1,2,3\), a direct calculation shows that
\[\lim_{\mathfrak{c}\to\infty}T^{0j}=\lim_{\mathfrak{c}\to\infty}\frac{n_{0}}{ \mathfrak{c}}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}u^{0}u_{j}=\rho\mathfrak{u}_ {j} \tag{4.79}\]
and
\[\lim_{\mathfrak{c}\to\infty}T^{0jj}=\lim_{\mathfrak{c}\to\infty}\frac{n_{0}}{ \mathfrak{c}\gamma K_{2}(\gamma)}\left[\left(6K_{3}(\gamma)+\gamma K_{2}( \gamma)\right)u^{0}u_{j}^{2}+\mathfrak{c}^{2}K_{3}(\gamma)u^{0}\right]=\rho \mathfrak{u}_{j}^{2}+\rho\theta.\]
Thus one has
\[\lim_{\mathfrak{c}\to\infty}\mathfrak{a}_{j}=\lim_{\mathfrak{c}\to\infty} \frac{T^{0j}}{I_{0}}=\mathfrak{u}_{j}\]
and
\[\lim_{\mathfrak{c}\to\infty}\mathfrak{b}_{j}=\lim_{\mathfrak{c}\to\infty} \sqrt{T^{0jj}-\frac{(T^{0j})^{2}}{I^{0}}}=\sqrt{\rho\theta},\]
which implies that \(\lim_{\mathfrak{c}\to\infty}\chi_{j}^{\mathfrak{c}}=\chi_{j}\), \(j=1,2,3\).
The proof for \(\lim_{\mathfrak{c}\to\infty}\chi_{4}^{\mathfrak{c}}=\chi_{4}\) is much more complicated. It is clear that
\[\chi_{4}^{\mathfrak{c}} =\frac{p^{0}+\mathfrak{c}\mathfrak{e}+\mathfrak{c}\sum_{i=1}^{3} \lambda_{i}p_{i}}{\mathfrak{c}\zeta}\sqrt{\mathbf{M}_{\mathfrak{c}}}\] \[=\frac{(p^{0}+\mathfrak{c}\mathfrak{e})(p^{0}-\mathfrak{c} \mathfrak{e})+\mathfrak{c}(p^{0}-\mathfrak{c}\mathfrak{e})\sum_{i=1}^{3} \lambda_{i}p_{i}}{\mathfrak{c}\zeta(p^{0}-\mathfrak{c}\mathfrak{e})}\sqrt{ \mathbf{M}_{\mathfrak{c}}}\] \[=\frac{\mathrm{Num}}{\mathrm{Den}}\sqrt{\mathbf{M}_{\mathfrak{c}}}.\]
We first calculate the numerator. Denote \(\hat{A}(\gamma):=\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{6}{\gamma}-\frac{K_ {2}(\gamma)}{K_{3}(\gamma)}\). It follows from Lemma 2.1 that
\[\hat{A}(\gamma)=-\frac{1}{\gamma}+O(\gamma^{-2}).\]
Now we have
\[1+\mathfrak{e}=1+\frac{\frac{1}{\gamma}-\frac{(u^{0})^{2}}{\gamma^{T}Q_{0}} \frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\hat{A}(\gamma)\frac{|u|^{2}}{\gamma^{T}Q_{ 0}}}{\mathfrak{e}^{0}-\hat{A}(\gamma)\frac{w^{0}|u|^{2}}{\epsilon T_{0}}}\]
\[=\frac{\frac{u^{0}}{\mathfrak{c}}-\hat{A}(\gamma)\frac{u^{0}|u|^{2}}{ \mathfrak{c}T_{0}}+\frac{1}{\gamma}-\frac{(u^{0})^{2}}{\mathfrak{c}^{2}}\frac{K _{3}(\gamma)}{K_{2}(\gamma)}-\hat{A}(\gamma)\frac{|u|^{2}}{\gamma T_{0}}}{ \frac{u^{0}}{\mathfrak{c}}-\hat{A}(\gamma)\frac{u^{0}|u|^{2}}{\mathfrak{c}T_{ 0}}}\] \[=\frac{1-\frac{u^{0}}{\mathfrak{c}}\frac{K_{3}(\gamma)}{K_{2}( \gamma)}-\hat{A}(\gamma)\frac{|u|^{2}}{\mathfrak{c}T_{0}}+\frac{\mathfrak{c}}{ \gamma u^{0}}-\hat{A}(\gamma)\frac{c|u|^{2}}{\gamma u^{0}T_{0}}}{1-\hat{A}( \gamma)\frac{|u|^{2}}{\mathfrak{c}T_{0}}}\]
and
\[\lambda_{i}=\frac{\hat{A}(\gamma)\frac{(u^{0})^{2}}{\mathfrak{c}^{2}T_{0}}u_{i }}{\frac{u^{0}}{\mathfrak{c}}-\hat{A}(\gamma)\frac{u^{0}|u|^{2}}{\mathfrak{c }T_{0}}}=\frac{\big{(}-\frac{1}{\gamma}+O(\gamma^{-2})\big{)}\frac{(u^{0})^{2 }}{\mathfrak{c}^{2}T_{0}}u_{i}}{\frac{u^{0}}{\mathfrak{c}}-\hat{A}(\gamma) \frac{u^{0}|u|^{2}}{\mathfrak{c}T_{0}}},\quad i=1,2,3,\]
thus we obtain
\[\lim_{\mathfrak{c}\to\infty}\mathfrak{c}=\lim_{\mathfrak{c}\to \infty}\frac{\frac{1}{\gamma}-\frac{(u^{0})^{2}}{\mathfrak{c}^{2}}\frac{K_{3} (\gamma)}{K_{2}(\gamma)}-\hat{A}(\gamma)\frac{|u|^{2}}{\gamma T_{0}}}{\frac{ u^{0}}{\mathfrak{c}}-\hat{A}(\gamma)\frac{u^{0}|u|^{2}}{\mathfrak{c}T_{0}}}=-1,\] \[\lim_{\mathfrak{c}\to\infty}\gamma(1+\mathfrak{c})=-\frac{3}{2}+ \frac{|\mathfrak{u}|^{2}}{2\theta},\] \[\lim_{\mathfrak{c}\to\infty}\gamma\lambda_{i}=-\frac{\mathfrak{ u}_{i}}{\theta},\quad i=1,2,3,\]
where we used the fact that
\[\lim_{\mathfrak{c}\to\infty}\gamma\Big{(}1-\frac{u^{0}}{\mathfrak{c}}\frac{K_{3}( \gamma)}{K_{2}(\gamma)}\Big{)}=\lim_{\mathfrak{c}\to\infty}\gamma\Big{[}- \frac{u^{0}}{\mathfrak{c}}\Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\Big{)} -\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}\Big{]}=-\frac{5}{2}-\frac{| \mathfrak{u}|^{2}}{2\theta}.\]
Hence we get
\[\lim_{\mathfrak{c}\to\infty}\text{Num} =\lim_{\mathfrak{c}\to\infty}\Big{(}|p|^{2}+\gamma T_{0}(1- \mathfrak{c})(1+\mathfrak{c})+\gamma T_{0}\Big{(}\frac{p^{0}}{\mathfrak{c}}- \mathfrak{c}\Big{)}\sum_{i=1}^{3}\lambda_{i}p_{i}\Big{)}\] \[=|p|^{2}-3\theta+|\mathfrak{u}|^{2}-2\sum_{i=1}^{3}\mathfrak{u}_ {i}p_{i}=|p-\mathfrak{u}|^{2}-3\theta. \tag{4.80}\]
We next consider the denominator. Notice that
\[\text{Den}=\mathfrak{c}\zeta(p^{0}-\mathfrak{c}\mathfrak{c})=\mathfrak{c}^{2} \zeta\Big{(}\frac{p^{0}}{\mathfrak{c}}-\mathfrak{c}\Big{)}\]
and
\[\lim_{\mathfrak{c}\to\infty}\Big{(}\frac{p^{0}}{\mathfrak{c}}- \mathfrak{c}\Big{)}=2,\]
then we focus on the quantity \(\mathfrak{c}^{2}\zeta=T_{0}\sqrt{\gamma^{2}\zeta^{2}}\). By the expression of \(\zeta\) in the appendix, one has
\[\zeta^{2} =\Big{(}\sum_{i,j=1}^{3}\lambda_{i}\lambda_{j}T^{0ij}\Big{)}+ \Big{(}2\sum_{i=1}^{3}\lambda_{i}\mathfrak{c}T^{0i}+2\sum_{i=1}^{3}\frac{ \lambda_{i}}{\mathfrak{c}}T^{00i}\Big{)}+\Big{(}\frac{T^{000}}{\mathfrak{c}^{2} }+\mathfrak{c}^{2}I^{0}+2\frac{\mathfrak{c}}{\mathfrak{c}}T^{00}\Big{)}\] \[:=\mathcal{I}_{1}+\mathcal{I}_{2}+\mathcal{I}_{3}. \tag{4.81}\]
It is easy to see that
\[\lim_{\mathfrak{c}\to\infty}T^{0ij}=\lim_{\mathfrak{c}\to\infty}\frac{n_{0}}{ \mathfrak{c}\gamma K_{2}(\gamma)}\left[\left(6K_{3}(\gamma)+\gamma K_{2}(\gamma )\right)u^{0}u_{i}u_{j}+\mathfrak{c}^{2}K_{3}(\gamma)u^{0}\delta_{ij}\right] =\rho\mathfrak{u}_{i}\mathfrak{u}_{j}+\rho\theta\delta_{ij},\]
which yields that
\[\lim_{\mathfrak{c}\to\infty}\gamma^{2}\mathcal{I}_{1}=\lim_{\mathfrak{c}\to \infty}\sum_{i,j=1}^{3}(\gamma\lambda_{i})\cdot(\gamma\lambda_{j})T^{0ij}\]
\[=\sum_{i,j=1}^{3}\Big{(}-\frac{\mathfrak{u}_{i}}{\theta}\Big{)} \Big{(}-\frac{\mathfrak{u}_{j}}{\theta}\Big{)}(\rho\mathfrak{u}_{i}\mathfrak{u}_ {j}+\rho\theta\delta_{ij})\] \[=\frac{\rho|\mathfrak{u}|^{4}}{\theta^{2}}+\frac{\rho|\mathfrak{u }|^{2}}{\theta}. \tag{4.82}\]
We notice that
\[\mathfrak{e}T^{0i}+\frac{T^{00i}}{\mathfrak{c}}=(\mathfrak{e}+1)T^{0i}+\Big{(}\frac{T^{00i}}{\mathfrak{c}}-T^{0i}\Big{)}\]
and
\[\frac{T^{00i}}{\mathfrak{c}}-T^{0i} =\frac{n_{0}}{\mathfrak{c}^{2}\gamma K_{2}(\gamma)}\left[\left(5K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}u_{i}+K_{3}(\gamma)|u|^{2}u_{i}\right]-\frac{n_{0}}{\mathfrak{c}}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}u^{0}u_{i}\] \[=n_{0}u_{i}\Big{\{}\frac{5}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{(}\frac{u^{0}}{\mathfrak{c}}\Big{)}^{2}+\frac{|u|^{2}}{\mathfrak{c}^{2}\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}+\frac{u^{0}}{\mathfrak{c}}\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}+\frac{u^{0}}{\mathfrak{c}}\Big{(}1-\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{)}\Big{\}}. \tag{4.83}\]
Then it follows from (4.79) and (4.83) that
\[\lim_{\mathfrak{c}\to\infty}\gamma(\mathfrak{e}+1)T^{0i}=\rho\mathfrak{u}_{i}\Big{(}-\frac{3}{2}+\frac{|\mathfrak{u}|^{2}}{2\theta}\Big{)}\]
and
\[\lim_{\mathfrak{c}\to\infty}\gamma\Big{(}\frac{T^{00i}}{ \mathfrak{c}}-T^{0i}\Big{)}=\rho\mathfrak{u}_{i}\Big{(}5+0+\frac{|\mathfrak{u }|^{2}}{2\theta}-\frac{5}{2}\Big{)}=\rho\mathfrak{u}_{i}\Big{(}\frac{5}{2}+ \frac{|\mathfrak{u}|^{2}}{2\theta}\Big{)}.\]
Hence one obtains
\[\lim_{\mathfrak{c}\to\infty}\gamma^{2}\mathcal{I}_{2} =2\lim_{\mathfrak{c}\to\infty}\sum_{i=1}^{3}(\gamma\lambda_{i})\cdot\gamma\Big{(}\frac{T^{00i}}{\mathfrak{c}}-T^{0i}\Big{)}+2\lim_{\mathfrak{c}\to\infty}\sum_{i=1}^{3}(\gamma\lambda_{i})\cdot\gamma(\mathfrak{e}+1)T^{0i}\] \[=2\sum_{i=1}^{3}\Big{(}-\frac{\mathfrak{u}_{i}}{\theta}\Big{)}\cdot\Big{[}\rho\mathfrak{u}_{i}\Big{(}\frac{5}{2}+\frac{|\mathfrak{u}|^{2}}{2\theta}\Big{)}+\rho\mathfrak{u}_{i}\Big{(}-\frac{3}{2}+\frac{|\mathfrak{u}|^{2}}{2\theta}\Big{)}\Big{]}\] \[=-2\frac{\rho|\mathfrak{u}|^{4}}{\theta^{2}}-2\frac{\rho|\mathfrak{u}|^{2}}{\theta}. \tag{4.84}\]
We finally consider \(\gamma^{2}\mathcal{I}_{3}\). It holds that
\[\frac{\mathcal{I}_{3}}{n_{0}} =\frac{1}{\mathfrak{c}^{2}}\frac{T^{000}}{n_{0}}+\mathfrak{e}^{2}\frac{I^{0}}{n_{0}}+2\frac{\mathfrak{e}}{\mathfrak{c}}\frac{T^{00}}{n_{0}}\] \[=\frac{1}{\mathfrak{c}^{3}\gamma K_{2}(\gamma)}\left[\left(3K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{3}+3K_{3}(\gamma)u^{0}|u|^{2}\right]+\mathfrak{e}^{2}\frac{u^{0}}{\mathfrak{c}}\] \[\qquad+2\mathfrak{e}\Big{(}\frac{1}{\mathfrak{c}^{2}}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}(u^{0})^{2}-\frac{1}{\gamma}\Big{)}\] \[=\frac{3}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{(}\frac{u^{0}}{\mathfrak{c}}\Big{)}^{3}+\Big{(}\frac{u^{0}}{\mathfrak{c}}\Big{)}^{3}+\frac{3}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\frac{u^{0}|u|^{2}}{\mathfrak{c}^{3}}+\mathfrak{e}^{2}\frac{u^{0}}{\mathfrak{c}}+2\mathfrak{e}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{(}\frac{u^{0}}{\mathfrak{c}}\Big{)}^{2}-\mathfrak{e}\frac{2}{\gamma}\] \[=\frac{3}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\frac{u^{0}|u|^{2}}{\mathfrak{c}^{3}}+\frac{3}{\gamma}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\Big{(}\frac{u^{0}}{\mathfrak{c}}\Big{)}^{2}\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}+\mathfrak{e}\Big{(}2\frac{u^{0}}{\mathfrak{c}}+2\Big{)}\Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\Big{)}\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}\] \[\qquad+2\mathfrak{e}\Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1-\frac{5}{2\gamma}\Big{)}+\frac{u^{0}}{\mathfrak{c}}\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}^{2}+2\frac{u^{0}}{\mathfrak{c}}(1+\mathfrak{e})\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}\] \[\qquad+\frac{u^{0}}{\mathfrak{c}}(1+\mathfrak{e})^{2}+\frac{3}{\gamma}\Big{[}\Big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-1\Big{)}\Big{(}\frac{u^{0}}{\mathfrak{c}}\Big{)}^{2}+\Big{(}\frac{u^{0}}{\mathfrak{c}}+1\Big{)}\Big{(}\frac{u^{0}}{\mathfrak{c}}-1\Big{)}+(1+\mathfrak{e})\Big{]}.\]
Thus we have
\[\lim_{\mathfrak{c}\to\infty}\gamma^{2}\mathcal{I}_{3}=\frac{3}{2}\rho+\frac{ \rho|\mathfrak{u}|^{4}}{\theta^{2}}+\frac{\rho|\mathfrak{u}|^{2}}{\theta}. \tag{4.85}\]
Combining (4.81), (4.82), (4.84) and (4.85), we finally obtain
\[\lim_{\mathfrak{c}\to\infty}\gamma^{2}\zeta^{2}=\frac{3}{2}\rho.\]
Hence one obtains
\[\lim_{\mathfrak{c}\to\infty}\operatorname{Den}=\theta\sqrt{6\rho},\]
which, together with (4.80), yields that
\[\lim_{\mathfrak{c}\to\infty}\chi_{4}^{\mathfrak{c}}=\frac{|p-\mathfrak{u}|^{2 }-3\theta}{\theta\sqrt{6\rho}}=\chi_{4}.\]
Therefore the proof is completed.
With the above preparations, we shall prove the coercivity estimate for the linear operator \(\mathbf{L}_{\mathfrak{c}}\).
**Proposition 4.13** (Uniform coercivity estimate on \(\mathbf{L}_{\mathfrak{c}}\)).: There exists a positive constant \(\zeta_{0}>0\), which is independent of \(\mathfrak{c}\), such that
\[\langle\mathbf{L}_{\mathfrak{c}}g,g\rangle\geq\zeta_{0}\|\{\mathbf{I}- \mathbf{P}_{\mathfrak{c}}\}g\|_{\nu_{\mathfrak{c}}}^{2}\]
for any \(g\in L_{\nu}^{2}(\mathbb{R}^{3})\).
Proof.: It is clear that one only needs to show that there is a positive constant \(\zeta_{0}>0\), which is independent of \(\mathfrak{c}\), such that
\[\langle\mathbf{L}_{\mathfrak{c}}g,g\rangle\geq\zeta_{0}\|g\|_{\nu_{\mathfrak{ c}}}^{2}=\zeta_{0} \tag{4.86}\]
holds for any \(\mathfrak{c}\) and any \(g\in\mathcal{N}_{\mathfrak{c}}^{\perp}\) with \(\|g\|_{\nu_{\mathfrak{c}}}=1\).
For any given \(\mathfrak{c}\), the linearized Boltzmann collision operator \(\mathbf{L}_{\mathfrak{c}}\) satisfies the well-known hypocoercivity property (see [19] for instance), i.e., there exists a positive constant \(\alpha_{\mathfrak{c}}>0\), such that
\[\langle\mathbf{L}_{\mathfrak{c}}g,g\rangle\geq\alpha_{\mathfrak{c}}\|g\|_{\nu _{\mathfrak{c}}}^{2}=\alpha_{\mathfrak{c}} \tag{4.87}\]
for any \(g\in\mathcal{N}_{\mathfrak{c}}^{\perp}\) with \(\|g\|_{\nu_{\mathfrak{c}}}=1\). Denote
\[\zeta_{\mathfrak{c}}:=\inf_{\begin{subarray}{c}g\in\mathcal{N}_{\mathfrak{c} }^{\perp}\\ \|g\|_{\nu_{\mathfrak{c}}}=1\end{subarray}}\langle\mathbf{L}_{\mathfrak{c}}g,g\rangle. \tag{4.88}\]
It follows from (4.87) that \(\zeta_{\mathfrak{c}}\geq\alpha_{\mathfrak{c}}>0\) for any \(\mathfrak{c}\). To prove (4.86), it suffices to show that
\[\inf_{\mathfrak{c}\geq 1}\zeta_{\mathfrak{c}}>0. \tag{4.89}\]
We prove (4.89) by contradiction. Assume that (4.89) is not true, then there exists a sequence \(\{\zeta_{\mathfrak{c}_{n}}\}\) such that
\[\lim_{n\to\infty}\mathfrak{c}_{n}=\infty\quad\text{and}\quad\lim_{n\to\infty} \zeta_{\mathfrak{c}_{n}}=0. \tag{4.90}\]
For each \(n\), owing to (4.88), there exists \(g_{n}\in\mathcal{N}_{\mathfrak{c}_{n}}^{\perp}\) with \(\|g_{n}\|_{\nu_{\mathfrak{c}_{n}}}=1\), so that
\[\zeta_{\mathfrak{c}_{n}}\leq\langle\mathbf{L}_{\mathfrak{c}_{n}}g_{n},g_{n} \rangle<\zeta_{\mathfrak{c}_{n}}+\frac{1}{n},\]
which, together with (4.90), yields that
\[\lim_{n\to\infty}\langle\mathbf{L}_{\mathfrak{c}_{n}}g_{n},g_{n}\rangle=0. \tag{4.91}\]
It is clear that \(\{g_{n}\}_{n=1}^{\infty}\) is a bounded sequence in \(L^{2}(\mathbb{R}^{3})\). Since \(L^{2}\) is a Hilbert space, by the Eberlein-Smulian theorem we may extract a weakly convergent subsequence (still denoted \(g_{n}\) with an abuse of notation) such that \(g_{n}\rightharpoonup g\) in \(L^{2}\). Moreover, for any fixed \(N\geq 1\), one has
\[\chi_{\{|p|\leq N\}}\sqrt{\nu_{\mathfrak{c}_{n}}}g_{n}\rightharpoonup\chi_{\{|p| \leq N\}}\sqrt{\nu}g\quad\text{in }L^{2},\]
where \(\nu(p)=\lim_{\mathfrak{c}\to\infty}\nu_{\mathfrak{c}}(p)\). Hence, by the weak lower semi-continuity of the norm, for any fixed \(N\), we have
\[\|\chi_{\{|p|\leq N\}}\sqrt{\nu}g\|_{2}\leq\liminf_{n\to\infty}\| \chi_{\{|p|\leq N\}}\sqrt{\nu_{\mathfrak{c}_{n}}}g_{n}\|_{2}\leq 1,\]
which implies that
\[\|\sqrt{\nu}g\|_{2}\leq 1. \tag{4.92}\]
For later use, we denote
\[\mathbf{L}f:=\nu f-\mathbf{K}f,\]
where
\[\mathbf{K}f:=\int_{\mathbb{R}^{3}}k(p,q)f(q)dq=\int_{\mathbb{R}^{ 3}}[k_{2}(p,q)-k_{1}(p,q)]f(q)dq \tag{4.93}\]
with \(k_{1}(p,q)\) and \(k_{2}(p,q)\) defined in (4.42)-(4.43). We also denote \(\mathcal{N}\) as the null space of \(\mathbf{L}\), that is, \(\mathcal{N}:=\mathrm{span}\{\chi_{0},\chi_{1},\chi_{2},\chi_{3},\chi_{4}\}\). Clearly, we have
\[0\leq\left\langle\mathbf{L}_{\mathfrak{c}_{n}}g_{n},g_{n}\right\rangle =\left\|g_{n}\right\|_{\nu_{\mathfrak{c}_{n}}}^{2}-\left\langle( \mathbf{K}_{\mathfrak{c}_{n}}-\mathbf{K})g_{n},g_{n}\right\rangle-\left\langle \mathbf{K}g_{n},g_{n}\right\rangle\] \[=1-\left\langle(\mathbf{K}_{\mathfrak{c}_{n}}-\mathbf{K})g_{n},g _{n}\right\rangle-\left\langle\mathbf{K}g_{n},g_{n}\right\rangle. \tag{4.94}\]
Since \(\mathbf{K}\) is a compact operator on \(L^{2}\), it holds that
\[\lim_{n\to\infty}\|\mathbf{K}g_{n}-\mathbf{K}g\|_{2}=0.\]
Hence we have
\[\left\langle\mathbf{K}g_{n},g_{n}\right\rangle-\left\langle \mathbf{K}g,g\right\rangle=\left\langle\mathbf{K}g_{n}-\mathbf{K}g,g_{n} \right\rangle+\left\langle\mathbf{K}g,g_{n}-g\right\rangle\to 0,\quad n\to\infty.\]
It follows from Lemmas 4.8-4.9 that
\[\left\langle(\mathbf{K}_{\mathfrak{c}_{n}}-\mathbf{K})g_{n},g_{n}\right\rangle \to 0,\quad n\to\infty. \tag{4.95}\]
Combining (4.91), (4.94)-(4.95), we have
\[\left\langle\mathbf{K}g,g\right\rangle=1, \tag{4.96}\]
which, together with (4.92), yields that
\[0\leq\left\langle\mathbf{L}g,g\right\rangle=\|g\|_{\nu}^{2}- \left\langle\mathbf{K}g,g\right\rangle\leq 0.\]
Thus we have \(g\in\mathcal{N}\).
Next, we shall show that \(g\in\mathcal{N}^{\perp}\). Recall \(\chi_{\alpha}^{\mathfrak{c}_{n}}\), \(\chi_{\alpha}\) defined in (4.77) (with \(\mathfrak{c}\) replaced by \(\mathfrak{c}_{n}\)) and (4.78). Notice that
\[0=\left\langle g_{n},\chi_{\alpha}^{\mathfrak{c}_{n}}\right\rangle=\left\langle g _{n}-g,\chi_{\alpha}^{\mathfrak{c}_{n}}-\chi_{\alpha}\right\rangle+\left\langle g _{n}-g,\chi_{\alpha}\right\rangle+\left\langle g,\chi_{\alpha}^{\mathfrak{c}_ {n}}-\chi_{\alpha}\right\rangle+\left\langle g,\chi_{\alpha}\right\rangle,\quad \alpha=0,1,\cdots,4. \tag{4.97}\]
Using Lemma 4.12 and \(g_{n}\rightharpoonup g\) in \(L^{2}\), we take the limit \(n\to\infty\) in (4.97) to obtain
\[\left\langle g,\chi_{\alpha}\right\rangle=0,\quad\alpha=0,1,\cdots,4,\]
which implies that \(g\in\mathcal{N}^{\perp}\). Since we also have \(g\in\mathcal{N}\), one concludes that \(g=0\), which contradicts (4.96). Therefore the proof of Proposition 4.13 is completed.
### Uniform estimate on \(\mathbf{L}_{\mathfrak{c}}^{-1}\)
To apply the Hilbert expansion procedure, we need a uniform-in-\(\mathfrak{c}\) estimate on \(\mathbf{L}_{\mathfrak{c}}^{-1}\). The proof is inspired by [33].
**Lemma 4.14**.: _For any fixed \(0\leq\lambda<1\), it holds that_
\[\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|\mathbf{K}_{\mathfrak{c}}f(p)|\lesssim\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{2},\quad p\in\mathbb{R}^{3}.\]
Proof.: It follows from (2.2) and Lemma 4.3 that
\[\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|\mathbf{K}_{\mathfrak{c}1}f(p)| \lesssim\int_{\mathbb{R}^{3}}|p-q|\mathbf{M}_{\mathfrak{c}}^{\frac{1-\lambda}{2}}(p)\mathbf{M}_{\mathfrak{c}}^{\frac{1}{2}}(q)|f(q)|dq\] \[\lesssim\int_{\mathbb{R}^{3}}|p-q|e^{-(1-\lambda)\bar{c}_{1}|p|}e^{-\bar{c}_{1}|q|}|f(q)|dq\] \[\lesssim\int_{\mathbb{R}^{3}}e^{-\frac{\bar{c}_{1}}{2}|q|}|f(q)|dq\lesssim\|f\|_{2}.\]
Using (2.3), one has
\[\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|\mathbf{K}_{\mathfrak{c}2}f(p)|\] \[\leq\frac{\mathfrak{c}}{p^{0}}\int_{\mathbb{R}^{3}}\frac{dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}W\left(p,q\mid p^{\prime},q^{\prime}\right)\mathbf{M}_{\mathfrak{c}}^{\frac{1+\lambda}{2}}(q)\mathbf{M}_{\mathfrak{c}}^{\frac{1-\lambda}{2}}(q^{\prime})|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p^{\prime})f(p^{\prime})|\] \[=\frac{\mathfrak{c}}{p^{0}}\int_{\mathbb{R}^{3}}\frac{dq}{q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}\bar{s}\delta^{(4)}\left(p^{\mu}+p^{\prime\mu}-q^{\mu}-q^{\prime\mu}\right)\mathbf{M}_{\mathfrak{c}}^{\frac{1+\lambda}{2}}(p^{\prime})\mathbf{M}_{\mathfrak{c}}^{\frac{1-\lambda}{2}}(q^{\prime})|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(q)f(q)|\] \[\lesssim\int_{\mathbb{R}^{3}}\xi(p,q)|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(q)f(q)|dq, \tag{4.98}\]
where we exchanged \(p^{\prime}\) and \(q\) in the second-to-last step, with
\[\xi(p,q):=\frac{\mathfrak{c}}{p^{0}q^{0}}\int_{\mathbb{R}^{3}}\frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}\bar{s}\delta^{(4)}\left(p^{\mu}+p^{\prime\mu}-q^{\mu}-q^{\prime\mu}\right)\mathbf{M}_{\mathfrak{c}}^{\frac{1-\lambda}{2}}(p^{\prime})\mathbf{M}_{\mathfrak{c}}^{\frac{1-\lambda}{2}}(q^{\prime})\]
and
\[\bar{g}^{2}=g^{2}+\frac{1}{2}(p^{\mu}+q^{\mu})\cdot(p^{\prime\mu}+q^{\prime \mu}-p^{\mu}-q^{\mu})\,,\quad\bar{s}=\bar{g}^{2}+4\mathfrak{c}^{2}.\]
Applying a Lorentz transformation to \(\xi(p,q)\), one has
\[\xi(p,q)=\frac{\mathfrak{c}c_{0}^{1-\lambda}}{p^{0}q^{0}}\int_{\mathbb{R}^{3}} \frac{dq^{\prime}}{q^{\prime 0}}\int_{\mathbb{R}^{3}}\frac{dp^{\prime}}{p^{\prime 0}}s( \bar{p},p^{\prime})\delta^{(4)}\left(\bar{p}^{\mu}+p^{\prime\mu}-\bar{q}^{\mu} -q^{\prime\mu}\right)e^{-(1-\lambda)\frac{\mathfrak{c}(p^{\prime 0}+q^{0})}{2T_{0}}},\]
where \(s(\bar{p},p^{\prime})=-(\bar{p}^{\mu}+p^{\prime\mu})(\bar{p}_{\mu}+p^{\prime}_ {\mu})\). By similar arguments as in [47], one can show that
\[\xi(p,q) =\frac{\mathfrak{c}c_{0}^{1-\lambda}\pi s^{3/2}}{4gp^{0}q^{0}}\int_{0}^{\infty}\frac{y\left(1+\sqrt{y^{2}+1}\right)}{\sqrt{y^{2}+1}}e^{-\frac{1-\lambda}{2T_{0}}\mathfrak{c}(\bar{p}^{0}+\bar{q}^{0})\sqrt{y^{2}+1}}I_{0}\left(\frac{(1-\lambda)\mathfrak{c}|\bar{p}\times\bar{q}|}{gT_{0}}y\right)dy\] \[=\frac{\mathfrak{c}c_{0}^{1-\lambda}\pi s^{3/2}}{4gp^{0}q^{0}}\int_{0}^{\infty}\frac{y\left(1+\sqrt{y^{2}+1}\right)}{\sqrt{y^{2}+1}}e^{-\tilde{\boldsymbol{\ell}}\sqrt{y^{2}+1}}I_{0}\left(\tilde{\boldsymbol{j}}y\right)dy, \tag{4.99}\]
where
\[c_{0}=\frac{n_{0}}{4\pi\mathfrak{c}T_{0}K_{2}(\gamma)},\quad\tilde{\boldsymbol{\ell}}=(1-\lambda)\bar{\boldsymbol{\ell}},\quad\tilde{\boldsymbol{j}}=(1-\lambda)\bar{\boldsymbol{j}},\quad\bar{\boldsymbol{\ell}}=\mathfrak{c}\frac{\bar{p}^{0}+\bar{q}^{0}}{2T_{0}},\quad\bar{\boldsymbol{j}}=\mathfrak{c}\frac{|\bar{p}\times\bar{q}|}{gT_{0}}.\]
In view of (2.6)-(2.8), we can rewrite (4.99) as
\[\xi(p,q)=\frac{\mathfrak{c}c_{0}^{1-\lambda}\pi s^{3/2}}{4gp^{0}q^{0}}[J_{1}( \tilde{\boldsymbol{\ell}},\tilde{\boldsymbol{j}})+J_{2}(\tilde{\boldsymbol {\ell}},\tilde{\boldsymbol{j}})].\]
By similar arguments as in Lemma 4.3, one can prove
\[\xi(p,q)\lesssim\Big{[}\frac{1}{\mathfrak{c}}+\frac{1}{|p-q|}\Big{]}e^{-(1- \lambda)\bar{c}_{1}|p-q|}, \tag{4.100}\]
which yields that
\[\int_{\mathbb{R}^{3}}\xi^{2}(p,q)dq\lesssim\int_{\mathbb{R}^{3}}\Big{(}\frac{1}{ \mathfrak{c}^{2}}+\frac{1}{|p-q|^{2}}\Big{)}e^{-2(1-\lambda)\bar{c}_{1}|p-q|}dq< C<\infty, \tag{4.101}\]
where \(C\) is a positive constant independent of \(\mathfrak{c}\). Hence it follows from (4.98) and (4.101) that
\[\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|\mathbf{K}_{ \mathfrak{c}2}f(p)| \lesssim\Big{(}\int_{\mathbb{R}^{3}}\xi^{2}(p,q)dq\Big{)}^{\frac{ 1}{2}}\cdot\Big{(}\int_{\mathbb{R}^{3}}|\mathbf{M}_{\mathfrak{c}}^{-\frac{ \lambda}{2}}(q)f(q)|^{2}dq\Big{)}^{\frac{1}{2}}\] \[\lesssim\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{2}.\]
Therefore the proof of Lemma 4.14 is completed.
**Lemma 4.15**.: _For any fixed \(0\leq\lambda<1\), it holds that_
\[\Big{|}\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\mathbf{K}_{ \mathfrak{c}1}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle\Big{|} +\Big{|}\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\mathbf{K}_{ \mathfrak{c}2}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle\Big{|} \lesssim\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{2}^{2}.\]
Proof.: It follows from (2.2) and Lemma 4.3 that
\[\Big{|}\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}} \mathbf{K}_{\mathfrak{c}1}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f \rangle\Big{|}\] \[\lesssim\iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}}|p-q|\mathbf{M }_{\mathfrak{c}}^{\frac{1-\lambda}{2}}(p)\mathbf{M}_{\mathfrak{c}}^{\frac{1+ \lambda}{2}}(q)\cdot|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)f(p)| \cdot|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(q)f(q)|dpdq\] \[\lesssim\iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}}|p-q|e^{-(1- \lambda)\bar{c}_{1}|p|}e^{-(1+\lambda)\bar{c}_{1}|q|}\cdot|\mathbf{M}_{ \mathfrak{c}}^{-\frac{\lambda}{2}}(p)f(p)|\cdot|\mathbf{M}_{\mathfrak{c}}^{- \frac{\lambda}{2}}(q)f(q)|dpdq\] \[\lesssim\Big{(}\iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}}|p-q|e^ {-(1-\lambda)\bar{c}_{1}|p|}e^{-(1+\lambda)\bar{c}_{1}|q|}\cdot|\mathbf{M}_{ \mathfrak{c}}^{-\frac{\lambda}{2}}(p)f(p)|^{2}dpdq\Big{)}^{\frac{1}{2}}\] \[\qquad\times\Big{(}\iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}}|p-q |e^{-(1-\lambda)\bar{c}_{1}|p|}e^{-(1+\lambda)\bar{c}_{1}|q|}\cdot|\mathbf{M}_{ \mathfrak{c}}^{-\frac{\lambda}{2}}(q)f(q)|^{2}dpdq\Big{)}^{\frac{1}{2}}\] \[\lesssim\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{2}^ {2}.\]
Using (4.98) and (4.100), one has
\[\Big{|}\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}} \mathbf{K}_{\mathfrak{c}2}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f \rangle\Big{|} \lesssim\iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}}\xi(p,q)| \mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(q)f(q)|\cdot|\mathbf{M}_{ \mathfrak{c}}^{-\frac{\lambda}{2}}(p)f(p)|dpdq\] \[\lesssim\Big{(}\iint_{\mathbb{R}^{3}\times\mathbb{R}^{3}}\xi(p,q )\cdot|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)f(p)|^{2}dpdq\Big{)}^{ \frac{1}{2}}\] \[\lesssim\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{2}^ {2}.\]
Therefore the proof of Lemma 4.15 is completed.
**Lemma 4.16**.: _For any fixed \(0\leq\lambda<1\), there exists a positive constant \(C\) which is independent of \(\mathfrak{c}\), such that_
\[\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\mathbf{L}_{\mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle\geq\frac{1}{2}\| \mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu_{\mathfrak{c}}}^{2}-C\| f\|_{\nu_{\mathfrak{c}}}^{2}.\]
Proof.: For any \(r>0\), it follows from Lemmas 4.15 and 4.6 that
\[\Big{|}\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}} \mathbf{K}_{\mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f \rangle\Big{|} \lesssim\Big{\{}\int_{|p|\leq r}+\int_{|p|\geq r}\Big{\}}| \mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)f(p)|^{2}dp\] \[\lesssim\max\Big{\{}\frac{1}{1+r},\frac{1}{\mathfrak{c}}\Big{\}} \|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu_{\mathfrak{c}}}^{2}+C_{r }\|f\|_{\nu_{\mathfrak{c}}}^{2}.\]
Noting \(\mathfrak{c}\gg 1\) and taking \(r\) suitably large, we have
\[\Big{|}\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\mathbf{K}_{\mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle\Big{|}\leq\frac{1}{2}\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu_{\mathfrak{c}}}^{2}+C\|f\|_{\nu_{\mathfrak{c}}}^{2},\]
which, together with \(\mathbf{L}_{\mathfrak{c}}f=\nu_{\mathfrak{c}}f-\mathbf{K}_{\mathfrak{c}}f\), yields that
\[\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\mathbf{L}_ {\mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle =\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\nu_{ \mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle- \langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}\mathbf{K}_{\mathfrak{c} }f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\rangle\] \[\geq\frac{1}{2}\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f \|_{\nu_{\mathfrak{c}}}^{2}-C\|f\|_{\nu_{\mathfrak{c}}}^{2}.\]
Therefore the proof of Lemma 4.16 is completed.
**Proposition 4.17**.: For any fixed \(0\leq\lambda<1\), \(m>\frac{3}{2}\), suppose \(g\in\mathcal{N}_{\mathfrak{c}}^{\perp}\) and
\[\|(1+|p|)^{m}\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}g\|_{\infty}<\infty,\]
then it holds
\[|\mathbf{L}_{\mathfrak{c}}^{-1}g(p)|\lesssim\|(1+|p|)^{m}\mathbf{M}_{ \mathfrak{c}}^{-\frac{\lambda}{2}}g\|_{\infty}\cdot\mathbf{M}_{\mathfrak{c}}^{ \frac{\lambda}{2}}(p),\quad p\in\mathbb{R}^{3}, \tag{4.102}\]
where the constant is independent of \(\mathfrak{c}\).
Proof.: Let \(f=\mathbf{L}_{\mathfrak{c}}^{-1}g\in\mathcal{N}_{\mathfrak{c}}^{\perp}\), then we have \(g=\mathbf{L}_{\mathfrak{c}}f=\nu_{\mathfrak{c}}f-\mathbf{K}_{\mathfrak{c}}f\). Using Lemma 4.14, we get
\[\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|f(p)| \lesssim\nu_{\mathfrak{c}}^{-1}(p)\mathbf{M}_{\mathfrak{c}}^{- \frac{\lambda}{2}}(p)|g(p)|+\nu_{\mathfrak{c}}^{-1}(p)\mathbf{M}_{\mathfrak{c }}^{-\frac{\lambda}{2}}(p)|\mathbf{K}_{\mathfrak{c}}f(p)|\] \[\lesssim\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|g(p)| +\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|\mathbf{K}_{\mathfrak{c}} f(p)|\] \[\lesssim\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|g(p)| +\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu_{\mathfrak{c}}}. \tag{4.103}\]
By Proposition 4.13 and Lemma 4.16, we have
\[\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu_{\mathfrak{c}}}^{2} \lesssim\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}} \mathbf{L}_{\mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f \rangle+\|f\|_{\nu_{\mathfrak{c}}}^{2}\] \[\lesssim\langle\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}} \mathbf{L}_{\mathfrak{c}}f,\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f \rangle+\langle\mathbf{L}_{\mathfrak{c}}f,f\rangle\] \[\lesssim\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu _{\mathfrak{c}}}\cdot\|(1+|p|)^{m}\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2 }}g\|_{\infty}\cdot\Big{(}\int_{\mathbb{R}^{3}}\frac{1}{(1+|p|)^{2m}}dp \Big{)}^{\frac{1}{2}},\]
which, together with \(m>\frac{3}{2}\), yields that
\[\|\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}f\|_{\nu_{\mathfrak{c}}} \lesssim\|(1+|p|)^{m}\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}g\|_{ \infty}<\infty. \tag{4.104}\]
Combining (4.104) and (4.103), one has
\[\mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}(p)|f(p)|\lesssim\|(1+|p|)^{m} \mathbf{M}_{\mathfrak{c}}^{-\frac{\lambda}{2}}g\|_{\infty},\]
which concludes (4.102). Therefore the proof of Proposition 4.17 is completed.
## 5. Uniform-in-\(\mathfrak{c}\) estimates on the linear part of the Hilbert expansion
### Reformulation of \(F_{n+1}^{\mathfrak{c}}\)
For \(n=0,1,\cdots,2k-2\), we decompose \(F_{n+1}^{\mathfrak{c}}\) as
\[\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}=\mathbf{P}_{ \mathfrak{c}}\Big{(}\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{ \mathfrak{c}}}}\Big{)}+\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\Big{(}\frac{F_ {n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)},\]
where
\[\mathbf{P}_{\mathfrak{c}}\Big{(}\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_ {\mathfrak{c}}}}\Big{)}=\Big{[}a_{n+1}+b_{n+1}\cdot p+c_{n+1}\frac{p^{0}}{ \mathfrak{c}}\Big{]}\sqrt{\mathbf{M}_{\mathfrak{c}}}. \tag{5.1}\]
Using (1.13)-(1.14) and Lemma 4.11, by tedious calculations, one has
\[\int_{\mathbb{R}^{3}}F_{n+1}^{\mathfrak{c}}dp =\int_{\mathbb{R}^{3}}\Big{[}a_{n+1}+b_{n+1}\cdot p+c_{n+1}\frac{p^{0}}{\mathfrak{c}}\Big{]}\mathbf{M}_{\mathfrak{c}}dp\] \[=\frac{n_{0}u^{0}}{\mathfrak{c}}a_{n+1}+\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u^{0}(u\cdot b_{n+1})+\frac{e_{0}(u^{0})^{2}+P_{0}|u|^{2}}{\mathfrak{c}^{4}}c_{n+1},\] \[\int_{\mathbb{R}^{3}}\frac{p_{j}p}{p^{0}}F_{n+1}^{\mathfrak{c}}dp =\int_{\mathbb{R}^{3}}\frac{p_{j}p}{p^{0}}\left[a_{n+1}+b_{n+1}\cdot p+c_{n+1}\frac{p^{0}}{\mathfrak{c}}\right]\mathbf{M}_{\mathfrak{c}}dp+\int_{\mathbb{R}^{3}}\frac{p_{j}p}{p^{0}}\sqrt{\mathbf{M}_{\mathfrak{c}}}\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\left(\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\right)dp\] \[=\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u_{j}ua_{n+1}+\frac{n_{0}}{\mathfrak{c}\gamma K_{2}(\gamma)}\left(6K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)u_{j}u\left[\left(u\cdot b_{n+1}\right)+\frac{u^{0}}{\mathfrak{c}}c_{n+1}\right]\] \[\quad\quad+\mathbf{e}_{j}a_{n+1}\frac{P_{0}}{\mathfrak{c}}+\frac{\mathfrak{c}n_{0}K_{3}(\gamma)}{\gamma K_{2}(\gamma)}\left(ub_{n+1,j}+u_{j}b_{n+1}\right)\] \[\quad\quad+\mathbf{e}_{j}\frac{\mathfrak{c}n_{0}K_{3}(\gamma)}{\gamma K_{2}(\gamma)}\left[\left(u\cdot b_{n+1}\right)+\frac{u^{0}}{\mathfrak{c}}c_{n+1}\right]+\int_{\mathbb{R}^{3}}\frac{p_{j}p}{p^{0}}\sqrt{\mathbf{M}_{\mathfrak{c}}}\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\left(\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\right)dp,\] \[\int_{\mathbb{R}^{3}}\hat{p}_{j}F_{n+1}^{\mathfrak{c}}dp =\int_{\mathbb{R}^{3}}\hat{p}_{j}\Big{[}a_{n+1}+b_{n+1}\cdot p+c_{n+1}\frac{p^{0}}{\mathfrak{c}}\Big{]}\mathbf{M}_{\mathfrak{c}}dp+\int_{\mathbb{R}^{3}}\hat{p}_{j}\sqrt{\mathbf{M}_{\mathfrak{c}}}\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\left(\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\right)dp\] \[=n_{0}u_{j}a_{n+1}+\frac{e_{0}+P_{0}}{\mathfrak{c}^{2}}u_{j}\left(u\cdot b_{n+1}\right)+P_{0}b_{n+1,j}\] \[\quad\quad+\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u^{0}u_{j}c_{n+1}+\int_{\mathbb{R}^{3}}\hat{p}_{j}\sqrt{\mathbf{M}_{\mathfrak{c}}}\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\left(\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\right)dp,\] \[\int_{\mathbb{R}^{3}}p_{j}F_{n+1}^{\mathfrak{c}}dp =\int_{\mathbb{R}^{3}}p_{j}\Big{[}a_{n+1}+b_{n+1}\cdot p+c_{n+1}\frac{p^{0}}{\mathfrak{c}}\Big{]}\mathbf{M}_{\mathfrak{c}}dp\] \[=\frac{n_{0}}{\mathfrak{c}\gamma K_{2}(\gamma)}\left[\left(6K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)u^{0}u_{j}\left(u\cdot b_{n+1}\right)+\mathfrak{c}^{2}K_{3}(\gamma)u^{0}b_{n+1,j}\right]\] \[\quad\quad+\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u^{0}u_{j}a_{n+1},\] \[\int_{\mathbb{R}^{3}}p^{0}F_{n+1}^{\mathfrak{c}}dp =\int_{\mathbb{R}^{3}}p^{0}\left[a_{n+1}+b_{n+1}\cdot p+c_{n+1}\frac{p^{0}}{\mathfrak{c}}\right]\mathbf{M}_{\mathfrak{c}}dp\] \[=\frac{n_{0}}{\mathfrak{c}\gamma K_{2}(\gamma)}\left[\left(5K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}+K_{3}(\gamma)|u|^{2}\right](u\cdot b_{n+1})\] \[\quad\quad+\frac{n_{0}}{\mathfrak{c}^{2}\gamma K_{2}(\gamma)}\left[\left(3K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}+3K_{3}(\gamma)|u|^{2}\right]u^{0}c_{n+1}\] \[\quad\quad+\frac{e_{0}\left(u^{0}\right)^{2}+P_{0}|u|^{2}}{\mathfrak{c}^{3}}a_{n+1},\]
where \(\mathbf{e}_{j}\)\((j=1,2,3)\) are the unit base vectors in \(\mathbb{R}^{3}\).
Next, we shall derive the equation for \((a_{n+1},b_{n+1},c_{n+1})\). Notice that
\[\partial_{t}F_{n+1}^{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}F_{n+1}^{\mathfrak{c}}=\sum_{\begin{subarray}{c}i+j=n+2\\ i,j\geq 0\end{subarray}}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},F_{j}^{\mathfrak{c}})\ (\text{or}\ \sum_{\begin{subarray}{c}i+j=n+2\\ i,j\geq 1\end{subarray}}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},F_{j}^{\mathfrak{c}})\ \text{when}\ n=2k-2). \tag{5.2}\]
Integrating (5.2) with respect to \(p\), we have
\[\partial_{t}\left(\frac{n_{0}u^{0}}{\mathfrak{c}}a_{n+1}+\frac{e_ {0}+P_{0}}{\mathfrak{c}^{3}}u^{0}\left(u\cdot b_{n+1}\right)+\frac{e_{0}\left(u^{0 }\right)^{2}+P_{0}|u|^{2}}{\mathfrak{c}^{4}}c_{n+1}\right)\] \[\quad\quad+\nabla_{x}\cdot\left(n_{0}ua_{n+1}+\frac{e_{0}+P_{0}}{ \mathfrak{c}^{2}}u\left(u\cdot b_{n+1}\right)+P_{0}b_{n+1}+\frac{e_{0}+P_{0}}{ \mathfrak{c}^{3}}u^{0}uc_{n+1}\right)\]
\[+\nabla_{x}\cdot\int_{\mathbb{R}^{3}}\hat{p}\sqrt{\mathbf{M}_{\epsilon}}\{ \mathbf{I}-\mathbf{P}_{\epsilon}\}\left(\frac{F_{n+1}^{\epsilon}}{\sqrt{\mathbf{M }_{\epsilon}}}\right)dp=0. \tag{5.3}\]
Multiplying (5.2) by \(p_{j}\) and integrating over \(\mathbb{R}^{3}\), one gets
\[\partial_{t}\left(\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}}u^{0}u_{j} a_{n+1}+\frac{n_{0}}{\mathfrak{c}\gamma K_{2}(\gamma)}\left[\left(6K_{3}(\gamma)+ \gamma K_{2}(\gamma)\right)u^{0}u_{j}\left(u\cdot b_{n+1}\right)+\mathfrak{c}^ {2}K_{3}(\gamma)u^{0}b_{n+1,j}\right]\right.\] \[\qquad+\frac{n_{0}}{\mathfrak{c}^{2}\gamma K_{2}(\gamma)}\left[ \left(5K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}+K_{3}( \gamma)|u|^{2}\right]u_{j}c_{n+1}\right)\] \[\qquad+\nabla_{x}\cdot\left(\frac{e_{0}+P_{0}}{\mathfrak{c}^{2}} u_{j}ua_{n+1}+\frac{n_{0}}{\gamma K_{2}(\gamma)}\left(6K_{3}(\gamma)+\gamma K _{2}(\gamma)\right)u_{j}u\left[\left(u\cdot b_{n+1}\right)+\frac{u^{0}}{ \mathfrak{c}}c_{n+1}\right]\right)\] \[\qquad+\partial_{x_{j}}\left(P_{0}a_{n+1}\right)+\nabla_{x}\cdot \left[\frac{\mathfrak{c}^{2}n_{0}K_{3}(\gamma)}{\gamma K_{2}(\gamma)}\left(ub _{n+1,j}+u_{j}b_{n+1}\right)\right]\] \[\qquad+\partial_{x_{j}}\left(\frac{\mathfrak{c}^{2}n_{0}K_{3}( \gamma)}{\gamma K_{2}(\gamma)}\left[\left(u\cdot b_{n+1}\right)+\frac{u^{0}}{ \mathfrak{c}}c_{n+1}\right]\right)+\nabla_{x}\cdot\int_{\mathbb{R}^{3}}p_{j} \hat{p}\sqrt{\mathbf{M}_{\epsilon}}\{\mathbf{I}-\mathbf{P}_{\epsilon}\}\left( \frac{F_{n+1}^{\epsilon}}{\sqrt{\mathbf{M}_{\epsilon}}}\right)dp=0 \tag{5.4}\]
for \(j=1,2,3\) with \(b_{n+1}=\left(b_{n+1,1},b_{n+1,2},b_{n+1,3}\right)^{t}\).
Multiplying (5.2) by \(\frac{p^{0}}{\mathfrak{c}}\) and integrating over \(\mathbb{R}^{3}\), one obtains that
\[\partial_{t}\left(\frac{e_{0}\left(u^{0}\right)^{2}+P_{0}|u|^{2}} {\mathfrak{c}^{4}}a_{n+1}+\frac{n_{0}}{\mathfrak{c}^{2}\gamma K_{2}(\gamma)} \left[\left(5K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}+K _{3}(\gamma)|u|^{2}\right]\left(u\cdot b_{n+1}\right)\right.\] \[\qquad+\frac{n_{0}}{\mathfrak{c}^{3}\gamma K_{2}(\gamma)}\left[ \left(3K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}+3K_{3}( \gamma)|u|^{2}\right]u^{0}c_{n+1}\right)\] \[\qquad+\nabla_{x}\cdot\left(\frac{e_{0}+P_{0}}{\mathfrak{c}^{3}} u^{0}ua_{n+1}+\frac{n_{0}}{\mathfrak{c}\gamma K_{2}(\gamma)}\left[\left(6K_{3}( \gamma)+\gamma K_{2}(\gamma)\right)u^{0}u\left(u\cdot b_{n+1}\right)+ \mathfrak{c}^{2}K_{3}(\gamma)u^{0}b_{n+1}\right]\right.\] \[\qquad+\frac{n_{0}}{\mathfrak{c}^{2}\gamma K_{2}(\gamma)}\left[ \left(5K_{3}(\gamma)+\gamma K_{2}(\gamma)\right)\left(u^{0}\right)^{2}+K_{3}( \gamma)|u|^{2}\right]uc_{n+1}\right)=0. \tag{5.5}\]
After a tedious computation, we can rewrite (5.3)-(5.5) into the following linear symmetric hyperbolic system:
\[\mathbf{A}_{0}\partial_{t}U_{n+1}+\sum_{i=1}^{3}\mathbf{A}_{i} \partial_{i}U_{n+1}+\mathbf{B}U_{n+1}=\mathbf{S}_{n+1}, \tag{5.6}\]
where
\[U_{n+1}=\begin{pmatrix}a_{n+1}\\ b_{n+1}\\ c_{n+1}\end{pmatrix},\quad\mathbf{S}_{n+1}=\begin{pmatrix}-\nabla_{x}\cdot\int_{\mathbb{R}^{3}}\hat{p}\sqrt{\mathbf{M}_{\mathfrak{c}}}\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\Big{(}\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)}dp\\ -\nabla_{x}\cdot\int_{\mathbb{R}^{3}}p\otimes\hat{p}\sqrt{\mathbf{M}_{\mathfrak{c}}}\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\Big{(}\frac{F_{n+1}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)}dp\\ 0\end{pmatrix}.\]
The matrices \(\mathbf{A}_{0},\mathbf{A}_{i}\) (\(i=1,2,3\)) and \(\mathbf{B}\) depend only on the smooth relativistic Euler solution \((n_{0},u,T_{0})\). To express these matrices, we denote
\[h(t,x):=\frac{e_{0}+P_{0}}{n_{0}},\quad h_{1}(t,x):=\frac{n_{0}} {\gamma K_{2}(\gamma)}\left(6K_{3}(\gamma)+\gamma K_{2}(\gamma)\right),\quad h _{2}(t,x):=\frac{n_{0}K_{3}(\gamma)}{\gamma K_{2}(\gamma)}.\]
Then the matrices \(\mathbf{A}_{0},\mathbf{A}_{i},(i=1,2,3)\) in (5.6) are
\[\mathbf{A}_{0}=\left(\begin{array}{ccc}\frac{n_{0}u^{0}}{\mathfrak{c}}&\frac{n_{0}u^{0}hu^{t}}{\mathfrak{c}^{3}}&\frac{e_{0}\left(u^{0}\right)^{2}+P_{0}|u|^{2}}{\mathfrak{c}^{4}}\\ \frac{n_{0}u^{0}hu}{\mathfrak{c}^{3}}&\left(\frac{h_{1}}{\mathfrak{c}}u\otimes u+\mathfrak{c}h_{2}\mathbf{I}\right)u^{0}&\left(\frac{h_{1}}{\mathfrak{c}^{2}}\left(u^{0}\right)^{2}-h_{2}\right)u\\ \frac{e_{0}\left(u^{0}\right)^{2}+P_{0}|u|^{2}}{\mathfrak{c}^{4}}&\left(\frac{h_{1}}{\mathfrak{c}^{2}}\left(u^{0}\right)^{2}-h_{2}\right)u^{t}&\left(\frac{h_{1}}{\mathfrak{c}^{3}}\left(u^{0}\right)^{2}-\frac{3h_{2}}{\mathfrak{c}}\right)u^{0}\end{array}\right)\]
and
\[\mathbf{A}_{i}=\left(\begin{array}{ccc}n_{0}u_{i}&\frac{1}{\mathfrak{c}^{2}}n_{0}hu_{i}u^{t}+P_{0}\mathbf{e}_{i}^{t}&\frac{1}{\mathfrak{c}^{3}}n_{0}hu^{0}u_{i}\\ \frac{1}{\mathfrak{c}^{2}}n_{0}hu_{i}u+P_{0}\mathbf{e}_{i}&h_{1}u_{i}u\otimes u+\mathfrak{c}^{2}h_{2}\left(u_{i}\mathbf{I}+\tilde{\mathbf{A}}_{i}\right)&\left(\frac{h_{1}}{\mathfrak{c}}u_{i}u+\mathfrak{c}h_{2}\mathbf{e}_{i}\right)u^{0}\\ \frac{1}{\mathfrak{c}^{3}}n_{0}hu^{0}u_{i}&\left(\frac{h_{1}}{\mathfrak{c}}u_{i}u^{t}+\mathfrak{c}h_{2}\mathbf{e}_{i}^{t}\right)u^{0}&\left(\frac{h_{1}}{\mathfrak{c}^{2}}\left(u^{0}\right)^{2}-h_{2}\right)u_{i}\end{array}\right),\]
where
\[\left(\tilde{\mathbf{A}}_{i}\right)_{jk}=\delta_{ij}u_{k}+\delta_{ik}u_{j}, \quad 1\leq j,k\leq 3.\]
The matrix \(\mathbf{B}=(b_{ij})\) has the form
\[b_{11}=0,\quad(b_{12},b_{13},b_{14})=\frac{n_{0}u^{0}}{\mathfrak{c}}\partial _{t}\Big{(}\frac{hu^{t}}{\mathfrak{c}^{2}}\Big{)}+n_{0}u^{t}\Big{[}\nabla_{x} \Big{(}\frac{hu}{\mathfrak{c}^{2}}\Big{)}\Big{]}^{t}+(\nabla_{x}P_{0})^{t},\]
\[b_{15}=\frac{n_{0}u^{0}}{\mathfrak{c}^{2}}\partial_{t}\Big{(} \frac{hu^{0}}{\mathfrak{c}^{2}}\Big{)}+\frac{n_{0}u}{\mathfrak{c}}\cdot\nabla _{x}\Big{(}\frac{hu^{0}}{\mathfrak{c}^{2}}\Big{)}-\partial_{t}\Big{(}\frac{P_ {0}}{\mathfrak{c}^{2}}\Big{)},\] \[(b_{21},b_{31},b_{41})=\frac{n_{0}u^{0}}{\mathfrak{c}}\partial_{t }\Big{(}\frac{hu}{\mathfrak{c}^{2}}\Big{)}+\nabla_{x}P_{0}+\nabla_{x}\Big{(} \frac{hu}{\mathfrak{c}^{2}}\Big{)}n_{0}u,\] \[(b_{j2},b_{j3},b_{j4})=\frac{n_{0}u^{0}}{\mathfrak{c}}\partial_{t }\Big{[}\frac{h_{1}}{n_{0}}u_{j}u^{t}+\frac{\mathfrak{c}^{2}h_{2}}{n_{0}} \mathbf{e}_{j}^{t}\Big{]}+n_{0}(u\cdot\nabla_{x})\Big{(}\frac{h_{1}}{n_{0}}u_{ j}u^{t}\Big{)}+n_{0}u^{t}\nabla_{x}\Big{(}\frac{\mathfrak{c}^{2}h_{2}}{n_{0}} \Big{)}\mathbf{e}_{j}^{t}\] \[\qquad\qquad+\Big{[}\nabla_{x}(\mathfrak{c}^{2}h_{2}u_{j})\Big{]} ^{t}+\partial_{x_{j}}(\mathfrak{c}^{2}h_{2}u^{t}),\] \[b_{j5}=-\partial_{t}(h_{2}u_{j})+\frac{n_{0}u^{0}}{\mathfrak{c}^ {2}}\partial_{t}\Big{(}\frac{h_{1}}{n_{0}}u_{j}u^{0}\Big{)}+\frac{n_{0}}{ \mathfrak{c}}u^{t}\nabla_{x}\Big{(}\frac{h_{1}}{n_{0}}u_{j}u^{0}\Big{)}+ \partial_{x_{j}}(\mathfrak{c}h_{2}u^{0}),\] \[b_{51}=\frac{n_{0}u^{0}}{\mathfrak{c}^{2}}\partial_{t}\Big{(} \frac{hu^{0}}{\mathfrak{c}^{2}}\Big{)}+\frac{n_{0}u}{\mathfrak{c}}\cdot \nabla_{x}\Big{(}\frac{hu^{0}}{\mathfrak{c}^{2}}\Big{)}-\partial_{t}\Big{(} \frac{P_{0}}{\mathfrak{c}^{2}}\Big{)},\] \[(b_{52},b_{53},b_{54})=\frac{n_{0}u^{0}}{\mathfrak{c}^{2}} \partial_{t}\Big{(}\frac{h_{1}}{n_{0}}u^{0}u^{t}\Big{)}-\partial_{t}(h_{2}u^{ t})+\frac{n_{0}}{\mathfrak{c}}u^{t}\Big{[}\nabla_{x}\Big{(}\frac{h_{1}}{n_{0}}u^{0}u \Big{)}\Big{]}^{t}+\Big{(}\nabla(\mathfrak{c}h_{2}u^{0})\Big{)}^{t},\] \[b_{55}=\frac{n_{0}u^{0}}{\mathfrak{c}^{3}}\partial_{t}\Big{(} \frac{h_{1}}{n_{0}}(u^{0})^{2}-3\mathfrak{c}^{2}\frac{h_{2}}{n_{0}}|u|^{2} \Big{)}+\frac{n_{0}}{\mathfrak{c}^{2}}u\cdot\nabla_{x}\Big{(}\frac{h_{1}}{n_{0} }(u^{0})^{2}-3\mathfrak{c}^{2}\frac{h_{2}}{n_{0}}|u|^{2}\Big{)}+\nabla_{x}\cdot(2 h_{2}u).\]
Next, we prove the positivity of \(\mathbf{A}_{0}\). Set \(\phi(\gamma):=\frac{K_{3}(\gamma)}{K_{2}(\gamma)}\). A direct calculation shows that
\[\det(\mathbf{A}_{0})_{1\times 1}\geq\frac{n_{0}u^{0}}{\mathfrak{c}}>0, \quad\det(\mathbf{A}_{0})_{2\times 2}\geq\Big{(}\frac{n_{0}u^{0}}{\mathfrak{c}} \Big{)}^{2}\frac{\mathfrak{c}^{2}\phi}{\gamma}>0,\] \[\det(\mathbf{A}_{0})_{3\times 3}\geq\Big{(}\frac{n_{0}u^{0}}{ \mathfrak{c}}\Big{)}^{3}\Big{(}\frac{\mathfrak{c}^{2}\phi}{\gamma}\Big{)}^{2}>0, \quad\det(\mathbf{A}_{0})_{4\times 4}\geq\Big{(}\frac{n_{0}u^{0}}{\mathfrak{c}} \Big{)}^{4}\Big{(}\frac{\mathfrak{c}^{2}\phi}{\gamma}\Big{)}^{3}>0,\]
and
\[\det\mathbf{A}_{0}=\Big{(}\frac{n_{0}u^{0}}{\mathfrak{c}}\Big{)}^{5} \Big{(}\frac{\mathfrak{c}^{2}\phi}{\gamma}\Big{)}^{3}(u^{0})^{-2}\Big{\{}|u|^{2} \mathfrak{c}^{2}(\Psi-\frac{\Psi}{\gamma\phi}-\frac{\phi}{\gamma})+\mathfrak{c}^{4}( \Psi-\frac{1}{\gamma^{2}}-\frac{\phi}{\gamma})\Big{\}}, \tag{5.7}\]
where \(\Psi:=1+\frac{6}{\gamma}\phi-\phi^{2}\). To prove the positivity of (5.7), we use [44, Proposition 10] to get
\[\phi^{2}-\frac{5}{\gamma}\phi+\frac{1}{\gamma^{2}}-1<0, \tag{5.8}\]
which yields that
\[\Psi-\frac{1}{\gamma^{2}}-\frac{\phi}{\gamma}=1+\frac{6}{\gamma}\phi-\phi^{2}- \frac{1}{\gamma^{2}}-\frac{\phi}{\gamma}=-(\phi^{2}-\frac{5}{\gamma}\phi+\frac{1 }{\gamma^{2}}-1)>0. \tag{5.9}\]
A direct calculation shows that
\[\mathfrak{h}:=\Psi-\frac{\Psi}{\gamma\phi}-\frac{\phi}{\gamma}=\frac{1}{\phi}\Big{(}\phi\Psi-\frac{\Psi}{\gamma}-\frac{\phi^{2}}{\gamma}\Big{)}>0,\]
which, together with (5.7) and (5.9), yields that
\[\det\mathbf{A}_{0}\geq\Big{(}\frac{n_{0}u^{0}}{\mathfrak{c}}\Big{)}^{5}\Big{(} \frac{\mathfrak{c}^{2}\phi}{\gamma}\Big{)}^{3}\frac{\mathfrak{c}^{4}}{(u^{0})^{ 2}}\{-(\phi^{2}-\frac{5}{\gamma}\phi+\frac{1}{\gamma^{2}}-1)\}>0.\]
Therefore, by Sylvester's criterion, \(\mathbf{A}_{0}\) is a positive definite matrix.
### Uniform-in-\(\mathfrak{c}\) estimates on \(F_{n}^{\mathfrak{c}}\)
**Proposition 5.1**.: Let the local relativistic Maxwellian \(F_{0}^{\mathfrak{c}}=\mathbf{M}_{\mathfrak{c}}(n_{0},u,T_{0};p)\) be as in (1.12) formed by \((n_{0}(t,x),u(t,x),T_{0}(t,x))\) which is a smooth solution to the relativistic Euler equations (3.1) on a time interval \([0,T]\times\mathbb{R}^{3}\). Then we can construct the smooth terms \(F_{1}^{\mathfrak{c}},\dots,F_{2k-1}^{\mathfrak{c}}\) of the Hilbert expansion in \((t,x)\in[0,T]\times\mathbb{R}^{3}\) such that, for any \(0<\lambda<1\), the following estimates hold
\[|F_{n}^{\mathfrak{c}}(t,x,p)|\leq C(\lambda)\mathbf{M}_{\mathfrak{c}}^{\lambda }(n_{0}(t,x),u(t,x),T_{0}(t,x);p),\quad n=1,2,\dots,2k-1 \tag{5.11}\]
and
\[|\partial^{m}F_{n}^{\mathfrak{c}}(t,x,p)|\leq C(\lambda)\mathbf{M}_{ \mathfrak{c}}^{\lambda}(n_{0}(t,x),u(t,x),T_{0}(t,x);p),\quad n=1,2,\dots,2k-1,\quad m\geq 1, \tag{5.12}\]
where \(\partial^{m}:=\partial_{t,x}^{m}\). We emphasize that the constants in (5.11) and (5.12) are independent of \(\mathfrak{c}\).
Proof.: It is noted that \(\mathbf{A}_{0}\), \(\mathbf{A}_{i}\) and \(\mathbf{B}\) in (5.6) depend only on the smooth functions \(n_{0}(t,x)\), \(u(t,x)\) and \(T_{0}(t,x)\). Denote \(\psi_{1}:=\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\Big{(}\frac{F_{1}^{ \mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)}\), then one has
\[F_{1}^{\mathfrak{c}}=\Big{(}a_{1}+b_{1}\cdot p+c_{1}\frac{p^{0}}{\mathfrak{c }}\Big{)}\mathbf{M}_{\mathfrak{c}}+\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1},\]
which yields that
\[\partial F_{1}^{\mathfrak{c}}=\Big{(}\partial a_{1}+\partial b_{1}\cdot p+ \partial c_{1}\frac{p^{0}}{\mathfrak{c}}\Big{)}\mathbf{M}_{\mathfrak{c}}+ \Big{(}a_{1}+b_{1}\cdot p+c_{1}\frac{p^{0}}{\mathfrak{c}}\Big{)}\partial \mathbf{M}_{\mathfrak{c}}+\partial\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1}+ \sqrt{\mathbf{M}_{\mathfrak{c}}}\partial\psi_{1}, \tag{5.13}\]
where \(\partial=\partial_{t}\) or \(\partial=\partial_{x_{j}}\) for \(j=1,2,3\). A direct calculation shows that
\[|\partial\mathbf{M}_{\mathfrak{c}}|\leq C\mathbf{M}_{\mathfrak{c}}^{1-},\]
where \(C\) depends on \(\|\nabla_{t,x}(n_{0},u,T_{0})\|_{\infty}\). We denote
\[g_{1}:=Q_{\mathfrak{c}}(\mathbf{M}_{\mathfrak{c}},\sqrt{\mathbf{M}_{ \mathfrak{c}}}\psi_{1})+Q_{\mathfrak{c}}(\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_ {1},\mathbf{M}_{\mathfrak{c}}),\]
then it follows from (1.10) that
\[g_{1}=\partial_{t}\mathbf{M}_{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}\mathbf{M}_ {\mathfrak{c}}, \tag{5.14}\]
which implies that \(|\partial^{m}g_{1}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-}\) for any \(m\geq 0\).
To estimate \(\partial\psi_{1}\), we apply \(\partial\) to (5.14) to obtain
\[\partial g_{1}=Q_{\mathfrak{c}}(\partial\mathbf{M}_{\mathfrak{c} },\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1})+Q_{\mathfrak{c}}(\sqrt{\mathbf{M}_ {\mathfrak{c}}}\psi_{1},\partial\mathbf{M}_{\mathfrak{c}})+Q_{\mathfrak{c}}( \mathbf{M}_{\mathfrak{c}},\partial\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1})+Q _{\mathfrak{c}}(\partial\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1},\mathbf{M}_{ \mathfrak{c}})\\ +Q_{\mathfrak{c}}(\mathbf{M}_{\mathfrak{c}},\sqrt{\mathbf{M}_{ \mathfrak{c}}}\partial\psi_{1})+Q_{\mathfrak{c}}(\sqrt{\mathbf{M}_{\mathfrak{ c}}}\partial\psi_{1},\mathbf{M}_{\mathfrak{c}}),\]
which yields that
\[\mathbf{L}_{\mathfrak{c}}(\{\mathbf{I}-\mathbf{P}_{\mathfrak{c} }\}\partial\psi_{1})=\mathbf{L}_{\mathfrak{c}}\partial\psi_{1}=-\frac{1}{ \sqrt{\mathbf{M}_{\mathfrak{c}}}}\partial g_{1}+\frac{1}{\sqrt{\mathbf{M}_{ \mathfrak{c}}}}\Big{\{}Q_{\mathfrak{c}}(\partial\mathbf{M}_{\mathfrak{c}}, \sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1})+Q_{\mathfrak{c}}(\sqrt{\mathbf{M}_ {\mathfrak{c}}}\psi_{1},\partial\mathbf{M}_{\mathfrak{c}})\Big{\}}\\ +\frac{1}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{\{}Q_{\mathfrak{ c}}(\mathbf{M}_{\mathfrak{c}},\partial\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1})+Q _{\mathfrak{c}}(\partial\sqrt{\mathbf{M}_{\mathfrak{c}}}\psi_{1},\mathbf{M}_{ \mathfrak{c}})\Big{\}}. \tag{5.15}\]
Using the exponential decay of \(\mathbf{L}_{\mathfrak{c}}^{-1}\) in Proposition 4.17, we have
\[|\psi_{1}|=\Big{|}\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\Big{(}\frac{F_{1}^{ \mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)}\Big{|}=\Big{|} \mathbf{L}_{\mathfrak{c}}^{-1}\Big{(}-\frac{1}{\sqrt{\mathbf{M}_{\mathfrak{c }}}}(\partial_{t}\mathbf{M}_{\mathfrak{c}}+\hat{p}\cdot\nabla_{x}\mathbf{M}_{ \mathfrak{c}})\Big{)}\Big{|}\lesssim\mathbf{M}_{\mathfrak{c}}^{\frac{1}{2}-}, \tag{5.16}\]
which, together with \(|\partial g_{1}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-}\), yields that the RHS of (5.15) can be bounded by \(\mathbf{M}_{\mathfrak{c}}^{\frac{1}{2}-}\). Using Proposition 4.17 again, we obtain
\[|\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}\partial\psi_{1}|\lesssim\mathbf{M}_{ \mathfrak{c}}^{\frac{1}{2}-}. \tag{5.17}\]
On the other hand, it is clear that
\[|\mathbf{P}_{\mathfrak{c}}\partial\psi_{1}|\lesssim\mathbf{M}_{\mathfrak{c}}^{ \frac{1}{2}-},\]
which, together with (5.17), implies that
\[|\partial\psi_{1}|\lesssim\mathbf{M}_{\mathfrak{c}}^{\frac{1}{2}-}.\]
Similarly, one can deduce that
\[|\partial^{m}\psi_{1}|\lesssim\mathbf{M}_{\mathfrak{c}}^{\frac{1}{2}-},\quad m \geq 1. \tag{5.18}\]
Next we consider the estimate on the macroscopic parts \((a_{1},b_{1},c_{1})\). Using (5.18), we get
\[\|\mathbf{S}_{1}\|_{H^{N_{0}-1}}\lesssim 1.\]
One obtains from Lemma 3.1 that
\[\left\|\partial_{t}\mathbf{A}_{0}\right\|_{\infty}+\sum_{\alpha=0}^{3}\left\| \nabla_{x}\mathbf{A}_{\alpha}\right\|_{H^{N_{0}-1}}+\left\|\mathbf{B}\right\| _{H^{N_{0}-1}}\lesssim 1.\]
Applying a standard energy estimate to (5.6), one gets
\[\frac{d}{dt}\left\|\left(a_{1},b_{1},c_{1}\right)(t)\right\|_{H^{N_{0}-3}}^{2} \lesssim\left\|\left(a_{1},b_{1},c_{1}\right)(t)\right\|_{H^{N_{0}-3}}^{2}+ \left\|\left(a_{1},b_{1},c_{1}\right)(t)\right\|_{H^{N_{0}-3}},\]
which, together with Gronwall's inequality, yields that
\[\left\|\left(a_{1},b_{1},c_{1}\right)(t)\right\|_{H^{N_{0}-3}}\lesssim 1. \tag{5.19}\]
Hence it follows from (5.1) that
\[\left|\mathbf{P}_{\mathfrak{c}}\Big{(}\frac{F_{1}^{\mathfrak{c}}}{\sqrt{ \mathbf{M}_{\mathfrak{c}}}}\Big{)}\right|\lesssim\mathbf{M}_{\mathfrak{c}}^{ \frac{1}{2}-},\]
which, together with (5.16), yields that
\[|F_{1}^{\mathfrak{c}}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-}.\]
For \(\partial F_{1}^{\mathfrak{c}}\), on account of (5.13), (5.18) and (5.19), one obtains
\[|\partial F_{1}^{\mathfrak{c}}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-}.\]
Similar arguments lead to
\[|\partial^{m}F_{1}^{\mathfrak{c}}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-},\quad m\geq 1.\]
By induction, we can prove that
\[|F_{n+1}^{\mathfrak{c}}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-},\quad|\partial^{m}F_{n+1}^{\mathfrak{c}}|\lesssim\mathbf{M}_{\mathfrak{c}}^{1-},\quad n=0,1,\cdots,2k-2,\quad m\geq 1.\]
Therefore the proof is completed.
## 6. Uniform in \(\mathfrak{c}\) and \(\varepsilon\) estimates on the remainder \(F^{\varepsilon,\mathfrak{c}}_{R}\)
In this section, we shall prove our main results, Theorem 1.1 and Theorem 1.5. As in [25, 46], we define
\[f^{\varepsilon,\mathfrak{c}}_{R}(t,x,p)=\frac{F^{\varepsilon,\mathfrak{c}}_{R} (t,x,p)}{\sqrt{\mathbf{M}_{\mathfrak{c}}(t,x,p)}} \tag{6.1}\]
and
\[h^{\varepsilon,\mathfrak{c}}_{R}(t,x,p)=\frac{F^{\varepsilon,\mathfrak{c}}_{R }(t,x,p)}{\sqrt{J_{\mathfrak{c}}(p)}}. \tag{6.2}\]
We first present two uniform-in-\(\mathfrak{c}\) estimates on the nonlinear operators.
**Lemma 6.1**.: _It holds that_
\[\Big{|}\frac{w_{\ell}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}Q_{\mathfrak{c}}(h_{ 1}\sqrt{\mathbf{M}_{\mathfrak{c}}},h_{2}\sqrt{\mathbf{M}_{\mathfrak{c}}}) \Big{|}\lesssim\nu_{\mathfrak{c}}(p)\|h_{1}\|_{\infty,\ell}\|h_{2}\|_{\infty, \ell},\]
_where the constant is independent of \(\mathfrak{c}\)._
Proof.: Noting
\[p^{0}+q^{0}=p^{\prime 0}+q^{\prime 0},\quad p+q=p^{\prime}+q^{\prime},\]
we claim that
\[|p|\lesssim|p^{\prime}|+|q^{\prime}|,\quad|q|\lesssim|p^{\prime}|+|q^{\prime}|. \tag{6.3}\]
Actually, without loss of generality, we may assume that \(|p|\leq|q|\). Denote \(r:=\max\{|p^{\prime}|,|q^{\prime}|\}\), then one has
\[2\sqrt{\mathfrak{c}^{2}+|p|^{2}}\leq\sqrt{\mathfrak{c}^{2}+|p|^{2}}+\sqrt{ \mathfrak{c}^{2}+|q|^{2}}=\sqrt{\mathfrak{c}^{2}+|p^{\prime}|^{2}}+\sqrt{ \mathfrak{c}^{2}+|q^{\prime}|^{2}}\leq 2\sqrt{\mathfrak{c}^{2}+r^{2}},\]
which yields that
\[|p|^{2}\leq r^{2}\leq|p^{\prime}|^{2}+|q^{\prime}|^{2}.\]
Thus it holds
\[|p|\leq|p^{\prime}|+|q^{\prime}|. \tag{6.4}\]
If \(|p|\leq\frac{|q|}{2}\), one has \(|p+q|\geq|q|-|p|\geq\frac{|q|}{2}\), which yields that
\[\frac{|q|}{2}\leq|p+q|=|p^{\prime}+q^{\prime}|\leq|p^{\prime}|+|q^{\prime}|.\]
If \(\frac{|q|}{2}\leq|p|\leq|q|\), it follows from (6.4) that
\[|q|\leq 2|p|\leq 2(|p^{\prime}|+|q^{\prime}|).\]
Hence the claim (6.3) holds.
Now it follows from (6.3) that
\[w_{\ell}(p)\lesssim w_{\ell}(p^{\prime})w_{\ell}(q^{\prime}),\]
which, together with (4.41), yields that
\[\Big{|}\frac{w_{\ell}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}Q_{\mathfrak{c}}(h_{ 1}\sqrt{\mathbf{M}_{\mathfrak{c}}},h_{2}\sqrt{\mathbf{M}_{\mathfrak{c}}}) \Big{|}\]
\[\leq\frac{w_{\ell}(p)}{\sqrt{\mathbf{M}_{\mathfrak{c}}(p)}}\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}\Big{|}h_{1}(p^{\prime})h_{2}(q^{\prime})\sqrt{\mathbf{M}_{\mathfrak{c}}(p^{\prime})\mathbf{M}_{\mathfrak{c}}(q^{\prime})}-h_{1}(p)h_{2}(q)\sqrt{\mathbf{M}_{\mathfrak{c}}(p)\mathbf{M}_{\mathfrak{c}}(q)}\Big{|}d\omega dq\] \[\leq\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}\Big{[}|w_{\ell}(p^{\prime})h_{1}(p^{\prime})|\cdot|w_{\ell}(q^{\prime})h_{2}(q^{\prime})|+|w_{\ell}(p)h_{1}(p)|\cdot|h_{2}(q)|\Big{]}\sqrt{\mathbf{M}_{\mathfrak{c}}(q)}d\omega dq\] \[\lesssim\nu_{\mathfrak{c}}(p)\|h_{1}\|_{\infty,\ell}\|h_{2}\|_{\infty,\ell}.\]
Therefore the proof is completed.
**Lemma 6.2**.: _For any \(\ell\geq 9\), it holds that_
\[|\left\langle\Gamma_{\mathfrak{c}}\left(h_{1},h_{2}\right),h_{3}\right\rangle|\lesssim\|h_{3}\|_{\infty,\ell}\left\|h_{2}\right\|_{2}\left\|h_{1}\right\|_{2}.\]
_Furthermore, if \(\chi(p)\) satisfies \(|\chi(p)|\lesssim e^{-\delta_{1}|p|}\) for some positive constant \(\delta_{1}>0\), then we have_
\[|\left\langle\Gamma_{\mathfrak{c}}\left(h_{1},\chi\right),h_{3}\right\rangle|+|\left\langle\Gamma_{\mathfrak{c}}\left(\chi,h_{1}\right),h_{3}\right\rangle|\lesssim\left\|h_{3}\right\|_{\nu_{\mathfrak{c}}}\left\|h_{1}\right\|_{\nu_{\mathfrak{c}}},\]

_where the constants are independent of \(\mathfrak{c}\)._

We point out that Lemma 6.2 has been proved in [46] when \(\mathfrak{c}=1\). For the general case, the proof is very similar to the one in [46] and we omit the details here for brevity.

To establish the uniform in \(\mathfrak{c}\) and \(\varepsilon\) estimates for the remainder \(F_{R}^{\varepsilon,\mathfrak{c}}\), we shall use the \(L^{2}-L^{\infty}\) framework from [24]. We first consider the \(L^{2}\) estimate.
**Lemma 6.3** (\(L^{2}\) Estimate).: _Let \((n_{0}(t,x),u(t,x),T_{0}(t,x))\) be the smooth solution to the relativistic Euler equations (3.1) generated by Lemma 3.1. Let \(\mathbf{M}_{\mathfrak{c}}(n_{0},u,T_{0};p)\), \(f_{R}^{\varepsilon,\mathfrak{c}}\), \(h_{R}^{\varepsilon,\mathfrak{c}}\) be defined in (1.12), (6.1) and (6.2), respectively, and let \(\zeta_{0}>0\) be the positive constant in Proposition 4.13. Then there exist constants \(\varepsilon_{0}>0\) and \(C>0\), such that for all \(\varepsilon\in(0,\varepsilon_{0}]\), it holds_

\[\frac{d}{dt}\left\|f_{R}^{\varepsilon,\mathfrak{c}}\right\|_{2}^{2}(t)+\frac{\zeta_{0}}{2\varepsilon}\left\|\left\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\right\}f_{R}^{\varepsilon,\mathfrak{c}}\right\|_{\nu_{\mathfrak{c}}}^{2}(t)\leq C\Big{\{}\sqrt{\varepsilon}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}\|_{\infty,\ell}(t)+1\Big{\}}\left\{\left\|f_{R}^{\varepsilon,\mathfrak{c}}\right\|_{2}^{2}+\left\|f_{R}^{\varepsilon,\mathfrak{c}}\right\|_{2}\right\}, \tag{6.5}\]

_where the constant \(C\) depends upon the \(L^{2}\) norms and the \(L^{\infty}\) norms of the terms \(\mathbf{M}_{\mathfrak{c}},F_{1}^{\mathfrak{c}},\ldots,F_{2k-1}^{\mathfrak{c}}\) as well as their first derivatives, and \(C\) is independent of \(\mathfrak{c}\)._
Proof.: Plugging \(F_{R}^{\varepsilon,\mathsf{c}}=f_{R}^{\varepsilon,\mathsf{c}}\sqrt{\mathbf{M}_ {\mathsf{c}}}\) into (1.11), one has
\[\partial_{t}f_{R}^{\varepsilon,\mathsf{c}}+\hat{p}\cdot\nabla_{ x}f_{R}^{\varepsilon,\mathsf{c}}+\frac{1}{\varepsilon}\mathbf{L}_{\mathsf{c}}f_{R}^{ \varepsilon,\mathsf{c}}=-\frac{\{\partial_{t}+\hat{p}\cdot\nabla_{x}\}\sqrt{ \mathbf{M}_{\mathsf{c}}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}}f_{R}^{\varepsilon, \mathsf{c}}+\varepsilon^{k-1}\Gamma_{\mathsf{c}}(f_{R}^{\varepsilon,\mathsf{c}},f _{R}^{\varepsilon,\mathsf{c}})\] \[\quad+\sum_{i=1}^{2k-1}\varepsilon^{i-1}\Big{\{}\Gamma_{\mathsf{ c}}\Big{(}\frac{F_{i}^{\mathsf{c}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}},f_{R}^{ \varepsilon,\mathsf{c}}\Big{)}+\Gamma_{\mathsf{c}}\Big{(}f_{R}^{\varepsilon, \mathsf{c}},\frac{F_{i}^{\mathsf{c}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}}\Big{)} \Big{\}}+\varepsilon^{k}\bar{A}, \tag{6.6}\]
where
\[\bar{A}:=\sum_{\begin{subarray}{c}i+j\geq 2k+1\\ 2\leq i,j\leq 2k-1\end{subarray}}\varepsilon^{i+j-1-2k}\Gamma_{\mathfrak{c}}\Big{(}\frac{F_{i}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}},\frac{F_{j}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)}.\]
Multiplying (6.6) by \(f_{R}^{\varepsilon,\mathsf{c}}\) and integrating over \(\mathbb{R}^{3}\times\mathbb{R}^{3}\), one has
\[\big{\langle}\partial_{t}f_{R}^{\varepsilon,\mathsf{c}} +\hat{p}\cdot\nabla_{x}f_{R}^{\varepsilon,\mathsf{c}}+\frac{1}{ \varepsilon}\mathbf{L}_{\mathsf{c}}f_{R}^{\varepsilon,\mathsf{c}},f_{R}^{ \varepsilon,\mathsf{c}}\big{\rangle}=-\Big{\langle}\Big{(}\frac{\{\partial_{t}+\hat{p }\cdot\nabla_{x}\}\sqrt{\mathbf{M}_{\mathsf{c}}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}} \Big{)}f_{R}^{\varepsilon,\mathsf{c}},f_{R}^{\varepsilon,\mathsf{c}}\Big{\rangle}+ \langle\varepsilon^{k-1}\Gamma_{\mathsf{c}}(f_{R}^{\varepsilon,\mathsf{c}},f_{R}^{ \varepsilon,\mathsf{c}}),f_{R}^{\varepsilon,\mathsf{c}}\rangle\] \[\quad+\Big{\langle}\sum_{i=1}^{2k-1}\varepsilon^{i-1}\Big{\{} \Gamma_{\mathsf{c}}\Big{(}\frac{F_{i}^{\mathsf{c}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}},f_{R} ^{\varepsilon,\mathsf{c}}\Big{)}+\Gamma_{\mathsf{c}}\Big{(}f_{R}^{\varepsilon, \mathsf{c}},\frac{F_{i}^{\mathsf{c}}}{\sqrt{\mathbf{M}_{\mathsf{c}}}}\Big{)} \Big{\}},f_{R}^{\varepsilon,\mathsf{c}}\Big{\rangle}+\langle\varepsilon^{k}\bar{A},f _{R}^{\varepsilon,\mathsf{c}}\rangle.\]
It follows from Proposition 4.13 that
\[\big{\langle}\partial_{t}f_{R}^{\varepsilon,\mathsf{c}}+\hat{p}\cdot\nabla_{x}f_{R}^{ \varepsilon,\mathsf{c}}+\frac{1}{\varepsilon}\mathbf{L}_{\mathsf{c}}f_{R}^{ \varepsilon,\mathsf{c}},f_{R}^{\varepsilon,\mathsf{c}}\big{\rangle}\geq\frac{1}{2} \frac{d}{dt}\left\|f_{R}^{\varepsilon,\mathsf{c}}\right\|_{2}^{2}+\frac{ \zeta_{0}}{\varepsilon}\left\|\left\{\mathbf{I}-\mathbf{P}_{\mathsf{c}}\right\}f_{R}^{ \varepsilon,\mathsf{c}}\right\|_{\nu_{\mathsf{c}}}^{2}.\]
For \(\partial=\partial_{t}\) or \(\partial=\partial_{x_{i}}\), it holds that
\[\frac{\partial\mathbf{M}_{\mathfrak{c}}}{\mathbf{M}_{\mathfrak{c}}}=\frac{ \partial n_{0}}{n_{0}}-3\frac{\partial T_{0}}{T_{0}}+\frac{\partial T_{0}}{T_{0 }^{2}}\Big{(}u^{0}p^{0}-\mathfrak{c}^{2}\frac{K_{1}(\gamma)}{K_{2}(\gamma)} \Big{)}-\frac{\partial T_{0}}{T_{0}^{2}}\sum_{i=1}^{3}u_{i}p_{i}+\frac{1}{T_{0 }}\Big{(}\sum_{i=1}^{3}p_{i}\partial u_{i}-\frac{\partial u\cdot u}{u^{0}}p^{0 }\Big{)}. \tag{6.7}\]
A direct calculation shows that
\[\Big{|}u^{0}p^{0}-\mathfrak{c}^{2}\frac{K_{1}(\gamma)}{K_{2}(\gamma)}\Big{|} \lesssim(1+|p|)^{2}C(n_{0},u,T_{0}),\]
which, together with (6.7), yields that
\[\Big{|}\frac{\{\partial_{t}+\hat{p}\cdot\nabla_{x}\}\sqrt{\mathbf{M}_{ \mathfrak{c}}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{|}\lesssim(1+|p|)^{3}C (n_{0},u,T_{0}).\]
For any \(0<\sqrt{\varepsilon}\leq\kappa\), we obtain
\[\Big{|}\Big{\langle}\Big{(}\frac{\{\partial_{t}+\hat{p}\cdot\nabla_{x}\}\sqrt{\mathbf{M}_{\mathfrak{c}}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)}f_{R}^{\varepsilon,\mathfrak{c}},f_{R}^{\varepsilon,\mathfrak{c}}\Big{\rangle}\Big{|}\] \[\leq\Big{|}\int_{\{1+|p|\geq\frac{\kappa}{\sqrt{\varepsilon}}\}}\cdots\,dxdp\Big{|}+\Big{|}\int_{\{1+|p|\leq\frac{\kappa}{\sqrt{\varepsilon}}\}}\cdots\,dxdp\Big{|}\] \[\leq C_{\kappa}\varepsilon^{2}\|\nabla_{x}(n_{0},u,T_{0})\|_{2}\cdot\|h_{R}^{\varepsilon,\mathfrak{c}}\|_{\infty,\ell}\cdot\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}\] \[\qquad+C\|\nabla_{x}(n_{0},u,T_{0})\|_{L^{\infty}}\cdot\|(1+|p|)^{\frac{3}{2}}f_{R}^{\varepsilon,\mathfrak{c}}\mathbf{1}_{\{1+|p|\leq\frac{\kappa}{\sqrt{\varepsilon}}\}}\|_{2}^{2}\] \[\leq C_{\kappa}\varepsilon^{2}\|h_{R}^{\varepsilon,\mathfrak{c}}\|_{\infty,\ell}\cdot\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}+C\|(1+|p|)^{\frac{3}{2}}\mathbf{P}_{\mathfrak{c}}f_{R}^{\varepsilon,\mathfrak{c}}\mathbf{1}_{\{1+|p|\leq\frac{\kappa}{\sqrt{\varepsilon}}\}}\|_{2}^{2}\] \[\qquad+C\|(1+|p|)^{\frac{3}{2}}\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}f_{R}^{\varepsilon,\mathfrak{c}}\mathbf{1}_{\{1+|p|\leq\frac{\kappa}{\sqrt{\varepsilon}}\}}\|_{2}^{2}\] \[\leq C_{\kappa}\varepsilon^{2}\|h_{R}^{\varepsilon,\mathfrak{c}}\|_{\infty,\ell}\cdot\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}+C\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}^{2}+\frac{C\kappa^{2}}{\varepsilon}\|\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}f_{R}^{\varepsilon,\mathfrak{c}}\|_{\nu_{\mathfrak{c}}}^{2}.\]
It follows from Lemma 6.2 that
\[|\langle\varepsilon^{k-1}\Gamma_{\mathfrak{c}}(f_{R}^{\varepsilon,\mathfrak{c}},f_{R}^{\varepsilon,\mathfrak{c}}),f_{R}^{\varepsilon,\mathfrak{c}}\rangle| \lesssim\varepsilon^{k-1}\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{\infty,\ell} \cdot\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}^{2}\lesssim\varepsilon^{k-1}\|h_ {R}^{\varepsilon,\mathfrak{c}}\|_{\infty,\ell}\cdot\|f_{R}^{\varepsilon, \mathfrak{c}}\|_{2}^{2}\]
and
\[\Big{|}\Big{\langle}\sum_{i=1}^{2k-1}\varepsilon^{i-1}\Big{\{} \Gamma_{\mathfrak{c}}\Big{(}\frac{F_{i}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{ \mathfrak{c}}}},f_{R}^{\varepsilon,\mathfrak{c}}\Big{)}+\Gamma_{\mathfrak{c}} \Big{(}f_{R}^{\varepsilon,\mathfrak{c}},\frac{F_{i}^{\mathfrak{c}}}{\sqrt{ \mathbf{M}_{\mathfrak{c}}}}\Big{)}\Big{\}},f_{R}^{\varepsilon,\mathfrak{c}} \Big{\rangle}\Big{|}\] \[\lesssim\sum_{i=1}^{2k-1}\varepsilon^{i-1}\|f_{R}^{\varepsilon, \mathfrak{c}}\|_{\nu_{\mathfrak{c}}}^{2}\lesssim\|\mathbf{P}_{\mathfrak{c}}f_{R}^{ \varepsilon,\mathfrak{c}}\|_{\nu_{\mathfrak{c}}}^{2}+\|\{\mathbf{I}-\mathbf{P}_{ \mathfrak{c}}\}f_{R}^{\varepsilon,\mathfrak{c}}\|_{\nu_{\mathfrak{c}}}^{2}\] \[\lesssim\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}^{2}+\|\{\mathbf{ I}-\mathbf{P}_{\mathfrak{c}}\}f_{R}^{\varepsilon,\mathfrak{c}}\|_{\nu_{\mathfrak{c}}}^{2}.\]
Similarly, for the last term, one has
\[\Big{|}\Big{\langle}\varepsilon^{k}\bar{A},f_{R}^{\varepsilon,\mathfrak{c}}\Big{\rangle}\Big{|} \lesssim\varepsilon^{k}\sum_{\begin{subarray}{c}i+j\geq 2k+1\\ 2\leq i,j\leq 2k-1\end{subarray}}\varepsilon^{i+j-1-2k}\Big{|}\Big{\langle}\Gamma_{\mathfrak{c}}\Big{(}\frac{F_{i}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}},\frac{F_{j}^{\mathfrak{c}}}{\sqrt{\mathbf{M}_{\mathfrak{c}}}}\Big{)},f_{R}^{\varepsilon,\mathfrak{c}}\Big{\rangle}\Big{|}\] \[\lesssim\varepsilon^{k}\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}\lesssim\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}.\]
Collecting all the above estimates, one has
\[\frac{1}{2}\frac{d}{dt}\left\|f_{R}^{\varepsilon,\mathfrak{c}}\right\|_{2}^{2}+ \frac{\zeta_{0}}{\varepsilon}\left\|\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}f_{R}^{ \varepsilon,\mathfrak{c}}\right\|_{\nu_{\mathfrak{c}}}^{2}\leq C_{\kappa} \varepsilon^{2}\|h_{R}^{\varepsilon,\mathfrak{c}}\|_{\infty,\ell}\cdot\|f_{R}^{ \varepsilon,\mathfrak{c}}\|_{2}+C\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}^{2}+C\| f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}\] \[\qquad+C\Big{(}\frac{\kappa^{2}}{\varepsilon}+1\Big{)}\|\{ \mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}f_{R}^{\varepsilon,\mathfrak{c}}\|_{\nu_{ \mathfrak{c}}}^{2}+C\varepsilon^{k-1}\|h_{R}^{\varepsilon,\mathfrak{c}}\|_{ \infty,\ell}\cdot\|f_{R}^{\varepsilon,\mathfrak{c}}\|_{2}^{2}.\]
We choose \(\kappa=\sqrt{\frac{\zeta_{0}}{4C}}\) and then restrict to \(0<\varepsilon\leq\varepsilon_{0}\leq\frac{\zeta_{0}}{4C}\), so that the terms involving \(\|\{\mathbf{I}-\mathbf{P}_{\mathfrak{c}}\}f_{R}^{\varepsilon,\mathfrak{c}}\|_{\nu_{\mathfrak{c}}}^{2}\) on the right-hand side are absorbed into the left-hand side. Thus one gets (6.5). Therefore the proof is completed.
Next we consider the \(L^{\infty}\) estimate for \(h_{R}^{\varepsilon,\mathfrak{c}}\). Recall \(J_{\mathfrak{c}}(p)\) in (1.26). We define
\[\mathcal{L}_{\mathfrak{c}}h:=-J_{\mathfrak{c}}^{-\frac{1}{2}}\{Q_{\mathfrak{ c}}(\mathbf{M}_{\mathfrak{c}},\sqrt{J_{\mathfrak{c}}}h)+Q_{\mathfrak{c}}( \sqrt{J_{\mathfrak{c}}}h,\mathbf{M}_{\mathfrak{c}})\}=\nu_{\mathfrak{c}}h- \mathcal{K}_{\mathfrak{c}}h,\]
where \(\mathcal{K}_{\mathfrak{c}}=\mathcal{K}_{\mathfrak{c}2}-\mathcal{K}_{ \mathfrak{c}1}\). More specifically, \(\nu_{\mathfrak{c}}\) is defined in (1.25) and operators \(\mathcal{K}_{\mathfrak{c}1}h\) and \(\mathcal{K}_{\mathfrak{c}2}h\) are defined as
\[\mathcal{K}_{\mathfrak{c}1}h :=J_{\mathfrak{c}}^{-\frac{1}{2}}Q_{\mathfrak{c}}^{-}(\mathbf{M }_{\mathfrak{c}},\sqrt{J_{\mathfrak{c}}}h)=\int_{\mathbb{R}^{3}}\int_{ \mathbb{S}^{2}}v_{\phi}\Big{\{}\sqrt{J_{\mathfrak{c}}(q)}\frac{\mathbf{M}_{ \mathfrak{c}}(p)}{\sqrt{J_{\mathfrak{c}}(p)}}h(q)\Big{\}}d\omega dq,\] \[\mathcal{K}_{\mathfrak{c}2}h :=J_{\mathfrak{c}}^{-\frac{1}{2}}\left\{Q_{\mathfrak{c}}^{+}( \mathbf{M}_{\mathfrak{c}},\sqrt{J_{\mathfrak{c}}}h)+Q_{\mathfrak{c}}^{+}( \sqrt{J_{\mathfrak{c}}}h,\mathbf{M}_{\mathfrak{c}})\right\}\] \[=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_{\phi}\Big{\{} \mathbf{M}_{\mathfrak{c}}(p^{\prime})\frac{\sqrt{J_{\mathfrak{c}}(q^{\prime}) }}{\sqrt{J_{\mathfrak{c}}(p)}}h(q^{\prime})\Big{\}}d\omega dq+\int_{\mathbb{R} ^{3}}\int_{\mathbb{S}^{2}}v_{\phi}\Big{\{}\mathbf{M}_{\mathfrak{c}}(q^{ \prime})\frac{\sqrt{J_{\mathfrak{c}}(p^{\prime})}}{\sqrt{J_{\mathfrak{c}}(p)} }h(p^{\prime})\Big{\}}d\omega dq.\]
Noting (1.28), by similar arguments as in [47], one can show that
\[|\mathcal{K}_{\mathfrak{c}i}(h)|\lesssim\int_{\mathbb{R}^{3}}\hat{k}_{i}(p,q) |h(q)|dq,\quad i=1,2,\]
where
\[\hat{k}_{1}(p,q)=|p-q|e^{-\delta_{2}|p|}e^{-\delta_{2}|q|},\quad\hat{k}_{2}(p, q)=\frac{1}{|p-q|}e^{-\frac{\delta_{2}}{2}|p-q|}\]
with \(\delta_{2}:=\alpha-\frac{1}{2}>0\). We denote \(\hat{k}(p,q):=\hat{k}_{1}(p,q)+\hat{k}_{2}(p,q)\). Then it holds that
\[|\mathcal{K}_{\mathfrak{c}}(h)|\lesssim\int_{\mathbb{R}^{3}}\hat{k}(p,q)|h(q)|dq.\]
Denote
\[\hat{k}_{w}(p,q):=\hat{k}(p,q)\frac{w_{\ell}(p)}{w_{\ell}(q)}.\]
By similar arguments as in Lemmas 4.4-4.5, one has
\[\int_{\mathbb{R}^{3}}\hat{k}_{w}(p,q)e^{\frac{\delta_{2}}{4}|p-q|}dq+\int_{ \mathbb{R}^{3}}\hat{k}_{w}^{2}(p,q)dq\lesssim\max\Big{\{}\frac{1}{\mathfrak{ c}},\frac{1}{1+|p|}\Big{\}}. \tag{6.8}\]
For later use, we introduce
\[\widehat{\nu}_{\mathfrak{c}}(p):=\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}v_ {\phi}J_{\mathfrak{c}}(q)d\omega dq\cong\nu_{\mathfrak{c}}(p).\]
**Lemma 6.4** (\(L^{\infty}\) Estimate).: _Under the assumptions of Lemma 6.3, there exist \(\varepsilon_{0}>0\) and a positive constant \(C>0\), such that for all \(\varepsilon\in(0,\varepsilon_{0}]\) and for any \(\ell\geq 9\), it holds that_
\[\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c }}(s)\|_{\infty,\ell}\leq C\Big{(}\|\varepsilon^{\frac{3}{2}}h_{0}\|_{\infty, \ell}+\sup_{0\leq s\leq T}\|f_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{2}+ \varepsilon^{k+\frac{5}{2}}\Big{)},\]
_where \(C\) is independent of \(\mathfrak{c}\)._
Proof.: Plugging \(F_{R}^{\varepsilon,\mathfrak{c}}=h_{R}^{\varepsilon,\mathfrak{c}}\sqrt{J_{ \mathfrak{c}}}\) into (1.11), one has
\[\partial_{t}h_{R}^{\varepsilon,\mathfrak{c}}+\hat{p}\cdot\nabla_{x}h_{R}^{\varepsilon,\mathfrak{c}}+\frac{\nu_{\mathfrak{c}}}{\varepsilon}h_{R}^{\varepsilon,\mathfrak{c}}=\frac{1}{\varepsilon}\mathcal{K}_{\mathfrak{c}}(h_{R}^{\varepsilon,\mathfrak{c}})+\frac{\varepsilon^{k-1}}{\sqrt{J_{\mathfrak{c}}}}Q_{\mathfrak{c}}(h_{R}^{\varepsilon,\mathfrak{c}}\sqrt{J_{\mathfrak{c}}},\sqrt{J_{\mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}})\] \[\quad+\sum_{i=1}^{2k-1}\varepsilon^{i-1}\frac{1}{\sqrt{J_{\mathfrak{c}}}}\Big{\{}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},\sqrt{J_{\mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}})+Q_{\mathfrak{c}}(\sqrt{J_{\mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}},F_{i}^{\mathfrak{c}})\Big{\}}+\varepsilon^{k}\tilde{A}, \tag{6.9}\]
where
\[\tilde{A}:=\sum_{\begin{subarray}{c}i+j\geq 2k+1\\ 2\leq i,j\leq 2k-1\end{subarray}}\varepsilon^{i+j-1-2k}\frac{1}{\sqrt{J_{\mathfrak{c}}}}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},F_{j}^{\mathfrak{c}}).\]
Denote \(y_{1}:=x-\hat{p}(t-s)\) and
\[\tilde{\nu_{\mathfrak{c}}}(t,s):=\int_{s}^{t}\nu_{\mathfrak{c}}(\mathbf{M}_{\mathfrak{c}})(\tau,x-\hat{p}(t-\tau),p)d\tau\cong(t-s)\widehat{\nu}_{\mathfrak{c}}(p).\]
Integrating (6.9) along the backward trajectory, one has
\[h_{R}^{\varepsilon,\mathfrak{c}}(t,x,p)\] \[=\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,0)}{\varepsilon }\Big{)}h_{0}(x-\hat{p}t,p)\] \[\quad+\frac{1}{\varepsilon}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{ \nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\mathcal{K}_{\mathfrak{c}}h_{R}^ {\varepsilon,\mathfrak{c}}(s,y_{1},p)ds\] \[\quad+\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\frac{\varepsilon^{k-1}}{\sqrt{J_{\mathfrak{c}}}}Q_{ \mathfrak{c}}(h_{R}^{\varepsilon,\mathfrak{c}}\sqrt{J_{\mathfrak{c}}},h_{R}^ {\varepsilon,\mathfrak{c}}\sqrt{J_{\mathfrak{c}}})(s,y_{1},p)ds\] \[\quad+\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\sum_{i=1}^{2k-1}\varepsilon^{i-1}\frac{1}{\sqrt{J_{ \mathfrak{c}}}}\Big{\{}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},\sqrt{J_{ \mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}})+Q_{\mathfrak{c}}(\sqrt{J_{ \mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}},F_{i}^{\mathfrak{c}})\Big{\}}(s,y_{1},p)ds\] \[\quad+\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\varepsilon^{k}\tilde{A}(s,y_{1},p)ds\] \[=\sum_{j=1}^{5}\mathcal{J}_{j}. \tag{6.10}\]
It is clear that
\[|\varepsilon^{\frac{3}{2}}w_{\ell}\mathcal{J}_{1}|\leq\|\varepsilon^{\frac{3}{2}}h_{0}\|_{\infty,\ell}.\]
For \(\mathcal{J}_{3}\), it follows from Lemma 6.1 that
\[|\varepsilon^{\frac{3}{2}}w_{\ell}\mathcal{J}_{3}| \lesssim\varepsilon^{k+\frac{1}{2}}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\Big{|}\frac{w_{\ell}}{\sqrt{J_{\mathfrak{c}}}}Q_{\mathfrak{c}}(h_{R}^{\varepsilon,\mathfrak{c}}\sqrt{J_{\mathfrak{c}}},h_{R}^{\varepsilon,\mathfrak{c}}\sqrt{J_{\mathfrak{c}}})(s,y_{1},p)\Big{|}ds\] \[\lesssim\varepsilon^{k-\frac{5}{2}}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\widehat{\nu}_{\mathfrak{c}}(p)ds\cdot\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}^{2}\] \[\lesssim\varepsilon^{k-\frac{3}{2}}\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}^{2}.\]
Similarly, we have
\[|\varepsilon^{\frac{3}{2}}w_{\ell}\mathcal{J}_{4}|\] \[\lesssim\varepsilon^{\frac{3}{2}}\sum_{i=1}^{2k-1}\varepsilon^{i-1}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\Big{|}\frac{w_{\ell}}{\sqrt{J_{\mathfrak{c}}}}\Big{\{}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},\sqrt{J_{\mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}})+Q_{\mathfrak{c}}(\sqrt{J_{\mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}},F_{i}^{\mathfrak{c}})\Big{\}}(s,y_{1},p)\Big{|}ds\] \[\lesssim\sum_{i=1}^{2k-1}\varepsilon^{i-1}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\widehat{\nu}_{\mathfrak{c}}(p)ds\cdot\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}\cdot\sup_{0\leq s\leq T}\Big{\|}\frac{F_{i}^{\mathfrak{c}}(s)}{\sqrt{J_{\mathfrak{c}}}}\Big{\|}_{\infty,\ell}\] \[\lesssim\varepsilon\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}\]
and
\[|\varepsilon^{\frac{3}{2}}w_{\ell}\mathcal{J}_{5}| \lesssim\varepsilon^{k+\frac{3}{2}}\sum_{\begin{subarray}{c}i+j\geq 2k+1\\ 2\leq i,j\leq 2k-1\end{subarray}}\varepsilon^{i+j-1-2k}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\Big{|}\frac{w_{\ell}}{\sqrt{J_{\mathfrak{c}}}}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},F_{j}^{\mathfrak{c}})\Big{|}ds\] \[\lesssim\varepsilon^{k+\frac{3}{2}}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}\widehat{\nu}_{\mathfrak{c}}(p)ds\cdot\sup_{0\leq s\leq T}\Big{\|}\frac{F_{i}^{\mathfrak{c}}(s)}{\sqrt{J_{\mathfrak{c}}}}\Big{\|}_{\infty,\ell}\cdot\sup_{0\leq s\leq T}\Big{\|}\frac{F_{j}^{\mathfrak{c}}(s)}{\sqrt{J_{\mathfrak{c}}}}\Big{\|}_{\infty,\ell}\] \[\lesssim\varepsilon^{k+\frac{5}{2}}.\]
Collecting the above estimates, we have established
\[\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}\leq C\varepsilon\sup_{0\leq s\leq T}\| \varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}+ C\varepsilon^{k-\frac{3}{2}}\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{ \varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}^{2}\] \[\quad+C\varepsilon^{k+\frac{5}{2}}+C\|\varepsilon^{\frac{3}{2}}h _{0}\|_{\infty,\ell}+Cw_{\ell}(p)\varepsilon^{\frac{3}{2}}|\mathcal{J}_{2}|. \tag{6.11}\]
To bound the last term \(\mathcal{J}_{2}\), we denote \(y_{2}:=y_{1}-\hat{q}\,(s-s^{\prime})=x-\hat{p}(t-s)-\hat{q}\,(s-s^{\prime})\) and
\[\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,s^{\prime}):=\int_{s^{\prime}}^{s}\nu_{\mathfrak{c}}(\mathbf{M}_{\mathfrak{c}})(\tau,y_{1}-\hat{q}(s-\tau),q)d\tau\cong(s-s^{\prime})\widehat{\nu}_{\mathfrak{c}}(q).\]
We substitute (6.10) into \(\mathcal{J}_{2}\) to obtain
\[|\mathcal{J}_{2}| \lesssim\frac{1}{\varepsilon}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}ds\int_{\mathbb{R}^{3}}\hat{k}(p,q)|h_{R}^{\varepsilon,\mathfrak{c}}(s,y_{1},q)|dq\] \[\lesssim\frac{1}{\varepsilon}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}ds\int_{\mathbb{R}^{3}}\hat{k}(p,q)\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,0)}{\varepsilon}\Big{)}|h_{0}(y_{1}-\hat{q}s,q)|dq\] \[\quad+\frac{1}{\varepsilon^{2}}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{\prime}\int_{\mathbb{R}^{3}}\hat{k}(p,q)\big{|}\mathcal{K}_{\mathfrak{c}}h_{R}^{\varepsilon,\mathfrak{c}}(s^{\prime},y_{2},q)\big{|}dq\] \[\quad+\varepsilon^{k-2}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{\prime}\] \[\quad\quad\quad\quad\times\int_{\mathbb{R}^{3}}\hat{k}(p,q)\frac{1}{\sqrt{J_{\mathfrak{c}}}}\Big{|}Q_{\mathfrak{c}}(h_{R}^{\varepsilon,\mathfrak{c}}\sqrt{J_{\mathfrak{c}}},h_{R}^{\varepsilon,\mathfrak{c}}\sqrt{J_{\mathfrak{c}}})(s^{\prime},y_{2},q)\Big{|}dq\] \[\quad+\frac{1}{\varepsilon}\sum_{i=1}^{2k-1}\varepsilon^{i-1}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{\prime}\] \[\quad\quad\quad\quad\times\int_{\mathbb{R}^{3}}\hat{k}(p,q)\frac{1}{\sqrt{J_{\mathfrak{c}}}}\Big{|}\Big{\{}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},\sqrt{J_{\mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}})+Q_{\mathfrak{c}}(\sqrt{J_{\mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}},F_{i}^{\mathfrak{c}})\Big{\}}(s^{\prime},y_{2},q)\Big{|}dq\] \[\quad+\varepsilon^{k-1}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{\prime}\int_{\mathbb{R}^{3}}\hat{k}(p,q)|\tilde{A}(s^{\prime},y_{2},q)|dq\] \[=\sum_{j=1}^{5}\mathcal{J}_{2j}.\]
By Lemma 4.6, there exists a positive constant \(\nu_{0}\) which is independent of \(\mathfrak{c}\), such that
\[\nu_{\mathfrak{c}}(p)\geq\nu_{0},\quad p\in\mathbb{R}^{3}.\]
For \(\mathcal{J}_{21}\), one has from (6.8) that
\[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{21}| \lesssim\frac{1}{\varepsilon}\int_{0}^{t}\exp\Big{(}-\frac{\nu_{0 }t}{\varepsilon}\Big{)}ds\int_{\mathbb{R}^{3}}\hat{k}_{w}(p,q)|\varepsilon^{ \frac{3}{2}}w_{\ell}(q)h_{0}(y_{1}-\hat{q}s,q)|dq\] \[\lesssim\|\varepsilon^{\frac{3}{2}}h_{0}\|_{\infty,\ell}.\]
Similarly, using Lemma 6.1, we get
\[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{23}| \lesssim\varepsilon^{k-\frac{1}{2}}\int_{0}^{t}\exp\Big{(}-\frac{\nu_{0}(t-s)}{\varepsilon}\Big{)}ds\int_{\mathbb{R}^{3}}\hat{k}_{w}(p,q)dq\] \[\qquad\qquad\times\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}\frac{w_{\ell}(q)}{\sqrt{J_{\mathfrak{c}}}}\Big{|}Q_{\mathfrak{c}}(h_{R}^{\varepsilon,\mathfrak{c}}\sqrt{J_{\mathfrak{c}}},h_{R}^{\varepsilon,\mathfrak{c}}\sqrt{J_{\mathfrak{c}}})(s^{\prime},y_{2},q)\Big{|}ds^{\prime}\] \[\lesssim\varepsilon^{k-\frac{3}{2}}\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}^{2}\]
and
\[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{24}| \lesssim\varepsilon^{\frac{1}{2}}\sum_{i=1}^{2k-1}\varepsilon^{i-1}\int_{0}^{t}\exp\Big{(}-\frac{\nu_{0}(t-s)}{\varepsilon}\Big{)}ds\int_{\mathbb{R}^{3}}\hat{k}_{w}(p,q)dq\] \[\qquad\times\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}\frac{w_{\ell}(q)}{\sqrt{J_{\mathfrak{c}}}}\Big{|}\Big{\{}Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},\sqrt{J_{\mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}})+Q_{\mathfrak{c}}(\sqrt{J_{\mathfrak{c}}}h_{R}^{\varepsilon,\mathfrak{c}},F_{i}^{\mathfrak{c}})\Big{\}}(s^{\prime},y_{2},q)\Big{|}ds^{\prime}\] \[\lesssim\varepsilon\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}.\]
For \(\mathcal{J}_{25}\), one has
\[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{25}| \lesssim\varepsilon^{k+\frac{1}{2}}\sum_{\begin{subarray}{c}i+j\geq 2k+1\\ 2\leq i,j\leq 2k-1\end{subarray}}\varepsilon^{i+j-1-2k}\int_{0}^{t}\exp\Big{(}-\frac{\nu_{0}(t-s)}{\varepsilon}\Big{)}ds\int_{\mathbb{R}^{3}}\hat{k}_{w}(p,q)dq\] \[\qquad\times\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}\frac{w_{\ell}(q)}{\sqrt{J_{\mathfrak{c}}}}|Q_{\mathfrak{c}}(F_{i}^{\mathfrak{c}},F_{j}^{\mathfrak{c}})(s^{\prime},y_{2},q)|ds^{\prime}\] \[\lesssim\varepsilon^{k+\frac{5}{2}}.\]
Now we focus on the estimate of \(\mathcal{J}_{22}\). It holds that
\[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{22}| \lesssim\frac{1}{\varepsilon^{2}}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{\prime}\] \[\qquad\times\int_{\mathbb{R}^{3}}\hat{k}_{w}(p,q)dq\int_{\mathbb{R}^{3}}\hat{k}_{w}(q,q^{\prime})|\varepsilon^{\frac{3}{2}}w_{\ell}(q^{\prime})h_{R}^{\varepsilon,\mathfrak{c}}(s^{\prime},y_{2},q^{\prime})|dq^{\prime}.\]
We divide the estimate into four cases.
_Case 1_: \(|p|\geq N\). Using (6.8), one has
\[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{22}| \lesssim\max\Big{\{}\frac{1}{\mathfrak{c}},\frac{1}{1+|p|}\Big{\}}\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}\] \[\lesssim\max\Big{\{}\frac{1}{\mathfrak{c}},\frac{1}{N}\Big{\}}\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}.\]
_Case 2_: \(|p|\leq N\), \(|q|\geq 2N\) or \(|q|\leq 2N\), \(|q^{\prime}|\geq 3N\). Using (6.8) again, we have
\[\frac{1}{\varepsilon^{2}}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu _{\epsilon}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s}\exp\Big{(}-\frac{\tilde{ \nu_{\epsilon}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{\prime}\] \[\qquad\times\Big{\{}\iint_{|p|\leq N,|q|\geq 2N}+\iint_{|q|\leq 2N,|q^{ \prime}|\geq 3N}\Big{\}}\] \[\lesssim e^{-\frac{\delta_{2}}{4}N}\sup_{0\leq s\leq T}\| \varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\epsilon}(s)\|_{\infty,\ell} \lesssim\frac{1}{N}\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{ \varepsilon,\epsilon}(s)\|_{\infty,\ell}.\]
_Case 3_: For \(s-s^{\prime}\leq\kappa\varepsilon\) and \(|p|\leq N\), \(|q|\leq 2N\), \(|q^{\prime}|\leq 3N\), one has
\[\frac{1}{\varepsilon^{2}}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}ds\int_{s-\kappa\varepsilon}^{s}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{\prime}\]
\[\times\int_{|q|\leq 2N}\hat{k}_{w}(p,q)dq\int_{|q^{\prime}|\leq 3N}\hat{k}_{w}(q,q^{\prime})|\varepsilon^{\frac{3}{2}}w_{\ell}(q^{\prime})h_{R}^{\varepsilon,\mathfrak{c}}(s^{\prime},y_{2},q^{\prime})|dq^{\prime}\] \[\lesssim\kappa\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}.\]
_Case 4_: For \(s-s^{\prime}\geq\kappa\varepsilon\) and \(|p|\leq N\), \(|q|\leq 2N\), \(|q^{\prime}|\leq 3N\), this is the last remaining case. Using (6.8), one has
\[\int_{|q|\leq 2N}\int_{|q^{\prime}|\leq 3N}\hat{k}_{w}(p,q)\hat{k} _{w}(q,q^{\prime})|w_{\ell}(q^{\prime})h_{R}^{\varepsilon,\mathfrak{c}}(s^{ \prime},y_{2},q^{\prime})|dqdq^{\prime}\] \[\leq C_{N}\int_{|q|\leq 2N}\int_{|q^{\prime}|\leq 3N}\hat{k}_{w}(p,q )\hat{k}_{w}(q,q^{\prime})|f_{R}^{\varepsilon,\mathfrak{c}}(s^{\prime},y_{2}, q^{\prime})|dqdq^{\prime}\] \[\leq C_{N}\Big{(}\int_{|q|\leq 2N}\int_{|q^{\prime}|\leq 3N}\hat{k} _{w}^{2}(p,q)\hat{k}_{w}^{2}(q,q^{\prime})dqdq^{\prime}\Big{)}^{\frac{1}{2}}\] \[\qquad\times\Big{(}\int_{|q|\leq 2N}\int_{|q^{\prime}|\leq 3N}|f_{R}^{ \varepsilon,\mathfrak{c}}(s^{\prime},y_{2},q^{\prime})|^{2}dqdq^{\prime} \Big{)}^{\frac{1}{2}}\] \[\leq C_{N}\Big{(}\int_{\mathbb{R}^{3}}\int_{\mathbb{R}^{3}}|f_{R} ^{\varepsilon,\mathfrak{c}}(s^{\prime},y_{2},q^{\prime})|^{2}\cdot\varepsilon^ {-3}\kappa^{-3}dy_{2}dq^{\prime}\Big{)}^{\frac{1}{2}}\] \[\leq\frac{C_{N,\kappa}}{\varepsilon^{\frac{3}{2}}}\sup_{0\leq s \leq T}\|f_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{2},\]
where we have made a change of variables \(q\mapsto y_{2}\) with
\[\Big{|}\frac{dy_{2}}{dq}\Big{|}=\frac{\mathfrak{c}^{5}}{(q^{0})^{5}}(s-s^{ \prime})^{3}\geq\frac{\kappa^{3}\varepsilon^{3}}{3^{5}}.\]
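Indeed, writing \(\hat{q}=\mathfrak{c}q/q^{0}\) with \(q^{0}=\sqrt{\mathfrak{c}^{2}+|q|^{2}}\), a direct computation gives
\[\det\Big{(}\frac{\partial\hat{q}_{i}}{\partial q_{j}}\Big{)}=\det\Big{(}\frac{\mathfrak{c}}{q^{0}}\delta_{ij}-\frac{\mathfrak{c}\,q_{i}q_{j}}{(q^{0})^{3}}\Big{)}=\frac{\mathfrak{c}^{3}}{(q^{0})^{3}}\Big{(}1-\frac{|q|^{2}}{(q^{0})^{2}}\Big{)}=\frac{\mathfrak{c}^{5}}{(q^{0})^{5}},\]
and the stated lower bound follows since \(q^{0}\leq 3\mathfrak{c}\) for \(|q|\leq 2N\leq 2\mathfrak{c}\) and \(s-s^{\prime}\geq\kappa\varepsilon\) in this case.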
Here we take \(1\leq N\leq\mathfrak{c}\). Thus we have
\[\frac{1}{\varepsilon^{2}}\int_{0}^{t}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}(t,s)}{\varepsilon}\Big{)}ds\int_{0}^{s-\kappa\varepsilon}\exp\Big{(}-\frac{\tilde{\nu_{\mathfrak{c}}}^{\prime}(s,s^{\prime})}{\varepsilon}\Big{)}ds^{\prime}\] \[\qquad\times\int_{|q|\leq 2N}\hat{k}_{w}(p,q)dq\int_{|q^{\prime}|\leq 3N}\hat{k}_{w}(q,q^{\prime})|\varepsilon^{\frac{3}{2}}w_{\ell}(q^{\prime})h_{R}^{\varepsilon,\mathfrak{c}}(s^{\prime},y_{2},q^{\prime})|dq^{\prime}\] \[\leq C_{N,\kappa}\sup_{0\leq s\leq T}\|f_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{2}.\]
Collecting all four cases, we obtain
\[|\varepsilon^{\frac{3}{2}}w_{\ell}(p)\mathcal{J}_{22}|\leq C\Big{(}\kappa+ \frac{1}{N}\Big{)}\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{ \varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}+C_{N,\kappa}\sup_{0\leq s\leq T}\| f_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{2}. \tag{6.12}\]
Therefore, combining (6.11) and (6.12), one obtains
\[\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty,\ell}\leq C\Big{(}\varepsilon+\kappa+\frac{1}{N} \Big{)}\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon, \mathfrak{c}}(s)\|_{\infty,\ell}+C\|\varepsilon^{\frac{3}{2}}h_{0}\|_{\infty,\ell}\] \[\qquad\qquad\qquad+C\varepsilon^{k-\frac{3}{2}}\sup_{0\leq s\leq T }\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{\infty, \ell}^{2}+C\varepsilon^{k+\frac{5}{2}}+C_{N,\kappa}\sup_{0\leq s\leq T}\|f_{R }^{\varepsilon,\mathfrak{c}}(s)\|_{2}. \tag{6.13}\]
Choosing \(N\) suitably large and \(\kappa\), \(\varepsilon\) suitably small, one gets from (6.13) that
\[\sup_{0\leq s\leq T}\|\varepsilon^{\frac{3}{2}}h_{R}^{\varepsilon,\mathfrak{c} }(s)\|_{\infty,\ell}\leq C\|\varepsilon^{\frac{3}{2}}h_{0}\|_{\infty,\ell}+C \sup_{0\leq s\leq T}\|f_{R}^{\varepsilon,\mathfrak{c}}(s)\|_{2}+C\varepsilon^{k +\frac{5}{2}}.\]
Therefore the proof of Lemma 6.4 is completed.
Proof of Theorem 1.1.: With Lemmas 6.3-6.4 in hand, the rest of the proof is the same as in [25, 46]. We omit the details here for brevity. Therefore the proof of Theorem 1.1 is completed.
Using Theorem 1.1, we can prove Theorem 1.5 as follows.
Proof of Theorem 1.5.: Recall \(\bar{c}_{1}\) and \(\bar{c}_{2}\) in (3.45). Using (1.29), for any \((t,x,p)\in[0,T]\times\mathbb{R}^{3}\times\mathbb{R}^{3}\), one has
\[|F^{\varepsilon,\mathfrak{c}}(t,x,p)-\mathbf{M}_{\mathfrak{c}}(t,x,p)|\lesssim\varepsilon\sqrt{J_{\mathfrak{c}}(p)}\lesssim\varepsilon e^{-\frac{|p|}{2T_{M}}}. \tag{6.14}\]
A direct calculation shows that
\[\mu(t,x,p)-\mathbf{M}_{\mathfrak{c}}(t,x,p)\] \[=\frac{\rho}{(2\pi\theta)^{\frac{3}{2}}}\exp\Big{\{}-\frac{|p- \mathfrak{u}|^{2}}{2\theta}\Big{\}}-\frac{n_{0}\gamma}{4\pi\mathfrak{c}^{3}K _{2}(\gamma)}\exp\Big{\{}\frac{u^{\mu}p_{\mu}}{T_{0}}\Big{\}}\] \[=\frac{\rho}{(2\pi\theta)^{\frac{3}{2}}}\exp\Big{\{}-\frac{|p- \mathfrak{u}|^{2}}{2\theta}\Big{\}}-\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}} \exp\Big{\{}\frac{\mathfrak{c}^{2}+u^{\mu}p_{\mu}}{T_{0}}\Big{\}}(1+O(\gamma ^{-1}))\] \[=O(\gamma^{-1})\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\exp\Big{\{} \frac{\mathfrak{c}^{2}+u^{\mu}p_{\mu}}{T_{0}}\Big{\}}+\Big{(}\frac{\rho}{(2 \pi\theta)^{\frac{3}{2}}}-\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\Big{)}\exp \Big{\{}-\frac{|p-\mathfrak{u}|^{2}}{2\theta}\Big{\}}\] \[\quad\quad+\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\Big{(}\exp \Big{\{}-\frac{|p-\mathfrak{u}|^{2}}{2\theta}\Big{\}}-\exp\Big{\{}\frac{ \mathfrak{c}^{2}+u^{\mu}p_{\mu}}{T_{0}}\Big{\}}\Big{)}\] \[:=\mathcal{A}_{1}+\mathcal{A}_{2}+\mathcal{A}_{3}. \tag{6.15}\]
It follows from Proposition 3.8 that
\[|\mathcal{A}_{1}|\lesssim\frac{1}{\mathfrak{c}^{2}}e^{-2\bar{c}_{1}|p|},\quad| \mathcal{A}_{2}|\lesssim\frac{1}{\mathfrak{c}^{2}}e^{-\bar{c}_{2}|p|}.\]
For \(\mathcal{A}_{3}\), if \(|p|\geq\mathfrak{c}^{\frac{1}{8}}\), one has
\[|\mathcal{A}_{3}| \lesssim\exp\Big{\{}-\frac{|p|^{2}}{4\theta}\Big{\}}+\exp\Big{\{}-\frac{|p|}{2T_{0}}\Big{\}}\] \[\lesssim\exp\Big{\{}-\frac{\mathfrak{c}^{\frac{1}{4}}}{8\theta}\Big{\}}\exp\Big{\{}-\frac{|p|^{2}}{8\theta}\Big{\}}+\exp\Big{\{}-\frac{\mathfrak{c}^{\frac{1}{8}}}{4T_{0}}\Big{\}}\exp\Big{\{}-\frac{|p|}{4T_{0}}\Big{\}}\] \[\lesssim\frac{1}{\mathfrak{c}^{2}}\big{(}e^{-\frac{\bar{c}_{2}}{2}|p|}+e^{-\bar{c}_{1}|p|}\big{)}.\]
If \(|p|\leq\mathfrak{c}^{\frac{1}{8}}\), it follows from (4.55)-(4.56) that
\[|\mathcal{A}_{3}|\leq\frac{n_{0}}{(2\pi T_{0})^{\frac{3}{2}}}\exp\Big{\{}- \frac{|p-\mathfrak{u}|^{2}}{2\theta}\Big{\}}\Big{|}1-\exp\Big{\{}\frac{|p- \mathfrak{u}|^{2}}{2\theta}+\frac{\mathfrak{c}^{2}+u^{\mu}p_{\mu}}{T_{0}} \Big{\}}\Big{|}\lesssim\mathfrak{c}^{-\frac{3}{2}}e^{-\bar{c}_{2}|p|}. \tag{6.16}\]
Combining (6.15)-(6.16), one has
\[|\mu(t,x,p)-\mathbf{M}_{\mathfrak{c}}(t,x,p)|\lesssim\mathfrak{c}^{-\frac{3}{2}}(e^{-\frac{\bar{c}_{2}}{2}|p|}+e^{-\bar{c}_{1}|p|}). \tag{6.17}\]
Using (6.14), (6.17) and taking
\[\delta_{0}:=\min\Big{(}\frac{1}{2T_{M}},\,\bar{c}_{1},\,\frac{\bar{c}_{2}}{2} \Big{)}>0,\]
one has
\[|F^{\varepsilon,\mathfrak{c}}(t)-\mu(t)|\lesssim\varepsilon e^{-\frac{|p|}{2T_{M}}}+\mathfrak{c}^{-\frac{3}{2}}(e^{-\frac{\bar{c}_{2}}{2}|p|}+e^{-\bar{c}_{1}|p|})\lesssim(\varepsilon+\mathfrak{c}^{-\frac{3}{2}})e^{-\delta_{0}|p|},\]
which implies that
\[\sup_{0\leq t\leq T}\Big{\|}\big{(}F^{\varepsilon,\mathfrak{c}}-\mu\big{)}(t)e^{\delta_{0}|p|}\Big{\|}_{\infty}\lesssim\varepsilon+\mathfrak{c}^{-\frac{3}{2}}.\]
Therefore the proof of Theorem 1.5 is completed.
## 7. Appendix: Derivation of the orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\)
In this part, we derive the orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\). One needs to use (1.13)-(1.14) and Lemma 4.11 frequently. Suppose that
\[\chi_{0}^{\mathfrak{c}}=\mathfrak{a}_{0}\sqrt{\mathbf{M}_{\mathfrak{c}}},\quad\chi_{j}^{\mathfrak{c}}=\frac{p_{j}-\mathfrak{a}_{j}}{\mathfrak{b}_{j}}\sqrt{\mathbf{M}_{\mathfrak{c}}}\ (j=1,2,3),\quad\chi_{4}^{\mathfrak{c}}=\frac{p^{0}/\mathfrak{c}+\sum_{i=1}^{3}\lambda_{i}p_{i}+\varsigma}{\zeta}\sqrt{\mathbf{M}_{\mathfrak{c}}}\]
form an orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\). Using \(\langle\chi_{0}^{\mathfrak{c}},\chi_{0}^{\mathfrak{c}}\rangle=1\), one has \(\mathfrak{a}_{0}=\Big{(}\int_{\mathbb{R}^{3}}\mathbf{M}_{\mathfrak{c}}dp\Big{)} ^{-\frac{1}{2}}=\frac{1}{\sqrt{I^{0}}}\). To compute \(\mathfrak{a}_{j}\), since \(\langle\chi_{0}^{\mathfrak{c}},\chi_{j}^{\mathfrak{c}}\rangle=0\), we have
\[0=\int_{\mathbb{R}^{3}}(p_{j}-\mathfrak{a}_{j})\mathbf{M}_{\mathfrak{c}}dp=T^ {0j}-\mathfrak{a}_{j}I^{0},\]
which yields that \(\mathfrak{a}_{j}=\frac{T^{0j}}{I^{0}}\). For \(\mathfrak{b}_{j}\), using \(\langle\chi_{j}^{\mathfrak{c}},\chi_{j}^{\mathfrak{c}}\rangle=1\), one has
\[\mathfrak{b}_{j}^{2} =\int_{\mathbb{R}^{3}}(p_{j}-\mathfrak{a}_{j})^{2}\mathbf{M}_{ \mathfrak{c}}dp=\int_{\mathbb{R}^{3}}(p_{j}^{2}+\mathfrak{a}_{j}^{2}-2 \mathfrak{a}_{j}p_{j})\mathbf{M}_{\mathfrak{c}}dp\] \[=T^{0jj}+\mathfrak{a}_{j}^{2}I^{0}-2\mathfrak{a}_{j}T^{0j}=T^{0jj }-\frac{(T^{0j})^{2}}{I^{0}},\]
which yields that \(\mathfrak{b}_{j}=\sqrt{T^{0jj}-\frac{(T^{0j})^{2}}{I^{0}}}\), \(j=1,2,3\).
To determine the coefficients \(\lambda_{i}\), \(i=1,2,3\), and \(\varsigma\), due to \(\langle\chi_{4}^{\mathfrak{c}},\chi_{0}^{\mathfrak{c}}\rangle=\langle\chi_{4}^{\mathfrak{c}},\chi_{j}^{\mathfrak{c}}\rangle=0\), we have
\[\int_{\mathbb{R}^{3}}(p^{0}/\mathfrak{c}+\sum_{i=1}^{3}\lambda_{i}p_{i}+\varsigma)\mathbf{M}_{\mathfrak{c}}dp =0,\] \[\int_{\mathbb{R}^{3}}(p^{0}/\mathfrak{c}+\sum_{i=1}^{3}\lambda_{i}p_{i}+\varsigma)(p_{j}-\mathfrak{a}_{j})\mathbf{M}_{\mathfrak{c}}dp =0,\ j=1,2,3.\]
That is
\[\frac{T^{00}}{\mathfrak{c}}+\sum_{i=1}^{3}\lambda_{i}T^{0i}+\varsigma I^{0} =0,\] \[\frac{T^{00j}}{\mathfrak{c}}-\frac{\mathfrak{a}_{j}}{\mathfrak{c}}T^{00}+\sum_{i=1}^{3}\lambda_{i}(T^{0ij}-\mathfrak{a}_{j}T^{0i})+\varsigma(T^{0j}-\mathfrak{a}_{j}I^{0}) =0,\ j=1,2,3.\]
One can rewrite the above linear system as
\[\left(\begin{array}{cccc}T^{01}&T^{02}&T^{03}&I^{0}\\ T^{011}-\mathfrak{a}_{1}T^{01}&T^{021}-\mathfrak{a}_{1}T^{02}&T^{031}-\mathfrak{a}_{1}T^{03}&T^{01}-\mathfrak{a}_{1}I^{0}\\ T^{012}-\mathfrak{a}_{2}T^{01}&T^{022}-\mathfrak{a}_{2}T^{02}&T^{032}-\mathfrak{a}_{2}T^{03}&T^{02}-\mathfrak{a}_{2}I^{0}\\ T^{013}-\mathfrak{a}_{3}T^{01}&T^{023}-\mathfrak{a}_{3}T^{02}&T^{033}-\mathfrak{a}_{3}T^{03}&T^{03}-\mathfrak{a}_{3}I^{0}\end{array}\right)\left(\begin{array}{c}\lambda_{1}\\ \lambda_{2}\\ \lambda_{3}\\ \varsigma\end{array}\right)=\left(\begin{array}{c}-\frac{T^{00}}{\mathfrak{c}}\\ \frac{\mathfrak{a}_{1}T^{00}}{\mathfrak{c}}-\frac{T^{001}}{\mathfrak{c}}\\ \frac{\mathfrak{a}_{2}T^{00}}{\mathfrak{c}}-\frac{T^{002}}{\mathfrak{c}}\\ \frac{\mathfrak{a}_{3}T^{00}}{\mathfrak{c}}-\frac{T^{003}}{\mathfrak{c}}\end{array}\right). \tag{7.1}\]
Denote
\[\mathfrak{a}:=\frac{n_{0}u^{0}}{\mathfrak{c}}\frac{K_{3}(\gamma)}{K_{2}(\gamma)},\quad\mathfrak{b}:=\frac{n_{0}u^{0}}{\mathfrak{c}\gamma K_{2}(\gamma)}(6K_{3}( \gamma)+\gamma K_{2}(\gamma)).\]
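Note, in particular, that
\[\frac{\mathfrak{b}}{\mathfrak{a}}=\frac{6K_{3}(\gamma)+\gamma K_{2}(\gamma)}{\gamma K_{3}(\gamma)}=\frac{6}{\gamma}+\frac{K_{2}(\gamma)}{K_{3}(\gamma)},\qquad\text{so that}\qquad\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{\mathfrak{a}}=\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{6}{\gamma}-\frac{K_{2}(\gamma)}{K_{3}(\gamma)},\]
which is the combination appearing repeatedly below.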
By a tedious calculation, one can transform (7.1) into the following system
\[\left(\begin{array}{cccc}0&0&0&\mathfrak{a}\frac{K_{2}(\gamma)}{K_{3}(\gamma)}-\mathfrak{a}\frac{K_{2}(\gamma)}{K_{3}(\gamma)}(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{\mathfrak{a}})\frac{|\mathfrak{u}|^{2}}{T_{0}}\\ \mathfrak{a}T_{0}&0&0&\mathfrak{a}\frac{K_{2}(\gamma)}{K_{3}(\gamma)}(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{\mathfrak{a}})u_{1}\\ 0&\mathfrak{a}T_{0}&0&\mathfrak{a}\frac{K_{2}(\gamma)}{K_{3}(\gamma)}(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{\mathfrak{a}})u_{2}\\ 0&0&\mathfrak{a}T_{0}&\mathfrak{a}\frac{K_{2}(\gamma)}{K_{3}(\gamma)}(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{\mathfrak{a}})u_{3}\end{array}\right)\left(\begin{array}{c}\lambda_{1}\\ \lambda_{2}\\ \lambda_{3}\\ \varsigma\end{array}\right)=\left(\begin{array}{c}\frac{n_{0}}{\gamma}-\frac{\mathfrak{a}u^{0}}{\mathfrak{c}}-\frac{n_{0}}{\mathfrak{c}}\big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{\mathfrak{a}}\big{)}\frac{|\mathfrak{u}|^{2}}{T_{0}}\\ \frac{n_{0}}{\gamma}\big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{\mathfrak{a}}\big{)}u_{1}\\ \frac{n_{0}}{\gamma}\big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{\mathfrak{a}}\big{)}u_{2}\\ \frac{n_{0}}{\gamma}\big{(}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{\mathfrak{a}}\big{)}u_{3}\end{array}\right). \tag{7.2}\]
Observing (5.8), one has \(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{\mathfrak{b}}{\mathfrak{a}}<0\), which implies that (7.2) has a unique solution. More precisely, we can write it down explicitly:
\[\left(\begin{array}{c}\lambda_{1}\\ \lambda_{2}\\ \lambda_{3}\\ \varsigma\end{array}\right)=\frac{1}{\frac{u^{0}}{\mathfrak{c}}-\left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{6}{\gamma}-\frac{K_{2}(\gamma)}{K_{3}(\gamma)}\right)\frac{|\mathfrak{u}|^{2}}{\mathfrak{c}T_{0}}}\left(\begin{array}{c}\left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{6}{\gamma}-\frac{K_{2}(\gamma)}{K_{3}(\gamma)}\right)\frac{(u^{0})^{2}}{\mathfrak{c}T_{0}}u_{1}\\ \left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{6}{\gamma}-\frac{K_{2}(\gamma)}{K_{3}(\gamma)}\right)\frac{(u^{0})^{2}}{\mathfrak{c}T_{0}}u_{2}\\ \left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{6}{\gamma}-\frac{K_{2}(\gamma)}{K_{3}(\gamma)}\right)\frac{(u^{0})^{2}}{\mathfrak{c}T_{0}}u_{3}\\ \frac{1}{\gamma}-\frac{(u^{0})^{2}}{\gamma T_{0}}\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\left(\frac{K_{3}(\gamma)}{K_{2}(\gamma)}-\frac{6}{\gamma}-\frac{K_{2}(\gamma)}{K_{3}(\gamma)}\right)\frac{|\mathfrak{u}|^{2}}{\gamma T_{0}}\end{array}\right).\]
For \(\zeta\), it follows from \(\langle\chi_{4}^{\mathfrak{c}},\chi_{4}^{\mathfrak{c}}\rangle=1\) that
\[\zeta^{2} =\int_{\mathbb{R}^{3}}(p^{0}/\mathfrak{c}+\sum_{i=1}^{3}\lambda_{i}p_{i}+\varsigma)^{2}\mathbf{M}_{\mathfrak{c}}dp\] \[=\int_{\mathbb{R}^{3}}(\frac{(p^{0})^{2}}{\mathfrak{c}^{2}}+\varsigma^{2}+\sum_{i,j=1}^{3}\lambda_{i}\lambda_{j}p_{i}p_{j}+2\frac{\varsigma}{\mathfrak{c}}p^{0}+2\sum_{i=1}^{3}\lambda_{i}\varsigma p_{i}+2\sum_{i=1}^{3}\frac{\lambda_{i}}{\mathfrak{c}}p^{0}p_{i})\mathbf{M}_{\mathfrak{c}}dp\] \[=\frac{T^{000}}{\mathfrak{c}^{2}}+\varsigma^{2}I^{0}+\sum_{i,j=1}^{3}\lambda_{i}\lambda_{j}T^{0ij}+2\frac{\varsigma}{\mathfrak{c}}T^{00}+2\sum_{i=1}^{3}\lambda_{i}\varsigma T^{0i}+2\sum_{i=1}^{3}\frac{\lambda_{i}}{\mathfrak{c}}T^{00i},\]
which yields that
\[\zeta=\sqrt{\frac{T^{000}}{\mathfrak{c}^{2}}+\varsigma^{2}I^{0}+\sum_{i,j=1}^{3}\lambda_{i}\lambda_{j}T^{0ij}+2\frac{\varsigma}{\mathfrak{c}}T^{00}+2\sum_{i=1}^{3}\lambda_{i}\varsigma T^{0i}+2\sum_{i=1}^{3}\frac{\lambda_{i}}{\mathfrak{c}}T^{00i}}.\]
Consequently, we obtain the desired orthonormal basis of \(\mathcal{N}_{\mathfrak{c}}\).
**Acknowledgments.** Yong Wang's research is partially supported by National Key R&D Program of China No. 2021YFA1000800, National Natural Science Foundation of China No. 12022114, 12288201, CAS Project for Young Scientists in Basic Research, Grant No. YSBR-031, and Youth Innovation Promotion Association of the Chinese Academy of Science No. 2019002. Changguo Xiao's research is partially supported by National Natural Science Foundation of China No. 12361045 and Guangxi Natural Science Foundation (Grant No. 2023GXNSFAA026066).
**Conflict of Interest:** The authors declare that they have no conflict of interest.
|
2309.08989 | RMP: A Random Mask Pretrain Framework for Motion Prediction | As the pretraining technique is growing in popularity, little work has been
done on pretrained learning-based motion prediction methods in autonomous
driving. In this paper, we propose a framework to formalize the pretraining
task for trajectory prediction of traffic participants. Within our framework,
inspired by the random masked model in natural language processing (NLP) and
computer vision (CV), objects' positions at random timesteps are masked and
then filled in by the learned neural network (NN). By changing the mask
profile, our framework can easily switch among a range of motion-related tasks.
We show that our proposed pretraining framework is able to deal with noisy
inputs and improves the motion prediction accuracy and miss rate, especially
for objects occluded over time by evaluating it on Argoverse and NuScenes
datasets. | Yi Yang, Qingwen Zhang, Thomas Gilles, Nazre Batool, John Folkesson | 2023-09-16T13:09:02Z | http://arxiv.org/abs/2309.08989v1 | # RMP: A Random Mask Pretrain Framework for Motion Prediction
###### Abstract
As the pretraining technique is growing in popularity, little work has been done on pretrained learning-based motion prediction methods in autonomous driving. In this paper, we propose a framework to formalize the pretraining task for trajectory prediction of traffic participants. Within our framework, inspired by the random masked models in natural language processing (NLP) and computer vision (CV), objects' positions at random timesteps are masked and then filled in by the learned neural network (NN). By changing the mask profile, our framework can easily switch among a range of motion-related tasks. We show that our proposed pretraining framework is able to deal with noisy inputs and improves the motion prediction accuracy and miss rate, especially for objects occluded over time, by evaluating it on the Argoverse and NuScenes datasets.
## I Introduction
Accurately predicting the motion of road users is essential in autonomous driving systems. This predictive capability provides the planner with a forward-looking perspective on potential movements, thereby enhancing safety measures. While learning-based motion prediction has become increasingly popular in recent research, the exploration of pretraining and self-supervised learning within this field remains relatively limited.
The technique of random masking has demonstrated its effectiveness in various fields, such as natural language processing (NLP) and computer vision (CV), as evidenced by models like BERT [1] and Masked Autoencoders [2] in conjunction with Vision Transformers (ViT [3]). Random masking involves concealing a portion of the data (masking), and then tasking the neural network with predicting the hidden elements, thereby creating a nontrivial and beneficial self-supervisory task. This method employs an asymmetric encoder-decoder architecture, which has proven to be particularly powerful regarding training speed with large datasets. Furthermore, it has demonstrated exceptional performance in transfer learning, particularly in tasks related to image processing.
Inspired by SceneTransformer [4], the motion prediction task is linked with a mask on the future time sequential data of road users. As depicted in Fig. 1, the data for all agents can be represented as a grid, with time and agent forming the two axes. In this context, motion prediction becomes a unique task wherein future states are masked [4]. This leads us to the natural question: _Could random mask pretraining be effectively applied to general motion tasks as well?_ These tasks include motion prediction (marginal, conditional, etc.), occlusion handling, and others. We introduce a straightforward yet potent framework for random masking pretraining (RMP) for motion tasks. Our RMP selectively conceals motion patches, allowing the random mask to capture spatial and social correlations among all agents in a given scenario. This universal framework can be readily integrated into numerous motion prediction methodologies. In this paper, we demonstrate its adaptability by incorporating it into several state-of-the-art models, including Autobots [5] and Hivt [6].
We assess the impact of pretraining on performing three different tasks: motion prediction, conditional motion prediction, and occlusion handling. In the case of conditional motion prediction, not only is the historical information of all agents provided, but also the desired trajectory of the ego vehicle. The network then endeavors to predict the trajectories of all other agents.
In addition to classic motion prediction, we also treat occlusion handling as a separate task to evaluate our proposed framework. In real-world scenarios, occlusions are a common occurrence in which one or more agents are partially or entirely obscured from view. Under such circumstances, predicting the motion of the occluded agents becomes a complex task that can significantly influence the overall performance of the autonomous driving system, especially with occlusions happening over short distances. This is a nontrivial issue that has often not been specifically focused on in practice. For agents whose historical trajectories are partially or heavily occluded, we evaluate the performance of current state-of-the-art networks with and without masking pretraining in an object-based manner.

Fig. 1: **Random Masking for Motion Data.** We treat time-sequential data as one dimension and all agents in the scenario as another, with each cell representing the high-dimensional features of an agent (including position, heading, agent type, agent shape, etc.). _Left_: Motion prediction is a special case where all future timesteps are masked (shown in blue) [4]. _Right_: We apply random masking to a scenario, hiding patches for random agents and random timesteps for pretraining. _Ego_ stands for the ego autonomous vehicle.
Our experimental results indicate that motion prediction benefits from transfer learning and from random masking. Our framework demonstrates effective performance on the Argoverse [7] and NuScenes [8] datasets. Our code will be publicly accessible at [https://github.com/KTH-RPL/RMP](https://github.com/KTH-RPL/RMP).
In this paper, we make the following contributions:
* We introduce a pretraining framework for a range of motion-related tasks.
* We design experiments to validate the effectiveness of random masking.
* We highlight that occlusion handling remains a challenge for current state-of-the-art methods and demonstrate that our pretraining method enhances performance in this area.
## II Related Work
### _Motion Prediction_
Motion prediction has recently advanced rapidly, driven by large open datasets and public benchmarks [7, 8, 9, 10]. Early approaches drew inspiration from successful computer vision techniques, where the map and agents' historical trajectories were rasterized into images using specific color encoding [11, 12, 13]. However, rasterization carries certain limitations, such as the challenge of selecting an optimal Field-Of-View (FOV) due to the high computational cost of high-resolution imaging and the potential for long-distance information loss. An alternative approach to these challenges is using sparse vectors and polygons, as exemplified by VectorNet [14]. Other network architectures that have been explored include Graph Neural Networks [15, 16] and Transformers [4, 17, 6, 18]. The outputs of these representations vary: some generate a set of point trajectories in an end-to-end manner [15, 4, 6], while others generate top-K trajectory samples from anchors [12], heatmaps [19, 20, 21], or kinematic models [22, 23]. Owing to its adaptability, our proposed framework can be effectively incorporated into many of these methods.
### _Self-supervised Learning_
Self-supervised learning methods have garnered substantial interest across various fields, such as NLP and CV [1, 24, 25, 26]. These methods leverage different tasks to initialize network weights in the pretraining phase. For instance, contrastive learning [27, 28] designs tasks that distinguish between similarities and dissimilarities, utilizing both original data samples and their augmented counterparts. The Masked Autoencoder, proposed by [2], reconstructs missing pixels in images during the pretraining phase, resulting in better performance and a training speed that is four times faster than training from scratch. This technique has inspired applications in a variety of domains, such as video [29, 30], 3D point clouds [31], and visual reinforcement learning in robotics [32]. Self-supervised learning for motion prediction in autonomous driving remains largely unexplored. However, in the past year, a few studies have started investigating this area [33, 34, 35, 36]. Perrthana et al. [36] propose a suite of four pretraining tasks, including lane masking, intersection distance calculation, maneuver classification, and success/failure classification. The work most similar to ours is the recent arXiv preprint [37], which shows results similar to our own on one of the tasks we tested (prediction). Our work was developed independently of [37].
### _Conditional Motion Prediction_
Compared to standard motion prediction, conditional motion prediction offers additional information by incorporating specific conditions, such as the intended path of the ego vehicle. For example, the work presented in [38] generates predictions based on hypothetical 'what-if' interactions among road users and lanes. In this way, although their targeted task closely resembles standard motion prediction, it extends the context by incorporating speculative interaction scenarios. Additionally, studies like [39] and [21] adopt a two-step approach in their prediction methodology by first predicting the destination positions, which are then used as conditions for predicting full trajectories. This effectively transforms the prediction task into a conditional one, where the trajectories are predicated on hypothesized destinations.
### _Occlusion Handling_
Handling occlusions in motion prediction is crucial for enhancing the robustness and reliability of autonomous driving systems. A widely adopted representation, the Occupancy Grid Map (OGM), captures the spatial arrangement of obstacles and free space, where each grid cell represents the estimated probability of an agent's presence. Predicting future OGMs allows occluded areas to be accounted for, thus offering a more comprehensive understanding of the environment [40, 41]. Nevertheless, these OGM-based approaches can be computationally expensive, particularly for high-resolution, large, and complex environments. For object-based methods, there has been limited work due to the lack of motion prediction datasets that annotate occluded objects; most datasets are primarily collected from the ego vehicle's perspective [7, 8]. To help mitigate this, we have post-processed the INTERACTION dataset [10], which was captured from bird's-eye-view drones. This has allowed us to estimate occlusion labels for objects, and we openly share the resulting post-processed dataset for further research in this area.
## III Problem Formulation
Consider a scenario including \(N\) agents' trajectories \(A\) over \(T\) timestamps, denoted as \(A_{i}\in\mathbb{R}^{T\times D_{agent}}\), where \(i\in[1,N]\), along with the surrounding road topology \(Map\in\mathbb{R}^{S\times P\times D_{road}}\). Here, \(S\) represents the number of road segments, \(P\) denotes the number of points within a segment, and \(D\) signifies the vector feature dimension that includes position coordinates \(x,y\) and the validity mask for
both \(D_{agent}\) and \(D_{road}\). If the yaw angle, velocity, and size of the agents are provided in the dataset, they are also added to the agent feature vector.
In the context of motion prediction, we are provided with the historical trajectory \(A_{history}\in\mathbb{R}^{T_{obs}\times D_{agent}}\), where \(T_{obs}\) signifies the observed historical timestamps, and our task is to predict the future trajectory \(A_{future}\in\mathbb{R}^{T_{ft}\times D_{agent}}\).
Here, it is worth mentioning that occlusion can complicate this task, as \(A_{history}\) may contain many occluded objects with unknown states. In the case of conditional motion prediction, additional information is taken into account: the historical information is supplemented with the ego vehicle's anticipated future route \(A_{ego}\in\mathbb{R}^{T_{ft}\times D_{agent}}\) (i.e., \(A_{i}\) with \(i\) equal to the index of the ego vehicle), which forms part of the input.
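To make the tensor shapes above concrete, one scenario could be organized as in the following minimal sketch; the class and field names here are our own illustration and are not taken from the paper's code.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Scenario:
    """One training sample: N agents over T timestamps plus the map."""
    agents: np.ndarray   # (N, T, D_agent): x, y, plus optional yaw/velocity/size
    valid: np.ndarray    # (N, T) bool: True where the state is observed
    road: np.ndarray     # (S, P, D_road): P polyline points for each of S segments

    def history(self, T_obs: int) -> np.ndarray:
        """Observed part of the trajectories, A_history."""
        return self.agents[:, :T_obs]

    def future(self, T_obs: int) -> np.ndarray:
        """Part to be predicted, A_future."""
        return self.agents[:, T_obs:]
```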
## IV Methodology
In this section, we outline the strategy employed in our study. Fig. 2 provides an illustration of the complete training framework, and the specifics of the random masking application are outlined in the following sub-sections.
### _Network_
Our approach is an extension of the masked autoencoder [1, 2] for time-sequential trajectory data and aims to provide a simple, yet effective framework that is applicable to many motion prediction methodologies with minimal domain-specific knowledge required.
The framework can accommodate many network architectures in a two-stage process. In the first stage, different masking strategies are applied across all timestamps - both history and future - and across all agents. Given the incomplete waypoints, the model tries to predict \(K\) possible completed trajectories, so the loss function of the original method does not need to be changed. In the second, fine-tuning stage, the network combines the pretrained encoder with a task-specific decoder.
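To make the first stage concrete, a reconstruction step in this spirit could look like the following PyTorch sketch. The winner-takes-all minimum over the \(K\) hypotheses mirrors common multi-modal trajectory losses; all names here are illustrative rather than taken from the released code.

```python
import torch

def pretrain_step(model, scene, optimizer, mask_fn):
    """One self-supervised step: hide random entries, regress them back."""
    traj, valid = scene                      # traj: (N, T, D), valid: (N, T) bool
    keep = mask_fn(valid)                    # (N, T) bool, False = hidden from the model
    inp = traj * keep.unsqueeze(-1).float()  # zero out the unseen parts
    pred = model(inp, keep)                  # K hypotheses: (K, N, T, D)
    err = ((pred - traj.unsqueeze(0)) ** 2).sum(-1)   # (K, N, T)
    loss_mask = (valid & ~keep).unsqueeze(0).float()  # score only valid, hidden entries
    loss = (err * loss_mask).sum(dim=(1, 2)).min()    # winner-takes-all over K
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```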
We test our method on two networks: Autobot-Joint [5] and HiVT [6]. Autobot-Joint is a transformer-based network that uses an axial attention mechanism to learn the temporal and spatial correlations among agents and road topology. HiVT models the local and global context in a translation- and rotation-invariant transformer network.
### _Masking_
By changing the validity mask within the input, the task can easily be switched among trajectory completion (the pretraining task), motion prediction, and conditional prediction. The mask defines which parts can be seen by the network; the features of the unseen parts are further set to zero to guarantee a clean input for the network.
The random masking pretraining incorporates pointwise, patchwise, and time-based strategies, as illustrated in Fig. 3, each serving a distinct purpose. The pointwise approach (Fig. 2(a)) primarily facilitates the learning of interpolation and correlation over a short period from noisy data. In contrast, the patchwise method (Fig. 2(b)) fosters an understanding of interactions over extended periods. Inspired by the masked autoencoder approach to video data [30, 29], each agent's trajectory is divided into non-overlapping patches in space and time given a certain timeframe. The size of these patches is chosen randomly, and patches are masked randomly. The time-based strategy (Fig. 2(c)) simulates scenarios where a sensor might fail abruptly, leading to missing data at random timestamps.
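The following numpy sketch shows one way the three sampling profiles could be generated (the convention True = visible and all helper names are our own illustration, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def pointwise_mask(N, T, ratio=0.5):
    """Hide individual (agent, timestep) entries independently."""
    return rng.random((N, T)) >= ratio            # True = visible

def patchwise_mask(N, T, ratio=0.25, max_patch=10):
    """Hide contiguous temporal patches of random length for random agents."""
    keep = np.ones((N, T), dtype=bool)
    target = int(ratio * N * T)
    for _ in range(4 * N * T):                    # safety bound on attempts
        if (~keep).sum() >= target:
            break
        n = int(rng.integers(0, N))
        length = int(rng.integers(1, max_patch + 1))
        t0 = int(rng.integers(0, T))
        keep[n, t0:t0 + length] = False           # slice clips at T automatically
    return keep

def time_mask(N, T, ratio=0.25):
    """Hide entire timestamps for all agents (sudden sensor dropout)."""
    hidden_t = rng.choice(T, size=int(ratio * T), replace=False)
    keep = np.ones((N, T), dtype=bool)
    keep[:, hidden_t] = False
    return keep
```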
The three tasks - motion prediction, conditional prediction, and occlusion handling - are three special masking cases (Fig. 2). Each task involves the process of prediction, where future trajectories are treated as unknown and masked out. In conditional motion prediction, alongside the full historical data, the future desired path of the ego vehicle is also provided. For occlusion handling, the input data is often incomplete due to occlusions. Since the three tasks correspond to special cases of masking, they can be carried out by adapting the same network architecture accordingly.

Fig. 3: Different mask sampling strategies: (a) random pointwise masking, (b) random patchwise masking for random agents, (c) random masking in time. All show 75% masking in total (in blue); the remaining data (in grey) is fed into the network.

Fig. 2: The pretraining framework. In the first pretraining phase, all agents' information, including the history and future timestamps, is concatenated together. Next, random masking is applied. Then, given incomplete information about agents' positions over time (in grey), where some positions are randomly masked (in blue), the network trains to fill in the missing positions. In the fine-tuning phase, there are three tasks that correspond to three special masking cases. Once trained, the pretrained encoder is used for the different tasks.
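Under the same mask convention as the sketch above, the three fine-tuning tasks reduce to fixed mask profiles. The `observed` array below would come from an occlusion-labeling step such as the one described in Section V-A; as before, these names are illustrative:

```python
import numpy as np

def prediction_mask(valid: np.ndarray, T_obs: int) -> np.ndarray:
    """Standard motion prediction: every agent's future is hidden."""
    keep = valid.copy()
    keep[:, T_obs:] = False
    return keep

def conditional_mask(valid: np.ndarray, T_obs: int, ego: int) -> np.ndarray:
    """Conditional prediction: the ego vehicle's future stays visible."""
    keep = prediction_mask(valid, T_obs)
    keep[ego] = valid[ego]
    return keep

def occlusion_mask(valid: np.ndarray, observed: np.ndarray, T_obs: int) -> np.ndarray:
    """Occlusion handling: the history is further thinned by occlusions."""
    keep = prediction_mask(valid, T_obs)
    keep[:, :T_obs] &= observed[:, :T_obs]
    return keep
```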
## V Experiments
### _Datasets_
We evaluate the efficacy of our pretraining framework for motion and conditional prediction on two widely used datasets: Argoverse [7] and nuScenes [8]. The Argoverse motion forecasting dataset contains \(205,942\) training sequences and \(39,472\) validation sequences. Each sequence includes the positions of all agents over a 5-second period at \(10\) Hz. The task is to predict the subsequent 3-second trajectory based on the initial 2 seconds of past observations, with HD map information provided. The nuScenes dataset consists of \(32,186\) training and \(9,041\) validation sequences. The objective, in this case, is to predict the future 6-second trajectories at a rate of 2 Hz, given the past 2 seconds of trajectory data.
In order to evaluate our model's proficiency in handling occlusions, we leverage the multi-track INTERACTION dataset [10]. This dataset is collected by drones and traffic cameras, which makes it possible to label occluded objects from the perspective of a single vehicle. We auto-labeled occluded objects in the validation dataset based on a randomly designated ego agent. From a bird's-eye view, and given the positions and sizes of all agents, we compute the occupancy grid following [40]. Objects within the occluded region are labeled as _occluded_, as demonstrated in Fig. 4. The network is initially trained using the original training data, after which it is tested on this postprocessed validation dataset. The training thus uses a bird's-eye view without occlusions, while the validation set includes realistic real-world occlusions as seen from the vehicle's perspective.
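A minimal version of such ray-tracing occlusion labeling, in the spirit of the occupancy-grid approach cited above [40], might look like the sketch below; the grid resolution, ray count, and function names are our own choices, not the paper's implementation:

```python
import numpy as np

def occlusion_labels(grid, ego_rc, n_rays=720):
    """Label grid cells hidden from `ego_rc` by occupied cells.

    grid:   (H, W) bool occupancy map (True = occupied by some agent).
    ego_rc: (row, col) cell of the ego agent.
    Returns an (H, W) bool map, True where the cell is occluded.
    """
    H, W = grid.shape
    occluded = np.ones((H, W), dtype=bool)
    r0, c0 = ego_rc
    max_range = int(np.hypot(H, W))
    for ang in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dr, dc = np.sin(ang), np.cos(ang)
        for step in range(max_range):
            r = int(round(r0 + dr * step))
            c = int(round(c0 + dc * step))
            if not (0 <= r < H and 0 <= c < W):
                break
            occluded[r, c] = False                  # visible along this ray
            if grid[r, c] and (r, c) != (r0, c0):
                break                               # cells behind stay occluded
    return occluded
```

Agents whose cells fall entirely inside the returned occluded region at a given timestep are then tagged as _occluded_ for that timestep.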
### _Masking Strategy_
We have conducted extensive testing to assess the impact of different masking strategies on performance. The results of these ablation experiments are presented in Table I, which displays the outcomes of tests utilizing varying mask ratios and profiles for the pretraining task. Interestingly, for pointwise masking, ratios of 50% and 75% yielded superior results. Conversely, for both patchwise and time-only masks, a 25% ratio demonstrated the best performance. Among the tested profiles, point masking proved most effective. Regarding frozen encoder weights, the experiments show that the unfrozen encoder achieves better results (Table I). We also test different encoder sizes. The default Autobot model utilizes 2 sets of axial attention for temporal and social relations (\(\sim\)1,160,320 encoder parameters). Despite extending the encoder to 4 and 6 sets, the larger networks did not improve performance, as demonstrated in Table Ic. This could be attributed to the limited size of the Argoverse 1 dataset.

TABLE I: Ablation experiments on our pretrain framework with the Autobot model on the Argoverse validation dataset. We evaluate the influence of different mask sampling strategies, fine-tuning with or without frozen pretrained encoder weights, and the encoder size. _w/ [P]_ represents the method with our random masking pretraining. The default setting is highlighted in grey.

Fig. 4: Two examples of labeling occluded objects using a ray-tracing occupancy grid map from one vehicle's view. The labeled object tracks are used to evaluate occlusion handling performance. The dark blue occluded agent in the occluded area (grey grids) is blocked by other visible agents (cyan) from the ego vehicle's (teal) view.
To ensure a fair comparison between pretraining and training from scratch, we perform experiments over comparable time periods and on identical devices. As an example, the conditional motion prediction results for the Argoverse dataset (Fig. 6) show that pretraining achieves better results and converges faster. Our experiments also show that the same pretrained network can learn other tasks at a faster rate and with better final results.
### _Motion Prediction_
We have integrated our framework into the nuScenes (Table II) and Argoverse (Table III) datasets for motion prediction. The results indicate that the implementation of random masking pretraining enhances performance. On nuScenes, our approach achieves results comparable to other state-of-the-art methods. Compared to the baseline, the application of random masking shows marked improvements in \(minADE_{5}\), \(minADE_{10}\), and the Top-5 miss rate within 2 meters, with decreases of 3.5%, 6.7%, and 9.1%, respectively. Note that, in order to maintain a fair comparison, the Autobot baseline we utilize does not include ensemble operations, as these are not used in our post-processing steps. On Argoverse, we incorporate two methods, Autobot-Joint and HiVT. Both show a positive impact of masked pretraining, with decreases in \(minADE_{6}\) and \(minFDE_{6}\) of 3.9% and 4.6% for Autobot, and 4.9% and 1.6% for HiVT. Note that for HiVT, we prioritized speed and trained on four GPUs, resulting in lower performance than training on a single GPU; however, our comparison is conducted under the same environment and settings.

| Method | \(minADE_{5}\) ↓ | \(minADE_{10}\) ↓ | Miss Rate (Top 5) ↓ |
| --- | --- | --- | --- |
| GOHOME [20] | 1.42 | 1.15 | 0.57 |
| THOMAS [21] | 1.33 | 1.04 | 0.55 |
| PGP [42] | 1.27 | 0.94 | 0.52 |
| FRM [43] | 1.18 | 0.88 | 0.48 |
| Autobot [5] (baseline, w/o ensemble) | 1.43 | 1.05 | 0.66 |
| Autobot w/ [P] (ours) | 1.38 (3.5%) | 0.98 (6.7%) | 0.60 (9.1%) |

TABLE II: Performance comparison of different models on the nuScenes dataset. Here we use the baseline results of Autobot without ensemble to maintain a fair comparison.

Fig. 5: Qualitative results with our random mask pretrain framework.

Fig. 6: Conditional motion prediction on the Argoverse dataset. Pretraining with fine-tuning is more accurate than training from scratch. The model is Autobot-Joint. The x-axis represents relative wall-clock training time (4\(\times\)A100 GPUs), and the y-axis represents \(minADE_{6}\). The pretraining is done with 75% pointwise masking.
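For reference, \(minADE_{K}\) and \(minFDE_{K}\) take the best of the \(K\) hypotheses per scenario. A compact numpy sketch of the standard definitions (our own illustration, not the benchmarks' evaluation code) is:

```python
import numpy as np

def min_ade_fde(pred: np.ndarray, gt: np.ndarray, miss_thresh: float = 2.0):
    """Standard multi-modal metrics for one scenario.

    pred: (K, T, 2) candidate future trajectories; gt: (T, 2) ground truth.
    Returns (minADE, minFDE, missed), where `missed` follows the common rule
    that the best final displacement exceeds `miss_thresh` meters.
    """
    dist = np.linalg.norm(pred - gt[None], axis=-1)  # (K, T) displacement errors
    min_ade = dist.mean(axis=1).min()                # best average error over K
    min_fde = dist[:, -1].min()                      # best final error over K
    return min_ade, min_fde, bool(min_fde > miss_thresh)
```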
### _Conditional Motion Prediction_
We evaluate conditional motion prediction on the nuScenes and Argoverse datasets, again with Autobot. Given the history information and the ego vehicle's desired future trajectory, the task is to predict all other agents' possible future trajectories. The results for this task are shown in Table IV. For Argoverse, pretraining reduces \(minADE_{6}\) and \(minFDE_{6}\) by 12.0% and 10.2%, respectively. For nuScenes, it reduces \(minADE_{10}\) and \(minFDE_{10}\) by 8.8% and 4.9%. Given that the Argoverse data features higher frequency and more waypoints, it is plausible that random masking performs better as the input size grows.
### _Occlusion Handling_
We use the postprocessed validation INTERACTION dataset to evaluate the efficacy of Autobot in complex scenarios as well as the benefits of random masking. The network is trained with the regular INTERACTION training data. However, during inference, the network can only access the agents' waypoints annotated as visible. Thus, for the agents that are partially occluded (Section V-A), the network can only see an incomplete history. We then measure how well the network can capture such partially occluded agents' future trajectories. As shown in Table V, the use of random masking enhances the network's ability to predict a partially occluded agent's future trajectory, with improvements exceeding 30% for both \(ADE\) and \(FDE\).
The results are not surprising as the pretraining is a sort of random synthetic occlusion (as opposed to the actual realistic occlusions that we model in the validation set). Therefore, the pretrained network has a considerable advantage over a network simply trained with bird's eye view data and no occlusions.
## VI Conclusion
In this paper, we propose a simple and effective random mask pretraining framework that facilitates both motion prediction in general and conditional motion prediction. Furthermore, our framework substantially improves prediction accuracy in occlusion scenarios. Self-supervised learning and masked autoencoders can be explored further with state-of-the-art techniques in the field of motion prediction for autonomous driving. Additionally, exploring new auxiliary tasks within the self-supervised learning domain offers exciting possibilities for further advancements. We expect self-supervised learning to become increasingly beneficial as the volume of motion prediction data expands.
## Acknowledgement
This work\({}^{1}\) was funded by Vinnova, Sweden (research grant). The computations were enabled by the supercomputing resource Berzelius provided by the National Supercomputer Centre at Linköping University and the Knut and Alice Wallenberg Foundation, Sweden.
Footnote 1: We have used ChatGPT for editing and polishing author-written text.
|
2309.11200 | Rotating Alfvén waves in rotating plasmas | Angular momentum coupling between a rotating magnetized plasma and torsional
Alfvén waves carrying orbital angular momentum (OAM) is examined. It is not
only demonstrated that rotation is the source of Fresnel-Faraday rotation - or
orbital Faraday rotation effects - for OAM carrying Alfvén waves, but also
that angular momentum from an OAM carrying Alfvén wave can be transferred to
a rotating plasma through the inverse process. For the direct process, the
transverse structure angular rotation frequency is derived by considering the
dispersion relation for modes with opposite OAM content. For the inverse
process, the torque exerted on the plasma is derived as a function of wave and
plasma parameters. | J. -M. Rax, R. Gueroult, N. J. Fisch | 2023-09-20T10:36:49Z | http://arxiv.org/abs/2309.11200v1 | # Rotating Alfven waves in rotating plasmas
###### Abstract
Angular momentum coupling between a rotating magnetized plasma and torsional Alfven waves carrying orbital angular momentum (OAM) is examined. It is not only demonstrated that rotation is the source of Fresnel-Faraday rotation - or orbital Faraday rotation effects - for OAM carrying Alfven waves, but also that angular momentum from an OAM carrying Alfven wave can be transferred to a rotating plasma through the inverse process. For the direct process, the transverse structure angular rotation frequency is derived by considering the dispersion relation for modes with opposite OAM content. For the inverse process, the torque exerted on the plasma is derived as a function of wave and plasma parameters.
## 1 Introduction
Understanding the effect of rotation on plasma dynamics is essential to a wide range of applications. Besides original efforts motivated by microwave generation in magnetrons (Brillouin, 1945), it has indeed been shown that rotation could enable new approaches to thermonuclear confinement (Rax _et al._, 2017; Wilcox, 1959; Bekhtenev _et al._, 1980; Ochs & Fisch, 2017; Hassam, 1997; Fetterman & Fisch, 2010, 2008). Rotation has also been found to hold promise for developing plasma mass separation applications (Gueroult _et al._, 2017, 2019), either in pulsed plasma centrifuges (Bonnevier, 1966; Krishnan _et al._, 1981) or in steady-state cross-field rotating plasmas (Ohkawa & Miller, 2002; Shinohara & Horii, 2007; Gueroult _et al._, 2014; Fetterman & Fisch, 2011; Gueroult _et al._, 2014), advanced accelerators (Janes, 1965; Janes _et al._, 1965; Thaury _et al._, 2013; Rax & Robiche, 2010) and thrusters (Gueroult _et al._, 2013). But understanding the effect of rotation on plasma dynamics is also essential in a number of environments. Rotation is for instance key to the structure and stability of a number of astrophysical objects (Kulsrud, 1999; Miesch & Toomre, 2009). In light of this ubiquitousness, and because plasma waves are widely used both for control and diagnostics in plasmas, it seems desirable to understand the effect of rotation on wave propagation in plasmas (Gueroult _et al._, 2023). In fact the importance of this task was long recognised in geophysics and astrophysics, leading to extensive studies of low frequency MHD waves in rotating plasmas (Lehnert, 1954; Hide, 1969; Acheson, 1972; Acheson & Hide, 1973; Campos, 2010), and notably of Alfvén waves (Stix, 1992).
Meanwhile, following the discovery that electromagnetic waves carry both spin and orbital angular momentum (Allen _et al._, 1992, 2016; Andrews & Babiker, 2012), there have been numerous theoretical developments on spin-orbit interactions (Bliokh _et al._, 2015) in modern optics, which we note are now being applied to plasmas (Bliokh & Bliokh, 2022). For spin angular momentum (SAM) carrying waves, that is circularly polarised waves,
propagation through a rotating medium is known to lead to a phase-shift between eigenmodes with opposite SAM content (Player, 1976; Gueroult _et al._, 2019, 2020). This phase-shift is then the source of a rotation of polarization or polarization drag (Jones, 1976), as originally postulated by Thomson (Thomson, 1885) and Fermi (Fermi, 1923). For orbital angular momentum (OAM) carrying waves, propagation through a rotating medium is the source of a phase-shift between eigenmodes with opposite OAM content (Götte _et al._, 2007), leading to image rotation or Faraday-Fresnel Rotation (FFR) (Padgett _et al._, 2006).
This azimuthal Fresnel drag of OAM carrying waves, which can be viewed as an orbital Faraday rotation of the amplitude, was first derived (Wisniewski-Barker _et al._, 2014) and observed (Franke-Arnold _et al._, 2011) in isotropic, nongyrotropic media. In contrast, propagation of OAM carrying waves in a rotating anisotropic (gyrotropic) medium poses greater difficulty since the polarization state and the wave vector direction - which are independent parameters for a given wave frequency in an isotropic medium - become coupled. Yet, it was recently shown that FFR also occurs for the high frequency magnetized plasma modes that are Whistler-Helicon and Trivelpiece-Gould modes (Rax & Gueroult, 2021). For such high frequency modes it was found that the main modifications induced by the plasma rotation are associated with the Doppler shift and the Coriolis effect in the dispersion relation. Interestingly, we note that the result that rotation is the source of an azimuthal component of the group velocity of low frequency waves in magnetized plasmas when \(\mathbf{\Omega}\cdot\mathbf{k}\neq 0\) was already pointed out in geophysics and astrophysics (Acheson & Hide, 1973), but the connection to a Faraday-Fresnel rotation of the transverse structure of the wave does not seem to have been made. An added complexity for these low frequency modes is that one must, in addition to anisotropy and gyrotropy, consider the strong coupling to the inertial mode (Lighthill, 1980) that then comes into play. Revisiting this problem, we derive in this study the expression of FFR for low frequency rotating Alfvén waves in a rotating magnetized plasma.
This paper is organised as follows. After briefly recalling the configuration of interest and previous results in the next section, we construct in section 3 the spectrum of low frequency, small amplitude, fluid waves in a magnetized rotating plasma. The set of linearised Euler and Maxwell equations describes an oscillating Beltrami flow-force free field (Chandrasekhar & Prendergast, 1956) whose components are expressed with a cylindrical Chandrasekhar-Kendall (CK) potential (Chandrasekhar & Kendall, 1957; Yoshida, 1991). Then, in section 4, these orbital angular momentum carrying waves are shown to display FFR under the influence of the plasma rotation. Section 5 focuses on the inverse problem, when the orbital angular momentum of the wave is absorbed by the plasma. We derive in this case the torque exerted by this wave on the fluid as a function of the wave and plasma parameters. Finally section 6 summarises the main findings of this study.
## 2 Background
In this study we consider a rotating magnetized plasma column with angular velocity \(\mathbf{\Omega}=\Omega\mathbf{e}_{z}\) and static uniform magnetic field \(\mathbf{B}_{0}=B_{0}\mathbf{e}_{z}\). We write \((r,\theta,z)\) and \((x,y,z)\) the cylindrical and Cartesian coordinates on the cylindrical \((\mathbf{e}_{r},\mathbf{e}_{\theta},\mathbf{e}_{z})\) and Cartesian \((\mathbf{e}_{x},\mathbf{e}_{y},\mathbf{e}_{z})\) bases, respectively. The plasma dynamics is described assuming an inviscid and incompressible fluid model. We classically define the Alfvén velocity \(\mathbf{V}\doteq\mathbf{B}_{0}/\sqrt{\mu_{0}\rho}\) where \(\mu_{0}\) is the permeability of vacuum and \(\rho\) the mass density of the fluid.
In the simple case where \(B_{0}=0\) and \(\Omega\neq 0\) the rotating plasma behaves as an ordinary rotating fluid and inertial waves can propagate. Taking a phase factor
\(\exp j\left(\omega t-k_{\parallel}z-k_{\perp}y\right)\), the dispersion relation for this inertial mode (IM) is (Lighthill, 1980)
\[\omega=\pm 2\Omega k_{\parallel}/\sqrt{k_{\parallel}^{2}+k_{\perp}^{2}}. \tag{1}\]
Conversely, in the case where \(\Omega=0\) but \(B_{0}\neq 0\), Alfven waves can propagate in the magnetized plasma at rest. The dispersion of this torsional mode (TAW) is (Stix, 1992)
\[\omega=\pm B_{0}k_{\parallel}/\sqrt{\mu_{0}\rho}=\pm k_{\parallel}V. \tag{2}\]
Note that compressional Alfvén waves (CAW) are not considered here as we are considering an incompressible plasma. The dispersion of uncoupled TAW and IM is plotted in Fig. 1 in the \(\left(k_{\parallel}V/\omega,k_{\perp}V/\omega\right)\) plane for a given frequency \(\omega\). In this figure the grey zones indicate regions of strong coupling between TAW and IM. Note that for convenience the wave-vector is normalised to \(\omega/V\), even for the unmagnetized IM branch.
In the more general case where both \(B_{0}\neq 0\) and \(\Omega\neq 0\), a strong coupling between IM and TAW modes rearranges the spectrum and gives rise to two new branches (Lehnert, 1954; Acheson & Hide, 1973). Since, as already pointed out by Acheson & Hide (1973), the group velocity of these modes for waves such that \(\boldsymbol{\Omega}\cdot\mathbf{k}\neq 0\) has an azimuthal component, we expect Fresnel-Faraday Rotation as recently identified for Trivelpiece-Gould and Whistler-Helicon high frequency electronic modes (Rax & Gueroult, 2021).
## 3 Rotating Alfven waves in a rotating plasma
In this section we examine the properties of low frequency waves carrying orbital angular momentum in a rotating magnetized plasma.
### Classical modes
Two methods can be used to identify and describe the coupling between the angular momentum of a rotating plasma column and the angular momentum of a wave propagating in this rotating magnetized plasma. One is to consider the transformation laws of the various parameters from the lab frame to a rotating frame. The other is to perform the study in the lab frame starting from first principles. Here we will use the first method, similarly to original contributions on MHD waves in rotating conductive fluids (Lehnert, 1954; Hide, 1969), and solve the perfect MHD dynamics to calculate the rotating plasma linear response for the low frequency branches where the coupling between the fields and the particles is large. By working in the co-rotating frame (R) rather than in the
Figure 1: Uncoupled dispersion of torsional Alfvén waves (TAW) obtained for \(B_{0}\neq 0\) and \(\Omega=0\), and of inertial waves (IM) obtained for \(\Omega\neq 0\) and \(B_{0}=0\).
laboratory frame (L), both the Coriolis force \(2\boldsymbol{\Omega}\times\mathbf{v}\) and the centrifugal forces \(-\boldsymbol{\nabla}\psi\) with \(\psi=-\Omega^{2}r^{2}/2\) must be taken into account.
We model the evolution of the wave velocity field \(\mathbf{v}\left(\mathbf{r},t\right)\) using Euler's equation under the assumption of zero viscosity
\[\frac{\partial\mathbf{v}}{\partial t}+\left(\mathbf{v}\cdot\boldsymbol{\nabla }\right)\mathbf{v}+2\boldsymbol{\Omega}\times\mathbf{v}=-\boldsymbol{\nabla }\left(\frac{P}{\rho}+\psi\right)+\frac{1}{\mu_{0}\rho}\left(\boldsymbol{ \nabla}\times\mathbf{B}\right)\times\left(\mathbf{B}+\mathbf{B}_{0}\right), \tag{1}\]
and the evolution of the wave magnetic field \(\mathbf{B}\left(\mathbf{r},t\right)\) using Maxwell-Faraday's equation under the assumption of perfect conductivity
\[\frac{\partial\mathbf{B}}{\partial t}=\boldsymbol{\nabla}\times\left[\mathbf{v }\times\left(\mathbf{B}+\mathbf{B}_{0}\right)\right], \tag{2}\]
where \(\rho\) is the mass density of the fluid and \(P\) is the pressure. These dynamical relations are completed by the flux conservation law
\[\boldsymbol{\nabla}\cdot\mathbf{B}=0 \tag{3}\]
for the magnetic field and the incompressibility relation
\[\boldsymbol{\nabla}\cdot\mathbf{v}=0 \tag{4}\]
for the velocity field. As already mentioned this last relation will restrict the plasma behaviour to the Alfvenic dynamics associated with torsional waves.
We then consider a small amplitude magnetohydrodynamic perturbation, propagating along and around the \(z\) axis, described by a magnetic perturbation
\[\mathbf{B}\left(r,\theta,z,t\right)=\mathfrak{B}\left(r,\theta,z\right)\exp(j \omega t) \tag{5}\]
with respect to the uniform static magnetic field \(\mathbf{B}_{0}=B_{0}\mathbf{e}_{z}\). The wave frequency \(\omega\) is assumed smaller than the ion cyclotron frequency and larger than the collision frequency to validate the use of the perfect MHD model Eqs.(1, 2). The oscillating magnetic wave \(\mathbf{B}\) is associated with an oscillating hydrodynamic velocity perturbation \(\mathbf{v}\)
\[\mathbf{v}\left(r,\theta,z,t\right)=\mathbf{u}\left(r,\theta,z\right)\exp(j \omega t), \tag{6}\]
with respect to the rotating frame velocity equilibrium \(\mathbf{v}_{0}=\mathbf{0}\). The pressure \(P\) balances the centrifugal force at equilibrium, \(\boldsymbol{\nabla}\left(P+\rho\psi\right)=\mathbf{0}\), and the pressure perturbation is \(p\left(r,\theta,z\right)\exp j\omega t\). To first order in these perturbations the linearisation of Eqs. (1) and (2) gives
\[j\omega\mathbf{u}+2\boldsymbol{\Omega}\times\mathbf{u}=- \boldsymbol{\nabla}\left(p/\rho\right)+\frac{1}{\mu_{0}\rho}\left(\boldsymbol{ \nabla}\times\mathfrak{B}\right)\times\mathbf{B}_{0}, \tag{7}\] \[j\omega\mathfrak{B}=\left(\mathbf{B}_{0}\cdot\boldsymbol{\nabla }\right)\mathbf{u}. \tag{8}\]
Flux conservation and incompressibility provide the two additional conditions
\[\boldsymbol{\nabla}\cdot\mathbf{u}=0, \tag{9}\] \[\boldsymbol{\nabla}\cdot\mathfrak{B}=0. \tag{10}\]
Taking the curl of both Eqs. (7, 8) and eliminating \(\mathfrak{B}\) gives a linear relation for the velocity perturbation
\[\omega^{2}\boldsymbol{\nabla}\times\mathbf{u}+2j\omega\left(\boldsymbol{\Omega }\cdot\boldsymbol{\nabla}\right)\mathbf{u}+\left(\mathbf{V}\cdot\boldsymbol{ \nabla}\right)^{2}\left(\boldsymbol{\nabla}\times\mathbf{u}\right)=\mathbf{0}. \tag{11}\]
Now if one Fourier analyses this velocity perturbation as a superposition of plane waves
\[\mathbf{u}\left(\mathbf{r}\right)\exp j\omega t=\mathbf{u}\exp[j\left(\omega t-\mathbf{k}\cdot\mathbf{r}\right)], \tag{12}\]
that is to say putting the emphasis on the linear momentum dynamics rather than on the angular momentum one, one recovers the two branches of Alfvénic/Inertial perturbations in a rotating plasma (Lehnert, 1954; Acheson & Hide, 1973). Specifically, plugging Eq. (12) into Eq. (11) and then taking the cross product \(j\mathbf{k}\times\) of this algebraic relation, one obtains the dispersion relation
\[\omega^{2}-\left(\mathbf{k}\cdot\mathbf{V}\right)^{2}=\pm 2\omega\left( \boldsymbol{\Omega}\cdot\mathbf{k}\right)/\left|\mathbf{k}\right|. \tag{13}\]
These two branches, which are illustrated in Fig. 2, have been widely investigated within the context of geophysical and astrophysical magnetohydrodynamics models. For short wavelengths the \(\Omega=0\) torsional Alfvén wave (TAW) splits into inertial (IM) and magneto-inertial (MI) waves. For long wavelengths, that is in the grey zone in Fig. 2, inertial terms dominate the dispersion and the IM mode is found to reduce to its zero magnetic field behaviour already shown in Fig. 1. Note finally that the torsional Alfvén wave is recovered for large \(k_{\perp}\), where a local dispersion becomes valid, as opposed to small \(k_{\perp}\), where the large wavelength allows the wave to probe the large scale behaviour of the rotation.
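As a numerical illustration, the short sketch below solves Eq. (13) for the two positive-frequency branches, taking \(\boldsymbol{\Omega}\) and \(\mathbf{B}_{0}\) along \(z\) so that \(\boldsymbol{\Omega}\cdot\mathbf{k}=\Omega k_{\parallel}\) and \(\mathbf{k}\cdot\mathbf{V}=k_{\parallel}V\). The parameter values are illustrative only.

```python
import numpy as np

def branches(k_par, k_perp, Omega, V):
    """Positive roots of omega**2 - (k_par*V)**2 = +/- 2*omega*Omega*k_par/|k|.

    With Omega and B0 both along z, Omega.k = Omega*k_par and k.V = k_par*V.
    Each sign yields a quadratic in omega; keeping the positive root of each
    gives the two coupled branches (MI and IM) of Fig. 2.
    """
    k = np.hypot(k_par, k_perp)          # |k|
    a = Omega * k_par / k                # rotational (Coriolis) term
    s = np.sqrt(a**2 + (k_par * V) ** 2)
    return s + a, s - a                  # '+' branch and '-' branch

# Illustrative values; for Omega -> 0 both branches tend to k_par*V (the TAW)
w_plus, w_minus = branches(k_par=1.0, k_perp=2.0, Omega=0.1, V=1.0)
```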
### Beltrami flow
Instead of this usual procedure using a full Fourier decomposition as given by Eq. (12), we start here by considering a travelling perturbation along \(z\) of the form
\[\mathbf{u}\left(r,\theta,z\right)=\mathbf{w}(r,\theta)\exp(-jk_{\parallel}z). \tag{14}\]
Note that this is analogous to what was already done by Shukla (2012) to study OAM carrying dispersive shear Alfvén waves, though in this earlier study the paraxial approximation and a two-fluid model were used, and the plasma was considered at rest (_i.e._ non-rotating). Plugging Eq. (14) in the dispersion relation for a rotating plasma Eq. (11) gives
\[\boldsymbol{\nabla}\times\mathbf{u}=\mathcal{K}\mathbf{u} \tag{15}\]
where we have defined
\[\mathcal{K}\left(k_{\parallel},\omega\right)\doteq 2\frac{\Omega}{\omega}k_{ \parallel}\left(\frac{k_{\parallel}^{2}V^{2}}{\omega^{2}}-1\right)^{-1}. \tag{16}\]
From Eq. (8) the oscillating magnetic field then writes
\[\omega\mathfrak{B}=-\sqrt{\mu_{0}\rho}k_{\parallel}V\mathbf{u}. \tag{17}\]
The two modes identified in Fig. 2 can be recovered from Eq. (16). More specifically,
Figure 2: Coupled dispersion of magnetoinertial waves (MI) and inertial waves (IM).
for \(k_{\parallel}V>\omega\) Eq. (3.15) describes an Alfvén wave modified by inertial effects. Conversely, for \(k_{\parallel}V<\omega\) Eq. (3.15) describes an inertial wave modified by MHD coupling. In the following we will focus on the Alfvén wave dynamics and thus assume \(\mathcal{K}>0\).
Equation (3.15) is characteristic of a _Beltrami_ flow (Chandrasekhar & Prendergast, 1956). As such \(\mathbf{u}\) can be written in terms of the so called _Chandrasekhar-Kendall_ (CK) potential \(\Phi\)(Chandrasekhar & Kendall, 1957) as
\[\mathbf{u} =\frac{1}{\mathcal{K}}\boldsymbol{\nabla}\times\left(\boldsymbol{ \nabla}\times\Phi\mathbf{e}_{z}\right)+\boldsymbol{\nabla}\times\Phi\mathbf{e} _{z}\] \[=-\left[\frac{1}{\mathcal{K}}\boldsymbol{\nabla}\times\mathbf{e} _{z}\times\boldsymbol{\nabla}+\mathbf{e}_{z}\times\boldsymbol{\nabla}\right]\Phi \tag{3.18}\]
where the CK potential is solution of the scalar Helmholtz equation
\[\Delta\Phi+\mathcal{K}^{2}\Phi=0. \tag{3.19}\]
One verifies that the three components of Eq. (3.18) are independent.
Before examining the structure of OAM carrying modes through the CK potential, two additional results can be obtained from Eq. (3.16). First, for the Fourier decomposition used above, plugging Eq. (3.13) in Eq. (3.16) gives
\[\frac{\mathcal{K}^{2}}{k_{\parallel}^{2}}=1+\frac{k_{\perp}^{2}}{k_{\parallel }^{2}}>1. \tag{3.20}\]
Second, we can derive the dimensionless group-velocity dispersion coefficient
\[\frac{\omega}{\mathcal{K}}\frac{\partial\mathcal{K}}{\partial\omega}=-\frac{k _{\parallel}}{\mathcal{K}}\frac{\partial\mathcal{K}}{\partial k_{\parallel}} =\frac{k_{\parallel}^{2}V^{2}+\omega^{2}}{k_{\parallel}^{2}V^{2}-\omega^{2}} \tag{3.21}\]
which we will use later to explicit the axial wave vector difference for two eigenmodes with opposite OAM content.
### Structure of OAM carrying modes
Because we are interested in waves carrying orbital angular momentum around \(z\) and linear momentum along \(z\), we search for solutions of the form
\[\Phi\left(r,\theta,z\right)=\phi\left(r\right)\exp[-j\left(m\theta+k_{ \parallel}z\right)] \tag{3.22}\]
where \(m\in\mathbb{Z}\) is the azimuthal mode number associated with the orbital angular momentum of the wave. From Eq. (3.19) the radial amplitude of this rotating CK potential \(\phi(r)\) must be solution of the Bessel equation
\[\frac{1}{r}\frac{d}{dr}\left(r\frac{d\phi}{dr}\right)-\frac{m^{2}}{r^{2}}\phi +\left(\mathcal{K}^{2}-k_{\parallel}^{2}\right)\phi=0. \tag{3.23}\]
Since, as shown in Eq. (3.20), \(\mathcal{K}^{2}>k_{\parallel}^{2}\), \(\phi(r)\) is in general a combination of Bessel functions of the first and second kind of order \(m\in\mathbb{Z}\), \(J_{m}\) and \(Y_{m}\). Yet, the finite value of \(\phi\) at \(r=0\) requires restricting the physical solution to Bessel functions of the first kind \(J_{m}\), so that we find
\[\phi\left(r\right)=J_{m}\left(\alpha r\right) \tag{3.24}\]
with the cylindrical dispersion relation
\[\alpha^{2}+k_{\parallel}^{2}=\mathcal{K}^{2}\left(k_{\parallel},\omega\right). \tag{3.25}\]
Note that, like the ordinary plane wave Eq. (3.12) used in the standard analysis, the cylindrical Bessel waves Eq. (3.24) cannot be normalised.
Putting these pieces together one finally gets
\[\mathbf{v} =\left[\frac{1}{\mathcal{K}}\boldsymbol{\nabla}\times\mathbf{e}_{z} \times\boldsymbol{\nabla}+\mathbf{e}_{z}\times\boldsymbol{\nabla}\right]J_{m} \left(\sqrt{\mathcal{K}^{2}-k_{\parallel}^{2}}r\right)\exp[j\left(\omega t-m \theta-k_{\parallel}z\right)]\] \[=-\frac{\omega\mathbf{B}}{\sqrt{\mu_{0}\rho}k_{\parallel}V}. \tag{3.26}\]
The components in the plasma frame of a rotating Alfvén wave with azimuthal mode number \(m\) thus have an amplitude proportional to a combination of Bessel functions of the first kind of orders \(m\) and \(m\pm 1\). All these Bessel functions have the same radial dependence, namely \(\sqrt{\mathcal{K}^{2}\left(k_{\parallel},\omega\right)-k_{\parallel}^{2}}\,r\), where \(\mathcal{K}\left(k_{\parallel},\omega\right)\) is given by Eq. (3.16).
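As an illustration of this mode structure, the sketch below evaluates \(\mathcal{K}\) from Eq. (16) and the common radial Bessel profiles of orders \(m\) and \(m\pm 1\). The function names are illustrative, and the numerical values are arbitrary, chosen only so that \(\mathcal{K}^{2}>k_{\parallel}^{2}\), as Eq. (3.20) guarantees for a propagating mode.

```python
import numpy as np
from scipy.special import jv

def K_beltrami(k_par, omega, Omega, V):
    """Beltrami wavenumber K(k_par, omega) of Eq. (16)."""
    return 2.0 * (Omega / omega) * k_par / ((k_par * V / omega) ** 2 - 1.0)

def radial_profiles(r, m, k_par, omega, Omega, V):
    """Radial amplitudes J_m, J_{m-1}, J_{m+1} of sqrt(K^2 - k_par^2) * r."""
    K = K_beltrami(k_par, omega, Omega, V)
    alpha = np.sqrt(K**2 - k_par**2)  # real whenever K^2 > k_par^2 (Eq. 3.20)
    return jv(m, alpha * r), jv(m - 1, alpha * r), jv(m + 1, alpha * r)

# k_par*V slightly above omega so that K^2 > k_par^2 even for omega >> Omega
r = np.linspace(0.0, 10.0, 200)
Jm, Jm_m1, Jm_p1 = radial_profiles(r, m=4, k_par=1.005, omega=1.0, Omega=0.01, V=1.0)
```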
## 4 Direct rotational Fresnel drag-orbital Faraday rotation
Let us now rewrite these perturbations as seen from the laboratory frame. We use the index \(R\) for the rotating plasma rest frame and \(L\) for the laboratory frame. The Eulerian coordinates \(r\) and \(z\) are unchanged through this change of reference frame, but the azimuthal coordinate \(\theta\) changes, with
\[r\big{|}_{L} =\left.r\right|_{R} \tag{4.1}\] \[z\big{|}_{L} =\left.z\right|_{R}\] (4.2) \[\theta\big{|}_{L} =\left.\theta\right|_{R}+\Omega t. \tag{4.3}\]
Since the axial wave-vector is unchanged \(\left.k_{\parallel}\right|_{R}=\left.k_{\parallel}\right|_{L}\), the phase of the wave in the plasma rest-frame
\[\omega t-k_{\parallel}z\pm m\left.\theta\right|_{R} \tag{4.4}\]
becomes
\[\left(\omega\mp m\Omega\right)t-k_{\parallel}z\pm m\left.\theta \right|_{L} \tag{4.5}\]
in the laboratory frame.
Equipped with these transformations we can now describe the conditions to observe Fresnel-Faraday Rotation. For this we consider two CK potentials describing two Alfven modes with opposite OAM content in the rotating frame \(R\)
\[\left.\Phi_{+}\right|_{R} =J_{m}\left(\alpha r\right)\exp\left(j\left[\left(\omega-m\Omega \right)t-\left(k_{\parallel}-\delta k_{\parallel}\right)z-m\left.\theta\right| _{R}\right]\right), \tag{4.6}\] \[\left.\Phi_{-}\right|_{R} =J_{-m}\left(\alpha r\right)\exp\left(j\left[\left(\omega+m \Omega\right)t-\left(k_{\parallel}+\delta k_{\parallel}\right)z+m\left.\theta \right|_{R}\right]\right).\]
These transform in the CK potentials in the laboratory frame \(L\)
\[\left.\Phi_{+}\right|_{L} =J_{m}\left(\alpha r\right)\exp\left(j\left[\omega t-\left(k_{ \parallel}-\delta k_{\parallel}\right)z-m\left.\theta\right|_{L}\right] \right), \tag{4.7}\] \[\left.\Phi_{-}\right|_{L} =J_{-m}\left(\alpha r\right)\exp\left(j\left[\omega t-\left(k_{ \parallel}+\delta k_{\parallel}\right)z+m\left.\theta\right|_{L}\right]\right),\]
as a result of the rotational Doppler shift \(\left.\theta\right|_{L}=\left.\theta\right|_{R}+\Omega t\). These Alfvén CK potentials \(\left.\Phi_{\pm}\right|_{L}\) can be driven by a multicoil antenna similar to that used to study Whistler-Helicon modes (Stenzel & Urrutia, 2014, 2015, 2015, 2016; Urrutia & Stenzel, 2015, 2016; Stenzel & Urrutia, 2018; Stenzel, 2019). The radial field pattern is then a superposition of \(+m\) and \(-m\) Bessel amplitudes \(J_{\pm m}\left(\alpha r\right)\) where \(\alpha\) is associated with the radial modulation of the antenna currents (Rax & Gueroult, 2021). The antenna then sets both the radial wave-vector \(\alpha\) and the frequency \(\omega\), whereas the axial wave-vectors
are solutions of the rotating frame dispersion relation. From Eq. (3.25)
\[\alpha =\sqrt{\mathcal{K}^{2}\left(k_{\parallel}-\delta k_{\parallel}, \omega-m\Omega\right)-\left(k_{\parallel}-\delta k_{\parallel}\right)^{2}}, \tag{4.8}\] \[\alpha =\sqrt{\mathcal{K}^{2}\left(k_{\parallel}+\delta k_{\parallel}, \omega+m\Omega\right)-\left(k_{\parallel}+\delta k_{\parallel}\right)^{2}}. \tag{4.9}\]
Since we assume \(\omega\gg\Omega\) and \(k_{\parallel}\gg\delta k_{\parallel}\) we can Taylor expand Eq. (4.8) and Eq. (4.9) to get \(\delta k_{\parallel}\), leading to
\[\frac{k_{\parallel}}{\mathcal{K}}\delta k_{\parallel}=\frac{\delta k_{ \parallel}}{2}\frac{\partial\mathcal{K}\left(\omega,k_{\parallel}\right)}{ \partial k_{\parallel}}+\frac{m\Omega}{2}\frac{\partial\mathcal{K}\left( \omega,k_{\parallel}\right)}{\partial\omega}. \tag{4.10}\]
Eq. (3.21) can then be used to finally write the axial wave-vector difference \(\delta k_{\parallel}\) for two modes with the same frequency \(\omega\), the same radial amplitude \(\left|J_{m}\left(\alpha r\right)\right|\) and equal but opposite azimuthal number \(\left|m\right|\) as
\[\frac{\delta k_{\parallel}}{k_{\parallel}}=\frac{1}{2}m\frac{\Omega}{\omega} \frac{1+\frac{k_{\parallel}^{2}V^{2}}{\omega^{2}}}{1-\frac{k_{\parallel}^{2}} {\mathcal{K}^{2}}+\left(1+\frac{k_{\parallel}^{2}}{\mathcal{K}^{2}}\right) \frac{k_{\parallel}^{2}V^{2}}{\omega^{2}}} \tag{4.11}\]
where \(\mathcal{K}\left(k_{\parallel},\omega,\Omega\right)\) is given by Eq. (3.16). This implies that there will be a difference in the axial phase velocities \(\omega/\left(k_{\parallel}\pm\delta k_{\parallel}\right)\) of these two modes, and because these two modes rotate in opposite directions due to their opposite azimuthal mode numbers, the transverse structure of the sum of these modes will rotate. This is the Fresnel drag-orbital Faraday rotation effect. Specifically, if one launches a wave which is a superposition of \(+m\) and \(-m\) modes such that at the antenna location \(z=0\)
\[\left.\Phi\right|_{z=0}=J_{m}\left(\alpha r\right)\left(\exp[j\left(\omega t- m\left.\theta\right|_{L}\right)]+\left(-1\right)^{m}\exp[j\left(\omega t+m\left. \theta\right|_{L}\right)]\right),\]
the wave transverse amplitude rotates as it propagates along \(z>0\) with an angular velocity along the propagation axis
\[\frac{d\theta}{dz}\bigg{|}_{L}=\frac{\delta k_{\parallel}}{m}=\frac{1}{2}\frac{\Omega}{ \omega}k_{\parallel}\mathcal{K}^{2}\frac{k_{\parallel}^{2}V^{2}+\omega^{2}}{k_ {\parallel}^{2}V^{2}\left(\mathcal{K}^{2}+k_{\parallel}^{2}\right)+\omega^{2} \left(\mathcal{K}^{2}-k_{\parallel}^{2}\right)}. \tag{4.12}\]
This CK potential rotation is illustrated in Fig. 3 for the case \(m=4\). Eqs. (4.11, 4.12) quantify the direct Faraday-Fresnel effect for Alfvén waves in rotating plasmas, completing the similar results previously obtained for Trivelpiece-Gould and Whistler-Helicon modes (Rax & Gueroult, 2021). The \(1/m\) factor in Eq. (4.12) comes from the fact that the image constructed from the superposition of \(\pm m\) modes has a \(2m\)-fold symmetry.
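A hedged numerical sketch of this result is given below: it evaluates \(\mathcal{K}\) from Eq. (16) and the image rotation rate \(d\theta/dz\) from Eq. (4.12). The parameter values are illustrative, chosen with \(\omega\gg\Omega\) and \(k_{\parallel}V\) slightly above \(\omega\) so as to sit on the Alfvénic branch with a real radial wavenumber.

```python
import numpy as np

def K_beltrami(k_par, omega, Omega, V):
    """Beltrami wavenumber K(k_par, omega) of Eq. (16)."""
    return 2.0 * (Omega / omega) * k_par / ((k_par * V / omega) ** 2 - 1.0)

def ffr_rotation_rate(k_par, omega, Omega, V):
    """Transverse-pattern rotation d(theta)/dz of Eq. (4.12)."""
    K = K_beltrami(k_par, omega, Omega, V)
    num = 0.5 * (Omega / omega) * k_par * K**2 * (k_par**2 * V**2 + omega**2)
    den = (k_par**2 * V**2 * (K**2 + k_par**2)
           + omega**2 * (K**2 - k_par**2))
    return num / den

# k_par*V just above omega keeps K^2 > k_par^2 (propagating mode) for omega >> Omega
rate = ffr_rotation_rate(k_par=1.005, omega=1.0, Omega=0.01, V=1.0)
```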
To conclude this section, it was shown that besides the Fresnel-Faraday Rotation associated with a phase velocity difference between the \(m\) and \(-m\) modes, there can also be a splitting of the envelope of a \((m,-m)\) wave packet if the group velocities of the co-rotating \((m)\) and counter-rotating \((-m)\) modes differ (Rax & Gueroult, 2021). We note that this second effect is also present here for Alfvén waves in rotating plasmas. Indeed, given a radial wave-vector \(\alpha\) the dispersion relation is \(\mathcal{D}=\mathcal{K}^{2}-k_{\parallel}^{2}-\alpha^{2}=0\), so that from Eq. (3.21) the axial group velocity is given by
\[-\frac{\partial\mathcal{D}}{\partial k_{\parallel}}/\frac{\partial\mathcal{D}} {\partial\omega}=\frac{k_{\parallel}}{\mathcal{K}\partial\mathcal{K}/\partial \omega}-\frac{\omega}{k_{\parallel}}. \tag{4.13}\]
and one verifies from Eq. (3.16) that the group velocity for a mode \((k_{\parallel}+\delta k_{\parallel},m)\) and that for a mode \((k_{\parallel}-\delta k_{\parallel},-m)\) are different. Rather than deriving here an explicit formula for
the Fresnel-Faraday splitting, we consider in the next section the inverse Fresnel-Faraday effect associated with wave absorption.
## 5 Inverse rotational Fresnel drag and angular moment absorption
In a perfectly conducting inviscid plasma there is no power absorption. The power exchange between the oscillating electromagnetic field and the plasma is purely reactive. To obtain an irreversible (active) angular momentum absorption, one needs a dissipative mechanism. Two different wave orbital angular momentum absorption mechanisms can be considered. One is resonant collisionless absorption, the other is collisional absorption. The former was recently studied in Rax _et al._ (2023) through quasilinear theory and will not be considered here. Instead, we consider in this section a weakly dissipative plasma where the ideal MHD hypothesis of perfect conductivity is relaxed and the inviscid assumption of zero viscosity no longer applies. In both cases, collisional or collisionless, each time an energy \(\delta\mathcal{U}\) is absorbed by the plasma, an axial angular momentum \(\delta L=m\delta\mathcal{U}/\omega\) is also absorbed by the plasma (Rax _et al._, 2017, 2023). The rate of decay of the wave angular momentum is hence equal to the wave induced density of torque on the plasma \(\Gamma=dL/dt\). In steady-state, this angular momentum transfer is balanced by viscous damping of the velocity shear and Ohmic dissipation of the radial charge polarisation sustaining the rotation. This dissipation is larger in the collisional case considered here than in the collisionless regime considered in Rax _et al._ (2023).
Specifically, we introduce two dissipative collisional couplings to our dissipation-less system Eqs. (19, 20), namely a finite viscosity \(\rho\mu\) and a finite resistivity \(\mu_{0}\eta\). We follow the notation of Taylor (1989) (devoted to Alfvén wave helicity absorption) and introduce the magnetic diffusion coefficient \(\eta\) and the kinematic viscosity \(\mu\). Ohm's law then reads \(\mathbf{E}+\mathbf{v}\times\mathbf{B}=\mu_{0}\eta\mathbf{j}\) and the system Eqs. (19, 20) becomes
\[j\omega\mathbf{u}+2\boldsymbol{\Omega}\times\mathbf{u}=- \boldsymbol{\nabla}\left(p/\rho\right)+\frac{1}{\mu_{0}\rho}\left(\boldsymbol{ \nabla}\times\mathfrak{B}\right)\times\mathbf{B}_{0}+\mu\Delta\mathbf{u}, \tag{23}\] \[j\omega\mathfrak{B}=\left(\mathbf{B}_{0}\cdot\boldsymbol{\nabla }\right)\mathbf{u}+\eta\Delta\mathfrak{B}. \tag{24}\]
Since we assume weak dissipation, the resistive term \(\eta\Delta\mathbf{B}\) in Maxwell-Faraday's equation and the viscous term \(\mu\Delta\mathbf{u}\) in the Navier-Stokes equation can be evaluated with the dispersive properties of the non-dissipative dispersion relation. Within the bounds of this perturbative expansion scheme (\(\mathcal{K}^{2}\eta\ll\omega\) and \(\mathcal{K}^{2}\mu\ll\omega\)), and for the perturbation \(\mathbf{u}\left(r,\theta,z\right)=\mathbf{w}(r,\theta)\exp(-jk_{\parallel}z)\) already given in Eq. (20), we get from Eqs. (21, 22)
Figure 3: Fresnel drag-Faraday rotation of the Chandrasekhar-Kendall potential describing an Alfvén-Beltrami wave with \(m=\pm 4\) after a propagation along a path \(z=\pi/4\left(d\theta/dz\right)\).
the non-dissipative Laplacians
\[\varDelta\mathbf{u} =-\mathcal{K}^{2}\mathbf{u}, \tag{100}\] \[\varDelta\mathfrak{B} =-\mathcal{K}^{2}\mathfrak{B}. \tag{101}\]
Plugging these results into Eqs. (102, 103) yields the system
\[j\omega\mathbf{u}+2\boldsymbol{\Omega}\times\mathbf{u}=- \boldsymbol{\nabla}\left(p/\rho\right)+\frac{1}{\mu_{0}\rho}\left(\boldsymbol{ \nabla}\times\mathfrak{B}\right)\times\mathbf{B}_{0}-\mathcal{K}^{2}\mu \mathbf{u}, \tag{102}\] \[j\omega\mathfrak{B}=\left(\mathbf{B}_{0}\cdot\boldsymbol{\nabla }\right)\mathbf{u}-\mathcal{K}^{2}\eta\mathfrak{B} \tag{103}\]
where now viscous and resistive dissipation introduce a local relaxation.
We then take the rotational of the first equation and eliminate \(\mathfrak{B}\) using the second equation to get
\[\left[\left(j\omega+\mathcal{K}^{2}\mu\right)\left(j\omega+\mathcal{K}^{2} \eta\right)\right]\boldsymbol{\nabla}\times\mathbf{u}+2j\left(j\omega+ \mathcal{K}^{2}\eta\right)k_{\parallel}\varOmega\mathbf{u}+k_{\parallel}^{2 }V^{2}\boldsymbol{\nabla}\times\mathbf{u}=\mathbf{0}. \tag{104}\]
After some algebra we find that the linearised dissipative regime of velocity and field low frequency oscillations is now described by
\[\boldsymbol{\nabla}\times\mathbf{u}=\left[\mathcal{K}_{R}\left(k_ {\parallel},\omega\right)-j\mathcal{K}_{I}\left(k_{\parallel},\omega\right) \right]\mathbf{u} \tag{105}\] \[\left(\omega-j\mathcal{K}^{2}\eta\right)\mathbf{B}=-\sqrt{\mu_{0 }\rho}k_{\parallel}V\mathbf{u} \tag{106}\]
rather than by the collisionless Eqs. (11, 12), where we have defined the two real wave-vectors \(\mathcal{K}_{R}\approx\mathcal{K}\gg\mathcal{K}_{I}\) through
\[\mathcal{K}_{R}\left(k_{\parallel},\omega\right)-j\mathcal{K}_{I}\left(k_{ \parallel},\omega\right)=2\varOmega\frac{\left(\omega-j\mathcal{K}^{2}\eta \right)k_{\parallel}}{k_{\parallel}^{2}V^{2}-\left(\omega-j\mathcal{K}^{2} \mu\right)\left(\omega-j\mathcal{K}^{2}\eta\right)}. \tag{107}\]
We then consider an initial value problem with a weakly decaying wave of the form
\[\mathbf{v}=\mathbf{u}\exp\left[j\left(\omega+j\nu\right)t\right] \tag{108}\]
with \(\omega\gg\nu\), and with the structure
\[\mathbf{v}=\left[\frac{1}{\mathcal{K}_{R}-j\mathcal{K}_{I}} \boldsymbol{\nabla}\times\mathbf{e}_{z}\times\boldsymbol{\nabla}+\mathbf{e}_ {z}\times\boldsymbol{\nabla}\right]J_{m}\left(\alpha r\right)\exp\left(j\left[ \left(\omega+j\nu\right)t-m\theta-k_{\parallel}z\right]\right) \tag{109}\]
where \(\alpha\) is a real number, \(\omega\) and \(k_{\parallel}\) are given, and the damping rate \(\nu\left(\omega,k_{\parallel},\mathcal{K}\right)\) is to be determined from the weak dissipation expansion of the dispersion relation
\[\alpha^{2}+k_{\parallel}^{2}=\left[\mathcal{K}_{R}\left(k_{\parallel},\omega+ j\nu\right)-j\mathcal{K}_{I}\left(k_{\parallel},\omega+j\nu\right)\right]^{2} \tag{110}\]
obtained by plugging this solution in Eq. (105). Taylor expanding this last relation for \(\nu\ll\omega\), the lowest order real part gives the collisionless dispersion
\[\alpha^{2}\left(k_{\parallel},\omega\right)=\mathcal{K}_{R}^{2}\left(k_{ \parallel},\omega\right)-k_{\parallel}^{2}\approx\mathcal{K}^{2}\left(k_{ \parallel},\omega\right)-k_{\parallel}^{2} \tag{111}\]
while the lowest order imaginary part gives a relation for the decay rate \(\nu\)
\[\nu\left(k_{\parallel},\omega\right)\frac{\partial\mathcal{K}_{R}\left(\omega \right)}{\partial\omega}=\mathcal{K}_{I}\left(k_{\parallel},\omega\right) \approx\frac{\mathcal{K}^{3}}{\omega}\left[\eta+\left(\mu+\eta\right)\left( \frac{k_{\parallel}^{2}V^{2}}{\omega^{2}}-1\right)^{-1}\right]. \tag{112}\]
Here we took \(\partial\mathcal{K}_{R}/\partial\omega\approx\partial\mathcal{K}/\partial\omega\) and used Eq. (103).
Finally, Eq. (112) can be used to write an equation for the evolution of the wave energy density \(\mathcal{U}\)
\[\frac{d\mathcal{U}}{dt}=-2\nu\mathcal{U}=-2\mathcal{K}_{I}\left(\frac{\partial \mathcal{K}_{R}}{\partial\omega}\right)^{-1}\mathcal{U}. \tag{113}\]
For a rotating Alfven wave, this energy density \(\mathcal{U}\) has three distinct components
\[\mathcal{U}=\frac{\left\langle B^{2}\right\rangle}{2\mu_{0}}+\frac{\varepsilon_{0 }}{2}\left\langle\left(\mathbf{v}\times\mathbf{B}_{0}\right)^{2}\right\rangle+ \frac{\rho}{2}\left\langle v^{2}\right\rangle \tag{17}\]
where \(\left\langle\cdot\right\rangle\) indicates an average over the fast \(\omega\) oscillations. The first term on the right hand side is the magnetic energy, the second term is the electric energy and the third term is the kinetic energy. This energy density can be rewritten using the Alfvén velocity \(V\) and the velocity of light \(c\) as
\[\mathcal{U}=\frac{\rho}{2}\left[\left\langle\mathbf{v}^{2}\right\rangle\left( 1+\frac{V^{2}}{c^{2}}\left(1+\frac{k_{\parallel}^{2}c^{2}}{\omega^{2}}\right) \right)-\left\langle\left(\mathbf{v}\cdot\frac{\mathbf{V}}{c}\right)^{2} \right\rangle\right]. \tag{18}\]
Combining Eq. (16), Eq. (17) and the relation between energy and angular momentum absorption, one finally gets
\[\Gamma=2\rho\frac{m}{\omega}\mathcal{K}_{I}\left(\frac{\partial\mathcal{K}_{ R}}{\partial\omega}\right)^{-1}\left[\left\langle\mathbf{v}^{2}\right\rangle \left(1+\frac{V^{2}}{c^{2}}\left(1+\frac{k_{\parallel}^{2}c^{2}}{\omega^{2}} \right)\right)-\frac{\left\langle\left(\mathbf{v}\cdot\mathbf{V}\right)^{2} \right\rangle}{c^{2}}\right] \tag{19}\]
where \(\mathcal{K}_{R}\) and \(\mathcal{K}_{I}\) are given by Eq. (10) and \(\mathbf{v}\) is given by Eq. (26).
## 6 Conclusion
Building on previous contributions studying Alfvén waves in rotating plasmas in geophysical and astrophysical settings, we have examined here the dynamics of orbital angular momentum (OAM) carrying torsional Alfvén waves in a rotating plasma. It is found that two new couplings between the orbital angular momentum of the Alfvén waves and the angular momentum of the rotating plasma exist.
One is Fresnel-Faraday rotation (FFR), that is a rotation of the transverse structure of the wave due to the medium's rotation, which had already been predicted for the high frequency electronic modes that are Trivelpiece-Gould and Whistler-Helicon modes (Rax & Gueroult, 2021). Extending these earlier contributions, direct FFR for torsional Alfvén waves in a rotating plasma is described by Eqs. (11) and (12). It is the orbital angular momentum analog of the polarization drag effect for spin angular momentum waves (Jones, 1976; Player, 1976). An important distinction found here, though, is that while rotation did not introduce new high frequency modes, so that FFR for Trivelpiece-Gould and Whistler-Helicon modes was simply the consequence of the interplay between the Coriolis force and the rotational Doppler shift (Rax & Gueroult, 2021), the strong coupling to the inertial mode that exists for Alfvén waves in rotating plasmas complicates this picture.
The second coupling is the inverse effect through which the OAM carrying wave exerts a torque on the plasma. Inverse FFR is described by Eqs. (10) and (19). This inverse effect is akin to the spin angular momentum inverse Faraday effect but for the orbital angular momentum of the wave. It is found that for a plasma with non-zero collisional absorption the damping of an OAM carrying wave is the source of a torque on the plasma.
Looking ahead, these results suggest that direct FFR could in principle be used to diagnose plasma rotation with Alfvén waves. Conversely, it may be possible to utilise inverse FFR to sustain plasma rotation through Alfvén wave angular momentum absorption. The detailed analysis of these promising prospects is left for future studies.
## Acknowledgments
The authors would like to thank Dr. E. J. Kolmes, I. E. Ochs, M. E. Mlodik and T. Rubin for constructive discussions.
## Funding
This work was supported by the U.S. Department of Energy (N. J. F., ARPA-E Grant No. DE-AR001554); and the French National Research Agency (R. G., grant number ANR-21-CE30-0002). JMR acknowledges Princeton University and the Andlinger Center for Energy + the Environment for the ACEE fellowship which made this work possible.
## Declaration of interests
The authors report no conflict of interest.
|
2309.09329 | A Few-Shot Approach to Dysarthric Speech Intelligibility Level
Classification Using Transformers | Dysarthria is a speech disorder that hinders communication due to
difficulties in articulating words. Detection of dysarthria is important for
several reasons as it can be used to develop a treatment plan and help improve
a person's quality of life and ability to communicate effectively. Much of the
literature focused on improving ASR systems for dysarthric speech. The
objective of the current work is to develop models that can accurately classify
the presence of dysarthria and also give information about the intelligibility
level using limited data by employing a few-shot approach using a transformer
model. This work also aims to tackle the data leakage that is present in
previous studies. Our whisper-large-v2 transformer model trained on a subset of
the UASpeech dataset containing medium intelligibility level patients achieved
an accuracy of 85%, precision of 0.92, recall of 0.8 F1-score of 0.85, and
specificity of 0.91. Experimental results also demonstrate that the model
trained using the 'words' dataset performed better compared to the model
trained on the 'letters' and 'digits' dataset. Moreover, the multiclass model
achieved an accuracy of 67%. | Paleti Nikhil Chowdary, Vadlapudi Sai Aravind, Gorantla V N S L Vishnu Vardhan, Menta Sai Akshay, Menta Sai Aashish, Jyothish Lal. G | 2023-09-17T17:23:41Z | http://arxiv.org/abs/2309.09329v1 | # A Few-Shot Approach to Dysarthrie Speech Intelligibility Level Classification Using Transformers
###### Abstract
Dysarthria is a speech disorder that hinders communication due to difficulties in articulating words. Detection of dysarthria is important for several reasons as it can be used to develop a treatment plan and help improve a person's quality of life and ability to communicate effectively. Much of the literature focused on improving ASR systems for dysarthric speech. The objective of the current work is to develop models that can accurately classify the presence of dysarthria and also give information about the intelligibility level using limited data by employing a few-shot approach using a transformer model. This work also aims to tackle the data leakage that is present in previous studies. Our whisper-large-v2 transformer model trained on a subset of the UASpeech dataset containing medium intelligibility level patients achieved an accuracy of 85%, precision of 0.92, recall of 0.8, F1-score of 0.85, and specificity of 0.91. Experimental results also demonstrate that the model trained using the 'words' dataset performed better compared to the model trained on the 'letters' and 'digits' dataset. Moreover, the multiclass model achieved an accuracy of 67%.
Dysarthria, UA-Speech, Whisper-large-v2, Few Shot Learning, PEFT, LORA, Transfer Learning, Voice Pathology
## I Introduction
Dysarthria, a neuro-motor impairment affecting speech articulation and coordination, significantly impacts an individual's ability to produce coherent and intelligible verbal communication. In [1], F. Rudzicz et al. claimed that dysarthria arises from congenital conditions or traumatic events that impact the neuromotor system involved in speech production. The congenital causes of dysarthria encompass conditions like brain asphyxiation during birth, which result in long-term speech impairments. On the other hand, traumatic causes of dysarthria include events such as stroke, cerebral palsy, multiple sclerosis, Parkinson's disease, myasthenia gravis, and amyotrophic lateral sclerosis (ALS). Individuals with dysarthria encounter difficulties related to articulation, speech rate, breath control, resonance, and overall communication [2, 3, 4]. These challenges can result in diminished comprehensibility, limited expressive abilities, and obstacles in social interactions.
The field of dysarthria research has seen advancements in automatic speech recognition (ASR) systems [5] for aiding individuals with dysarthria in communication. However, the automatic classification of dysarthria and its severity levels remains limited. Using the Frenchay Dysarthria Assessment [6], doctors undertake perceptual evaluations of speech to determine the kind and severity of the disorder. Subjective assessments by clinicians are costly, time-consuming, and prone to biases, raising concerns about their reliability. This motivates the development of an impartial objective technique for evaluating dysarthric speech.
More and more researchers are employing deep learning and machine learning algorithms to develop automatic dysarthria identification in order to objectively and reliably identify individuals with the condition. Many researchers extract characteristics from voice signals using various feature extraction techniques [7]. For example, Stephanie et al. [8]
used Teager Energy Operator (TEO) and glottal waveform features. Chitralekha et al. [9] utilized audio descriptors or features that are often used to determine the timbre of musical instruments. Dong et al. [10] and Amlu et al. [11] used MFCC-based features. N.P. Narendra et al. [12] used two sets of glottal features and acoustic features. Then, deep learning and machine learning techniques, including convolutional neural networks (CNNs), artificial neural networks (ANNs), CNN-LSTM (long short-term memory), CNN-GRU (Gated Recurrent Unit), SVM, and other models, are used to detect dysarthria.
This research aims to develop an automatic tool that leverages vocal acoustics to detect the presence of dysarthria and accurately determine its severity level. Additionally, we investigate the efficacy of different speech tasks, such as words, letters, and digits, in training the detection model. Furthermore, we explore the feasibility of employing transformer models in pathology detection, specifically dysarthria, utilizing few-shot transfer learning techniques [13]. The training process utilizes a portion of the UASpeech Dataset [14], while the remaining dataset is reserved for testing purposes. Log Mel spectrogram features are extracted from the audio files and are employed for training the Whisper Model [15], a large model trained on 680,000 hours of multilingual audio data procured from the internet. The whisper model family comprises five models of varying sizes; the large variant, which has 1550 million parameters, was considered in this research. Considering the computational complexity involved in training models of this size, various efficient training approaches were considered, and LoRA [16] was used to make the training process efficient and cost-effective.
The rest of the paper is organized as follows. Section 2 describes related works while Section 3 gives a detailed description of the methodology used. Section 4 presents the results and discussion and we conclude in Section 5.
## II Related Works
There have been numerous techniques and models developed to predict the presence of dysarthria. Some of the approaches are discussed in this section and Table I presents the overview of the literature review.
In [8], Stephanie et al. employed a cross-database training strategy to distinguish speech samples with and without dysarthria. Specifically, they trained their model on the UA-Speech database and evaluated its performance on the AMSDC database. To mitigate the issue of repeated speech samples from the same individual, one channel per participant was randomly selected for analysis. Their analysis includes features based on the Teager Energy Operator (TEO) and the glottal waveform in addition to conventional spectral and prosodic features. Baseline results employing prosodic features on the UA-Speech dataset maximize word- and participant-level accuracy at 75.3% and 92.9%, respectively, while UA-Speech cross-training evaluated on the AMSDC maximizes word- and participant-level accuracy at 71.3% and 90%, respectively, based on TEO features.
In [9], Chitralekha et al. adopted audio descriptors or features commonly employed to characterize the timbre of musical instruments and adapted them for the purpose of their study. They utilized a dataset consisting of dysarthric utterances, including utterances associated with 10 digits and 19 computer commands, collected from all patients. Features based on multi-tapered spectral estimates were calculated and employed for classification. With the use of the TORGO database and the Universal Access dysarthric speech corpus, an Artificial Neural Network (ANN) was trained to categorize speech into different severity levels. For the UA speech corpus and the TORGO database, average classification accuracy was 96.44% and 98.7%, respectively.
In [10], Dong et al. used features based on MFCC coefficients. They utilized a dataset consisting of dysarthric utterances, including utterances associated with the numbers 1 to 10 and the 26 letters, collected from all patients, and they used Convolutional Neural Networks (CNNs) and Gated Recurrent Units (GRUs) to enable faster dysarthria detection. Their experimental results demonstrate that the CNN-GRU model achieves an accuracy of 98.38%, surpassing the performance of other models like CNN, LSTM, and CNN-LSTM.
In [11], Amlu et al. employ deep neural networks (DNN), convolutional neural networks (CNN), gated recurrent units (GRU), and long short-term memory (LSTM) networks to classify the severity of dysarthric speech. Mel frequency cepstral coefficients (MFCCs) and their derivatives are the features used in this investigation. For the UA-Speech database, they used 4,500 test files and 6,975 training files. Using the UA-Speech corpus and the TORGO database, the findings show that the DNN gave 93.97% accuracy for speaker-dependent scenarios and 49.22% for speaker-independent scenarios.
In [12], N.P. Narendra et al. suggested a novel technique for classifying dysarthric speech from coded telephone voice using glottal characteristics. Each speaker's spoken utterances were utilized, with glottal features calculated using a glottal inverse filtering technique based on deep neural networks. The openSMILE toolbox is used to integrate the glottal information (time- and frequency-domain parameters and PCA-based parameters) with acoustic characteristics. Glottal and acoustic features are used to train both separate and combined support vector machine classifiers. Studies using the TORGO and UA-Speech databases show that the glottal features produced a classification accuracy range of 63-77%. In [17], Amlu Anna Joshy et al. also classified dysarthria using multi-head attention.
It was clear from the above literature that many methods had data leakage, as audio files from the same patient were split across train and test sets. Moreover, there has not been much research conducted on few-shot learning techniques for pathology classification, which is important because the amount of audio data for pathology tasks is limited. The novelty of this work lies in exploring the effectiveness of the few-shot learning approach using transformer models like whisper
large-v2 for dysarthria detection and comparing which dataset task (Words or letters and digits) performs the best.
## III Methodology
### _Dataset_
The goal of the UA-Speech database [14] is to encourage the creation of user interfaces for talkers who have spastic dysarthria and severe neuromotor diseases. It consists of isolated-word recordings made using a 7-channel microphone array mounted on top of a computer display from 15 dysarthric speakers and 13 control speakers. The age, gender, and speech intelligibility of the dataset's dysarthric speakers are presented in Table II.
Each patient has a total of 765 files, comprising 10 digits with 3 repetitions, 26 letters with 3 repetitions, 19 computer commands with 3 repetitions, 100 common words with 3 repetitions, and 300 uncommon words with 1 repetition.
For the various experiments conducted in this study, various subsets of the dataset are considered. First, a dataset is prepared for building binary classification models. This dataset is constructed using only the common and uncommon words of the speakers: a single repetition of the common words (100 words) and all uncommon words (300 words) are combined. In order to avoid data leakage, files from two control patients and two pathology patients are used for training, and files from all other patients are used for testing. The training set contained a total of 1,600 audio files (800 control and 800 pathology) and the test set contained a total of 9,600 files (4,400 control and 5,200 pathology). Various experiments are conducted by considering pathology patients with various intelligibility levels. A detailed description of this data is presented in Table III.
To determine which dataset task gives better accuracy for multiclass models, a new dataset is created using the letters and numbers audio files. Each patient had 36 files (26 letters + 10 numbers), and again, to avoid data leakage, only two patients per class were included in the training set, with all other patients in the test set. The training set contained a total of 360 audio files (72 control and 288 pathology) and the test set contained a total of 648 audio files (396 control and 252 pathology). A detailed description of the multiclass dataset is presented in Table V.
All of the input audio samples are resampled to 16,000 Hz for data preprocessing, and a representation of an 80-channel log magnitude Mel spectrogram is produced on 25-millisecond windows with a stride of 10 milliseconds. The whisper models are trained using this representation of the preprocessed data.
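A sketch of this preprocessing using the Hugging Face implementation of Whisper's log-Mel front end is shown below. The file name is a placeholder, and the use of librosa for resampling is our assumption; any 16 kHz mono waveform works.

```python
import librosa
from transformers import WhisperFeatureExtractor

# Reproduces Whisper's 80-channel log-Mel front end
# (25 ms windows, 10 ms hop), padding/trimming audio to 30 s.
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-large-v2")

audio, sr = librosa.load("uaspeech_sample.wav", sr=16000)  # resample to 16 kHz
features = feature_extractor(audio, sampling_rate=16000, return_tensors="pt")
log_mel = features.input_features  # tensor of shape (1, 80, 3000)
```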
### _Whisper Model_
Whisper [15] is an Automatic Speech Recognition (ASR) system developed by OpenAI. It was trained using 680,000 hours of supervised, multilingual, and multitasking web data. The architectural parameters of the whisper family models are presented in Table VI. Since whisper was trained to achieve high-quality results in a zero-shot setting, it is very powerful and able to handle a wide range of tasks, including speech recognition. Whisper is an encoder-decoder architecture, but since the task at hand requires only the encoder part of the model, we extracted it and added a classification head, as seen in Fig. 1. Given the log Mel spectrogram of a speech sample uttered by a subject, the classification head predicts the probability that the subject has dysarthria, and also its level in the case of multiclass classification.
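The sketch below shows one way to realize this architecture with the Hugging Face transformers library. The mean pooling over encoder frames and the single linear head are our assumptions; the paper does not detail the head design.

```python
import torch
import torch.nn as nn
from transformers import WhisperModel

class WhisperEncoderClassifier(nn.Module):
    """Whisper encoder with a classification head, as in Fig. 1."""

    def __init__(self, model_name="openai/whisper-large-v2", num_classes=2):
        super().__init__()
        self.encoder = WhisperModel.from_pretrained(model_name).encoder
        hidden = self.encoder.config.d_model  # 1280 for whisper-large-v2
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, input_features):
        # input_features: log-Mel tensor of shape (batch, 80, 3000)
        states = self.encoder(input_features).last_hidden_state  # (batch, frames, d_model)
        pooled = states.mean(dim=1)  # average over time frames
        return self.head(pooled)     # class logits

model = WhisperEncoderClassifier(num_classes=2)
logits = model(torch.randn(1, 80, 3000))  # dummy log-Mel input
```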
### _PEFT and LoRA_
Training large language models typically requires huge clusters of GPUs and vast amounts of data. In order to
make training accessible for everyone, various techniques have been explored. Parameter-efficient fine-tuning (PEFT) [18] selectively updates a subset of the model's parameters, specifically targeting the most influential ones for the new task. This approach significantly reduces the computational resources required for fine-tuning, resulting in improved efficiency without compromising performance. By focusing on updating only the essential parameters, we ensured effective training while minimizing unnecessary computations.
Among the various methods included in PEFT, LoRA (Low-Rank Adaptation of Large Language Models) [16], developed by Microsoft, is by far the most popular. It fine-tunes large language models (LLMs) by freezing the pretrained weights and learning only a small set of task-specific parameters: the weight update is represented as a product of two low-rank matrices, so the model can be efficiently fine-tuned while maintaining performance.
We utilized INT8 tuning along with PEFT, LoRA and bitsandbytes [19]. This approach optimized memory usage and improved training efficiency, allowing us to overcome the memory limitations and successfully train our model.
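A minimal sketch of this setup is given below, assuming recent versions of the transformers, peft, and bitsandbytes libraries (older peft releases name the preparation helper prepare_model_for_int8_training). The choice of target modules, lora_alpha, and dropout are our assumptions; the rank of 32 matches the value reported in the Training section.

```python
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import WhisperModel

# Load the backbone in 8-bit precision via bitsandbytes, then attach
# rank-32 LoRA adapters; only adapter (and head) weights are trained.
backbone = WhisperModel.from_pretrained(
    "openai/whisper-large-v2", load_in_8bit=True, device_map="auto"
)
backbone = prepare_model_for_kbit_training(backbone)

lora_config = LoraConfig(
    r=32,                                 # projection rank used in this work
    lora_alpha=64,                        # scaling factor (assumed)
    lora_dropout=0.05,                    # regularization (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections, a common choice
)
peft_model = get_peft_model(backbone, lora_config)
peft_model.print_trainable_parameters()  # a small fraction of the 1550M weights
```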
### _Training_
We opted to train the model on a cloud-rented machine provided by Lambdalabs. The system had 30 vCPUs, 200 GiB RAM, and a 1.4 TiB SSD, and it was equipped with an Nvidia A10 GPU with compute capability 8.6, which cost about $0.68 per hour at the time of writing.
After preprocessing the data, the model was loaded into memory in 8-bit precision and then optimized using LoRA with a projection rank of 32. The optimized model was then trained for 10 epochs with a batch size of 8 and a learning rate of \(10^{-3}\).
## IV Results and Discussion
We used standard evaluation metrics: accuracy, precision, recall, F1-score, and specificity. Accuracy is the percentage of samples that were correctly classified. Precision is the accuracy of the positive predictions. Recall is the fraction of the actual positives that were correctly predicted. F1-score is the harmonic mean of precision and recall. Specificity is the fraction of the actual negatives that were correctly predicted.
\[Accuracy=\frac{TP+TN}{TP+TN+FP+FN}\]
\[Recall=\frac{TP}{TP+FN}\]
\[Precision=\frac{TP}{TP+FP}\]
\[F1-Score=\frac{2(Precision*Recall)}{Precision+Recall}\]
\[Specificity=\frac{TN}{TN+FP}\]
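For reference, these metrics follow directly from the confusion-matrix counts, as in the small helper below (the function name and example counts are illustrative).

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute accuracy, precision, recall, F1-score, and specificity."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    return accuracy, precision, recall, f1, specificity

# Example with illustrative counts
acc, prec, rec, f1, spec = classification_metrics(tp=80, tn=91, fp=8, fn=20)
```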
The results obtained from the binary classification experiments are summarized in Table VII. Among the four experiments conducted, the model trained using pathology patients with medium intelligibility levels performed the best, giving an accuracy of 85%, precision of 0.92, recall of 0.8, F1-score of 0.85, and specificity of 0.91. This indicates that the model finds it easier to predict dysarthria when trained on data containing patients with medium intelligibility levels compared to models trained on other intelligibility levels such as very low, low, and high. Our model achieved around a 10% improvement in accuracy in comparison with the work presented in [8], which reported a word-level accuracy of 75.3%.
Table VIII and Table IX show the accuracy, precision, recall, F1-score, and specificity of models trained on the words dataset and on the digits-and-letters dataset, respectively, for multiclass classification over the classes Control, High, Medium, Low, and Very Low. Both models are trained with two patients per class. The accuracy of multiclass classification is 67% on the words dataset and 58% on the digits-and-letters dataset, so the model trained on the words dataset is 9 percentage points more accurate than its counterpart. Analyzing the results in the tables, we see that both models perform best on the Control class and struggle most to predict patients from the Low class. Both models have good precision on the High class, but that class scores very low on the other evaluation metrics.
Fig. 1: Modified Whisper Architecture
## V Conclusion and Future Works
This work explores a few-shot learning approach for dysarthria detection using the encoder of the whisper-large-v2 model. The main contributions of the proposed study are:
* Compared with the previous methods, pathology detection has improved considerably using transformer models and we are able to demonstrate the potential use of few-shot learning for pathology detection.
* From our study we have determined that to detect dysarthria, a model trained using patients having medium-level intelligibility performs better.
* We also determined that the dataset built using audio recordings of words will result in better model performance.
Potential future work includes determining the minimum number of patients needed to accurately classify dysarthria using few-shot learning, as well as a comparative analysis across a wide spectrum of deep learning models to determine which architecture performs best.
|
2309.11566 | SignBank+: Preparing a Multilingual Sign Language Dataset for Machine
Translation Using Large Language Models | We introduce SignBank+, a clean version of the SignBank dataset, optimized
for machine translation between spoken language text and SignWriting, a
phonetic sign language writing system. In addition to previous work that
employs complex factorization techniques to enable translation between text and
SignWriting, we show that a traditional text-to-text translation approach
performs equally effectively on the cleaned SignBank+ dataset. Our evaluation
results indicate that models trained on SignBank+ surpass those on the original
dataset, establishing a new benchmark for SignWriting-based sign language
translation and providing an open resource for future research. | Amit Moryossef, Zifan Jiang | 2023-09-20T18:08:28Z | http://arxiv.org/abs/2309.11566v2 | # SignBank+: Multilingual Sign Language Translation Dataset
###### Abstract
This work advances the field of sign language machine translation by focusing on dataset quality and simplification of the translation system. We introduce SignBank+, a clean version of the SignBank dataset, optimized for machine translation. Contrary to previous works that employ complex factorization techniques for translation, we advocate for a simplified text-to-text translation approach. Our evaluation shows that models trained on SignBank+ surpass those on the original dataset, establishing a new benchmark and providing an open resource for future research.
sign language, sign language dataset, sign language translation
## 1 Introduction
Sign Language serves as an indispensable mode of communication for the deaf. Unfortunately, the available methods for translating between signed and spoken languages have been limited in scope and effectiveness. The main objective of this research is to explore technological advancements that can enhance the translation process, focusing on the cleaning and enrichment of an existing sign language dataset, _SignBank_1, a multilingual collection of _puddles_ covering a range of domains.
Footnote 1: [https://www.signbank.org/signpuddle/](https://www.signbank.org/signpuddle/)
The pioneering work of Jiang et al. (2023) set the stage for this task. They presented an approach to translating SignWriting through specialized parsing and factorized machine translation techniques. Motivated by their efforts, this research aims to build upon their foundation by:
1. Undertaking a rigorous data cleaning process and extending the dataset they utilized.
2. Reverting to a simple text-to-text translation mechanism, omitting any factorization.
The hypothesis driving this study is twofold: First, a meticulously curated dataset will enhance the accuracy and reliability of translation models. Second, by simplifying the translation process, it becomes feasible to train a diverse array of models and streamline their deployment.
To validate our claims, we compare the translation quality of signed-to-spoken translation using the original and the cleaned data against previous work. We show that with our new, cleaner data, we can train standard machine translation models with improved quality over the original data. We share our data openly (available at [https://github.com/sign-language-processing/signbank-plus](https://github.com/sign-language-processing/signbank-plus)) to be used in future machine translation research.
## 2 Background
This work only concerns machine translation between signed and spoken languages where both the input and the output are represented as discrete tokens (or, text).
### _Signed-to-Spoken_
Jiang et al. (2023) explore text-to-text signed-to-spoken language translation, with SignWriting as the chosen sign language notation system. Although SignWriting is usually represented in 2D, they use the 1D Formal SignWriting specification and propose a neural factored machine translation approach to encode sequences of SignWriting graphemes as well as their positions in the 2D space. They verify the proposed approach on the SignBank dataset in both a bilingual setup (American Sign Language to English) and two multilingual setups (4 and 21 language pairs, respectively). They apply several low-resource machine translation techniques used to improve spoken language translation to similarly improve the performance of sign language translation. Their findings validate the use of an intermediate text representation for signed language translation, and pave the way for including sign language translation in natural language processing research.
### _Spoken-to-Signed_
Jiang et al. (2023) also explore the reverse translation direction, i.e., text-to-SignWriting translation. They conduct experiments under the same conditions as their multilingual SignWriting-to-text (4 language pairs) experiment, and again propose a neural factored machine translation approach to decode the graphemes and their positions separately. They borrow BLEU from spoken language translation to evaluate the predicted graphemes and use the mean absolute error to evaluate the positional numbers.
Walsh et al. (2022) explore Text to HamNoSys (T2H) translation, with HamNoSys as the target sign language notation system. They experiment with direct T2H and Text to Gloss to HamNoSys (T2G2H) on a subset of the data from the MEINE DGS dataset (Hanke et al., 2020), where all glosses are mapped to HamNoSys by a dictionary lookup. They find that direct T2H translation results in higher BLEU (though it remains to be clarified how well BLEU reflects the quality of HamNoSys translations). They encode HamNoSys with BPE (Sennrich et al., 2016), outperforming character-level and word-level tokenization. They also leverage BERT to create better sentence-level embeddings and use HamNoSys to extract the hand shapes of a sign as additional supervision during training.
### Machine Translation Frameworks
Machine translation has witnessed substantial advancements in recent years, both in terms of model architectures and frameworks that facilitate their training and deployment. When it comes to text-to-text translation, several open-source platforms have emerged, leading to the democratization of machine translation technology.
Prominent machine translation frameworks include _OpenNMT_ (Klein et al., 2017), Sockeye (Hieber et al., 2017, 2020), Joey NMT (Kreutzer et al., 2019), and _Fairseq_ (Ott et al., 2019). They are all widely renowned for their simplicity, efficiency, and emphasis on performance, promoting rapid prototyping and thus becoming popular choices among machine translation researchers.
Bergamot (2022) aims to bring machine translation to local clients. Leveraging advancements in _Marian NMT_ (Junczys-Dowmunt et al., 2018), _Bergamot_ provides recipes for fast, local, multilingual machine translation models. It provides an opinionated pipeline and assumes that both the source and the target are spoken languages. It only supports text-to-text translation, and expects a shared source-target vocabulary and a huge amount of data, both uncommon in sign language resources. Despite these disadvantages, it is the only project that includes a realistic training pipeline for machine translation deployment.
## 3 Data
In our efforts to improve sign language translation through a text-to-text approach, data quality and quantity are of paramount importance. This section outlines our data curation strategy, encompassing both the data we generate ourselves (§3.1) and the data we clean and expand (§3.2).
### Fingerspelling Data
Fingerspelling is a significant component of signed languages, often used for spelling out names, places, or other words that might not have a designated sign. Given its importance, we embarked on a dedicated data generation process.
We collected and annotated fingerspelling for letters and numbers across 22 different signed languages2. These annotations are largely derived from the fingerspelling keyboard3.
Footnote 2: American, Brazilian, British, Chinese, Danish, Flemish, French, French Belgian, German, Honduran, Irish, Israeli, Italian, Japanese, Mexican, Nicaraguan, Norwegian, Portuguese, Spanish, Swedish, Swiss German, and Thai.
Footnote 3: [https://www.signwriting.org/forums/software/fingkeys/fkey001.html](https://www.signwriting.org/forums/software/fingkeys/fkey001.html)
### SignBank Cleaning and Expansion
The SignBank dataset, while invaluable, includes numerous inconsistencies and imperfections. Multiple non-parallel textual entries were associated with singular signing sequences. For instance, while some entries indicated chapter and page numbers from a book, the actual text was missing. In others, definitions were jumbled with the intended word. In light of these challenges, we initiated the meticulous data-cleaning (§3.2.1) and expansion (§3.2.2) processes detailed below:
#### 3.2.1 Dataset Cleaning
Initially, we manually corrected at least five entries for each puddle. Given the formulaic nature of certain puddles (e.g., the Bible), rule-based corrections enabled immediate annotation of multiple entries. The comprehensive rules used in this phase are detailed in Appendix A.1.
Using ChatGPT (OpenAI, 2022), we defined a pseudo-function that takes the number of signs, a language code, and the existing terms, and returns a cleaned, parallel version of the terms: clean(number of signs, language code, terms). An illustration would be the function call clean(1, "sl", ["Koreja (mednarodno)", "Korea", "S125- P1"]) returning ["Koreja", "Korea"]. More detailed examples are available in Appendix B.1.
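A sketch of how such a pseudo-function might be realised against the ChatCompletion endpoint current at the time of the paper; the message layout and the assumption that the model replies with a JSON list are ours (the authors' actual prompt is in Appendix B.1), and the same pattern applies to the expand step of §3.2.2.

```python
import json
import openai  # pre-v1 openai-python API, current at the time

def clean(num_signs, lang_code, terms, few_shot_messages):
    """Ask the model for a cleaned, parallel version of the terms (sketch)."""
    call = f'clean({num_signs}, "{lang_code}", {json.dumps(terms)})'
    messages = few_shot_messages + [{"role": "user", "content": call}]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613", messages=messages, temperature=0
    )
    # Assumes the model answers with a JSON list of cleaned terms.
    return json.loads(response.choices[0].message.content)
```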
To ascertain the efficacy of this cleaning method, we employed the gpt-3.5-turbo-0613 model on the manually cleaned samples. By comparing these results to the cleaned dataset, we assessed the quality via the Intersection over Union (IoU) metric between the predicted terms and the annotated terms.
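The IoU between a predicted and an annotated term set is a one-liner:

```python
def iou(predicted, annotated):
    """Intersection over Union of two term sets."""
    a, b = set(predicted), set(annotated)
    return len(a & b) / len(a | b) if a | b else 1.0
```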
We compared multiple settings, with various approaches to cleaning the data:
1. **E0**: No changes.
2. **E1**: Rule-based cleaning (Appendix A.2).
3. **E2**: E1 + ChatGPT with four fixed, manually selected few-shot examples.
4. **E3**: E1 + ChatGPT with five few-shot examples from the same puddle.
5. **E4**: E1 + ChatGPT with four fixed examples and five examples from the same puddle.
6. **E5**: E4 + using gpt-4-0613.
Doing nothing (_E0_) leads to a base IoU of **0.50**. The rule-based approach (_E1_), which conservatively eliminated undesired text entries, provided a slight boost, resulting in an IoU of **0.53**. Incorporating general few-shot examples into the cleaning process (_E2_) significantly increased the IoU to **0.63**. A more targeted approach using five few-shot examples from the same puddle (_E3_) further improved this to an IoU of **0.71**. Combining the general few-shot examples with puddle-specific examples (_E4_) achieved an IoU of **0.74**. Our best results, however, came from GPT-4 (_E5_), which achieved an IoU of **0.80**. For cost considerations, the following pricing was assumed: \(\$0.0015\)/1K tokens for gpt-3.5-turbo and \(\$0.03\)/1K tokens for gpt-4, indicating a \(20\times\) price disparity. Given an average of 714 tokens per call for _E4_ and _E5_ and around 200K annotations, the projected costs for gpt-3.5-turbo and gpt-4 are approximately \(\$200\) and \(\$4000\), respectively. For financial reasons, we used gpt-3.5-turbo. The final cost ended up being \(\$230.18\), paid to OpenAI.
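A back-of-the-envelope check of the stated cost projection:

```python
tokens_per_annotation = 714
annotations = 200_000
total_tokens = tokens_per_annotation * annotations  # 142.8M tokens
print(total_tokens / 1_000 * 0.0015)  # gpt-3.5-turbo: ~$214
print(total_tokens / 1_000 * 0.03)    # gpt-4: ~$4,284
```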
#### 3.2.2 Dataset Expansion
Our next objective is to further enrich the dataset by introducing variations for each cleaned term. Variability in language representation can significantly benefit the robustness of machine translation models by providing multiple ways of expressing the same idea. For this, we designed a function, expand(language code, terms), producing expanded terms with proper capitalization. As some terms were in English, outputs for both the specific language and English were generated separately. The prompt is given in Appendix B.2.
For an illustration, consider the Swedish term 'tre'. When passed to our function as expand("sv", ["tre"]), the returned output could be {"sv": ["Tre", "3"], "en": ["Three", "3"]}. This means that for Swedish ('sv'), the term 'tre' can be represented as 'Tre' or the numeral '3', and the corresponding English translation is 'Three'. Another example is the German term for 'father': the function call expand("de", ["Vater", "father"]) yields {"de": ["Vater", "Vati", "Papa", "Erzeuger"], "en": ["father", "Dad", "Daddy"]}. Here, the term expands to multiple terms in both German and English.
This expansion approach (using gpt-3.5-turbo with 9 fixed few-shot examples), although seemingly straightforward and similar in cost to the
cleaning process, introduces vast richness to our dataset. Each term is now associated with multiple representations, thereby enhancing the potential of our model to understand the nuances and variability of language. However, this expansion can also introduce errors, either when expanding terms that were not properly cleaned, or when the expansion itself is wrong. The expansion cost ended up being \(\$299.72\), paid to OpenAI.
Evaluating the efficacy of this expansion step is non-trivial, due to the inherent subjectivity involved in determining which expansions are valid or more useful than others. Interested readers are referred to Appendix C for more outputs.
## 4 Data Quality Experiments
To evaluate the quality of our cleaning and expansion, we test their effect on machine translation. We train machine translation models on the original data, on the cleaned data, and on the expanded data, in an imbalanced multilingual setting. For this comparison, we focus on the _signed-to-spoken_ direction, since the automatic evaluation of spoken language text is well established. For a development set, in each data scenario, we consider the first 3000 entries. For our test set, we use our manually annotated data from §3.2.1. In the source text, we include tags to indicate the source and target language for the translation. We use sacreBLEU 2.3.1 [15] to evaluate BLEU5 [10] and chrF6 [11]. This comparison is only made to evaluate the quality of the different datasets. Thus, for every framework, we use the default training settings and avoid attempting to optimize with smaller models or different architectures. We posit that better test-set performance in a given framework indicates higher data quality. While we believe that this effect should be highly potent for the _spoken-to-signed_ translation direction, it is not evaluated in this work since there are no human-validated automatic metrics to evaluate SignWriting output.
Footnote 5: BLEU signature: case:mixed|eff:no|tok:13a|smooth:exp
Footnote 6: chrF signature: case:mixed|eff:yes|nc:6|nw:0|space:no
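For reference, a minimal sketch of computing both scores with sacreBLEU's Python API; the default metric signatures match the footnotes above.

```python
from sacrebleu.metrics import BLEU, CHRF

hypotheses = ["the system translation"]       # one string per test sentence
references = [["the reference translation"]]  # one list per reference stream

bleu = BLEU()   # defaults: case:mixed|eff:no|tok:13a|smooth:exp
chrf = CHRF()   # defaults: case:mixed|eff:yes|nc:6|nw:0|space:no
print(bleu.corpus_score(hypotheses, references).score)
print(chrf.corpus_score(hypotheses, references).score)
```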
**Sockeye / Fairseq / OpenNMT**: In preprocessing, the SignWriting text is tokenized by splitting its components (symbol, modifiers, and position), and the spoken language text is tokenized using BPE [16] with 3000 merges. For the cleaned dataset, this results in a smaller vocabulary than for the original
dataset, since some unigrams are filtered out. Model training is early-stopped on the validation chrF score (Sockeye), BLEU (Fairseq), and accuracy (OpenNMT) with a patience of 10 epochs.
**Keras** (Chollet et al., 2015): To address the effect of clean data on pre-trained language models, we fine-tune _mT5-small_ (Xue et al., 2021) using Keras and HuggingFace Transformers (Wolf et al., 2020). In this setting, both the source and target texts are tokenized using the _mT5_ tokenizer. Since our source data is extremely out-of-domain with respect to the original language model training, we do not expect to see improvements from the pre-trained language model. The model is fine-tuned for up to 20 epochs, early-stopped on validation loss.
## 5 Results
Table 1 shows that despite the different frameworks, pre-trained models, unoptimized modeling, and imbalanced multilingual translation scenarios, performance on the cleaned data is consistently better compared to the original data. This establishes our cleaned data as more useful for signed-to-spoken machine translation.
In the _signed-to-spoken_ translation direction, the benefit of our expanded data is dubious. If our cleaned data were of perfectly good quality, the expansion could only add noise by introducing multiple targets for the same source. However, since we know that our cleaned data is not perfect, we hypothesize that the additional noise from the data expansion smooths out the noise in the imperfect data by introducing more overlaps between identical translations, thus drowning out the noise. This is very difficult to evaluate: since we vary the target texts in many dimensions (gender, formality, capitalization, script, and form), uncontrolled translation of the test set into the original distribution of these dimensions is improbable, even when disregarding noise coming from wrong expansions. This is reflected in the results. Using the expanded data to pre-train our Sockeye model and then fine-tuning on the cleaned data brings the model back to the target distribution, with better results of \(31.39\) BLEU and \(31.97\) chrF.
We compare these results to the state of the art. Specifically, we query the API endpoint made available by Jiang et al. (2023) to translate our test set. To some extent, this is an unfair comparison, since they likely saw these exact translation sources in training and since we are evaluating more languages than their model was trained on. And yet, their method achieves only \(5.03\) BLEU and \(18.92\) chrF on our test set. Despite their optimized modeling, our investment in data quality more than makes up for our simpler modeling.
## 6 Conclusions
This work introduces a methodology for data cleaning and expansion in low-resource settings such as sign language translation. Its main contribution is the introduction of _SignBank+_, a cleaner and more expansive sign language translation dataset than _SignBank_. The data and baseline model code are publicly available at [https://github.com/sign-language-processing/signbank-plus](https://github.com/sign-language-processing/signbank-plus).
## 7 Future Work
We encourage future work to expand on our efforts and create _SignBank++_. The _clean_ and _expand_ steps can be executed with more, and better, language models. Quality-estimation filtering methods can be created to filter out text pairs that are likely not parallel.
Additionally, the input representation could be optimized by encoding SignWriting as images, reducing the token count, or standardizing the phoneme order, all of which could improve translation performance.
Finally, robust evaluation metrics for spoken-to-signed translation should be created and validated with human judgments.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline & & & \multicolumn{2}{c}{**Sockeye**} & \multicolumn{2}{c}{**Fairseq**} & \multicolumn{2}{c}{**OpenNMT**} & \multicolumn{2}{c}{**Keras (mT5)**} \\ \cline{4-11} Dataset & Training Pairs & Vocab & BLEU & chrF & BLEU & chrF & BLEU & chrF & BLEU & chrF \\ \hline Original & \(521,390\) & \(6,016\) & \(0.2\) & \(8.4\) & \(0.18\) & \(4.74\) & \(0.69\) & \(9.21\) & \(0.07\) & \(6.39\) \\ Cleaned & \(357,574\) & \(5,200\) & \(\mathbf{22.32}\) & \(\mathbf{28.63}\) & \(1.1\) & \(\mathbf{7.59}\) & \(\mathbf{30.6}\) & \(\mathbf{22.46}\) & \(\mathbf{6.02}\) & \(12.35\) \\ Expanded & \(1,027,418\) & \(5,976\) & \(0.55\) & \(7.22\) & \(\mathbf{1.26}\) & \(6.52\) & \(13.38\) & \(13.0\) & \(2.99\) & \(\mathbf{12.49}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Evaluation of the usability of our data for machine translation. |
2309.07304 | The Way We Were: Structural Operational Semantics Research in
Perspective | This position paper on the (meta-)theory of Structural Operational Semantic
(SOS) is motivated by the following two questions: (1) Is the (meta-)theory of
SOS dying out as a research field? (2) If so, is it possible to rejuvenate this
field with a redefined purpose?
In this article, we will consider possible answers to those questions by
first analysing the history of the EXPRESS/SOS workshops and the data
concerning the authors and the presentations featured in the editions of those
workshops as well as their subject matters.
The results of our quantitative and qualitative analyses all indicate a
diminishing interest in the theory of SOS as a field of research. Even though
`all good things must come to an end', we strive to finish this position paper
on an upbeat note by addressing our second motivating question with some
optimism. To this end, we use our personal reflections and an analysis of
recent trends in two of the flagship conferences in the field of Programming
Languages (namely POPL and PLDI) to draw some conclusions on possible future
directions that may rejuvenate research on the (meta-)theory of SOS. We hope
that our musings will entice members of the research community to breathe new
life into a field of research that has been kind to three of the authors of
this article. | Luca Aceto, Pierluigi Crescenzi, Anna Ingólfsdóttir, Mohammad Reza Mousavi | 2023-09-13T20:50:53Z | http://arxiv.org/abs/2309.07304v1 | # The Way We Were: Structural Operational Semantics Research in Perspective
###### Abstract
This position paper on the (meta-)theory of Structural Operational Semantics (SOS) is motivated by the following two questions:
* Is the (meta-)theory of SOS dying out as a research field?
* If so, is it possible to rejuvenate this field with a redefined purpose?
In this article, we will consider possible answers to those questions by first analysing the history of the EXPRESS/SOS workshops and the data concerning the authors and the presentations featured in the editions of those workshops as well as their subject matters.
The first International Workshop on Structural Operational Semantics (SOS) was held in London, UK, in 2004. The workshop was established as 'a forum for researchers, students and practitioners interested in new developments, and directions for future investigation, in the field of structural operational semantics. One of the specific goals of the workshop was to establish synergies between the concurrency and programming language communities working on the theory and practice of SOS.' At its ninth edition, the SOS workshop joined forces with the nineteenth edition of the International Workshop on Expressiveness in Concurrency (EXPRESS). The joint workshop was meant to cover the broader scope of 'the formal semantics of systems and programming concepts, and on the expressiveness of mathematical models of computation.'
We examined the contributions dedicated to the theory of SOS presented in the EXPRESS/SOS workshop series (and, prior to that, in the SOS workshop), noting whether they appeared before or after the merger between the EXPRESS and SOS workshops. We also used the collected data to compute a well-established measure of similarity between the two phases in the life of the SOS workshop, before and after the merger with EXPRESS. Beyond these data- and graph-mining analyses, we reflect on the major results developed in nearly four decades of research on SOS and identify, in our admittedly biased opinion, its strengths and gaps.
The results of our quantitative and qualitative analyses all indicate a diminishing interest in the theory of SOS as a field of research. Even though 'all good things must come to an end', we strive to finish this position paper on an upbeat note by addressing our second motivating question with some optimism. To this end, we use our personal reflections and an analysis of recent trends in two of the flagship conferences in the field of Programming Languages (namely POPL and PLDI) to draw some conclusions on possible future directions that may rejuvenate research on the (meta-)theory of SOS. We hope that our musings will entice members of the research community to breathe new life into a field of research that has been kind to three of the authors of this article.
Whence this collaboration? This article is the result of a collaboration between a researcher from the theory of algorithms and their applications, Pierluigi Crescenzi, and three contributors to the theory of SOS. Pierluigi Crescenzi has recently offered data- and graph-mining analyses of conferences such as CONCUR, in cooperation with Luca Aceto in [5], SIROCCO [25], and ICALP--see the presentation available at [https://slides.com/piluc/icalp-50?token=f13BBJ8j](https://slides.com/piluc/icalp-50?token=f13BBJ8j). All authors thought that it was natural to combine quantitative data- and graph-mining analysis techniques with qualitative domain-specific knowledge to offer a fairly well-rounded perspective on the developments in the (meta-)theory of SOS and its relation to the SOS and EXPRESS/SOS workshops. Both the Java code and the Julia software developed by Pierluigi Crescenzi, which were used for the quantitative analyses reported in this article and the aforementioned earlier ones, are publicly available at the following GitHub repository: [https://github.com/piluc/ConferenceMining](https://github.com/piluc/ConferenceMining). We encourage everyone interested in carrying out data- and graph-mining analyses of conferences to use it!
## 2 Data Collection and Analysis
To set the stage for our reflections on the (meta-)theory of SOS, we have carried out some data analysis on the SOS and EXPRESS/SOS workshops.
### Data Collection
We extracted the following data from all the eleven past editions of the joint EXPRESS/SOS workshop:
1. the authors and titles of contributed talks;
2. invited speakers and the titles of their presentations or papers;
3. the number of submissions and accepted papers; and
4. at least two and at most three subject matter classifiers from the scope of EXPRESS/SOS.
Much of the gathered data was extracted from the tables of contents and proceedings of those editions of the workshop, which are all available in open-access form as volumes of Electronic Proceedings in Theoretical Computer Science (EPTCS), and from the DBLP page devoted to the Workshop on Structural Operational Semantics. In case of missing information regarding the number of submissions, we approached the workshop chairs and gathered that information through personal communication. For subject matter classification, since general classification schemes, such as the one by the ACM, were too coarse for our purposes, we manually read the abstracts (and in a few cases the full papers) and identified domain-specific classifiers, using the scope definition of the EXPRESS/SOS workshop.
The results of our data collection are publicly available online.
The choice of focusing our analysis on the last eleven editions was motivated by the fact that, since 2012, the SOS workshop decided to join forces with the EXPRESS workshop and created a new joint venue. This gave us a consistent view of how the topics featured in the joint workshop have evolved over time and of how (structural) operational semantics has been represented in the joint workshop since 2012. However, using the data we collected, we also took the opportunity to compare the two phases of the SOS workshop, the first as an independent workshop in the period 2004-2011 and the second as EXPRESS/SOS from 2012 till 2022.
### Automatic Analysis
Based on the articles that were archived in the workshop proceedings, we found that
* 194 authors contributed articles to the workshop proceedings since 2004;
* 90 colleagues published papers in the proceedings of the first eight editions of the SOS workshop;
* 122 researchers contributed articles to the joint EXPRESS/SOS workshop in the period 2012-2022;
* 18 authors published papers in the SOS workshop proceedings both before and after the merger with the EXPRESS workshop, which means that there were 104 contributors to EXPRESS/SOS who had never published in the SOS workshop in the period 2004-2011.
The above-mentioned data allow us to compute a measure of similarity between the two phases of the SOS workshop, before and after the merger with EXPRESS, using the Sorensen-Dice index, which is a statistic used to measure the similarity of two samples. Given two sets \(A\) and \(B\), the _Jaccard index_\(J(A,B)\) is equal to \(\frac{|A\cap B|}{|A\cup B|}\), and the _Sorensen-Dice index_ is equal to \(\frac{2J(A,B)}{1+J(A,B)}\), see [28, 66].
The Sorensen-Dice index for the lists of authors in the two phases of the SOS workshop is roughly 0.17. This value indicates that the SOS workshop is not as similar to the joint EXPRESS/SOS workshop as one might have expected. By way of comparison, quoting from the data- and graph-mining analysis of CONCUR presented in [4],
the conference that is most similar to CONCUR is LICS (with Sorensen-Dice index approximately equal to 0.3), followed by TACAS (approximately 0.25), CAV (approximately 0.24), and CSL (approximately 0.21).
Computing the Sorensen-Dice index for SOS 2004-2022 and CONCUR, LICS, PLDI and POPL yields low values of similarity, namely 0.106396 (CONCUR), 0.0622966 (LICS), 0.00585138 (PLDI) and 0.0303169 (POPL). This is due to the fact that the sets of authors of those conferences is much larger than that of the SOS workshop, namely 1475 (CONCUR), 1953 (LICS), 3220 (PLDI) and 1979 (POPL).
When quantifying the degree of similarity between a small workshop like SOS with larger conferences, it might be more appropriate to consider the Szymkiewicz-Simpson coefficient (also known as the overlap coefficient) [65, 68, 69, 73]. Given two sets \(A\) and \(B\), the _Szymkiewicz-Simpson coefficient_ is equal to \(\frac{|A\cap B|}{\min(|A|,|B|)}\). The values of that coefficient for the conferences we considered above are roughly 0.45 (CONCUR), 0.34 (LICS), 0.05 (PLDI) and 0.17 (POPL). Those values seem to support the view that SOS is rather similar to CONCUR and LICS, has some similarity with POPL, but is very dissimilar to PLDI.
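All three set-similarity measures used in this section are short functions over the author sets:

```python
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

def sorensen_dice(a: set, b: set) -> float:
    j = jaccard(a, b)
    return 2 * j / (1 + j)  # equivalently 2|A ∩ B| / (|A| + |B|)

def szymkiewicz_simpson(a: set, b: set) -> float:
    return len(a & b) / min(len(a), len(b))
```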
### Centrality Measures
The _static graph_ (or collaboration graph) of SOS is an undirected graph whose nodes are the authors who presented at least one paper at SOS, and whose edges link two authors who coauthored at least one paper
(not necessarily presented at SOS). In other words, this graph is the subgraph of the DBLP collaboration graph induced by the set of SOS authors.
Centrality measures have been used as a key tool for understanding social networks, such as the static graph of SOS, and are used to assess the 'importance' of a given node in a network--see, for instance, [35]. Therefore, to quantify the role played by authors who have contributed to the SOS workshop, we have computed the following classic centrality measures on the largest connected component of the static graph of SOS (a code sketch follows the list).
* Degree: This is the number of neighbours of a node in the graph (that is, the number of coauthors).
* Closeness: This is the average distance from one author to all other authors of its connected component.
* Betweenness: This is the fraction of shortest paths, passing through one author, between any pair of other authors in its connected component.
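These measures are available off the shelf in standard graph libraries; a sketch using networkx (the authors' own Java/Julia code is linked above), where `coauthor_pairs` is a hypothetical list of coauthor edges:

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from(coauthor_pairs)  # hypothetical (author, author) pairs

# Restrict to the largest connected component, as done above.
giant = G.subgraph(max(nx.connected_components(G), key=len))

degree = dict(giant.degree())
closeness = nx.closeness_centrality(giant)
betweenness = nx.betweenness_centrality(giant)
top10 = sorted(closeness, key=closeness.get, reverse=True)[:10]
```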
The top ten SOS authors with respect to the above-mentioned three centrality measures are, in decreasing order:
* Degree: Luca Aceto, Anna Ingolfsdottir, Mohammad Reza Mousavi, Nobuko Yoshida, Rob van Glabbeek, Bas Luttik, Wan Fokkink, Michel Reniers, Catuscia Palamidessi, and Rocco De Nicola.
* Closeness: Luca Aceto, Rob van Glabbeek, Nobuko Yoshida, Matthew Hennessy, Catuscia Palamidessi, Anna Ingolfsdottir, Rocco De Nicola, Daniele Gorla, Bas Luttik, and Uwe Nestmann.
* Betweenness: Luca Aceto, Matthew Hennessy, Nobuko Yoshida, Rob van Glabbeek, Rocco De Nicola, Catuscia Palamidessi, Daniele Gorla, Frank de Boer, Bartek Klin, and Uwe Nestmann.
In addition, we also calculated the _temporal closeness_, which is an analogue of closeness that takes the number of years of a collaboration between two authors into account--see the paper [25] for more information on this centrality measure. The top ten SOS authors according to temporal closeness are, in decreasing order: Luca Aceto, Anna Ingolfsdottir, Wan Fokkink, Rocco De Nicola, Catuscia Palamidessi, Bas Luttik, Michel Reniers, Rob van Glabbeek, Jan Friso Groote, and Mohammad Reza Mousavi.
Finally, to get a glimpse of the evolution of the aforementioned measures of similarity and centrality in the two phases of the SOS workshop, we computed them on the static graphs before and after the merger with EXPRESS.
Before the merger with EXPRESS, the 2004-2011 editions of SOS had Szymkiewicz-Simpson index approximately of 0.42 with CONCUR, 0.37 with LICS, 0.067 with PLDI and 0.2 with POPL. After the merger with EXPRESS, those figures become 0.512 for CONCUR, 0.352 for LICS, 0.032 for PLDI and 0.152 for POPL. So, from 2012 onwards, SOS has become more similar to CONCUR and even more dissimilar to PLDI and POPL than before.
The top ten authors at the SOS workshop also change before and after the merger. When focusing on the period before the merger, the most central authors are as follows, in decreasing order:
* Degree: Luca Aceto, Michel Reniers, Mohammad Reza Mousavi, Anna Ingolfsdottir, Wan Fokkink, Rocco De Nicola, Jose Meseguer, Rob van Glabbeek, Catuscia Palamidessi, and David de Frutos-Escrig.
* Closeness: Luca Aceto, Anna Ingolfsdottir, Rocco De Nicola, Rob van Glabbeek, Matthew Hennessy, Georgiana Caltais, Mohammad Reza Mousavi, Eugen-Ioan Goriac, Michel Reniers, and Catuscia Palamidessi.
* Betweenness: Rocco De Nicola, Luca Aceto, Catuscia Palamidessi, Jose Meseguer, Frank de Boer, Filippo Bonchi, Matthew Hennessy, Michel Reniers, Rob van Glabbeek, and David de Frutos-Escrig.
* Temporal closeness: Luca Aceto, Anna Ingolfsdottir, Wan Fokkink, Michel Reniers, Mohammad Reza Mousavi, Jose Meseguer, Jan Friso Groote, Rob van Glabbeek, Rocco De Nicola, and Catuscia Palamidessi.
After the merger with EXPRESS, our graph-mining analysis yields the following most central authors, in decreasing order:
* Degree: Nobuko Yoshida, Luca Aceto, Bas Luttik, Rob van Glabbeek, Mohammad Reza Mousavi, Uwe Nestmann, Anna Ingolfsdottir, Jorge Perez, Jose Baeten, and Hans Huttel.
* Closeness: Nobuko Yoshida, Luca Aceto, Rob van Glabbeek, Catuscia Palamidessi, Anna Ingolfsdottir, Bas Luttik, Uwe Nestmann, Mohammad Reza Mousavi, Iain Phillips, and Mariangiola Dezani-Ciancaglini.
* Betweenness: Nobuko Yoshida, Rob van Glabbeek, Daniele Gorla, Luca Aceto, Bas Luttik, Bartek Klin, Uwe Nestmann, Catuscia Palamidessi, Hans Huttel, and Rance Cleaveland.
* Temporal closeness: Luca Aceto, Anna Ingolfsdottir, Bas Luttik, Tim Willemse, Catuscia Palamidessi, Mohammad Reza Mousavi, Jos Baeten, Jan Friso Groote, Jorge Perez, and Rob van Glabbeek.
### The Two Lives of the SOS Workshop
As we saw above, the first and the second life of the SOS workshop are not that similar after all, which seems to indicate that the eleven joint editions of the EXPRESS/SOS workshop were more about expressiveness than about structural operational semantics1. To see whether this is really the case, we visually summarise the data we collected in Figure 1 and provide its details below:
Footnote 1: Another possible explanation for the low degree of similarity between the pre- and post-merger incarnations of the SOS workshop is that the community welcomed many new authors from 2012 onwards. This would be a healthy and welcome development and is, in fact, supported by the data we collected. However, the analysis we present in what follows gives some indication that, since 2014, the scientific programme of EXPRESS/SOS has featured only a few papers on structural operational semantics.
* The proceedings of EXPRESS/SOS 2012 included 10 papers, five of which dealt with topics related to operational semantics and its mathematical (meta-)theory--that's 50% of the articles and the largest percentage of SOS contributions to EXPRESS/SOS in the period 2012-2022.
* The proceedings of EXPRESS/SOS 2013 included seven papers, two of which dealt with topics related to operational semantics and its mathematical (meta-)theory--that's 28.5% of the contributions.
* The proceedings of EXPRESS/SOS 2014 included eight papers, two of which (25%) dealt with topics related to the theory of structural operational semantics.
* The proceedings of EXPRESS/SOS 2015 included six papers, one of which (16.7%) dealt with topics related to the theory of structural operational semantics.
* The proceedings of EXPRESS/SOS 2016 included five papers, none of which dealt mainly with operational semantics.
* The proceedings of EXPRESS/SOS 2017 included six papers, one of which (16.7%) dealt mainly with operational semantics.
* The proceedings of EXPRESS/SOS 2018 included seven papers, none of which dealt mainly with operational semantics.
* The proceedings of EXPRESS/SOS 2019 included seven papers, two of which (28.5%) dealt mainly with operational semantics.
* The proceedings of EXPRESS/SOS 2020 included six papers, none of which dealt mainly with operational semantics.
* The proceedings of EXPRESS/SOS 2021 included six papers, none of which dealt mainly with operational semantics.
* The proceedings of EXPRESS/SOS 2022 included eight papers, none of which dealt mainly with operational semantics.
So, only 13 out of the 76 papers published in the proceedings of EXPRESS/SOS since 2012 dealt with topics in SOS theory (17.1% of published papers). In passing, we also note that 16 out of the 110 presentations at the workshop in the period 2012-2022 were devoted to topics in SOS theory (that is, 14.5% of the workshop presentations). Research in SOS was well represented at EXPRESS/SOS in the first three editions of the joint workshop. However, five of the last seven instalments of the workshop did not include any presentations devoted to topics that were mainly related to structural operational semantics. In particular, EXPRESS/SOS 2020-2022 did not have any talks on the theory and applications of structural operational semantics.
Figure 1: Total number of accepted papers (blue) and the number of accepted papers on SOS theory at the EXPRESS/SOS Workshop since 2012.
### Reflections on the Analysis Results
Reading through the EXPRESS/SOS contributions relevant to the theory of SOS reveals that the most recent results mostly focused on two aspects of SOS specifications: foundational aspects concerning the bialgebraic interpretation of SOS due to Turi and Plotkin [71], as well as compositionality of quantitative notions of equivalence such as probabilistic bisimilarity. Below, we provide a more nuanced analysis of this trend.
Another observation is that the diminishing strength in the provision of results on the theory of SOS can be largely attributed to a lack of projects (particularly, PhD studentships) in this area. Almost all of the results on the meta-theory of SOS contributed to the EXPRESS/SOS series had a co-author with a PhD project on this topic. A reduction in the number of doctoral students does not bode well for the healthy development of any research field.
## 3 Personal Reflections
Since the appearance of Plotkin's seminal Aarhus technical report [60], reprinted in slightly revised form as a journal paper in [62] with some historical remarks by Plotkin himself in [61], structural operational semantics has arguably become the most widely used approach to defining the semantics of programming and executable specification languages. To our mind, it is as impactful and popular today as it has been for over forty years. Indeed, one would be hard pressed to find papers on the theory of programming and specification languages that do not use structural operational semantics in some way. Moreover, the semantics of full-blown programming or domain-specific languages is still given in that style, reflecting its flexibility and applicability--see, for instance, the paper [45] for a small-step semantics of full Ethereum-virtual-machine bytecode that is formalised in the \(F*\) proof assistant [68] and then validated against the official Ethereum test suite.
As Plotkin highlights in his aforementioned piece on the origins of structural operational semantics, the essence of that approach to semantics is that it is _rule based_ and that the rules should be _syntax directed_ in order to support compositional language specifications and reasoning, as in the denotational approach to semantics. Conceptually, this rule-based view of operational semantics naturally led to the development of a theory of SOS language specifications that focused on the rules used in semantic definitions. The gist of that line of research, which can be traced back to de Simone's work [65], was to study _rule formats_ for operational specifications guaranteeing that every program in the specified language affords some semantic property of interest. So, rule formats offered a way to reduce the checking of semantic properties of programs in a language to syntactic checks on the rules used to define the operational semantics of the language. The literature on what came to be called the 'meta-theory of structural operational semantics' is by now very large and we cannot do it justice in this paper. We refer the interested reader to the survey articles [7, 59] and to the references therein, as well as to the proceedings of SOS, EXPRESS/SOS, and of conferences such as CONCUR, LICS and POPL, for much more information and recent references. Naturally, since its first edition in 2004, the SOS workshop has served as a venue for the publication of several articles on SOS meta-theory.
Three of the authors of this piece have been amongst the contributors to the development of the fascinating research on rule formats for operational specifications and thoroughly enjoyed doing so.
However, we feel that the time has come for a critical appraisal of the strengths, weaknesses and possible future of that line of research and to speculate about whether the data we discussed in Section 2 reflects the musings we present in the rest of this note.
### Strengths
In our, admittedly biased, opinion, research on rule formats for structural operational semantics has led to a wealth of interesting and elegant theoretical results, ranging from those on the meaning of rule-based specifications using rules with negative premises (see, for instance, the articles [14, 41, 19]) to congruence formats for several behavioural equivalences obtained uniformly from their modal characterisations via modal decomposition (see, for example, [12, 35, 33, 34] and the references therein). Early studies of congruence rule formats, such as those reported in the seminal [13, 46], were accompanied by characterisations of the largest congruences included in trace semantics induced by the collection of operators that can be specified in the rule formats studied in those references. After all these years, we still find it amazing that such results could be proved at all!
Below we provide a non-exhaustive list of the available meta-theorems with sufficient strength (more than a single paper, with more than one application to a language) and we refer to the past review papers/chapters [7, 59] for a more exhaustive list to the date of their publication:
* Congruence: proving congruence (compositionality) for various notions of strong [53, 73], weak [33], higher-order [55], data-rich [57], timed [48], and quantitative behavioural equivalences [27, 17, 18]; supporting various syntactic language features such as formal variables and binders [53, 21], as well as semantic features such as negative premises and predicates, terms as labels, and ordering on rules.
* (De-)Compositional reasoning methods: decomposing logical formulae (in the multi-modal \(\mu\)-calculus, also known as Hennessy-Milner logic with recursion, [50, 51]) according to the semantics of various operators for various notions of bisimilarity [34, 33, 35] and their quantitative extensions [17, 18]; interestingly, this can lead not only to a reasoning method for checking modal formulae, but can also serve as a recipe for 'generating' congruence formats for different notions of equivalence, once their modal characterisation is defined.
* Axiomatisation and algebraic properties: to generate sound and ground-complete axiomatisations for strong bisimilarity [3], as well as weak behavioural equivalences [42], and equivalences with data [38]. An orthogonal line of enquiry considered identifying sufficient conditions guaranteeing various algebraic properties of language operators such as commutativity [58], associativity [24], zero and unit elements [4], and idempotence [2]; we refer to an accessible overview paper [9] summarising such results to its date of publication.
There have been a number of implementations of such results in tools [8, 56, 72], mostly based on rewriting logic [22].
Several of the theorems from the theory of structural operational semantics have found application in the study of process calculi, reducing the need to prove congruence and axiomatisation results, amongst others, from scratch for each calculus and have been extended to settings including, for instance, probabilistic and stochastic features (see, for example, [18, 27]), as well as to higher-order calculi, as in the recent [44]. The article [44] belongs to a fruitful and still active line of research, stemming from the seminal work by Turi and Plotkin [71], providing bialgebraic foundations to the theory of structural operational semantics.
The contributions to the work on rule formats and on the meta-theory of structural operational semantics have striven to achieve a reasonably good trade-off between the generality of the technical results and the ease with which they can be applied to specific languages. Ideally, one would always like to have simple syntactic restrictions on rule formats that guarantee desired semantic properties in a wide variety of applications. Indeed, following a Pareto Principle, very often simple rule formats cover many of the languages of interest and one quickly hits a threshold where complex and hard-to-check definitions are needed to extend the applicability of obtained results. In many cases, the 'curse of generality' led to definitions of rule formats whose constraints are arguably not purely syntactic any more and may even be undecidable. As an example, Klin and Nachyla [49] have shown that it is undecidable whether an operational specification that uses rules with negative premises has a least supported model and whether it has a unique supported model or a stable model. It is also undecidable whether such a specification is complete. As mentioned by Klin and Nachyla in the aforementioned reference, these negative results entail that formats such as the complete ntyft/ntyxt [32] 'are not _bona fide_ syntactic formats, as there is no algorithmic way to tell whether a given specification fits such a format.' So, the pursuit of generality is, to our mind, a double-edged sword and can be seen as both a strength and a weakness of several result on rule formats and the meta-theory of structural operational semantics.
In the context of EXPRESS/SOS, we observed that this tradition of strong theoretical results is dying down: from 2012 to 2017, we counted nine contributions to the foundations of SOS specifications [8, 15, 28, 38, 39, 49, 52, 63, 30], including work on the bialgebraic framework [15, 49, 63], congruence for quantitative notions of equivalence [28, 39, 52, 53], and axiomatisation results [38]; however, this number dropped to only one contribution from 2018 to 2022, on the meaning of SOS specifications and the compositionality of equivalences on open terms [43].
In summary, we believe that the study of rule formats and of the meta-theory of structural operational semantics has yielded many elegant results that have been of some use for the working concurrency theorist. However, first, the number of such contributions has significantly dropped in the past few years and, second, one has to wonder whether that line of work has had impact on the field of programming language semantics. We will offer some musings on that question in the coming section.
### Gaps
To our mind, apart from its intrinsic scientific interest, the theory of structural operational semantics based on rule formats has served the concurrency-theory community well by providing elegant, and often general and deep, results that have both explained the underlying reasons why specific languages enjoyed several semantic properties and served as tools to prove new theorems as instances of a general framework. The use of'syntactic' rule formats to establish properties of interest about formal systems has also been used in logic. By way of example, Ciabattoni and Leitsch have given algorithmic conditions guaranteeing that some logics enjoy cut elimination [20]. However, despite its undoubted successes, to our mind, the theory of rule formats has not yet had the impact one might have expected on the community working on the theory of programming languages. Perusing the proceedings of the premier conferences in that field indicates that much of the research on programming-language semantics and its applications is done in the context of proof assistants such as Coq [10, 23]2 and on frameworks built on top of those--see, for instance, the highly influential Iris framework for higher-order concurrent separation logic [47].
Footnote 2: Coq is available at [https://coq.inria.fr/](https://coq.inria.fr/).
We speculate that this relative lack of impact might be due to the fact that the theory of structural
operational semantics based on rule formats has been mostly developed within the process algebra community. This has naturally led to the development of results and frameworks having process calculi as main application area. As a consequence, despite some foundational research [6, 31, 57], the development of a widely-applicable theory of rule formats for languages having first-class notions of data and memory, as well as binding constructs is still somewhat in its infancy. This limits the applicability of the results obtained by the concurrency theory community to mainstream programming languages. Moreover, the software tools embodying the theory of structural operational semantics developed so far have mostly taken the form of prototypes and are arguably not as mature and usable as those produced by groups working on the theory of programming languages [64]. The initial work carried out within the PLanCompS [11] aimed to address this gap based on the Modular SOS framework that has been pioneered by Mosses [54]; this line of work has been influential and has led to other frameworks such as the iCoLa framework for incremental language development [37].
### Trends and Opportunities
To relate the past strengths to future trends, particularly regarding emerging application areas of operational semantics, we analysed the table of contents of five past editions of flagship conferences in programming languages: POPL (from 2021 to 2023, inclusive) and PLDI (from 2021 to 2022, inclusive). The aim of the analysis was to find areas where the available strength in the theory of SOS can be exploited. We aimed to be as inclusive as possible and tried to mention any such areas, even if the exploitation of available strength would require a major rework or transformation of ideas and results. Below we provide a raw list of keywords that we encountered in our analysis:
* POPL 2023: Semantics of Probabilistic and Quantum programs, Coq Proof Libraries, Nominal Sets, Co-Algebra and Bisimulation, Multi-Language Semantics, Session types.
* POPL 2022: Session types, Semantics of Probabilistic and Quantum programs, Semantic Substitution and congruence.
* POPL 2021: Semantics of Probabilistic Programs, Nominal Semantics, Hyper-properties and non-interference, functorial semantics
* PLDI 2022: Information flow analysis, equational and algebraic reasoning (also applied to quantum programs), sound sequentialisation, Kleene algebra, language interoperability, verified compilation (also applied to quantum programs).
* PLDI 2021: Language translation conformance and compiler verification, session types, regular expressions, semantics of probabilistic and quantum programs.
In all the POPL and PLDI editions we reviewed, abstract interpretation (also for quantum programs), analysing weak memory models, and reasoning using separation logics are featured prominently.
It appears from our analysis that the following activities may have substantial potential impact:
* semantic meta-theorems about quantitative transition systems (particularly probabilistic and quantum transition systems [16, 30]);
* providing mechanised semantic frameworks, particularly in proof assistants such as Coq;
* defining general semantic frameworks and theorems for different memory models and models of parallelism;
* defining general compositional frameworks for reasoning with separation logics and logics of incorrectness;
* devising algorithms for test-case generation, for instance, for compiler testing, based on a semantic framework.
We hope to see work on some of those topics in the near future, which might lead to a new lease of life for the (meta-)theory of SOS and its applications.
AcknowledgementsWe thank Valentina Castiglioni and Peter Mosses for their comments on a draft of this piece. Luca Aceto and Anna Ingolfsdottir were partly supported by the projects 'Open Problems in the Equational Logic of Processes (OPEL)' (grant no. 196050) and 'Mode(l)s of Verification and Monitorability (MoVeMent)' (grant no. 217987) of the Icelandic Research Fund. Mohammad Reza Mousavi have been partially supported by the UKRI Trustworthy Autonomous Systems Node in Verifiability, Grant Award Reference EP/V026801/2 and the EPSRC grant on Verified Simulation for Large Quantum Systems (VSL-Q), Grant Award Reference EP/Y005244/1.
|
2306.17792 | Towards Improving the Performance of Pre-Trained Speech Models for
Low-Resource Languages Through Lateral Inhibition | With the rise of bidirectional encoder representations from Transformer
models in natural language processing, the speech community has adopted some of
their development methodologies. Therefore, the Wav2Vec models were introduced
to reduce the data required to obtain state-of-the-art results. This work
leverages this knowledge and improves the performance of the pre-trained speech
models by simply replacing the fine-tuning dense layer with a lateral
inhibition layer inspired by the biological process. Our experiments on
Romanian, a low-resource language, show an average improvement of 12.5% word
error rate (WER) using the lateral inhibition layer. In addition, we obtain
state-of-the-art results on both the Romanian Speech Corpus and the Robin
Technical Acquisition Corpus with 1.78% WER and 29.64% WER, respectively. | Andrei-Marius Avram, Răzvan-Alexandru Smădu, Vasile Păiş, Dumitru-Clementin Cercel, Radu Ion, Dan Tufiş | 2023-06-30T16:48:22Z | http://arxiv.org/abs/2306.17792v1 | Towards Improving the Performance of Pre-Trained Speech Models for Low-Resource Languages Through Lateral Inhibition
###### Abstract
With the rise of bidirectional encoder representations from Transformer models in natural language processing, the speech community has adopted some of their development methodologies. Therefore, the Wav2Vec models were introduced to reduce the data required to obtain state-of-the-art results. This work leverages this knowledge and improves the performance of the pre-trained speech models by simply replacing the fine-tuning dense layer with a lateral inhibition layer inspired by the biological process. Our experiments on Romanian, a low-resource language, show an average improvement of 12.5% word error rate (WER) using the lateral inhibition layer. In addition, we obtain state-of-the-art results on both the Romanian Speech Corpus and the Robin Technical Acquisition Corpus with 1.78% WER and 29.64% WER, respectively.
Lateral Inhibition; Romanian Language; Speech Recognition; Wav2Vec 2.0
## I Introduction
Deep neural networks benefit from large amounts of annotated training data. However, annotated data is challenging to obtain in many settings. Except for English, generating thousands of hours of transcribed audio necessary to train a state-of-the-art speech recognition system is infeasible for most languages worldwide. Self-supervised learning [1] has become the de facto technique for addressing this issue by first teaching a general data representation from unlabeled samples and then transferring the accumulated knowledge to a downstream task via fine-tuning [2].
Working with self-supervision on unlabeled speech signals involves challenges similar to those in computer vision. Nevertheless, the research community continued to build pre-trained models on audio that have pushed the state of the art in speech recognition further. Schneider et al. [3] introduced the Wav2Vec model, which encodes the input audio data into a latent space to create a contextualized representation employing a Transformer encoder [4]. Baevski et al. [5] built Wav2Vec 2.0 on top of the previous work, mainly using the same model architecture while changing the pre-training objective to a discretized contrastive loss similar to the masked language model strategy from natural language processing.
Introduced by Pais [6], the lateral inhibition layer helps the model to learn when the annotated data is scarce. This paper investigates its application in transcribing human voice from audio files by integrating the lateral inhibition mechanism into a pre-trained automatic speech recognition (ASR) system. We choose the Wav2Vec 2.0 Base model pre-trained on 100k hours of unlabeled audio data extracted from VoxPopuli (i.e., Wav2Vec2.0-VP-100k) [7]. We run our experiments on a low-resource language, namely the Romanian language.
Our results for the experimental setup with the lateral inhibition layer show an average relative improvement of 12.5% in word error rate (WER) across various dataset settings compared with the standard feed-forward layer. In addition, we obtain state-of-the-art results on the Romanian Speech Corpus (RSC) [8] with 1.78% WER, using less training data than the previous model, and on the Robin Technical Acquisition Speech Corpus (RTASC) [9] with 29.64% WER, using the same training data.
We can summarize our main contributions as follows: (i) applying the technique of neural lateral inhibition to ASR; (ii) performing an analysis of the improvements brought by the lateral inhibition layer; (iii) to the best of our knowledge, creating the first publicly available Romanian Wav2Vec 2.0 model\({}^{1}\) (called RoWav2Vec2.0-VP-100k-LI) that was thoroughly evaluated on several benchmarks; and (iv) obtaining state-of-the-art performance on two Romanian ASR datasets.
Footnote 1: [https://huggingface.co/racai](https://huggingface.co/racai)
## II Lateral Inhibition
Inspired by the human brain's biological process of lateral inhibition, the neural lateral inhibition layer has been successfully applied in named entity recognition [6]. In this process, excited neurons reduce the activity of neighboring neurons in the human brain [10]. It also provides increased perception in the visual cortex under challenging scenarios, such as low-lighting conditions [11]. Intuitively, we envisage that the new layer should be able to better focus on the actual voice data while possibly removing unwanted noise.
Following the original formulation [6], the lateral inhibition layer is described as follows:
\[F(x)=x\cdot Diag(\Theta(x\cdot ZeroDiag(W)+b)) \tag{1}\]
where \(x\) is the input vector of the layer (i.e., the embedding representation produced by the RoWav2Vec2.0-VP-100k-LI model), \(Diag(\cdot)\) denotes a diagonal matrix whose diagonal is set to the vector given as its argument, \(ZeroDiag(\cdot)\) sets the diagonal of a matrix to zero, \(W\) is the weight matrix, \(b\) corresponds to the bias values, and \(\Theta(\cdot)\) is the Heaviside function (see Equation 2).
\[\Theta(x)=\begin{cases}1,x>0\\ 0,x\leq 0\end{cases} \tag{2}\]
Following the analogy with the biological process, the Heaviside function determines which values can pass to the next layer. The decision is based on the adjacent values in the supplied embedding representation. Equation 1 is used for the forward pass, with the Heaviside function included, thereby providing a strict pass or reject functionality for the input values. However, in the backward pass, the non-differentiable Heaviside function is replaced with the parameterized sigmoid function [12] (see Equation 3, where \(k\) is the scaling parameter). This technique, known as surrogate gradient learning [13], allows using a known derivative (see Equation 4) in the backward pass.
\[\sigma(x)=\frac{1}{1+e^{-kx}} \tag{3}\]
\[\sigma^{\prime}(x)=k\sigma(x)\sigma(-x) \tag{4}\]
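To make Equations 1-4 concrete, the following is a minimal PyTorch sketch of the lateral inhibition layer with its surrogate-gradient Heaviside step; the class names, initialization, and tensor conventions are our own assumptions rather than details of the released implementation.

```python
import torch
import torch.nn as nn


class SurrogateHeaviside(torch.autograd.Function):
    """Heaviside step on the forward pass (Eq. 2); surrogate sigmoid gradient
    sigma'(x) = k * sigma(x) * sigma(-x) on the backward pass (Eqs. 3-4)."""
    k = 10.0  # scaling parameter k, the value used in the paper

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).to(x.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        sig = torch.sigmoid(SurrogateHeaviside.k * x)
        return grad_output * SurrogateHeaviside.k * sig * (1.0 - sig)


class LateralInhibition(nn.Module):
    """F(x) = x * Theta(x @ ZeroDiag(W) + b), i.e. Eq. 1 with the Diag(.)
    product realized as an elementwise gating of the input vector."""

    def __init__(self, dim: int):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(dim, dim))
        self.bias = nn.Parameter(torch.zeros(dim))
        nn.init.xavier_uniform_(self.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight - torch.diag(torch.diagonal(self.weight))  # ZeroDiag(W)
        gate = SurrogateHeaviside.apply(x @ w + self.bias)         # Theta(...)
        return x * gate
```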
## III Experimental Settings
### _Dataset_
The fine-tuning of the RoWav2Vec2.0-VP-100k-LI model was done on a speech dataset composed of ten Romanian corpora with transcribed audio files. The corpora contain recordings from several domains, including Wikipedia, News, Internet, and Legal. The resulting dataset has approximately 300 hours of transcribed speech from 222.7k utterances. It comprises both read and spontaneous speech, distributed in an imbalanced manner, with 229 hours of read speech and 71 hours of spontaneous speech, respectively.
We further split our Romanian speech dataset into five subsets based on the total recording time by randomly sampling audio files without replacement until the desired size was reached: Small (S) - 10 minutes, Medium (M) - 1 hour, Large (L) - 10 hours, Extra Large (XL) - 100 hours, and Extra Extra Large (XXL) - the whole dataset. The split was necessary to evaluate the lateral inhibition performance in more extreme settings, i.e., with fewer labeled audio files.
### _Fine-tuning_
We used the primary fine-tuning mechanism for the Wav2Vec 2.0 model as introduced in the original paper [5]. Therefore, using the raw audio input, we project the contextualized embeddings \(c_{i}\) obtained by the model for each time step \(i\) into a tensor \(y_{i}\) whose dimensions match the number of letters in the Romanian alphabet, plus the space character and the blank token. We project the data using either the standard fully-connected layer or the lateral inhibition layer followed by a dense layer. Using the connectionist temporal classification (CTC) algorithm [16], we compute the loss between the predicted logits and the target labels. We set \(k=10\) for the lateral inhibition layer, which we believe makes the surrogate a good enough approximation of the gradient of the Heaviside function.
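A hedged sketch of this head replacement and CTC objective follows; the hidden size, vocabulary size, and blank index are illustrative assumptions, and the first linear layer below merely stands in for the lateral inhibition layer sketched in Section II.

```python
import torch
import torch.nn as nn

hidden, vocab = 768, 33          # assumed Base-model width; letters + space + blank
head = nn.Sequential(
    nn.Linear(hidden, hidden),   # placeholder for the lateral inhibition layer
    nn.Linear(hidden, vocab),    # projection onto the output alphabet
)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)  # blank index 0 by assumption

c = torch.randn(4, 200, hidden)                      # encoder outputs (batch, time, hidden)
log_probs = head(c).log_softmax(-1).transpose(0, 1)  # CTC expects (time, batch, vocab)
targets = torch.randint(1, vocab, (4, 30))           # dummy label sequences
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 200, dtype=torch.long),
           target_lengths=torch.full((4,), 30, dtype=torch.long))
```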
We employed the Adam method [17] to optimize the loss with a learning rate set to \(3e-5\) and a weight decay to \(5e-3\). We fine-tuned each model on two NVIDIA 1080 TI GPUs. Due to GPU memory limitations, we set the batch size to 4 with a gradient accumulation of 8. In addition, we clipped the gradients from the back-propagation algorithm to 2 to improve training stability [18].
## IV Results
### _Romanian ASR_
We evaluate our models, namely RoWav2Vec2.0-VP-100k (i.e., without lateral inhibition) and RoWav2Vec2.0-VP-100k-LI (i.e., with lateral inhibition), on the test set of three corpora: Spontaneous Speech Corpus (SSC) [19], RSC, and RTASC. Compared with previous works on Romanian ASR, the results of the evaluation regarding WER and character error rate (CER) are listed in Table I. In all our experiments, the decoding phase employs a 4-gram KenLM language model [20] trained on the textual part of the corpus for contemporary Romanian language [21].
Our model with lateral inhibition, trained on the full dataset (i.e., RoWav2Vec2.0-VP-100k-LI-XXL), obtains state-of-the-art performance on the RSC and RTASC corpora, achieving 1.78% WER and 29.64% WER, respectively\({}^{2}\). It improves the performance of the best Kaldi [22]-based ASR system, the Time Delay Neural Network - Recurrent Neural Network (TDNN-RNN) [15], by 1.01% WER on RSC and also the performance of the Romanian DeepSpeech2 model [14] on RTASC by 7.57% WER.
Footnote 2: The large difference in WER between the two corpora comes from the type of utterances found in them: RSC contains common Romanian words and phonemes, while RTASC has more specific utterances from technology, with many words and phonemes borrowed from the English language.
However, our proposed models do not improve the performance on the SSC evaluation set, with our best variant (i.e., RoWav2Vec2.0-VP-100k-LI-XXL) falling behind the TDNN-RNN architecture by 2.24% WER. The main driver behind this difference is the shortage of spontaneous speech data in our training corpus compared to the dataset used to train the state of the art. Specifically, the TDNN - Long Short-Term Memory (TDNN-LSTM), the Convolutional Neural Network - TDNN (CNN-TDNN), the TDNN, and the TDNN-RNN were all trained on a dataset with 235 hours of speech, namely 95 hours of read speech data from RSC and 140 hours of dedicated internal spontaneous speech data, similar to the one used in the SSC evaluation set. Meanwhile, we used only 71 hours of spontaneous speech data, approximately half the amount used to train the TDNN-based models.
On the other hand, we increased the proportion of read speech data by decreasing the amount of spontaneous speech data within our training corpus. Hence, the performance of our best variant on the RSC evaluation set may have benefited from this fact. However, RoWav2Vec2.0-VP-100k-LI-XL still achieves almost state-of-the-art performance with 1.80% WER on RSC, indicating that our methodology has not benefited too much from the increased amount of read speech data on this test set.
Apart from our best model, the rest of the variants performed reasonably well on each evaluation task, given the low amount of available training data. The RoWav2Vec2.0-VP-100k model obtained good results when fine-tuned on the L, XL, and XXL subsets, but the word error rate rapidly increased when the training dataset was switched to the more extreme cases (i.e., the M and S subsets). For instance, on the RSC dataset, the variants fine-tuned on the L, XL, and XXL subsets maintained a fairly good performance, achieving 4.80%, 2.31%, and 2.01% WER, respectively (or 3.95%, 1.80%, and 1.78% WER, respectively, with the lateral inhibition layer). However, the WER increased by more than three times on the RSC M subset and more than eight times on the RSC S subset, with our model obtaining 16.55% and 44.78% WER, respectively (or 13.92% and 35.00% WER with the lateral inhibition layer).
### _Lateral Inhibition Layer Improvements_
We further analyze the improvements brought by the lateral inhibition in the RoWav2Vec2.0-VP-100k-LI models on all five evaluation subsets. An illustration of the difference in performance obtained by our model fine-tuned on all subsets is depicted in Figure 1. We observe that the lateral inhibition layer decreases the error rates of the RoWav2Vec2.0-VP-100k-LI models in all our experiments. We also notice that, on average, the improvements become more significant for the smaller subsets. We believe this results from the increased regularization when the lateral inhibition layer is employed, mainly because it allows the model to focus better on the features of the actual human voice, thereby learning to distinguish the speech from the noise better even when the data is scarce.
We also compute the average relative improvement of the lateral inhibition mechanism for all the RoWav2Vec2.0-VP-100k-LI variants on each evaluated corpus. We depict the results in Figure 2. The greatest improvements are achieved on the RSC evaluation subsets, with the lateral inhibition layer reducing the WER on average by 17.8% and the CER by 16.1%. The lowest average WER improvement (i.e., 9.0%) is obtained on the RTASC evaluation subsets. Also, the lowest CER improvement (i.e., 11.4%) is obtained on the SSC evaluation subsets. The average improvement over all evaluation subsets is 12.5% for WER and 13.1% for CER.
## V Conclusions
Automatic speech recognition for low-resource languages remains an important research direction. In this work, we applied the recently introduced mechanism, namely the lateral inhibition layer, which helps the speech recognition neural networks to better distinguish between the human voice and the surrounding noise. We performed experiments on the Romanian language using the RoWav2Vec2.0-VP-100k-LI models and a custom dataset composed of 300 hours of speech. The results showed that the lateral inhibition layer reduces, on average, the WER by 12.5% over all the evaluated test sets. Furthermore, we achieved state-of-the-art performance on the RSC and RTASC datasets using this mechanism, obtaining 1.78% WER and 29.64% WER, respectively.
Future work will consider experimenting with the lateral inhibition layer on languages other than Romanian and evaluating on a speech dataset containing more than 300 hours. In addition, we intend to fine-tune other variants of the Wav2Vec 2.0 model, pre-trained on various datasets and with different methodologies, to validate that our results generalize beyond the pre-trained variant employed in this work.
## Acknowledgements
The research has been funded by the University Politehnica of Bucharest through the PubArt program. |
2309.08037 | Gain and Phase: Decentralized Stability Conditions for Power
Electronics-Dominated Power Systems | This paper proposes decentralized stability conditions for multi-converter
systems based on the combination of the small gain theorem and the small phase
theorem. Instead of directly computing the closed-loop dynamics, e.g.,
eigenvalues of the state-space matrix, or using the generalized Nyquist
stability criterion, the proposed stability conditions are more scalable and
computationally lighter, which aim at evaluating the closed-loop system
stability by comparing the individual converter dynamics with the network
dynamics in a decentralized and open-loop manner. Moreover, our approach can
handle heterogeneous converters' dynamics and is suitable to analyze
large-scale multi-converter power systems that contain grid-following (GFL),
grid-forming (GFM) converters, and synchronous generators. Compared with other
decentralized stability conditions, e.g., passivity-based stability conditions,
the proposed conditions are significantly less conservative and can be
generally satisfied in practice across the whole frequency range. | Linbin Huang, Dan Wang, Xiongfei Wang, Huanhai Xin, Ping Ju, Karl H. Johansson, Florian Dörfler | 2023-09-14T21:58:50Z | http://arxiv.org/abs/2309.08037v2 | # Gain and Phase: Decentralized Stability Conditions for Power Electronics-Dominated Power Systems
###### Abstract
This paper proposes decentralized stability conditions for multi-converter systems based on the combination of the small gain theorem and the small phase theorem. Instead of directly computing the closed-loop dynamics, e.g., eigenvalues of the state-space matrix, or using the generalized Nyquist stability criterion, the proposed stability conditions are more scalable and computationally lighter, which aim at evaluating the closed-loop system stability by comparing the individual converter dynamics with the network dynamics in a decentralized and open-loop manner. Moreover, our approach can handle heterogeneous converters' dynamics and is suitable to analyze large-scale multi-converter systems that contain grid-following (GFL) and grid-forming (GFM) converters. Compared with other decentralized stability conditions, e.g., passivity-based stability conditions, the proposed conditions are significantly less conservative and can be generally satisfied in practice across the whole frequency range.
Decentralized stability conditions, grid-forming control, grid-following control, power converters, power systems, small gain theorem, small phase theorem, small signal stability.
## I Introduction
Power electronics converters play a significant role in modern power systems, acting as the interfaces between the power grid and renewable energy sources, high-voltage DC transmission systems, smart loads, energy storage systems, etc. The large-scale integration of power converters is changing the power system dynamics, as they have distinct dynamics compared with conventional synchronous generators [1]. Under such a background, new stability problems are emerging, and analyzing the stability of systems integrated with multiple power converters is essential for ensuring the secure operation of power systems [2]. In this paper, we focus on the small-signal stability of multi-converter systems. The small-signal stability analysis of power converters has been an important and popular topic for many years, due to the complicated dynamics in converters caused by the interaction among filters, multiple nested control loops, and the power grid. There have been many well-known methods to evaluate the stability of power converters, such as eigenvalue analysis [3], impedance-based analysis [4, 5, 6, 7], small gain theorem-based analysis [8], and passivity-based analysis [9, 10].
Eigenvalue analysis is based on deriving the state-space matrix of the system, which, in the context of multi-converter systems, requires a detailed, global, and closed-loop model of the whole system. Hence, it may suffer from scalability and dimensionality problems when dealing with large-scale systems. Compared with eigenvalue analysis, impedance-based analysis offers more insights into the system dynamics in a wide frequency range. Moreover, the impedance of the power grid and converters can be measured, so black-box models can be directly used for stability assessment [11]. In multi-converter systems, one may need to build the impedance network for stability analysis [12]. Nonetheless, the stability analysis relies on using the generalized Nyquist stability criterion or deriving the characteristic polynomial of the closed-loop system, which may still suffer from scalability and dimensionality problems. As a remedy, if all the converters in the system have homogeneous dynamics, one can mathematically decouple the system into small-scale subsystems, and then use state-space or impedance methods to analyze the subsystems [13, 14, 15]. For instance, Ref. [13] decouples a multi-infeed system that contains homogeneous grid-following (GFL) converters and analyzes the stability from the perspective of grid strength characterized by the generalized short-circuit ratio (gSCR).
However, it has been widely acknowledged that GFL converters, which rely on phase-locked loops (PLLs) for grid synchronization, cannot support a power electronics-dominated power system. This is because the PLL aims at tracking the grid frequency, so there must be frequency sources in the system for GFL converters to operate stably. Hence, we need the so-called grid-forming (GFM) converters. Typical GFM control methods include droop control [16, 17], virtual synchronous machines [18], synchronverters [19], virtual oscillator control [20, 21], and so on. The coexistence of GFM and GFL converters makes the stability analysis of multi-converter systems more complicated, and currently it is not clear how to evaluate the stability of large-scale multi-converter systems in a scalable and computationally feasible fashion. Passivity-based analysis can potentially be used to analyze the stability of GFM-GFL hybrid multi-converter systems in a scalable and decentralized manner, i.e., if all the converters are passive, then the interconnected multi-converter system is stable [9], but it may lead to overly conservative results. Moreover, the converter's dynamics in the low-frequency range may not satisfy the passivity condition when the synchronization dynamics are taken into account due to, for instance, the negative resistance effect of PLL [6, 9].
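To give a flavour of such gain-based decentralized tests, here is a minimal numerical sketch of the classical small gain condition, \(\|G_{1}\|_{\infty}\cdot\|G_{2}\|_{\infty}<1\), for a feedback interconnection of two stable systems; the transfer functions below are illustrative toy models, not actual converter or network dynamics, and the H-infinity norms are approximated on a frequency grid.

```python
import numpy as np
from scipy import signal

def hinf_norm(sys, w=np.logspace(-2, 4, 2000)):
    """Grid-based approximation of the H-infinity norm (peak frequency-response gain)."""
    _, mag_db, _ = signal.bode(sys, w=w)
    return np.max(10.0 ** (mag_db / 20.0))

G1 = signal.TransferFunction([0.5], [1.0, 2.0, 1.0])  # toy "converter" model
G2 = signal.TransferFunction([0.8], [0.1, 1.0])       # toy "network" model

if hinf_norm(G1) * hinf_norm(G2) < 1.0:
    print("Small gain condition holds: the feedback loop is certified stable.")
else:
    print("Small gain condition fails: stability is not certified by this test.")
```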
Recent advances in control and systems theory have extended the passivity condition by defining the phases of |
2309.13371 | Small telescopes being effective: MAGIC or not? | The paper describes the MAGIC multi-mode focal reducer (Monitoring of Active
Galaxies by Investigation of their Cores), commissioned on the 1-m Zeiss-1000
telescope of the Special Astrophysical Observatory of the Russian Academy of
Sciences in September 2020. Three observational modes are currently realised:
photometry, polarimetry, and long-slit spectroscopy. Reducing the focal length
makes it possible to obtain a sufficiently large field of view for photometry
and a large slit height for spectroscopy of $\sim$12$'$, as well as a large
field of view for polarimetry with a quadrupole Wollaston prism of
$\sim$6$'$.4. This feature makes the complex study of extended nebulae and
galaxies efficient. The MAGIC capabilities are presented in examples of
observations of various astronomical objects. The spectral mode in the range of
4000-7200 AA provides the spectral resolution $R \sim$ 1000; for a starlike
target up to 14 mag in medium-band filters with a seeing of 1$''$ for 20
minutes of total exposure, the photometry accuracy is better than 0.01 mag and
the polarization accuracy is better than 0.6%. Especially for the new focal
reducer, an offset guide and a position angle rotation system were implemented.
The results of the modernization of the baffle system in the optical scheme of
the telescope for the suppression of scattered light are also described. | Victor L. Afanasiev, Eugene A. Malygin, Elena S. Shablovinskaya, Roman I. Uklein, Vladimir R. Amirkhanyan, Alexander E. Perepelitsyn, Irina V. Afanasieva | 2023-09-23T13:20:51Z | http://arxiv.org/abs/2309.13371v1 | # Small telescopes being effective: MAGIC or not?
###### Abstract
The paper describes the MAGIC multi-mode focal reducer (Monitoring of Active Galaxies by Investigation of their Cores), commissioned on the 1-m Zeiss-1000 telescope of the Special Astrophysical Observatory of the Russian Academy of Sciences in September 2020. Three observational modes are currently realised: photometry, polarimetry, and long-slit spectroscopy. Reducing the focal length makes it possible to obtain a sufficiently large field of view for photometry and a large slit height for spectroscopy of \(\sim\)12\({}^{\prime}\), as well as a large field of view for polarimetry with a quadrupole Wollaston prism of \(\sim\)6\({}^{\prime}\).4. This feature makes the complex study of extended nebulae and galaxies efficient. The MAGIC capabilities are presented in examples of observations of various astronomical objects. The spectral mode in the range of 4000-7200 Å provides the spectral resolution \(R\sim\) 1000; for a starlike target up to 14 mag in medium-band filters with a _seeing_ of 1\({}^{\prime\prime}\) for 20 minutes of total exposure, the photometry accuracy is better than 0.01 mag and the polarization accuracy is better than 0.6%. Especially for the new focal reducer, an offset guide and a position angle rotation system were implemented. The results of the modernization of the baffle system in the optical scheme of the telescope for the suppression of scattered light are also described.
keywords: astronomical observing techniques - devices and instruments - telescopes
## 1 Introduction
The modern level of astronomical signal registration equipment and control systems allows small telescopes to solve observational tasks that were previously available only to large instruments. The operation of meter-class telescopes is not so strictly regulated by the observation schedule, which makes them more accessible for obtaining long-term observation series. Currently, plenty of monitoring campaigns are organized at small instruments worldwide for observations of relatively bright objects variable on time-scales from hours to years, as, e.g., active galactic nuclei (AGN).
However, many of the small Cassegrain telescopes have large focal ratios, leading to a small image scale in the focal plane. The Zeiss-1000 telescope (with primary mirror diameter \(D=1\) m and focal length at the Cassegrain focus \(F=13.3\) m, Komarov et al., 2020) of the Special Astrophysical Observatory of the Russian Academy of Sciences (SAO RAS) also has a large focal ratio of \(F/13.3\). Thus, for a pixel of the linear size \(p=13.5\)\(\mu\)m, the scale in the focal plane is 0\({}^{\prime\prime}\).2/pix, providing oversampled images within the typical _seeing_\(\beta\approx 1^{\prime\prime}.5\) at SAO (Panchuk & Afanas'ev, 2011). Moreover, when extended objects, e.g., nearby Seyfert galaxies, are of particular interest, the signal-to-noise ratio (S/N) no longer depends on _seeing_\({}^{1}\) but scales as S/N \(\sim p\cdot D/F\) (obviously, this is true for optical systems not burdened by scattered light, which significantly reduces S/N). The manufacturing of a focal reducer (Courtes, 1960, 1964) naturally solves these problems. Decreasing the focal ratio from \(F/13.3\) to \(F/6.1\) leads to a larger scale of 0\({}^{\prime\prime}\).45/pix, meeting the demands of optimal sampling (e.g. Howell, 2006). Moreover, it results in a larger field of view (FoV), important for extended objects, and in the presence of a parallel beam allowing one to introduce dispersion elements or polarization analyzers. The latter extends the number of available observation modes, allowing flexible reactions to weather conditions and the application of diverse methods of investigation of astrophysical objects.
Footnote 1: For star-like objects (S/N) \(\sim D/\beta\).
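For reference, the image scales quoted above follow directly from the pixel size and the effective focal length; a minimal sketch:

```python
# Image scale in arcsec/pixel from pixel size (um) and effective focal length (m),
# using 206265 arcsec per radian.
def plate_scale(pixel_um: float, focal_m: float) -> float:
    return 206265.0 * pixel_um * 1e-6 / focal_m

print(plate_scale(13.5, 13.3))  # ~0.21 "/pix at the native Cassegrain focus (F/13.3)
print(plate_scale(13.5, 6.1))   # ~0.46 "/pix with the focal reducer (F/6.1, D = 1 m)
```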
For these reasons - considering the productive experience in the development of multi-mode cameras based on focal reducers over the past few decades [e.g. the focal reducer system for the 1.06-m \(F/14.5\) Cassegrain Zeiss-telescope of the Hoher List Observatory (Geyer et al., 1979), the OREAD focal-reducing camera for the 1.27-m \(F/7.6\) McGraw-Hill telescope of the Michigan-Dartmouth-MIT Observatory (Aldering & Bothun, 1991), DFOSC for the Danish 1.54-m \(F/8.6\) telescope at La Silla Observatory (Andersen et al., 1995), and many other devices], whose spread is associated with the widespread use of compact small-sized telescopes with the Ritchey-Chretien system, which have a large aberration-free FoV but are not particularly fast, as well as our own positive twenty-year experience of operating focal reducers (devices of the SCORPIO family, Spectral Camera with Optical Reducer for Photometrical and Interferometrical Observations, Afanasiev & Moiseev, 2005, 2011) at the 6-m BTA telescope of SAO RAS - we have developed the multi-mode MAGIC focal reducer for the Zeiss-1000 of the SAO RAS, the parameters of which are given in Table 1. This device is aimed at a wide range of observational monitoring tasks within approaches developed at SAO RAS over the last 30 years (Shapovalova et al., 2004, 2019; Uklein et al., 2019; Malygin et al., 2020; Shablovinskaya et al., 2020), unified in the MAGIC project (Monitoring of Active Galaxies by Investigation
of their Cores). Among other things, in the case of Zeiss-1000, the construction of the efficient device required additional modification of the telescope components, described in this paper.
The paper structure is as follows. Section 2 describes the modernization of the optomechanical scheme of the 1-m Zeiss-1000 telescope of SAO RAS, as the installation of shielding elements, rotating platform and offset guiding system. In Section 3 the MAGIC optomechanical scheme is given together with its characteristics. Section 4 discusses the features of observations in the modes of photometry, polarimetry, and long-slit spectroscopy and provides examples of observations.
## 2 Modernization of the optical-mechanical scheme of the telescope
To increase the efficiency and accuracy of observations, we have upgraded the optomechanical scheme of the 1-m Zeiss-1000 telescope, as well as created the MAGIC multi-mode focal reducer. As part of the modernization of the telescope design, we introduced and changed several key components of the system:
\(\rightarrow\) [Baffles] \(\rightarrow\) [Rotator + Guide] \(\rightarrow\) [Calibration illumination] \(\rightarrow\) [MAGIC]
Arrows indicate the path of the incoming rays. After reflection on the primary and secondary mirrors of the Zeiss-1000, the light is surrounded by baffles, then crosses the automated turntable consisting of the rotator and the offset guide, after which it passes through the calibration illumination module and only then enters the MAGIC entrance pupil.
The modified components in the scheme complement the MAGIC device, however, they are permanent modifications of the entire optical system of the telescope and also work in conjunction with other devices operating at the Cassegrain focus. Nevertheless, all these modules are independent and separated from each other and can be removed if necessary. At the moment, the rotation and guiding modules are implemented on the telescope and are at the stage of the test observations. Further in the section, we will sequentially describe these components in brief detail. Being the essential part of the telescope modernization, the module of telecentric illumination with discrete and continuous spectrum sources for spectral calibrations designed similarly to the concepts implemented for the 6-m BTA telescope adapter (Afanasiev et al., 2017) is in the process of development and is a point of the upcoming paper.
### Baffles
The Zeiss-1000 telescope (Komarov et al., 2020) is a Ritchey-Chretien aplanat with two hyperbolic mirrors. Due to their design, Cassegrain telescopes are the most vulnerable to parasitic light reaching the detector during observations. Baffles have been installed in the telescope as a system of two surfaces: a truncated cone and a cylinder (near the secondary and primary mirrors, respectively). They are shown in the top panel of Fig. 1 and are called "front" and "rear" baffles. These are the default baffles originally installed in the telescope. This configuration provides an unvignetted field of \(\diameter 106\) mm (\(\sim\)27\(\arcmin\)) at the telescope focal plane. The baffles shield the detector from the most dangerous direct light and also prevent light reflected by the inner surface of the telescope tube from entering the field of view. However, the baffle near the main mirror causes additional scattered light when direct light is reflected from its inner surface at grazing incidence (Maksutov, 1946).
Thus, due to light re-reflection on the inner surface of the rear baffle, a complex-shaped spot was formed on the detector, which, in effect, was an additive parasitic signal. We observed it as a drop in intensity at the edges of the calibration flat-field frame on the order of \(\sim\)10% (Fig. 2, left panel). The maximum of this "bell" shifted during observations depending on the position of the telescope tube, which introduced significant errors into the flat-field frames and the data reduction. When processing scientific frames, scattered light cannot be taken into account, which worsens the accuracy of measurements of faint objects. Since the parasitic signal is additive, dividing frames by the flat field also introduced a systematic error of about 10% towards the edges of the FoV and decreased the accuracy of high-precision photometric measurements. Moreover, the scattered light must contribute to the instrumental polarization, and its value is heterogeneous over the field.
Firstly, we performed the exact solution of the problem of calculating the optimum design of baffles (Terebizh, 2001) for the Zeiss-1000 to fully replace the original baffles. Yet, it appeared that the solution led to an unacceptably high linear obstruction coefficient (the ratio of the diameters of the widest baffle and the entrance pupil) \(\eta\sim 0.46\). Thus, we departed from the exact solution in favour of a more acceptable design.
To suppress unwanted light, we installed four annular diaphragms with an internal diameter of 185 to 215 mm inside the existing rear baffle (it consists of two parts, with a total height of 1100 mm) and painted the components with high absorption paint. We also made an additional cylindrical 976 mm high structure with five internal diaphragms, installed between the focal plane of the telescope and the default rear baffle, and passing through the central hole of the telescope's main mirror. A drawing of baffles with annular diaphragms
\begin{table}
\begin{tabular}{c c} \hline MAGIC main parameters & \\ \hline Input focal ratio of focal reducer & \(F/12.5\) \\ Total focal ratio at the Zeiss-1000 & \(F/6.1\) \\ QE (optics + telescope + CCD) & \(\sim\)50\% \\ Image quality (FWHM) & 0\(\arcsec\).3 \\ Spectral range & 340-990 nm \\ Weight & 23 kg \\ Dimensions & 430 \(\times\) 440 \(\times\) 265 mm \\ \hline CCD system & Andor iKon-L 936 \\ CCD & E2V CCD42-40 (BEX2-DD) \\ Format & 2048 \(\times\) 2048 pix \\ Pixel size & 13.5 \(\times\) 13.5 \(\mu\)m \\ QE & 400-850 nm: \(>\)90\% \\ & 340-980 nm: \(\sim\)40\% \\ Readnoise (min) & 2.2 e\({}^{-}\) \\ \hline Photometry & \\ FoV & 12\({}^{\prime}\) \\ Image scale (binning 1 \(\times\) 1) & 0\(\arcsec\).45/pix \\ Limiting mag (\(V\), 20 min, seeing \(\sim\) 1\(\arcsec\).1) & 22\({}^{\rm{m}}\).5 \\ \hline Stokes polarimetry & \\ FoV & 6\(\arcmin\).4 \(\times\) 6\(\arcmin\).4 \\ Image scale (binning 1 \(\times\) 1) & 0\(\arcsec\).45/pix \\ Accuracy (14 mag, 20 min, seeing \(\sim\) 1\(\arcsec\)) & 0.6\% \\ \hline Long slit spectroscopy & \\ Spectral range & 400-720 nm \\ Spectral resolution & \(R\sim\) 1000 \\ Slit size & 1\(\arcsec\).7 \(\times\) 12\(\arcmin\) \\ Monochromatic slit image (FWHM) & 3.5 pix \\ Reciprocal dispersion & 0.2 nm/pix \\ \hline \end{tabular}
\end{table}
Table 1: The main parameters of MAGIC with a CCD on the Zeiss-1000 telescope of the SAO RAS
Figure 1: Optical scheme of the Zeiss-1000 telescope after modernization of the baffles. The top panel shows the default front and rear baffles near the secondary and primary mirrors, with annular diaphragms installed in the rear one. Also in the scheme (to the right of the rear baffle) there is an additional construction with diaphragms, which we installed through the main mirror of the telescope. The middle panel indicates the sizes of the installed elements. The bottom panel shows the idea of arranging annular diaphragms described in Danjon & Couder (1935). Dimensions are in millimetres.
installed inside the Zeiss-1000 optical system of the SAO RAS is shown in Fig. 1. The idea of annular diaphragms for refractors was described earlier in Danjon & Couder (1935) and is easily adapted to the design of a cylindrical baffle (the idea is visualized in the bottom panel of Fig. 1). Thus, diaphragms surround the useful light beam in the optical path and significantly reduce the level of unwanted light.
A comparison of flat field frames obtained from the twilight sky _before_ and _after_ blackening the baffle, installing painted diaphragms in it, and installing an additional structure with diaphragms is shown in Fig. 2 on the left and right panels, respectively. After the upgrade, the intensity of the flat field does not drop at the edges of the FoV, which indicates effective blocking of direct and scattered beams in the telescope tube.
### Rotator and offset guide
The rotator, offset guide, calibration illumination, as well as the baffles, are device-independent modules. Since the end of 2022, the rotator and offset guide have been used in a test mode with the MAGIC device and are available for use with other devices installed at the Cassegrain focus. The calibration illumination is still under development. Below we briefly explain their necessity and main features. The details of the rotator and offset guiding system will be described in an upcoming paper (Amirkhanyan et al. 2024, in prep.).
The Zeiss-1000 telescope was originally equipped with a manual rotator. We have upgraded the original Zeiss-1000 rotator by designing, manufacturing, and assembling a construction of a large gear, a worm reducer, and a stepper motor with PCB control. Thus, this modification allows one to remotely rotate the devices installed in the Cassegrain focus to any given angle during the night, which makes observations using various methods much more efficient. The accuracy of the angle setting is \(\sim\)0\({}^{\circ}\).5.
The offset guide is designed to correct the position of the Zeiss-1000 telescope tube based on images from a guide digital camera mounted on a small gear platform into space inside the motorized rotator. An additional guiding module turned out to be necessary since the telescope's tracking error does not allow full-fledged exposures for several tens of minutes. Before the start of work on the production of the offset guide, the capabilities of the side telescope guide of the Zeiss-1000 telescope were tested. During guiding through the side telescope, we got a systematic drift of \(\sim\)2\({}^{\prime\prime}\).5 per hour, which became the prerequisite for the creation of the offset guide.
The rotation of the offset guide platform makes it possible to quickly find available stars for guiding in the FoV of the telescope at the Cassegrain focus. The limiting magnitude of a star for guiding is \(\sim\)14 mag in \(R\)-band.
## 3 Magic description
The MAGIC device is a multi-mode focal reducer, allowing a flexible response to changing weather conditions due to several observational modes: direct images, polarimetry and long-slit spectroscopy. MAGIC is installed in the Cassegrain focus of the 1-m Zeiss-1000 telescope and works in conjunction with the components of the optical system described earlier (see Fig. 3), but does not depend on them. The
Figure 2: Comparison of the normalized flat field frames of the twilight sky _before_ (left) and _after_ (right) the installation of annular diaphragms in the default rear baffle and an additional tube, and blackening of the components. The cuts at the bottom correspond to the blue lines in the frames above. The horizontal axes of bottom cuts correspond to the pixel location along the \(y\)-axis of the frame (the length of the blue line in the angular measure corresponds to 12\({}^{\circ}\)). Frames obtained with a 250Å-width SED700 filter.
weight of the device without a CCD detector is 23 kg, and the size is 430\(\times\)440\(\times\)265 mm.
The device is designed for an input focal ratio of \(F/12.5\) and, due to the collimator and camera, reduces it to \(F/6.1\), which solves the problem of oversampling for typical modern CCDs in the focus of Cassegrain telescopes and provides an advantage for observing faint extended objects.
### Optical design
The optical part of the MAGIC focal reducer consists of a field lens, a collimator and a camera lens. The scheme is shown in Fig. 4. The collimator is a 5-lens apochromat with a focal length of 220 mm and forms the exit pupil of the system. The camera lens is a 6-lens apochromat with a focal length of 109 mm, which focuses the resulting image on the CCD detector. All optical surfaces have an anti-reflective coating, which ensures transmission of each lens \(>\)80%.
Figure 4: MAGIC contents: (1, 2) — filter wheels; (3) — collimator; (4) — focusing mechanism of the collimator; (5) — mode changing linear guide carriage; (6) — camera; (7) — the CCD detector.
Figure 3: MAGIC in the Cassegrain focus. _Left_: An illustrative scheme with a transparent telescope tube. _Right_: photo of MAGIC and a round flat-field screen in the background.
The integral transmission\({}^{2}\) of the focal reducer optics, considering the reflection coefficient of the telescope mirrors and the CCD efficiency, is shown in Fig. 5 and amounts to QE \(\sim\) 50%.
Footnote 2: The quantum efficiency of the MAGIC optics and observational modes was measured using on-sky standard stars in medium-band filters with known pass-bands.
The optomechanics of the device allow movable optical elements to be introduced into the optical path. Optical filters can additionally be set in front of the collimator. Also, between the collimator and the camera, a volume phase holographic grism (VPHG) and a double Wollaston prism can be introduced into the parallel beam by moving the linear guide carriage perpendicular to the central axis of the device; it is also possible to install other optical elements on the carriage.
The optical design of MAGIC was calculated in the ZEMAX software environment. Spot diagram in Fig. 6 shows how the calculated image of a point source looks like for a series of wavelengths from 365 nm to 900 nm at various distances from the central axis of the device from 0\({}^{\circ}\) to 0\({}^{\circ}\).12. The calculated polychromatic encircled energy (the fraction of the total energy in the point spread function) is shown in Fig. 7. The quality of the image formed by the optics is no worse than 10 \(\mu\)m in the plane of the CCD detector, which corresponds to FWHM \(\sim\) 0\({}^{\prime\prime}\).3.
### Electro-mechanical scheme
In the MAGIC scheme (Fig. 4), the light from the telescope passes through the filter wheels (1) and (2). Each wheel has 9 positions for installing filters with a diameter of no more than 50 mm and a thickness of no more than 5 mm. The first wheel, in addition to optical filters, also includes:
* _slit_ -- long slit (width 1\({}^{\prime\prime}\). 7, linear width -- 0.11 mm)
* _mask_ -- mask for the Wollaston prism (angular dimensions -- 6\({}^{\prime}\).4 \(\times\) 6\({}^{\prime}\).4, linear dimensions -- 25\(\times\)25 mm)
* _dots_ -- a matrix of 8\(\times\)8 pinholes with a diameter of 0.1 mm and a step of 3 mm for focusing optics and estimating geometric distortions in polarimetry mode (linear dimensions -- 25\(\times\)25 mm)
Zero position in each wheel is always empty, and given the constant presence of \(slit\), \(mask\) and \(dots\), we have 13 positions to install the necessary replaceable filters.
Next, there is the collimator (3) with the focusing mechanism (4). In the heart of MAGIC is the mode-changing linear guide carriage for 4 positions (5) with the VPH-grism and the Wollaston prism. The switching time between the adjacent carriage positions is 1 min. After the mode carriage light comes through the camera (6) to the CCD detector (7).
To change the configuration, MAGIC has 4 stepper motors: two -- for rotating the filter wheels (1) and (2) and two more -- for the collimator focusing mechanism (4) and moving the linear guide carriage (5). The control program from the onboard PC sends commands to the ATmega8535 microprocessor, which controls the configuration and activates the mechanics of the device. The motors are controlled via the serial port from the graphical user interface (Fig. 10).
### CCD characteristics
An Andor iKon-L 936 CCD system with a BEX2-DD type 2048 \(\times\) 2048 pix E2V CCD42-40 with a pixel size of 13.5 \(\times\) 13.5 \(\mu\)m is used as the detector. The mass of the CCD system is 7 kg. The quantum efficiency of this device is \(>\)90% in the range of 400-850 nm (see Fig. 5) and not less than 40% in the range of 340-990 nm, which is the working spectral range of MAGIC due to its optics. We use default air cooling, which makes it possible to conduct observations with a CCD temperature of about \(-\)80\({}^{\circ}\)C.
The laboratory measurements of the gain value for the 1\(\times\)1 binning mode used in the observations are presented in Table 2. We use two gain modes 'low' (\(\times\)1) and 'high' (\(\times\)4), as well as three readout rates for full frame - 'fast' (4 sec), 'norm' (9 sec) and'slow' (90 sec). The value of the measured readnoise for these modes is shown in Table 3. Note here that the measured values of CCD gain and readout noise differ significantly from the values provided by the manufacturer (19-28% less than the declared gain and 26-45% less than the declared readnoise, depending on the mode).
Note that there is a common misconception that the statistics of counts (analogue-to-digital units, ADU) in CCDs are Poissonian. This assumption underlies the determination of the gain factor of the analogue-to-digital converter of the CCD registration path (Howell 2006). However, as can be seen in Fig. 8 (and especially in the right panel, where the range of the graph is zoomed in), the dependence of the counts variance on the average registered signal deviates from a strictly linear law. There are periodic fluctuations around a linear dependence. We assume that this is a feature of thick silicon CCD detectors with deep depletion technology.
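For context, the underlying photon-transfer relation is var(ADU) \(\approx\) mean(ADU)/\(g\) + RN\({}^{2}\), so the gain follows from a linear fit of the single-frame variance (estimated from a difference of paired flats, which cancels the fixed pattern) against the mean signal. A hedged numpy sketch, with the function name and data layout being our assumptions:

```python
import numpy as np

def ptc_gain(flat_pairs):
    """flat_pairs: list of (frame_a, frame_b) 2-D arrays, each pair taken at the
    same illumination, over a series of increasing light levels."""
    means, variances = [], []
    for a, b in flat_pairs:
        a, b = a.astype(float), b.astype(float)
        means.append(0.5 * (a.mean() + b.mean()))
        variances.append(0.5 * np.var(a - b))   # single-frame variance from the pair
    slope, _ = np.polyfit(means, variances, 1)  # var ~ mean/g + readnoise^2
    return 1.0 / slope                          # gain g in e-/ADU
```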
Also, based on the measurements in Fig. 8, we can identify the working ranges of ADU accumulation for observations in various modes (for gain \(\times\)1 and \(\times\)4) of CCD iKon-L 936, where the signal dispersion behaves in the most acceptable way. It can be concluded that for (\(\times\)1) low gain mode it is not worth accumulating a signal of more than \(\sim\)20k ADU.
On the other hand, for astronomical observations, of particular interest is the registration of weak signals, whose statistics are distorted by the readout noise introduced by the electronics. To study
Figure 5: QE of the system MAGIC+telescope+CCD. Filled black circles with error bars mark transmission measurements of the MAGIC with the Zeiss–1000 telescope mirrors and CCD. Blue squares present the same including the transmission of the quadruple Wollaston prism. The dash-dotted line presents the QE in the spectral mode with the VPHG (including optics+telescope+CCD). The dashed line also shows the quantum efficiency of the CCD for this spectral range. The pass-bands of the medium-band SED filters used to measure QE are plotted with a dotted line.
the distortion of counts statistics, a test criterion is used using the dispersion index, the so-called Fano factor (Fano, 1947). The application of the method to CCD studies is described in detail by Afanasieva (2016). By definition, the dispersion index is the ratio of the variance of counts to the average value of the registered signal. For a Poisson distribution, this ratio is equal to one, and this corresponds only to a certain range of registered values. Fig. 9 shows graphs of the dependence of the dispersion index on the magnitude of the registered signal in different modes for the iKon-L 936 CCD. The left and right panels correspond to two gain modes - (\(\times 1\)) low and (\(\times 4\)) high respectively. These studies also provide insight into the optimal choice of exposure time in order to minimize the distortion of counts statistics when observing astrophysical objects using the MAGIC focal reducer. According to the measurements, the best fit to the Poisson statistics is achieved when the signal is accumulated in the (\(\times 1\)) low gain mode at a'slow' readout rate from about a few hundred to \(\sim\)10k ADU.
Note here that for both CCD gain modes used (\(\times 1\) and \(\times 4\)), 'sawtooth' beats of the dispersion index are observed at the 'norm' readout rate. We keep this negative feature in mind during observations.
Also in Fig. 9 on the bottom panels there are measurements of the deviation from signal linearity, which do not exceed 0.5% in the entire range of signal accumulations used in observations.
CCDs with a thick, deep-depletion silicon substrate provide high spectral sensitivity of the detector even in the 1 \(\mu\)m region. A powerful advantage of the iKon-L 936 CCD is the complete absence of interference noise in the red part of the spectrum. Under laboratory conditions, we exposed the CCD illuminated with various
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multicolumn{2}{c}{} & \multicolumn{3}{c}{rate} \\ \cline{3-5} \multicolumn{2}{c}{2048 \(\times\) 2048 pix} & fast (3.0 MHz) & norm (1.0 MHz) & slow (0.1 MHz) \\ \hline \multirow{2}{*}{GAIN} & high (\(\times 4\)) & 0.89 & 0.84 & 0.84 \\ & low (\(\times 1\)) & 3.0 & 2.8 & 2.8 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Measured gain values [e\({}^{-}\)/ADU] for various modes of the Andor iKon-L 936 CCD
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multicolumn{2}{c}{} & \multicolumn{3}{c}{rate} \\ \cline{3-5} \multicolumn{2}{c}{2048 \(\times\) 2048 pix} & fast (3.0 MHz) & norm (1.0 MHz) & slow (0.1 MHz) \\ \hline \multirow{2}{*}{GAIN} & high (\(\times 4\)) & 6.7 \(\pm\) 0.03 & 4.8 \(\pm\) 0.01 & 2.2 \(\pm\) 0.01 \\ & low (\(\times 1\)) & 11.3 \(\pm\) 0.11 & 5.9 \(\pm\) 0.06 & 2.7 \(\pm\) 0.07 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Measured readnoise [e\({}^{-}\)] for various modes of the Andor iKon-L 936 CCD
Figure 6: Spot Diagram. Circle diameter — 30 microns = 1′′.
wavelengths and could not detect the contribution of the interference pattern, so-called fringes. Thus, this CCD allows one to efficiently provide research in the red part of the spectrum at high sensitivity. Additional information about the peculiarities of CCD images in the near-infrared band is given in Appendix A.
### Remote control
The control of the device, including the rotator, guide and CCD, is implemented through several compact computers installed on the telescope, which allows remote observations. In observations, we use network access to the onboard computer MR3253S-00F (with Windows 7 as the operating system) made by LEX COMPUTECH in the remote desktop format. The control interface is a graphical shell in the IDL environment MAGIC remote control, a screenshot of which is shown in Fig. 10. The upper half of the interface is used to control the CCD detector and edit the information recorded in the FITS header during the observations; the lower half is used to control the MAGIC (setting the observation mode, focusing the collimator, and orientation) and some telescope functions (small tube shifts and focusing). At the end of each exposure, the resulting FITS file is opened for analysis in the FITS-viewer (see Fig. 11) -- here the observer traditionally controls the levels of accumulation and the quality of each frame. Note here that the image in the viewer is flipped along the RA axis.
## 4 Observation modes
### Photometry
The photometric mode of observations with the MAGIC device makes it possible to obtain direct images using various light filters, which are introduced into the beam by means of two wheels. The size of the FoV is limited by the size of the round filter and is \(\sim\)12\({}^{\prime}\). Note that for photometry, as well as in other observation modes, we use 1\(\times\)1 CCD binning, which gives an image scale of 0\({}^{\prime\prime}\).45/pix and satisfies the Kotelnikov-Nyquist theorem (the sampling allows us to accurately restore the PSF profile). The device uses narrow-band and medium-band interference SED filters\({}^{3}\) (the bandwidths of the SED filters used to measure QE are shown in Fig. 5), as well as broadband glass filters \(BVR_{\rm C}I_{\rm C}\) of the Johnson-Cousins system (Bessell, 1990). In the case of the broadband filters, the following equations for converting instrumental magnitudes into the _standard photometric system_ were constructed, neglecting the second-order extinction coefficients:
Footnote 3: Manufactured by Edmund Optics, [https://www.edmundoptics.com/](https://www.edmundoptics.com/).
\[\begin{array}{l}B=b+0.12^{\pm 0.022}(B-V)+22.43^{\pm 0.014}\\ V=v-0.23^{\pm 0.023}(B-V)+22.78^{\pm 0.015}\\ R_{\rm C}=r+0.22^{\pm 0.043}(V-R_{\rm C})+22.75^{\pm 0.017}\\ I_{\rm C}=i+0.05^{\pm 0.022}(V-I_{\rm C})+22.23^{\pm 0.019}\\ \end{array} \tag{1}\]
where \(B\), \(V\), \(R_{\rm C}\), \(I_{\rm C}\) are standard magnitudes in \(B\)-, \(V\)-, \(R_{\rm C}\)- and \(I_{\rm C}\)-bands,
\(b\), \(v\), \(r\), \(i\) - instrumental magnitudes in filters \(B\), \(V\), \(R_{\rm C}\), \(I_{\rm C}\), reduced to zenith distance z = 0, calculated as \(-2.5\cdot\lg(N)-\alpha\cdot X\), where \(N\) is the number of counts (ADU) per second acquired in the 2.8 \(e^{-}\)/ADU gain mode, \(\alpha\) is the extinction coefficient, \(X\) is the air mass.
We built equations from measurements of 36 stars (in the range of colours not exceeding 0.6 mag) in the field NGC7654, which was observed at a zenith distance z \(\sim 18^{\circ}\) on September 22, 2020. The measured extinction coefficients on this night were:
\[\begin{array}{l}\alpha_{B}=0^{\rm m}.50\pm 0^{\rm m}.030\\ \alpha_{V}=0^{\rm m}.39\pm 0^{\rm m}.028\\ \alpha_{R_{\rm C}}=0^{\rm m}.29\pm 0^{\rm m}.025\\ \alpha_{I_{\rm C}}=0^{\rm m}.28\pm 0^{\rm m}.039\\ \end{array}\]
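Since \(B-V\) appears on the right-hand side of Equation 1, the transformations for a pair of bands are solved as a small linear system; a hedged sketch for \(B\) and \(V\) follows (the function name is ours):

```python
# Inverting B = b + cB*(B-V) + zB and V = v + cV*(B-V) + zV:
# subtracting the two gives (B-V)*(1 - cB + cV) = (b - v) + (zB - zV),
# after which B and V follow directly (coefficients from Eq. 1).
def bv_standard(b_inst, v_inst, cB=0.12, cV=-0.23, zB=22.43, zV=22.78):
    bv = ((b_inst - v_inst) + (zB - zV)) / (1.0 - cB + cV)
    V = v_inst + cV * bv + zV
    B = V + bv
    return B, V
```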
For our monitoring tasks, typical magnitudes of observed objects are 16 mag in the \(V\)-band. For 10 minutes of total exposure within a typical seeing of about 2\({}^{\prime\prime}\) at SAO, the accuracy for a star-like object is 0.005 mag. For photometry of faint sources in the \(V\)-band on a single frame with an exposure time of 20 minutes, we achieved \(S/N\approx 4\) for a 22.5 mag source within 1\({}^{\prime\prime}\).1 seeing.
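As a rule of thumb relating these numbers, the magnitude error of a photon-noise-limited measurement is \(\sigma_{m}\approx 1.0857/(S/N)\); a minimal sketch:

```python
import math

# Magnitude error from signal-to-noise: sigma_m = 2.5 / (ln 10 * S/N) ~ 1.0857 / (S/N).
def mag_error(snr: float) -> float:
    return 2.5 / (math.log(10.0) * snr)

print(mag_error(4.0))    # ~0.27 mag, the faint-source detection quoted above
print(mag_error(217.0))  # ~0.005 mag, the monitoring accuracy quoted above
```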
### Polarimetry
In the MAGIC device, the polarization analyzer is installed in a parallel beam. The design of the device allows the use of any type of polarization analyzer, both a classic dichroic polaroid and birefringent prisms. At the moment, we use a double Wollaston prism for the tasks of AGN polarimetry. The advantage of this analyzer is the ability to apply the _one-shot polarimetry_ approach, when images of the FoV at several angles of the electric vector oscillation, sufficient to calculate the Stokes parameters, are registered at the detector simultaneously. This method minimizes the effect of atmospheric depolarization (for more details see Afanasiev & Amirkhanyan, 2012).
We use the quadrupole Wollaston prism originally described in Geyer et al. (1993). The prism was produced by OPTEL\({}^{4}\) and consists of two Wollaston calcite prisms glued together, with a total size of 30\(\times\)30\(\times\)16 mm. The antireflection coating applied to the prism optics provides a high transmission of about 90%, which leads to QE \(\sim\) 45% when the contributions of the device optics, CCD, and telescope are included (Fig. 5). To avoid overlapping images in different polarization directions, the prism is used in conjunction with a square mask giving a square FoV of 6\({}^{\prime}\).4\(\times\)6\({}^{\prime}\).4 in each direction.
Figure 7: Polychromatic Encircled Energy.
As an example, Fig. 12 shows a frame of the M1 nebula, obtained with a Wollaston prism for 300 seconds of exposure in the SED600 filter. As can be seen, four directions of polarization are registered on the detector in the angles \(0^{\circ}\), \(90^{\circ}\), \(45^{\circ}\) and \(135^{\circ}\). This makes it possible to calculate three Stokes parameters \(I,Q,U\), which describe the intensity and linear polarization of radiation, as follows:
\[I=I_{0}+I_{90}+I_{45}+I_{135},\] \[\frac{Q}{I}=\frac{I_{0}-I_{90}}{I_{0}+I_{90}},\] \[\frac{U}{I}=\frac{I_{45}-I_{135}}{I_{45}+I_{135}},\]
where \(I_{0},I_{90}\), \(I_{45},I_{135}\) are the intensities in each direction, respectively.
Figure 8: The gain factor is determined from the slope of the dependence of half of the variance of counts on the average value of signal accumulation. _Left_: dependencies are presented for all gain modes and readout rates used in the observations. _Right_: zoomed in on the same dependencies.
Figure 9: Measurement of CCD characteristics for all gain modes (_left_: \(\times 4\) high, _right_: \(\times 1\) low) and readout rates. The top panel shows the dependence of the dispersion index on signal accumulation. The lower panel shows the level of non-linearity of signal registration in the entire range of accumulations.
Further, for convenience, we will use the notation \(Q\equiv Q/I\) and \(U\equiv U/I\). The degree of polarization \(P\) and the angle of the plane of polarization \(\varphi\) are calculated by the formulas:
\[P=\sqrt{Q^{2}+U^{2}},\]
\[\varphi=\frac{1}{2}\arctan(U/Q).\]
Note that to rotate the Stokes parameters to the celestial plane, the Stokes vector should be multiplied by the rotation matrix for the angle \(-2\cdot\mathrm{PA}\), where PA is the instrument position angle.
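Putting the above together, a minimal sketch of the Stokes computation from the four simultaneously registered images (the array names are ours; the rotation by \(-2\cdot\mathrm{PA}\) is included as the final step):

```python
import numpy as np

def stokes(i0, i90, i45, i135, pa_deg=0.0):
    """Stokes I, Q/I, U/I, polarization degree P and angle phi (deg) from the
    four simultaneous polarization images of a quadrupole Wollaston prism."""
    I = i0 + i90 + i45 + i135
    q = (i0 - i90) / (i0 + i90)
    u = (i45 - i135) / (i45 + i135)
    # rotate (q, u) to the celestial plane by the angle -2*PA
    a = np.radians(-2.0 * pa_deg)
    q, u = q * np.cos(a) + u * np.sin(a), -q * np.sin(a) + u * np.cos(a)
    p = np.hypot(q, u)                        # degree of linear polarization
    phi = 0.5 * np.degrees(np.arctan2(u, q))  # polarization angle
    return I, q, u, p, phi
```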
Because of the large image separation, the prism used in MAGIC has substantial intrinsic dispersion, much larger than that of the more classic wedged design. Without a filter, in white light the dispersion decomposes the image of a star-like source into a low-dispersion spectrum \(>\)40'' in length. The use of broadband filters, for example the \(BVR_{\rm C}I_{\rm C}\) system, with this prism is also not justified, since the distortions introduced by the dispersion would be an order of magnitude larger than the seeing. For this reason, observations with this quadrupole Wollaston prism are optimally carried out in medium-band filters.
Using observations of unpolarized standard stars, we estimated the instrumental polarization of the device within the FoV inside the mask. From repeated observations of zero-polarization standards at different positions in the field, as well as from measurements of the polarization of the images formed by the 8-dot mask that we use to correct geometric field distortions, we found that the instrumental polarization is stable over time and has a smooth field dependence (Fig. 13). The average degree of polarization \(P\) introduced by the device is 3.5% and varies over the field from 2.3% to 4.5%. The pattern and absolute values of the instrumental polarization do not change with wavelength in the range 6000–7000 Å. Our laboratory tests of
Figure 10: MAGIC control interface.
the optics and detector with other polarization analyzers introduced into the beam showed that the source of instrumental polarization is the prism.
We described the variations of \(Q\) and \(U\) over the field by first-order surfaces (Fig. 14). After correcting observations of unpolarized stars for instrumental polarization using this model, the deviations of the parameters \(Q\) and \(U\) from zero were less than 0.05%. Thus, the correction of instrumental polarization makes it possible to carry out high-precision polarimetric observations.
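Fitting and subtracting a first-order surface is a one-line least-squares problem; the sketch below is our illustration (array names are hypothetical), fitting \(s(x,y)=a+bx+cy\) to the instrumental \(Q\) or \(U\) measured for unpolarized standards.

```python
import numpy as np

def fit_first_order_surface(x, y, s):
    """Least-squares fit of s(x, y) = a + b*x + c*y to values s measured
    at field positions (x, y), e.g. instrumental Q or U of standards."""
    A = np.column_stack([np.ones_like(x), x, y])
    (a, b, c), *_ = np.linalg.lstsq(A, s, rcond=None)
    return a, b, c

# Hypothetical usage: fit on unpolarized standards, correct science targets.
# a, b, c = fit_first_order_surface(x_std, y_std, Q_std)
# Q_corrected = Q_sci - (a + b * x_sci + c * y_sci)
```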
To determine the accuracy of the data obtained in the polarimetric mode, we observed a set of highly polarized standard stars. In Fig. 15 the observed polarization degree \(P\) and polarization angle \(\varphi\) for a set of high-polarization standard stars (after correction for instrumental effects) are plotted against their reference values. The deviations were \(\Delta P=0.18\%\) and \(\Delta\varphi=3^{\circ}\). In general, according to our observations, for a star-like target up to 14 mag in medium-band filters with a seeing of 1'' and 20 minutes of total exposure, the polarization accuracy is better than 0.6%.
The large field of view in the one-shot polarimetry mode is an important advantage for polarization observations of extended objects. An example of the results of such observations is shown in Fig. 16. For the Crab Nebula M1, a map of the polarization of the continuum ('amorphous') radiation was obtained, which makes it possible to compare the polarization characteristics of the nebula with its geometry. The surface polarimetry was conducted for methodical purposes and reproduces the results obtained over the extensive history of Crab polarimetric studies initiated by Baade (1956) and subsequently analyzed by Woltjer (1957). Our observations agree with the surface polarization distribution, its degree, and orientation previously identified in the early photographic studies (Baade, 1956; Woltjer, 1957), in the first CCD observations (Hickson & van den Bergh, 1990) with a large FoV similar to that of MAGIC, and in _HST_ observations using a smaller FoV (Moran et al., 2013).
### Long slit spectroscopy
The spectral mode of the MAGIC device is implemented by introducing a direct-vision grism VPHG600@500 (600 lines/mm, central wavelength 500 nm) into the collimated beam (between the camera and the collimator), as well as a slit into the converging beam in front of the collimator. The efficiency of the device in the spectral mode (telescope + optics + grating + CCD) is \(\sim\)16% at maximum (Fig. 5)5.
Footnote 5: The efficiency here was also measured by on-sky standard stars. During the observations, the seeing was comparable to the slit width, and the slit losses of \(\sim\)80% are taken into account.
The slit dimensions of 0.11 mm \(\times\) 46 mm correspond to angular dimensions of 1\({}^{\prime\prime}\).7 \(\times\) 12\({}^{\prime}\) in the focal plane. The width of the monochromatic slit image projected onto the CCD plane is FWHM = 3.5 pix. We chose the slit sizes to achieve the best compromise between optimal CCD sampling, the required _extragalactic_6 spectral resolution, and minimal light loss at the slit under average SAO weather conditions. In conjunction with the spectral grating, low-resolution spectra are obtained in the range 4000–7200 Å with reciprocal dispersion 2 Å/pix and spectral resolution \(\delta\lambda\sim\) 7–8 Å, or in terms of \(R=\lambda/\delta\lambda\sim 1000\).
Footnote 6: Here is meant a compromise for studies of extragalactic objects between the spectral resolution for typical extragalactic tasks and denser concentration of light in a single CCD pixel.
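For concreteness (our arithmetic, not a statement from the paper), these numbers are mutually consistent:

\[\delta\lambda\simeq\mathrm{FWHM}\times\frac{d\lambda}{d\,\mathrm{pix}}=3.5\ \mathrm{pix}\times 2\ \mathrm{\AA/pix}=7\ \mathrm{\AA},\qquad R=\frac{\lambda}{\delta\lambda}\approx\frac{7000\ \mathrm{\AA}}{7\ \mathrm{\AA}}=1000\]

at the red end of the 4000–7200 Å range, falling to \(R\approx 570\) at the blue end.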
In Fig. 17 the sequence of obtaining observational material is demonstrated with the example of spectroscopy of the type 1 AGN E1821+643, from setting the object onto the slit (in the direct image mode) to the processed 1D spectrum. The observations were taken on September 21, 2020. It is interesting to note that, owing to such a long slit, several objects are observed simultaneously in the presented frames, including the extended planetary nebula PN K 1-16 (indicated by number 1 in Fig. 17). It is clear that the slit height of 12\({}^{\prime}\) allows efficient spectroscopic observations of strongly extended objects, for example, comets. Such a long slit also simplifies sky subtraction when processing spectra.
At the moment, the development of a calibration module is underway to obtain auxiliary frames of a spectral flat field and a reference
Figure 11: Viewer interface with frames of the M27 planetary nebula in photometric (_left_, \(t_{\rm exp}\) = 10 s in \(R_{\rm C}\)-band) and spectral (_right_, \(t_{\rm exp}\) = 600 s) modes. Direct image FoV is 12′ \(\times\) 12′, slit height is 12′, slit width is 1′′.7, the wavelength range is 340–740 nm. The frame colours are inverted.
illumination of a He-Ne-Ar lamp for constructing a dispersion curve. However, the only slight flexure of the device (within \(\pm 1\) pix) already makes it possible to use an auxiliary appliance installed on the inside of the telescope dome (see Fig. 3, on the right) to obtain calibration frames; it provides Lambertian scattering when illuminated by a lamp.
## 5 Conclusions
In 2020, the MAGIC multi-mode focal reducer for the 1-m Zeiss-1000 telescope of the SAO RAS was designed, manufactured and put into operation. The device effectively solves the problem of oversampling at the Cassegrain focus, making the optical system faster (from \(F/13.3\) to \(F/6.1\)) and more effective for the study of faint and/or extended objects. The optics of the device provide an \(\sim 0^{\prime\prime}\).3 image of a point source and have an integral transmission QE \(\sim 50\%\). The ability to observe in several modes and to switch quickly between them allows one to respond flexibly to changes in weather conditions during the night, as well as to explore astrophysical objects comprehensively. Currently, three observation modes are implemented in the MAGIC device.
* Direct images can be taken in the Johnson-Cousins photometric system and in medium-band interference filters. The photometric FoV is \(\sim\)12\({}^{\prime}\) with a scale of \(0^{\prime\prime}\).45/pix. The filters are set in 2 wheels of 9 positions each. For 10 minutes of total exposure within a typical seeing of about 2\({}^{\prime\prime}\) at SAO, the accuracy for a star-like object of 16 mag in the \(V\)-band is 0.005 mag. The limiting magnitude in the \(V\)-band (\(S/N\approx 4\)) is 22.5 mag within a 1\({}^{\prime\prime}\).1 seeing and 20 minutes of exposure.
* The image-polarimetry mode provides measurements of intensity and linear polarization in a \(6^{\prime}\).4 \(\times\) \(6^{\prime}\).4 FoV. The instrumental polarization varies over the field and can be compensated with the fitted smooth model. For a star-like target up to 14 mag in medium-band filters with a seeing of 1\({}^{\prime\prime}\) and 20 minutes of total exposure, the accuracy of the intensity measurement is better than 0.01 mag and the polarization accuracy is better than 0.6%.
* In long-slit spectroscopy the combination of a 1\({}^{\prime\prime}\).7 \(\times\) 12\({}^{\prime}\) slit and the volume phase holographic disperser VPHG600@500 is used. Low-resolution spectra are obtained in the range 4000–7200 Å with reciprocal dispersion 2 Å/pix and spectral resolution \(\delta\lambda\sim\) 7–8 Å.
To use the MAGIC device on the 1-m Zeiss-1000 telescope, the optomechanical scheme of the telescope was upgraded. The modernization of the baffles made it possible to minimize parasitic rays in the telescope tube, suppressing the additive noise that had occurred in observations. The installation of additional modules, a rotator and an offset guider, helps to solve the problem of accurate telescope guiding and instrument orientation.
It is important to note that exactly this optical scheme and design can be used to create universal devices for a wide class of small Cassegrain telescopes with a large focal ratio (\(\lesssim F/8\)) and a large aberration-free FoV. The specific implementation of the MAGIC device is a fairly universal solution for reducing the focal ratio of the system, applicable to a large number of Zeiss-type telescopes, both already built and new. The realized efficiency of MAGIC makes it possible to carry out joint monitoring campaigns in conjunction with other focal reducers [see, e.g., results of MAGIC observations in Shablovinskaya et al. (2023a) obtained together with AFOSC of the 1.82-m Copernico telescope of the Asiago-Cima Ekar observatory and FoReRo-2 of the 2-m telescope of the Rozhen National Astronomical Observatory], as well as observations applying original methodical approaches [see, e.g., the Stokes polarimetry of blazars with the quadrupole Wollaston prism in a two-band filter (Shablovinskaya et al. 2023b)].
## Acknowledgements
MAGIC was the last of many astronomical devices created by Viktor Leonidovich Afanasiev (1947–2020). We will remember him as a brilliant practising astronomer with a deep understanding of the experiment, from the formulation of scientific questions and the creation of devices
Figure 12: Observation of M1 in four directions of polarization (each FoV = \(6^{\prime}\).4) with the quadrupole Wollaston prism in the SED600 filter (\(t_{\rm exp}=300\) s).
Figure 13: Instrumental polarization over the field inside the FoV of the quadrupole Wollaston prism. Coordinates in pixels are given along the X and Y axes, the coordinate grid is corrected for geometric distortions.
and the development of observational techniques, to the acquisition of observational data and its competent interpretation. He loved science and inspired those around him. His contribution to the development of our observatory is invaluable.
We are grateful to E.I. Perepelitsyn for the manufacture of optics for the device. The mechanical and optical parts of MAGIC, as well as parts for the modernization of the telescope units, were produced at the SAO breadboard workshops. We also thank the engineers of the 1-m Zeiss-1000 telescope led by V.V. Komarov for constant assistance in the work with the telescope. We thank Dr. Imre Barna Biro for
Figure 16: Results of observations of the M1 nebula: _on the left_, a combined photometric image of the nebula in the \(B\) (blue), \(V\) (green), and SED650 (red) filters; _on the right_ is the polarization map of the nebula obtained with the Wollaston quadrupole prism in the SED600 filter.
Figure 14: Smooth first-order variations of the Stokes parameters \(Q\) and \(U\) over the field inside the square mask.
Figure 15: Comparison of the measured values of the degree of polarization \(P_{\rm obs}\) (_left_) and the polarization angle \(\varphi_{\rm obs}\) (_right_) with their reference values \(P_{\rm sub}\) and \(\varphi_{\rm sub}\).
Figure 17: MAGIC spectroscopy of the E1821+643 quasar: (a) a fragment of a direct image in the \(R_{\rm C}\) filter (\(t_{\rm exp}=10\) sec) with the position of the spectrograph slit, into which four objects fall; the arrow indicates the studied quasar; (b) a single spectral frame (\(t_{\rm exp}=600\) sec), containing traces of cosmic particles; (c) a robustly averaged frame (\(t_{\rm exp}=8\times 600\) sec) with geometric correction and the night-sky spectrum subtracted; (d) the integrated spectrum in the wavelength scale of the quasar E1821+643; marked in the figure: 1 – planetary nebula PN K 1-16; 2 – quasar E1821+643; 3 – star [SPB96] 1882; 4 – field star.
helpful discussions and advice on baffles. We express our gratitude to A.V. Moiseev for providing valuable methodological guidance throughout the study of the device. Also, we appreciate the constructive comments provided by the reviewers, which significantly enhanced the quality of this paper.
This work was supported by the Russian Science Foundation (grant no. 20-12-00030 "Investigation of geometry and kinematics of ionized gas in active galactic nuclei by polarimetry methods"). Observations with the SAO RAS telescopes are supported by the Ministry of Science and Higher Education of the Russian Federation.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding author.
|
2301.00249 | Minimal surfaces and the new main inequality | We establish the new main inequality as a minimizing criterion for minimal
maps to products of $\mathbb{R}$-trees, and the infinitesimal new main
inequality as a stability criterion for minimal maps to $\mathbb{R}^n$. Along
the way, we develop a new perspective on destabilizing minimal surfaces in
$\mathbb{R}^n$, and as a consequence we reprove the instability of some
classical minimal surfaces; for example, the Enneper surface. | Vladimir Markovic, Nathaniel Sagman | 2022-12-31T16:47:10Z | http://arxiv.org/abs/2301.00249v2 | # Minimal surfaces and the new main inequality
###### Abstract.
We establish the new main inequality as a minimizing criterion for minimal maps into products of \(\mathbb{R}\)-trees, and the infinitesimal new main inequality as a stability criterion for minimal maps to \(\mathbb{R}^{n}\). Along the way, we develop a new perspective on destabilizing minimal surfaces in \(\mathbb{R}^{n}\), and as a consequence we reprove the instability of some classical minimal surfaces; for example, the Enneper surface.
## 1. Introduction
Let \(S\) be a Riemann surface, \(\phi_{1},\ldots,\phi_{n}\) integrable holomorphic quadratic differentials on \(S\) summing to zero, and \(f_{1},\ldots,f_{n}:S\to S^{\prime}\) mutually homotopic quasiconformal maps to another Riemann surface with Beltrami forms \(\mu_{1},\ldots,\mu_{n}\). If \(\partial S\) is non-empty, we ask that \(f_{1},\ldots,f_{n}\) are mutually homotopic relative to \(\partial S\). The new main inequality holds if:
\[\operatorname{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot\frac{\mu_{i}}{1-|\mu_{i} |^{2}}\leq\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{|\mu_{i}|^{2}}{1-|\mu_{i} |^{2}}. \tag{1}\]
For \(n=1\) and \(f_{1}:S\to S\) homotopic to the identity, (1) is always satisfied, and referred to as the Reich-Strebel inequality or the main inequality for quasiconformal maps. The result is a key ingredient in the proof of Teichmuller's uniqueness theorem.
The first author introduced the new main inequality in the papers [11] and [12] as a tool to study minimal surfaces in products of hyperbolic surfaces. The outcome of [12] is that there exists a product of Fuchsian representations into \(\operatorname{PSL}(2,\mathbb{R})^{n}\), \(n\geq 3\), with multiple minimal surfaces in the corresponding product of closed hyperbolic surfaces. With Smillie in [13], we gave a new proof of the result from [12]. Then in [17], the second author and Smillie found unstable minimal surfaces for Hitchin representations into Lie groups of rank at least \(3\), disproving a conjecture of Labourie [8]. In this paper we revisit the new main inequality and some aspects of the paper [12], but with applications to minimal maps to products of \(\mathbb{R}\)-trees and to \(\mathbb{R}^{n}\). The results on \(\mathbb{R}\)-trees and \(\mathbb{R}^{n}\) are proved in Sections 3 and 4 respectively, which can be read independently.
### Harmonic maps to \(\mathbb{R}\)-trees
Throughout the paper, let \(\Sigma_{g}\) be a closed and oriented surface of genus \(g\geq 2\), and let \(\mathbf{T}_{g}\) be the Teichmuller space of marked Riemann surface structures on \(\Sigma_{g}\). Let \(S\) be a Riemann surface structure on \(\Sigma_{g}\), which lifts to a Riemann surface structure \(\tilde{S}\) on the universal cover, and let \(\operatorname{QD}(S)\) be the space of holomorphic quadratic differentials on \(S\).
We review the basics about harmonic maps to \(\mathbb{R}\)-trees in Section 3. Briefly, a non-zero holomorphic quadratic differential gives the data of an \(\mathbb{R}\)-tree \((T,d)\), a representation \(\rho:\pi_{1}(\Sigma_{g})\to\operatorname{Isom}(T,d)\), and a unique \(\rho\)-equivariant harmonic map \(\pi:\tilde{S}\to(T,d).\) From non-zero \(\phi_{1},\ldots,\phi_{n}\in QD(S)\) summing to zero, we assemble the product of \(\mathbb{R}\)-trees, denoted \(X\), and the product of representations \(\rho:\pi_{1}(\Sigma_{g})\to\operatorname{Isom}(X)\). The product of the equivarant harmonic maps \(\pi_{i}\) from \(\tilde{S}\) to each individual \(\mathbb{R}\)-tree is a minimal map \(\pi:\tilde{S}\to X\). For any
other Riemann surface \(S^{\prime}\) representing a point in \(\mathbf{T}_{g}\), the energy functional \(\mathbf{E}_{\rho}\) evaluates to the total energy of the \(\rho\)-equivariant harmonic map from \(\tilde{S}^{\prime}\) to \(X\), which is the sum of the energies of the harmonic maps to the component trees (see Section 3 for details). Our first result characterizes when \(S\) itself is the minimizer.

**Theorem A**.: \(S\) _minimizes \(\mathbf{E}_{\rho}\) over \(\mathbf{T}_{g}\) if and only if, for every Riemann surface \(S^{\prime}\) and all \(n\)-tuples of mutually homotopic quasiconformal maps \(f_{1},\ldots,f_{n}:S\to S^{\prime}\) with Beltrami forms \(\mu_{1},\ldots,\mu_{n}\), the new main inequality (1) holds for \(\phi_{1},\ldots,\phi_{n}\)._

### Minimal surfaces in \(\mathbb{R}^{n}\)

In the second part of the paper we study classical minimal surfaces: \(h=(h_{1},\ldots,h_{n}):\overline{\mathbb{D}}\to\mathbb{R}^{n}\) is a non-constant admissible minimal map with Weierstrass-Enneper data \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\), and the Hopf differential of \(h_{i}\) is \(\phi_{i}=\alpha_{i}^{2}\) (see Section 4 for the setup). Stability of \(h\) refers to the second variation of the area functional.
**Corollary B**.: \(h\) _is stable if and only if for all mutually infinitesimally equivalent functions \(\dot{\mu}_{1},\dots,\dot{\mu}_{n}\in L^{\infty}(\mathbb{D}),\) the infinitesimal new main inequality holds:_
\[-\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}\phi_{i}\dot{\mu}_{i}T(\dot{\mu}_{i}) dxdy\leq\sum_{i=1}^{n}\int_{\mathbb{D}}|\phi_{i}||\dot{\mu}_{i}|dxdy. \tag{2}\]
Above and throughout the paper, when integrating over \(\mathbb{D}\) we use the \(\phi_{i}\) term to denote the associated holomorphic function rather than the differential.
We now give an overview of the second half of the paper. To destabilize a minimal surface, it's probably most common to perturb by normal variations of the image in \(\mathbb{R}^{n}\) that vanish on the boundary. Another option is to precompose the boundary parametrization along a flow of diffeomorphisms of the circle. One then hopes to lower the energy by taking the harmonic extension of the boundary map at each time along the flow.
Instead, motivated by Theorem A, we vary a minimal surface \(h=(h_{1},\dots,h_{n})\) by precomposing the harmonic coordinate functions \(h_{i}\) by quasiconformal maps. Let \(\mathcal{E}(\Omega,g)\) denote the energy of a map \(g\) from a domain \(\Omega\subset\mathbb{C}\) to \(\mathbb{R}\). First order variations of quasiconformal maps can be described by a real vector space \(\mathcal{V}\) whose elements are a particular class of holomorphic functions from \(\mathbb{C}\setminus\mathbb{D}\to\mathbb{C}\). Given \(\varphi\in\mathcal{V}\), it is possible to find a path of \(n\)-tuples of quasiconformal maps \(t\mapsto f_{1}^{t},\dots,f_{n}^{t}:\mathbb{C}\to\mathbb{C}\) all fixing the origin and agreeing on \(\mathbb{C}\setminus\mathbb{D}\) with a holomorphic map \(F^{t}\) that satisfies \(F^{t}(z)=z+t\varphi(z)+o(t)\). Note that \(f_{i}^{t}(\mathbb{D})=F^{t}(\mathbb{D})\) does not depend on \(i\), and the boundary of the minimal surface in \(\mathbb{R}^{n}\) remains fixed if we precompose each \(h_{i}\) by \((f_{i}^{t})^{-1}.\) Suppose that
\[\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathcal{E}(f_{i}^{t}(\mathbb{D}),h_{ i}\circ(f_{i}^{t})^{-1})<0. \tag{3}\]
Then, because the energy of a map to \(\mathbb{R}^{n}\) is at least the area of the image, \(h\) is unstable.
**Definition 1.2**.: We say that \(h\) is unstable via self-maps, and that \(\varphi\) destabilizes \(h\), if we can choose \(f_{i}^{t}\) so that (3) holds.
Theorem B justifies that varying by self-maps can be done in place of the usual methods. In Section 4.4 we define a real quadratic form \(\mathbf{L}_{h}:\mathcal{V}\to\mathbb{R}\) such that \(\mathbf{L}_{h}(\varphi)<0\) if and only if \(\varphi\) destabilizes \(h\).
**Definition 1.3**.: The self-maps index of \(h\), denoted \(\text{Ind}(\mathbf{L}_{h})\), is the maximal dimension of a subspace of \(\mathcal{V}\) on which \(\mathbf{L}_{h}\) is negative definite.
Let \(\text{Ind}(h)\) denote the ordinary index for the area functional.
**Theorem B**.: \(\text{Ind}(\mathbf{L}_{h})=\text{Ind}(h)\)_._
**Remark 1.4**.: The result should have implications for maps from \(\overline{\mathbb{D}}\) to products of \(\mathbb{R}\)-trees, a subject which we don't develop in this paper. Every harmonic function from any Riemann surface arises from a folding of a map to an \(\mathbb{R}\)-tree (see [4] and [13, Section 4.1]). Clearly, self-maps variations lift to variations of maps to \(\mathbb{R}\)-trees.
**Remark 1.5**.: For equivariant minimal maps to \(\mathbb{R}^{n}\), the analogous result is true and proved in [13, Lemma 4.6 and Proposition 4.8] via a different method.
The conditions (1) and (2) are tractable, so we also ask: given a minimal map \(h\) with Weierstrass-Enneper data \(\alpha\) and \(\varphi\in\mathcal{V}\), when does \(\varphi\) destabilize? As in [12, Section 5], define the functional \(\mathcal{F}:C^{1}(\mathbb{D})\to\mathbb{R}\),
\[\mathcal{F}(f)=\text{Re}\int_{\mathbb{D}}f_{z}f_{\overline{z}}+\int_{\mathbb{ D}}|f_{\overline{z}}|^{2}.\]
Given a continuous function from \(\partial\mathbb{D}\to\mathbb{C}\), the harmonic extension is the sum of the Poisson extensions of the real and imaginary parts.
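For later computations it may help to record the Fourier form of \(\mathcal{F}\) on harmonic functions; this identity is our own easily checked computation, with integrals taken against Lebesgue measure on \(\mathbb{D}\). Writing \(v=g+\overline{h}\) with \(g(z)=\sum_{n\geq 0}c_{n}z^{n}\) and \(h(z)=\sum_{n\geq 0}d_{n}z^{n}\) holomorphic on \(\mathbb{D}\), so that \(v_{z}=g^{\prime}\) and \(v_{\overline{z}}=\overline{h^{\prime}}\), the orthogonality relation \(\int_{\mathbb{D}}z^{n-1}\overline{z}^{m-1}\,dxdy=\delta_{nm}\,\pi/n\) gives

\[\mathcal{F}(v)=\pi\sum_{n\geq 1}n\Big{(}\operatorname{Re}\big{(}c_{n}\overline{d_{n}}\big{)}+|d_{n}|^{2}\Big{)}.\]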
**Theorem C**.: _Let \(\varphi\in\mathcal{V}.\) For each \(i\), let \(v_{i}\) be the harmonic extension of \((\frac{\partial}{\partial z}h_{i})\cdot\varphi|_{\partial\mathbb{D}}:\partial \mathbb{D}\to\mathbb{C}\). If_
\[\mathcal{F}_{\alpha}(\varphi):=\sum_{i=1}^{n}\mathcal{F}(v_{i})<0,\]
_then \(\varphi\) destabilizes \(h\)._
In the case of polynomials, we work out the explicit formulas for a particular class of variations. For a polynomial \(p(z)=\sum_{j=0}^{r}a_{j}z^{j}\), an integer \(m\geq 0\), and \(\gamma\in\mathbb{C}^{*}\), set
\[C(p,\gamma,m)=\pi\sum_{j=0}^{m-1}\frac{\operatorname{Re}(\gamma^{2}a_{j}a_{2m- j})+|\gamma|^{2}|a_{j}|^{2}}{m-j}.\]
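Since \(C(p,\gamma,m)\) is a finite sum over polynomial coefficients, it is easy to evaluate; the following Python sketch (ours, purely illustrative) computes it, padding coefficients beyond \(\deg p\) with zeros.

```python
import math

def C(coeffs, gamma, m):
    """C(p, gamma, m) for p(z) = sum_j coeffs[j] * z**j, an integer
    m >= 0 and nonzero (possibly complex) gamma, as defined above."""
    a = list(coeffs) + [0.0] * max(0, 2 * m + 1 - len(coeffs))
    total = 0.0
    for j in range(m):
        term = (gamma**2 * a[j] * a[2 * m - j]).real \
               + abs(gamma)**2 * abs(a[j])**2
        total += term / (m - j)
    return math.pi * total
```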
**Theorem D**.: _For \(i=1,\ldots,n\), let \(p_{i}\) be a polynomial with no zeros on \(\partial\mathbb{D}\), and such that \(\sum_{i=1}^{n}p_{i}^{2}=0.\) On \(\mathbb{D}\), let \(\alpha_{i}\) be the holomorphic \(1\)-form \(\alpha_{i}(z)=p_{i}(z)dz\). Suppose there exists an integer \(m\geq 0\) and \(\gamma\in\mathbb{C}^{*}\) such that_
\[\sum_{i=1}^{n}C(p_{i},\gamma,m)<0.\]
_Then \(\varphi(z)=\gamma z^{-m}\) destabilizes the associated minimal surface in \(\mathbb{R}^{n}\)._
To demonstrate the result, we consider the best-known unstable minimal surface: the Enneper surface. The Weierstrass-Enneper data \((\alpha_{1},\alpha_{2},\alpha_{3})\) consists of the \(1\)-forms obtained by multiplying the following polynomials on \(\mathbb{C}\) by \(dz\):
\[p_{1}(z)=\frac{1}{2}(1-z^{2})\;,\;p_{2}(z)=\frac{i}{2}(1+z^{2})\;,\;p_{3}(z)=z.\]
We restrict to \(\overline{\mathbb{D}_{r}}=\{z\in\mathbb{C}:|z|\leq r\}\). For \(r<1\), the Enneper surface is strictly minimizing. For \(r=1\), it is strictly minimizing and stable, but not strictly stable. For \(r>1\), Theorem D gives a new and simple proof of Corollary D below.
**Corollary D**.: _For \(r>1\), the Enneper surface restricted to \(\overline{\mathbb{D}_{r}}\) is unstable._
Proof.: Let \(h=(h_{1},h_{2},h_{3}):\mathbb{C}\to\mathbb{R}^{3}\) be the minimal map defining the Enneper surface. We reparametrize \(h|_{\mathbb{D}_{r}}\) to \(\mathbb{D}\) by defining \(h^{r}=(h_{1}^{r},h_{2}^{r},h_{3}^{r})=(h_{1}(r\cdot),h_{2}(r\cdot),h_{3}(r\cdot)).\) The holomorphic derivatives are given by
\[p_{i}^{r}(z)=\frac{\partial}{\partial z}\mathrm{Re}\int_{0}^{rz}\alpha_{i}(w) dw=rp_{i}(rz)\;,\;i=1,2,3.\]
Explicitly,
\[p_{1}^{r}(z)=\frac{r}{2}(1-r^{2}z^{2})\;,\;p_{2}^{r}(z)=\frac{ri}{2}(1+r^{2}z^{2})\;,\;p_{3}^{r}(z)=r^{2}z.\]
We choose \(m=1,\gamma=1\) and find that for \(p(z)=az^{2}+bz+c\),
\[C(p,1,1)=\pi\left(|c|^{2}+\mathrm{Re}(ac)\right). \tag{4}\]
Computing the expression (4) for each polynomial,
\[\sum_{i=1}^{3}C(p_{i}^{r},1,1)=\frac{\pi r^{2}}{2}(1-r^{2}).\]
This is negative for \(r>1\).
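As a numerical sanity check (ours), plugging the coefficient lists of \(p_{1}^{r},p_{2}^{r},p_{3}^{r}\) into the sketch of \(C\) given after its definition reproduces the closed form above:

```python
import math

r = 1.5  # any r > 1
p1 = [r / 2, 0, -r**3 / 2]            # coefficients of p1^r
p2 = [1j * r / 2, 0, 1j * r**3 / 2]   # coefficients of p2^r
p3 = [0, r**2, 0]                     # coefficients of p3^r
total = sum(C(p, gamma=1, m=1) for p in (p1, p2, p3))
print(total, math.pi * r**2 * (1 - r**2) / 2)  # equal, and negative
```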
There are other known conditions for minimal surfaces to be unstable. For example, let \(G:\overline{\Omega}\to S^{2}\) be the Gauss map for a minimal surface. A classical result of Schwarz says that if the first Dirichlet eigenvalue for the Laplacian on \(G(\overline{\Omega})\) is less than 2, then the minimal surface is unstable [18] (see also [2]). For the Enneper surface, the stereographic projection of the Gauss map \(G\) is \(g(z)=z\). For \(r>1\), \(G(\overline{\mathbb{D}_{r}})\) is a spherical cap containing the upper hemisphere, and hence the first Dirichlet eigenvalue for the Laplacian is less than 2 (see also [15, §117]). We must comment that the methods developed here using quasiconformal maps are not strictly necessary to prove Theorems C and D. For these results, the self-maps variations simply provide a new model for computation, which happens to lend itself well to the situation. We explain this point carefully right after proving Theorem C.
### Acknowledgments
Vladimir Markovic is supported by the Simons Investigator Award 409745 from the Simons Foundation. Nathaniel Sagman is funded by the FNR grant O20/14766753, _Convex Surfaces in Hyperbolic Geometry._
## 2. Preliminaries
Let \(S\) be a Riemann surface, not necessarily compact and possibly with boundary. Since we will work with harmonic maps to \(\mathbb{R}\)-trees in Section 3, we define harmonic maps in the metric space context.
### Harmonic and minimal maps
Let \(\nu\) be a smooth metric on \(S\) compatible with the complex structure. Let \((M,d)\) be a complete and non-positively curved (NPC) length space, and \(h:S\to M\) a Lipschitz map. Korevaar-Schoen [7, Theorem 2.3.2] associate a locally \(L^{1}\) measurable metric \(g=g(h)\), defined locally on pairs of Lipschitz vector fields, and which plays the role of the pullback metric. If \(h\) is a \(C^{1}\) map to a smooth Riemannian manifold \((M,\sigma)\), and the distance \(d\) is induced by a Riemannian metric \(\sigma\), then \(g(h)\) is represented by the pullback metric \(h^{*}\sigma\). The energy density is the locally \(L^{1}\) function
\[e(h)=\frac{1}{2}\mathrm{trace}_{\nu}g(h), \tag{5}\]
and the total energy, which is allowed to be infinite, is
\[\mathcal{E}(S,h)=\int_{S}e(h)dA, \tag{6}\]
where \(dA\) is the area form of \(\nu\). We comment here that the measurable 2-form \(e(h)dA\) does not depend on the choice of compatible metric \(\nu\), but only on the complex structure.
**Definition 2.1**.: \(h\) is harmonic if it is a critical point for the energy \(h\mapsto\mathcal{E}(S,h)\). If \(\partial S\neq\emptyset\), we ask that \(h\) is critical among other Lipschitz maps with the same boundary values.
Let \(g_{ij}(h)\) be the components of \(g(h)\) in a holomorphic local coordinate \(z=x_{1}+ix_{2}\). The Hopf differential of a map \(h\) is the measurable tensor given in the local coordinate by
\[\phi(h)dz^{2}=\frac{1}{4}(g_{11}(h)(z)-g_{22}(h)(z)-2ig_{12}(h)(z))dz^{2}. \tag{7}\]
In the Riemannian setting, this is
\[\phi(h)(z)=h^{*}\sigma\Big{(}\frac{\partial}{\partial z},\frac{\partial}{ \partial z}\Big{)}(z)dz^{2}.\]
When \(h\) is harmonic, even in the metric space setting, the Hopf differential is represented by a holomorphic quadratic differential.
**Definition 2.2**.: The map \(h\) is minimal if it is harmonic and the Hopf differential vanishes identically.
In the Riemannian setting, a non-constant minimal map is a branched minimal immersion.
For a harmonic map to a product space, it is clear from definitions (5) and (7) that the energy density and the Hopf differential are the sum of the energy densities and the Hopf differentials of the component maps respectively.
Let \(X\) be a complete NPC length space. Given an action \(\rho:\pi_{1}(\Sigma_{g})\to\operatorname{Isom}(X)\) and a \(\rho\)-equivariant map \(h:\tilde{S}\to X\), the energy density is invariant under the \(\pi_{1}(\Sigma_{g})\) action on \(\tilde{S}\) by deck transformations, and hence descends to a function on \(S\). Total energy is defined as in (6) by integrating the density against the area form on \(S\), and we say that \(h\) is harmonic if it is a critical point of the total energy among other \(\rho\)-equivariant maps. Similarly, \(h\) is minimal if it is harmonic and the Hopf differential, which also descends to \(S\), is zero.
### Quasiconformal maps
For details on results below, we refer the reader to [1].
**Definition 2.3**.: An orientation preserving homeomorphism \(f\) between domains in \(\mathbb{C}\) is quasiconformal if
1. the partial derivatives with respect to the coordinates \(z\) and \(\overline{z}\) exist almost everywhere and can be represented by locally integrable functions \(f_{z}\) and \(f_{\overline{z}}\), and
2. there exists \(k\in[0,1)\) such that \(|f_{\overline{z}}|\leq k|f_{z}|\).
A map between Riemann surfaces \(f:S\to S^{\prime}\) is quasiconformal if any holomorphic local coordinate representation is a quasiconformal map.
The Beltrami form is the measurable tensor represented in local coordinates by
\[\mu=\mu(z)\frac{d\overline{z}}{dz}=\frac{f_{\overline{z}}(z)}{f_{z}(z)}\frac{ d\overline{z}}{dz}.\]
Although \(\mu(z)\) is not globally defined, the transformation law ensures that the norm \(|\mu(z)|\) is. \(L^{\infty}_{1}(S)\) is defined as the open unit ball of the space of measurable tensors of the form \(\mu(z)\frac{d\overline{z}}{dz}\).
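As a standard worked example (our addition), the affine maps make Definition 2.3 concrete: for a constant \(k\) with \(|k|<1\), the map

\[f(z)=z+k\overline{z},\qquad f_{z}=1,\quad f_{\overline{z}}=k,\qquad\mu=\frac{f_{\overline{z}}}{f_{z}}=k,\]

is quasiconformal with constant Beltrami form \(k\,d\overline{z}/dz\), and it is conformal exactly when \(k=0\).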
**Theorem 2.4** (Measurable Riemann mapping theorem).: _Let \(\hat{\mathbb{C}}\) be the Riemann sphere and \(\mu\in L^{\infty}_{1}(\hat{\mathbb{C}})\). There exists a quasiconformal homeomorphism \(f^{\mu}:\hat{\mathbb{C}}\to\hat{\mathbb{C}}\) with Beltrami form \(\mu\). \(f^{\mu}\) is unique up to postcomposing by Mobius transformations._
It is important to note that if \(t\mapsto\mu(t)\) is a real analytic path in \(L^{\infty}_{1}(S)\), then \(t\mapsto f^{\mu(t)}\) and its distributional derivatives locally vary real analytically with respect to a suitable norm (see [1, Chapter V]).
For \(\mu\in L^{\infty}_{1}(\mathbb{D})\), we extend \(\mu\) to all of \(\hat{\mathbb{C}}\) by setting \(\mu=0\). There is a unique choice of Mobius transformation so that we can make the definition below.
**Definition 2.5**.: The normal solution to the Beltrami equation for \(\mu\) is the unique solution \(f^{\mu}:\mathbb{C}\to\mathbb{C}\) satisfying \(f^{\mu}(0)=0\) and \(f^{\mu}_{z}(z)-1\in L^{p}(\mathbb{C})\) for all \(p>2\).
Next we state the Reich-Strebel energy formula (originally equation 1.1 in [16]). Here \(S\) is any Riemann surface, \(h:S\to M\) is a Lipschitz map to a metric space of finite total energy, and \(f:S\to S^{\prime}\) is a quasiconformal map between Riemann surfaces. Let \(\mu\) be the Beltrami form of \(f\), \(J_{f^{-1}}\) the Jacobian of \(f^{-1}\), and \(\phi\) the Hopf differential of \(h\), which need
not be holomorphic. One can verify the identity:
\[e(h\circ f^{-1}) =(e(h)\circ f^{-1})J_{f^{-1}}+2(e(h)\circ f^{-1})J_{f^{-1}}\frac{(| \mu_{f}|^{2}\circ f^{-1})}{1-(|\mu_{f}|^{2}\circ f^{-1})}\] \[-4\text{Re}\Big{(}(\phi(h)\circ f^{-1})J_{f^{-1}}\frac{(\mu_{f} \circ f^{-1})}{1-(|\mu_{f}|^{2}\circ f^{-1})}\Big{)}\]
Integrating against the area form, we arrive at the proposition below.
**Proposition 2.6**.: _The formula_
\[\mathcal{E}(S^{\prime},h\circ f^{-1})-\mathcal{E}(S,h)=-4\text{Re}\int_{S} \phi(h)\cdot\frac{\mu}{1-|\mu|^{2}}+2\int_{S}e(h)\cdot\frac{|\mu|^{2}}{1-|\mu|^ {2}}dA \tag{8}\]
_holds._
When the target is an \(\mathbb{R}\)-tree, which of course includes \(\mathbb{R}\), we'll explain that \(e(h)dA\) is represented by \(2|\phi(h)|\). Consequently, in the cases of interest, the formula (8) involves only \(\phi\) and \(\mu\).
## 3. Minimal maps into products of \(\mathbb{R}\)-trees
In this section, \(S\) is a closed Riemann surface structure on \(\Sigma_{g}\).
### Harmonic maps to \(\mathbb{R}\)-trees
**Definition 3.1**.: An \(\mathbb{R}\)-tree is a length space \((T,d)\) such that any two points are connected by a unique arc, and every arc is a geodesic, isometric to a segment in \(\mathbb{R}\).
A point \(x\in T\) is a vertex if the complement \(T\backslash\{x\}\) has greater than two components. Otherwise it is said to lie on an edge.
The vertical (resp. horizontal) foliation of \(\phi\in QD(S)\) is the singular foliation whose leaves are the integral curves of the line field on \(S\backslash\phi^{-1}(0)\) on which \(\phi\) is a positive (resp. negative) real number. The singularities are standard prongs at the zeros, with a zero of order \(k\) corresponding to a prong with \(k+2\) segments. Both foliations come with transverse measures \(|\text{Re}\sqrt{\phi}|\) and \(|\text{Im}\sqrt{\phi}|\) respectively (see [5, Expose 5] for precise definitions).
Throughout, we work with the vertical foliation. Lifting to a singular measured foliation on a universal cover \(\tilde{S}\), we define an equivalence relation on \(\tilde{S}\) by \(x\sim y\) if \(x\) and \(y\) lie on the same leaf. The quotient space \(\tilde{S}/\sim\) is denoted \(T\). Pushing the transverse measure down via the projection \(\pi:\tilde{S}\to T\) yields a distance function \(d\) that turns \((T,d)\) into an \(\mathbb{R}\)-tree, with an induced action \(\rho:\pi_{1}(S)\to\text{Isom}(T,d).\) Under this distance, the projection map \(\pi:\tilde{S}\to(T,d)\) is \(\rho\)-equivariant and harmonic [21, Section 4].
The energy and the Hopf differential of the projection map \(\pi\) can be described explicitly. At a point \(p\in\tilde{S}\) on which \(\phi(p)\neq 0\), the map locally isometrically factors through a segment in \(\mathbb{R}\). In a small enough neighbourhood around that point, \(g(h)\) is represented by the pullback metric of the locally defined map to \(\mathbb{R}\). From this, we see that the energy density and the Hopf differential have continuous representatives equal to \(\nu^{-1}|\phi|/2\) and \(\phi/4\) respectively.
For any other Riemann surface \(S^{\prime}\) representing a point in \(\mathbf{T}_{g}\), there is a unique \(\rho\)-equivariant harmonic map \(\tau:\tilde{S}^{\prime}\to(T,d)\) (see [21]). The energy functional on Teichmuller space \(\mathbf{E}_{\rho}:\mathbf{T}_{g}\to[0,\infty)\) is defined by \(\mathbf{E}_{\rho}(S^{\prime})=\mathcal{E}(S^{\prime},\tau)\).
Now we turn to Theorem A. Suppose that \(\phi_{1},\ldots,\phi_{n}\in QD(S)\) sum to \(0\). For each \(i\), we have an action of \(\pi_{1}(\Sigma_{g})\) on an \(\mathbb{R}\)-tree \((T_{i},d_{i})\) and an equivariant harmonic projection map \(\pi_{i}:\tilde{S}\to(T_{i},d_{i})\). We assemble the product of \(\mathbb{R}\)-trees \(X\) with the product action
\(\rho:\pi_{1}(\Sigma_{g})\to\operatorname{Isom}(X)\) and product map \(\pi=(\pi_{1},\dots,\pi_{n}).\) The energy functional \(\mathbf{E}_{\rho}\) on \(\mathbf{T}_{g}\) for \(\rho\) is the sum of the energy functionals for each component action. \(\pi\) is not only harmonic but also minimal. Theorem A is about determining when \(S\) minimizes \(\mathbf{E}_{\rho}.\)
The new main inequality comes out of the formula (8). Let \(S^{\prime}\) be another Riemann surface structure on \(\Sigma_{g}\) and let \(f_{1},\dots,f_{n}:S\to S^{\prime}\) be mutually homotopic quasiconformal maps with Beltrami forms \(\mu_{i}\). We lift each \(f_{i}\) to a quasiconformal map \(\tilde{f}_{i}\) between the universal covers. Putting previous results in our setting, we have
**Proposition 3.2**.: \(\mathbf{E}_{\rho}(S)=\mathcal{E}(S,\pi)=\sum_{i=1}^{n}\mathcal{E}(S,\pi_{i}),\) _and_
\[\sum_{i=1}^{n}\mathcal{E}(S^{\prime},\pi_{i}\circ\tilde{f}_{i}^{-1})-\sum_{i= 1}^{n}\mathcal{E}(S,\pi_{i})=-\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\cdot \frac{\mu_{i}}{1-|\mu_{i}|^{2}}+\sum_{i=1}^{n}\int_{S}|\phi_{i}|\cdot\frac{| \mu_{i}|^{2}}{1-|\mu_{i}|^{2}}.\]
Hence, as we stated in Section 1.1, the new main inequality (1) is equivalent to
\[\mathbf{E}_{\rho}(S)=\sum\mathcal{E}(S,\pi_{i})\leq\sum\mathcal{E}(S^{\prime},\pi_{i}\circ\tilde{f}_{i}^{-1}).\]
One direction of Theorem A is therefore clear: if \(S\) is a global minimum, then (1) holds for any choice of \(f_{1},\dots,f_{n}.\) To prove the harder direction of Theorem A, we show that we can nearly factor harmonic maps to \(\mathbb{R}\)-trees arising from Jenkins-Strebel differentials.
### Jenkins-Strebel differentials and the main proof
Given a singular measured foliation on \(S\), we say that a leaf entering or exiting a singular point is critical, and that a leaf connecting two singular points is a saddle connection. If two leaves connect to the same singular point, we say that they lie on a critical trajectory. So in particular, if two singular points are connected by a saddle connection, then they lie in the same critical trajectory.
A differential \(\phi\in\operatorname{QD}(S)\) is Jenkins-Strebel if every non-critical leaf of the vertical measured foliation is a closed circle. The complement of the set of critical trajectories is a disjoint union of cylinders \(C_{1},\dots,C_{p}\), each foliated by the vertical leaves. Each cylinder \(C_{k}\) corresponds to the homotopy class of its core curve, say \(\gamma_{k}\), so \(p\) is at most \(3g-3.\) The reader should be aware that it's more common to define the Jenkins-Strebel condition in terms of the horizontal foliation. The length of any arc connecting the boundaries of the cylinder \(C_{k}\) under the measure \(|\text{Re}\sqrt{\phi}|\) is called the height of the cylinder, and denoted \(h_{k}\). Likewise, the length of any of the leaves under the measure \(|\text{Im}\sqrt{\phi}|\), say \(l_{k}\), is the length. In a holomorphic coordinate \(z=x+iy\) on \(C_{k}\) that is conformal with respect to the metric \(|\phi|\), vertical leaves are the circles of the form \(\{x_{0}\}\times[0,l_{k}]/\{(x_{0},0)\sim(x_{0},l_{k})\}\), and horizontal leaves are the lines \([0,h_{k}]\times\{y_{0}\}\).
When \(\phi\) is a Jenkins-Strebel differential, the \(\mathbb{R}\)-tree \((T,d)\) is locally compact and a genuine metric tree. The quotient by the action of \(\rho\), which will always be denoted \((G,s)\), is a metric graph. Each edge in \((G,s)\) corresponds to a cylinder \(C_{k}\), and the length of the edge under \(s\) is exactly the height \(h_{k}\). Note the following converse.
**Lemma 3.3**.: _Suppose that \((T,d)\) is a metric tree, and the graph \((G,s)\) has \(p\) edges with lengths \(h_{1},\dots,h_{p}\). Then \(\phi\) is Jenkins-Strebel and has \(p\) cylinders with heights \(h_{1},\dots,h_{p}\)._
Proof.: First, descend to a map \(S\to(T,d)/\rho(\pi_{1}(\Sigma_{g})).\) Locally isometrically factoring the map near the preimage of an edge point, the regular value theorem yields that preimages of edge points, i.e., leaves in the vertical foliation, are closed circles.
Points on the same edge correspond to homotopic closed circles. The circles corresponding to an edge foliate the cylinders that make up the decomposition for \(\phi\). By definition of the transverse measure, the height is \(h_{k}\).
In the situation above, the homotopy classes of the \(\gamma_{k}\) are determined by \((T,d)\). For more details, see [19]. We say that \(\phi\) is a maximal Jenkins-Strebel differential if the number of cylinders is \(3g-3\).
**Lemma 3.4**.: _Maximal Jenkins-Strebel differentials are dense in \(\text{QD}(S)\) with respect to the \(L^{1}\) norm._
Proof.: It is foundational that Jenkins-Strebel differentials are dense in \(\text{QD}(S)\) with respect to the \(L^{1}\) norm [3]. It is proved in [10, Theorem 1.6] that any Jenkins-Strebel differential can be approximated in \(L^{1}\) by maximal ones.
The main step in the proof of Theorem A is the lemma below.
**Lemma 3.5** (Nearly factoring harmonic maps).: _Let \(\pi:\tilde{S}\to(T,d)\) be a \(\rho\)-equivariant harmonic map to an \(\mathbb{R}\)-tree arising from a maximal Jenkins-Strebel differential. Let \(S^{\prime}\) be another Riemann surface. Then there exists a sequence of quasiconformal maps \(f_{n}:S\to S^{\prime}\) in the homotopy class of the identity such that_
\[\lim_{n\to\infty}\mathcal{E}(S^{\prime},\pi\circ\tilde{f}_{n}^{-1})=\mathbf{E }_{\rho}(S^{\prime}). \tag{9}\]
The lemma is probably true for any \(\phi\in\text{QD}(S)\), but the proof would be more involved. Our argument for Theorem A requires just the Jenkins-Strebel case.
We now prove Theorem A, deferring the proof of Lemma 3.5 to the next two subsections. Resume the notation from the introduction.
Proof of Theorem A.: In view of the comments in Section 3.1, we only need to prove that if the new main inequality always holds for \(\phi_{1},\dots,\phi_{n}\), then \(S\) minimizes \(\mathbf{E}_{\rho}.\) We assume for the sake of contradiction that there exists a Riemann surface \(S^{\prime}\) representing another point in \(\mathbf{T}_{g}\) and an \(\epsilon>0\) such that
\[\mathbf{E}_{\rho}(S^{\prime})+\epsilon<\mathbf{E}_{\rho}(S).\]
Via Lemma 3.4, for each \(i\) we find a sequence of maximal Jenkins-Strebel differentials \((\phi_{i}^{m})_{m=1}^{\infty}\subset\text{QD}(S)\) that approximate \(\phi_{i}\) in the \(L^{1}\) norm. For each \(m\), we have a product of \(\mathbb{R}\)-trees \(X_{m}\) and we let \(\rho_{m}\) be the product action. By Lemma 3.3, the associated quadratic differentials on the Riemann surface \(S^{\prime}\) are all maximal Jenkins-Strebel. For all \(m\) sufficiently large,
\[\mathbf{E}_{\rho_{m}}(S^{\prime})+\epsilon<\mathbf{E}_{\rho_{m}}(S).\]
Let \(\pi_{i}^{m}\) be the component harmonic maps from \(\tilde{S}\). Fixing a large enough \(m\), by Lemma 3.5 we can find a sequence of quasiconformal maps \(f_{i}^{r}:S\to S^{\prime}\) such that for \(r\) large enough,
\[\sum_{i=1}^{n}\mathcal{E}(S^{\prime},\pi_{i}^{m}\circ(\tilde{f}_{i}^{r})^{-1} )+\epsilon<\mathbf{E}_{\rho_{m}}(S). \tag{10}\]
Choose any such large \(r\) and let \(\mu_{i}\) be the Beltrami form of \(f_{i}^{r}\). By Proposition 3.2, (10) is equivalent to
\[\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}^{m}\frac{\mu_{i}}{1-|\mu_{i}|^{2}}> \sum_{i=1}^{n}\int_{S}|\phi_{i}^{m}|\frac{|\mu_{i}|^{2}}{1-|\mu_{i}|^{2}}+\epsilon.\]
Taking \(m\to\infty\), an application of the dominated convergence theorem yields
\[\text{Re}\sum_{i=1}^{n}\int_{S}\phi_{i}\frac{\mu_{i}}{1-|\mu_{i}|^{2}}\geq \sum_{i=1}^{n}\int_{S}|\phi_{i}|\frac{|\mu_{i}|^{2}}{1-|\mu_{i}|^{2}}+\epsilon >\sum_{i=1}^{n}\int_{S}|\phi_{i}|\frac{|\mu_{i}|^{2}}{1-|\mu_{i}|^{2}}.\]
That is, the new main inequality fails for \(\mu_{1},\dots,\mu_{n}\). This contradiction establishes the result of Theorem A.
### Model maps between pants
The remainder of the section is devoted to the proof of Lemma 3.5. We first recall Liu's solution to the heights problem [10].
Cutting the surface \(\Sigma_{g}\) along a maximal curve system yields a decomposition of the surface into pairs of pants. Let \(\Sigma_{0,3}\) be an oriented pair of pants. Liu first proves the lemma below.
**Lemma 3.6** (Lemma 2.1 in [10]).: _Let \((h_{1},h_{2},h_{3})\) be positive numbers. For any triple of positive numbers \((l_{1},l_{2},l_{3})\) there is a unique Riemann surface structure \(P\) on \(\Sigma_{0,3}\) that comes with a Jenkins-Strebel differential \(\varphi\) such that each boundary component is a vertical leaf, and the corresponding conformal cylinders \(C_{k},\,k=1,2,3,\) have height \(h_{k}\) and length \(l_{k}\)._
Fix a maximal curve system \(\gamma_{1},\ldots,\gamma_{3g-3}\), heights \(h_{1},\ldots,h_{3g-3}\), and lengths \(l_{1},\ldots,l_{3g-3}\). Using Lemma 3.6, on each pair of pants in the corresponding decomposition we get a Riemann surface structure and a Jenkins-Strebel differential realizing specified heights \(h_{k}/2\) and lengths \(l_{k}\). By conformally welding the pants together along the curves that we originally cut along, we obtain a Riemann surface structure on \(\Sigma_{g}\) and a Jenkins Strebel differential with heights \(h_{1},\ldots,h_{3g-3}\) and lengths \(l_{1},\ldots,l_{3g-3}\). To do the welding, we take a curve \(\gamma_{k}\) in two connecting pants \(P_{i}\) and \(P_{j}\), and parametrize \(\gamma_{k}\) in the two pants by some maps \(\delta_{ki}(t)\) and \(\delta_{kj}(t)\), \(t\in[0,1]\), with \(\delta_{ki}(0)=\delta_{ki}(1)\) and \(\delta_{kj}(0)=\delta_{kj}(1)\). We weld the pants by identifying \(\delta_{ki}(t)\) with \(\delta_{kj}(\theta_{k}-t)\), for some \(\theta_{k}\in\mathbb{R}\), and where \(\theta_{k}-t\) is taken mod \(\mathbb{Z}\). With the \(l_{k}\) and \(h_{k}\) specified, the only freedom we have is how much we twist when we do the welding. It is proved in [10, Theorem 2.3] that any pair consisting of a Riemann surface and a maximal Jenkins-Strebel differential is obtained in this fashion.
We construct the maps \(f_{n}\) for Lemma 3.5 by building them on individual pants, and then gluing the maps together and possibly twisting to account for welding. Lemma 3.7 is the localized version of Lemma 3.5 that applies to \(\Sigma_{0,3}\). One difficulty is that the critical trajectories in pants can be topologically distinct: for the pants in Lemma 3.6, there are three possibilities for \(\varphi\) (see the proof of Lemma 2.1 in [10]).
1. If \(l_{1}<l_{2}+l_{3}\), then \(\varphi\) has two simple zeros connected by three saddle connections.
2. If \(l_{1}=l_{2}+l_{3}\), then \(\varphi\) has a single double zero.
3. If \(l_{1}>l_{2}+l_{3}\), then \(\varphi\) has two simple zeros that each lie on their own loop in the critical trajectory and that are connected by a single saddle connection.
See Figure 1 below. In case (i), we say that the pair of pants has type i.
In the situation above we can define a leaf space projection as usual. There's no need to pass to a covering space: we simply declare two points to be equivalent if they lie on the
Figure 1. Cases 1, 2, and 3, arranged from left to right
same leaf. The resulting quotient is a \(1\)-complex consisting of three line segments that have each been glued together at one endpoint. As before, we push the transverse measure down to get a distance function. The metric space is compact; the lengths of the segments are \(h_{1},h_{2},h_{3}\). We can extend the line segments to infinity to obtain an NPC space and apply the formalism from [7].
**Lemma 3.7**.: _For \(i,j\in\{1,2,3\}\), let \(P_{i}\) and \(P_{j}\) be Riemann surface structures of types \(i,j\) on \(\Sigma_{0,3}\) with the same heights and leaf space projections \(\pi_{i}\) and \(\pi_{j}\). There exists a sequence of quasiconformal maps \(f_{n}:P_{i}\to P_{j}\) and a constant \(C>0\) such that_
1. \(\lim_{n\to\infty}e(\pi_{i}\circ f_{n}^{-1})=e(\pi_{j})\) _almost everywhere,_
2. _and_ \(e(\pi_{i}\circ f_{n}^{-1})<C\)_._
Note that since the heights are the same, the quotient graphs are isometric. We write \((G,s)\) for the graph.
Proof.: Let \(\varphi_{i}\) and \(\varphi_{j}\) be the two holomorphic quadratic differentials. Let \(C_{k}^{i}\) and \(C_{k}^{j}\) be the conformal cylinders, \(k=1,2,3\), with core curve classes \(\gamma_{k}^{i}\), \(\gamma_{k}^{j}\). We split the proof into cases.
First, \((i,j)=(1,1).\) Choose an identification of the critical points. Each cylinder \(C_{k}^{i}\), \(C_{k}^{j}\) is bounded by a circle on the critical trajectory that is split into two segments when we remove the critical points. We map the circle for \(C_{k}^{i}\) onto the corresponding circle for \(C_{k}^{j}\) in a way that maps critical points onto critical points according to our identification and has constant speed with respect to the singular metrics \(|\varphi_{i}|\) and \(|\varphi_{j}|\) on the segments. In conformal coordinates on each cylinder \(C_{k}^{i}\), \(C_{k}^{j}\), take the straight horizontal lines from the critical points to the boundary curve, which cut each non-critical leaf into two segments. Each edge point of \((G,s)\) corresponds to a unique non-critical leaf for \(\varphi_{i}\) and a unique non-critical leaf for \(\varphi_{j}\). On each segment of each given non-critical leaf, we define \(f\) to be constant speed with respect to \(|\varphi_{i}|\) and \(|\varphi_{j}|\), mapping intersections with the horizontal line in \(P_{i}\) to the intersections with the line in \(P_{j}\). Since the metrics are smooth, these constant speed maps vary smoothly on the complement of the critical trajectory and the horizontal lines. The resulting map \(f\) is therefore quasiconformal everywhere and smooth almost everywhere. The map \(f\) satisfies \(\pi_{i}\circ f^{-1}=\pi_{j}\). We set \(f_{n}=f\) for all \(n\).
For \((i,j)=(2,2)\) and \((i,j)=(3,3)\), we can go by essentially the same procedure, since the critical trajectories are the same. Again, we remove critical points and map the resulting segments of the critical trajectories onto each other in a constant speed way. In the \((2,2)\) case, the critical trajectory is split into two segments, and in the \((3,3)\) case it is split into three segments. We then take the horizontal lines from the critical points to the boundaries and remove them. In the \((2,2)\) case there are two cylinders such that removing the line has the effect of turning each circle into a segment, and one cylinder (the one of length \(l_{1}\)) such that each circle is broken into two segments. In the \((3,3)\) case we have the same thing. We then choose constant speed maps between the segments as before.
Next, we treat \((i,j)=(1,2)\). Let \(l_{1},l_{2},l_{3}\) be the lengths of the boundary curves for \(P_{2}\), with \(l_{1}=l_{2}+l_{3}\). Every pair of pants can be obtained by gluing conformal rectangles to get a hexagon and then doubling the hexagon along three boundary curves (see [10, Lemma 2.1] for precise expressions in coordinates). By slightly modifying this construction of \(P_{2}\), we can create a Riemann surface structure \(P_{2}^{n}\) on \(\Sigma_{0,3}\) with the same heights and so that the lengths \(l_{1}^{n},l_{2}^{n},l_{3}^{n}\) satisfy \(l_{2}^{n}=l_{2}\), \(l_{3}^{n}=l_{3}\), and \(l_{1}^{n}=l_{1}-2^{-n}.\) For each \(n\), the case \((i,j)=(1,1)\) gives us a quasiconformal map \(f_{n}:P_{1}\to P_{2}^{n}\) intertwining the harmonic maps from \(P_{1}\) and \(P_{2}^{n}\) to \((G,s)\). We postcompose with the uniformly quasiconformal identity map from \(P_{2}^{n}\to P_{2}\) to turn \(f_{n}\) into a map from \(P_{1}\to P_{2}.\) We assume the choice of identification of
critical points in the \((1,1)\) construction is the same for all \(n\). \(f_{n}\) has two speeds on each circle in the foliation, with speed determined by \(|\varphi_{i}|\) and the Jenkins-Strebel differential on \(P_{2}^{n}\). The horizontal line segments in the construction above depend only on the location of a critical point for the foliation, which is converging with \(n.\) The associated Jenkins-Strebel differentials are converging with the Riemann surface structures (their \(L^{1}\) norms are uniformly bounded, completely determined by the heights and lengths). Hence, all derivatives of \(f_{n}\) are uniformly bounded, in fact locally uniformly bounded below on the complement of the critical trajectory, and therefore \(f_{n}\) converges to a continuous map \(f\) such that \(\pi_{i}=\pi_{j}\circ f\). Moreover, \(\pi_{i}\circ f_{n}^{-1}\) converges to \(\pi_{j}\) in the space of Lipschitz maps from \(P_{1}\to(G,s)\). Both the uniform bound and the convergence of \(e(\pi_{i}\circ f_{n}^{-1})\) come out of the definition of the \(L^{1}\) metric tensor from [7, Theorem 2.3.2].
The case \((i,j)=(2,1)\) is obtained by inverting the process above. Using the solution for \((i,j)=(1,2)\), we have quasiconformal maps \(g_{n}:P_{j}\to P_{i}\) limiting to a continuous map \(g\) that factors \(\pi_{i}\circ g=\pi_{j}\). At each step \(n\) we take \(f_{n}=g_{n}^{-1}\). Although there is no \(C^{0}\) limit, the bounds on the complement of the critical trajectory give that \(\pi_{i}\circ f_{n}^{-1}\) converges to \(\pi_{j}\) in the space of Lipschitz maps from \(P_{1}\to(G,s)\). Since the critical trajectory has measure zero, the energy density converges pointwise almost everywhere and we have a uniform bound.
The case \((i,j)=(3,2)\) is analogous to the limiting process of the case \((i,j)=(1,2)\), except we replace \(P_{2}\) with pants \(P_{2}^{n}\) such that \(l_{1}^{n}=l_{1}+2^{-n}\), rather than \(l_{1}^{n}=l_{1}-2^{-n}\). Similarly, we invert that procedure to handle \((i,j)=(2,3)\).
We are left to do \((i,j)=(1,3)\) and \((3,1)\). For \((i,j)=(1,3)\) we choose an auxiliary pair of pants \(P_{2}\) of type 2, and compose the maps we obtain using the cases \((i,j)=(1,2)\) and \((i,j)=(2,3)\). By boundedness of derivatives and Beltrami forms away from the critical trajectories, convergence follows the same line of thought as above. Likewise, we compose the previous cases for \((i,j)=(3,1)\).
Figure 2. The bottom map describes the model map near the singular points for \((i,j)=(1,2)\). The map to the upper foliation illustrates the case \((i,j)=(1,1)\) near the singular points, which limits to the bottom map as we shrink the saddle connection.
### Nearly factoring harmonic maps
Equipped with our model maps, we give the proof of Lemma 3.5. From Lemma 3.3, the tree \((T,d)\) gives us the data of a maximal collection of curves \(\gamma_{1},\ldots,\gamma_{3g-3}\) cutting the surface into pants, as well as the heights \(h_{1},\ldots,h_{3g-3}\).
Proof of Lemma 3.5.: Let \(l_{1}^{1},\ldots,l_{3g-3}^{1}\) and \(l_{1}^{2},\ldots,l_{3g-3}^{2}\) be the lengths for the maximal Jenkins-Strebel differentials on \(S\) and \(S^{\prime}\) respectively. We can assume that \(S\) has been built with zero twisting, and we set \(\theta_{1},\ldots,\theta_{3g-3}\) to be the twisting angles for \(S^{\prime}\). We also have pants with Riemann surface structures \(P_{1}^{1},\ldots,P_{2g-2}^{1}\) and \(P_{1}^{2},\ldots,P_{2g-2}^{2}\) on \(S\) and \(S^{\prime}\) respectively. Using Lemma 3.7, we build model maps \(f_{k}^{n}:P_{k}^{1}\to P_{k}^{2}\) between the pants that nearly intertwine the restrictions of the harmonic maps to \((G,s)\).
We need to modify the \(f_{k}^{n}\) to account for the twisting in \(S^{\prime}\), so that we can glue the maps together into a globally defined map. We do the modification near each boundary component of each pair of pants individually. Take pants \(P_{k}^{1}\) and \(P_{k}^{2}\) and boundary curves on each one that we aim to properly identify. In the associated cylinder, choose a very small collar neighbourhood bounded by a non-singular vertical leaf. Working in conformal coordinates in the collar, precompose \(f_{k}^{n}\) with a map that is constant in the horizontal direction and twists with constant speed in an orientation-preserving fashion in the vertical direction, so as to properly identify the boundary curve in \(P_{k}^{1}\) with the boundary in \(P_{k}^{2}\). Since we're constant in the horizontal direction, the map \(\pi\circ(f_{k}^{n})^{-1}\) is unaffected, so points (1) and (2) from Lemma 3.7 continue to hold. Since the twisting is bounded, the map remains quasiconformal. We then glue the new maps on each pair of pants to obtain the map \(f_{n}\). Using points (1) and (2) from Lemma 3.7, an application of the dominated convergence theorem completes the proof.
With the proof of Theorem A complete, we can now comment on why the new main inequality is special to the leaf space projections. Any equivariant harmonic map to an \(\mathbb{R}\)-tree is the composition of a leaf space projection and a map that folds edges onto each other (see [4] and [13, Section 4.1]). Two harmonic maps to the same \(\mathbb{R}\)-tree can arise from foldings of different leaf spaces. Consequently, the critical leaves for the Hopf differentials can look quite different, and we can't expect to be able to find quasiconformal maps that nearly intertwine the critical leaves, as we did in Lemma 3.7.
In this general setting, it should be more promising to study maps to \(\mathbb{R}\)-trees that are nearby. One could perturb a variation of maps so that the critical structure is fixed, which eliminates the issue raised above. The most efficient way to perturb is to use the log cut-off trick, which negligibly affects the second variation of energy, but can force the third variation to blow up. Hence, for other maps to \(\mathbb{R}\)-trees, such as the maps to \(\mathbb{R}^{n}\) in the next section, the best one can hope for is the infinitesimal version of the new main inequality.
## 4. Classical minimal surfaces
We return to the setup from Section 1.2: \(h=(h_{1},\ldots,h_{n}):\overline{\mathbb{D}}\to\mathbb{R}^{n}\) is a non-constant admissible minimal map with Weierstrass-Enneper data \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\). We denote the Hopf differential of \(h_{i}\) by \(\phi_{i}=\alpha_{i}^{2}\).
We first prove Theorem C, which is then used to prove Theorem B. We conclude with Theorem D.
### Variations by quasiconformal maps
To properly begin, we need to explain how to vary quasiconformal maps.
**Definition 4.1**.: Beltrami forms \(\mu,\nu\in L^{\infty}_{1}(\mathbb{D})\) are equivalent if the normal solutions \(f^{\mu}\) and \(f^{\nu}\) agree on \(\mathbb{C}\backslash\mathbb{D}\).
The universal Teichmuller space \(\mathbf{T}\) has many definitions, and the resulting spaces can all be identified in a reasonable way. The model we take is \(\mathbf{T}=L^{\infty}_{1}(\mathbb{D})/\sim\), where \(\mu\sim\nu\) if \(\mu\) and \(\nu\) are equivalent.
**Remark 4.2**.: It is more common to define \(\mathbf{T}\) by taking \(F^{\mu}=f^{\mu}/f^{\mu}(1)\) instead of \(f^{\mu}\). Under our definition, tangent vectors at \([\mu]=[0]\) have a more tractable expression.
Tangent vectors in \(T_{[0]}\mathbf{T}\) should arise from functions in \(L^{\infty}(\mathbb{D})\) up to a certain identification. To make this identification explicit, we first recall the operator \(P\), defined on \(L^{p}(\mathbb{C})\), \(2<p<\infty\), by
\[P(h)(z)=-\frac{1}{\pi}\int_{\mathbb{C}}h(\zeta)\Big{(}\frac{1}{\zeta-z}-\frac {1}{\zeta}\Big{)}dxdy.\]
Secondly, the Beurling transform \(T\) is defined on \(C^{\infty}_{0}(\mathbb{C})\) by the principal value
\[T(h)(z)=\lim_{\epsilon\to 0}-\frac{1}{\pi}\int_{|\zeta-z|>\epsilon}\frac{h( \zeta)}{(\zeta-z)^{2}}dxdy,\]
and extends continuously to \(L^{p}(\mathbb{C})\), \(1<p<\infty\).
For \(h\in L^{\infty}(\mathbb{D})\), we extend to \(\mathbb{C}\) by setting \(h=0\) on \(\mathbb{C}\backslash\mathbb{D}\), and we write \(P(h)\) and \(T(h)\) for \(P\) and \(T\) applied to the extension of \(h\). The normal solution to the Beltrami equation for \(\mu\in L^{\infty}_{1}(\mathbb{D})\) can be written explicitly in terms of \(P\) and \(T\):
\[f^{\mu}(z)=z+P(\mu)(z)+P(\mu T(\mu))(z)+P(\mu T(\mu T(\mu)))(z)+\ldots\]
So, if \(\mu_{t}=t\dot{\mu}+o(t)\) is a variation of Beltrami forms, then the normal solution along the variation is
\[f^{\mu_{t}}=z+tP(\dot{\mu})+o(t).\]
Therefore, \(\dot{\mu},\dot{\nu}\in L^{\infty}(\mathbb{D})\) give the same variation in \(\mathbf{T}\) if and only if \(P(\dot{\mu})=P(\dot{\nu})\) on \(\mathbb{C}\backslash\mathbb{D}\).
**Definition 4.3**.: \(\dot{\mu},\dot{\nu}\in L^{\infty}(\mathbb{D})\) are infinitesimally equivalent if \(P(\dot{\mu})=P(\dot{\nu})\) on \(\mathbb{C}\backslash\mathbb{D}\).
**Definition 4.4**.: The space \(\mathcal{V}\) from the introduction, our model for \(T_{[0]}\mathbf{T}\), is obtained by restricting every function of the form \(P(h)\), \(h\in L^{\infty}(\mathbb{D})\), to \(\mathbb{C}\backslash\mathbb{D}\).
In order to show that we can pick variations with lots of freedom, which we'll do to prove Theorems C and D, we justify the well known fact below.
**Proposition 4.5**.: _For every \(f\in C^{\infty}_{0}(\mathbb{C})\) that is holomorphic on \(\mathbb{C}\backslash\mathbb{D}\), we can find \(\dot{\mu}\in L^{\infty}(\mathbb{D})\) with \(P(\dot{\mu})=f.\)_
The following basic result can be verified immediately.
**Proposition 4.6**.: _Assume \(h\in C^{\infty}_{0}(\mathbb{C})\). Then \(P(h)\) is smooth, \((P(h))_{\overline{z}}=h\), and \(P(h)(z)\) tends to \(0\) as \(|z|\to\infty\)._
Proof of Proposition 4.5.: Let \(f\in C^{\infty}_{0}(\mathbb{C})\) be holomorphic in \(\mathbb{C}\backslash\mathbb{D}\). Define the function \(\dot{\mu}\) on \(\mathbb{C}\) by \(\dot{\mu}=f_{\overline{z}}.\) By Proposition 4.6, \((P(\dot{\mu}))_{\overline{z}}=f_{\overline{z}}\), so \((f-P(\dot{\mu}))\) is an entire function that is bounded, and therefore a constant. Since both \(f(z)\) and \(P(\dot{\mu})(z)\) tend to \(0\) as \(|z|\to\infty\), they are identically equal. Hence, this \(\dot{\mu}\) satisfies \(P(\dot{\mu})=f\).
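As a concrete illustration (a worked example we add here, not taken from the source), consider \(\dot{\mu}=\chi_{\mathbb{D}}\). Using the classical identities \(\frac{1}{\pi}\int_{\mathbb{D}}\frac{dxdy}{z-\zeta}=\overline{z}\) for \(|z|\leq 1\), \(=1/z\) for \(|z|>1\), and \(\frac{1}{\pi}\int_{\mathbb{D}}\frac{dxdy}{\zeta}=0\), we get

\[P(\chi_{\mathbb{D}})(z)=\begin{cases}\overline{z},&|z|\leq 1\\ 1/z,&|z|>1\end{cases}\]

which is continuous across \(\partial\mathbb{D}\), satisfies \((P(\chi_{\mathbb{D}}))_{\overline{z}}=\chi_{\mathbb{D}}\), and tends to \(0\) at infinity, as Proposition 4.6 predicts. In the same way, \(\dot{\mu}=\gamma m\overline{z}^{m-1}\) gives \(P(\dot{\mu})(z)=\gamma z^{-m}\) on \(\mathbb{C}\backslash\mathbb{D}\), so the variations \(\gamma z^{-m}\) used in Section 4.5 do lie in \(\mathcal{V}\).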
Now we can formulate our problem more precisely. Recall from Section 2.2 that for harmonic functions to \(\mathbb{R}\), the Reich-Strebel computation gives the following.
**Lemma 4.7**.: _Let \(h:\mathbb{D}\to\mathbb{R}\) be a harmonic function with integrable Hopf differential \(\phi\), and \(f:\mathbb{C}\to\mathbb{C}\) a quasiconformal map with Beltrami form \(\mu\). The formula_
\[\mathcal{E}(h\circ f^{-1})-\mathcal{E}(h)=-4\text{Re}\int_{\mathbb{D}}\phi \cdot\frac{\mu}{1-|\mu|^{2}}dxdy+4\int_{\mathbb{D}}|\phi|\cdot\frac{|\mu|^{2}} {1-|\mu|^{2}}dxdy. \tag{11}\]
_holds._
We call paths \(\mu_{i}(t):[0,t_{0}]\to L_{1}^{\infty}(\mathbb{D})\) equivalent if they project to the same path in \(\mathbf{T}\). We fix any \(\varphi\in\mathcal{V}\) and look for mutually equivalent \(C^{2}\) paths \(\mu_{i}(t)\) tangent at time zero to \(\varphi\) in \(\mathbf{T}\), such that if \(f_{i}^{t}\) is the normal solution at time \(t\), then
\[\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathcal{E}(f_{i}^{t}(\Omega),h_{i} \circ(f_{i}^{t})^{-1})<0.\]
As we noted in the introduction, since energy dominates area, it follows that the variation \(h_{t}=(h_{1}\circ(f_{1}^{t})^{-1},\ldots,h_{n}\circ(f_{n}^{t})^{-1})\) decreases the area to second order.
### The second variation of energy
In [12, Lemma 3.2] and [12, Proposition 4.2], the author computes the second variation of the new main inequality. In our context, this is the second variation of the energy. We recap the computation here.
**Proposition 4.8**.: _If \(\dot{\mu}_{1},\ldots,\dot{\mu}_{n}\in L^{\infty}(\mathbb{D})\) are mutually infinitesimally equivalent, then there exist \(C^{2}\) mutually equivalent paths \(\mu_{i}(t):[0,t_{0}]\to L_{1}^{\infty}(\mathbb{D})\) tangent to \(\dot{\mu}_{i}\) at \(t=0\) and with normal solutions \(f_{i}^{t}\) such that_
\[\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathcal{E}(h_{i}\circ(f_{i}^{t})^{-1 })=4\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}\phi_{i}\dot{\mu}_{i}T(\dot{\mu}_{ i})dxdy+4\sum_{i=1}^{n}\int_{\mathbb{D}}|\phi_{i}||\dot{\mu}_{i}|^{2}dxdy. \tag{12}\]
Proof.: Let \(\mu_{i}(t)=t\dot{\mu}_{i}+t^{2}\ddot{\mu}_{i}+o(t^{2})\) be mutually equivalent paths with normal solutions \(f_{i}^{t}\). Differentiating the Reich-Strebel formula (11),
\[\frac{1}{4}\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathcal{E}(h_{i}\circ(f_ {i}^{t})^{-1})=-\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}\phi_{i}\ddot{\mu_{i} }dxdy+\sum_{i=1}^{n}\int_{\mathbb{D}}|\phi_{i}||\dot{\mu}_{i}|^{2}dxdy\]
(see [12, Lemma 3.2] for details). Crucially making use of the fact that \(\sum_{i=1}^{n}\phi_{i}=0\), i.e., that \(h\) is a minimal map, it follows from [12, Proposition 4.2] that we can choose mutually equivalent paths such that
\[\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}\phi_{i}\ddot{\mu_{i}}dxdy=-\text{Re} \sum_{i=1}^{n}\int_{\mathbb{D}}\phi_{i}\dot{\mu_{i}}T(\dot{\mu_{i}})dxdy.\]
Putting the pieces together gives the result.
**Remark 4.9**.: Up to this point, we have not used that \(\phi_{i}=\alpha_{i}^{2}\). So in particular, Proposition 4.8 holds as well for minimal maps to \(\mathbb{R}\)-trees.
It is computed in [12, Section 6], using the relation \((P(h))_{z}=Th\) (distributionally), that
\[-\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}\phi_{i}\dot{\mu}_{i}T(\dot{\mu}_{i} )dxdy=\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}(\alpha_{i}P(\dot{\mu}_{i}))_{z} (\alpha_{i}P(\dot{\mu}_{i}))_{\overline{z}}dxdy, \tag{13}\]
and
\[\sum_{i=1}^{n}\int_{\mathbb{D}}|\phi_{i}||\dot{\mu}_{i}|^{2}dxdy=\sum_{i=1}^{n }\int_{\mathbb{D}}|(\alpha_{i}P(\dot{\mu}_{i}))_{\overline{z}}|^{2}dxdy. \tag{14}\]
Substituting (13) and (14) into (12), we arrive at the following
**Proposition 4.10**.: _If \(\dot{\mu}_{1},\ldots,\dot{\mu}_{n}\in L^{\infty}(\mathbb{D})\) are mutually infinitesimally equivalent, then there exist \(C^{2}\) mutually equivalent paths \(\mu_{i}(t):[0,t_{0}]\to L^{\infty}_{1}(\mathbb{D})\) tangent to \(\dot{\mu}_{i}\) at \(t=0\) and with normal solutions \(f_{i}^{t}\) such that_
\[\frac{d^{2}}{dt^{2}}|_{t=0}\sum_{i=1}^{n}\mathcal{E}(h_{i}\circ(f_ {i}^{t})^{-1}) =4\text{Re}\sum_{i=1}^{n}\int_{\mathbb{D}}(\alpha_{i}P(\dot{\mu}_{i }))_{z}(\alpha_{i}P(\dot{\mu}_{i}))_{\overline{z}}dxdy+4\sum_{i=1}^{n}\int_{ \mathbb{D}}|(\alpha_{i}P(\dot{\mu}_{i}))_{\overline{z}}|^{2}dxdy\] \[=4\sum_{i=1}^{n}\mathcal{F}(\alpha_{i}P(\dot{\mu}_{i})),\]
_where \(\mathcal{F}\) is the function from Section 1.2._
### Proof of Theorem C
We continue in the setting above with an admissible \(h\) with Weierstrass-Enneper data \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\) and \(\phi_{i}=\alpha_{i}^{2}\). We fix a variation \(\varphi\in\mathcal{V}\).
Proposition 4.10 says that if we are given \(\varphi\in\mathcal{V}\) and we can find maps \(P(\dot{\mu}_{1}),\ldots,P(\dot{\mu}_{n})\) on \(\mathbb{D}\) extending to \(\varphi\) on \(\mathbb{C}\setminus\mathbb{D}\) such that \(\sum_{i=1}^{n}\mathcal{F}(\alpha_{i}P(\dot{\mu}_{i}))<0\), then \(\varphi\) destabilizes \(h\). The first question is how to pick \(P(\dot{\mu}_{i})\) with the best chance of destabilizing \(h\). If we could pick \(P(\dot{\mu}_{i})\) so that there is a choice of quasiconformal maps \(f_{i}^{t}(z)=z+tP(\dot{\mu}_{i})(z)+o(t)\) such that \(h_{i}\circ(f_{i}^{t})^{-1}\) is harmonic, then \(h_{i}\circ(f_{i}^{t})^{-1}\) would minimize the energy over maps with the same boundary values at each time \(t\). Recalling the local pictures from Section 3, picking such \(f_{i}^{t}\) is not in general possible.
However, we can still argue heuristically. Given some choice of \(P(\dot{\mu}_{i})\) and accompanying variation of quasiconformal maps \(f_{i}^{t}\), define \(\dot{h}_{i}:\overline{\mathbb{D}}\to\mathbb{R}\) by
\[h_{i}\circ(f_{i}^{t})^{-1}=h_{i}+t\dot{h}_{i}+o(t).\]
Since the Laplacian is linear, if we demand that \(\dot{h}_{i}\) allows a variation of harmonic functions, then \(\dot{h}_{i}\) must be a harmonic function itself. Up to first order, the inverse of \(f_{i}^{t}\) is
\[(f_{i}^{t})^{-1}(z)=z-tP(\dot{\mu}_{i})(z)+o(t).\]
Computing via the chain rule,
\[\dot{h}_{i}=\frac{d}{dt}|_{t=0}h_{i}\circ(f_{i}^{t})^{-1}=-2\text{Re}(\alpha_{ i}P(\dot{\mu}_{i})).\]
Let \(v_{i}\) be the harmonic extension of the complex-valued function \((\frac{\partial}{\partial z}h_{i})\cdot\varphi|_{\partial\mathbb{D}}\). If we pretend that we can pick \(P(\dot{\mu}_{i})\) to be \((\frac{\partial}{\partial z}h_{i})^{-1}v_{i}\), then the choice would minimize the map
\[(g_{1},\ldots,g_{n})\mapsto\sum_{i=1}^{n}\mathcal{F}(\alpha_{i}g_{i}),\]
where the \(g_{i}\) range over every map extending \(\varphi\), since the corresponding path \(f_{i}^{t}\) would minimize the second derivative of \(\mathcal{E}(h_{i}\circ(f_{i}^{t})^{-1})\) at time zero. The problem of course is that these choices for \(P(\dot{\mu}_{i})\) blow up at the zeros of \((\frac{\partial}{\partial z}h_{i})\). We're saved by the log cut-off trick, which allows us to smoothly perturb \(v_{i}\) to be zero in a neighbourhood of the zero set of \((\frac{\partial}{\partial z}h_{i})\), so that the division is possible, while only changing the evaluation of \(\mathcal{F}\) by a controlled amount. The computation for the functional \(\mathcal{F}\) is carried out in [12, Section 5].
**Proposition 4.11** (Proposition 5.1 in [12]).: _Let \(Z\subset\mathbb{D}\) be a finite set of points and \(f:\overline{\mathbb{D}}\to\mathbb{C}\) a smooth function. Then for every \(\epsilon>0\), there exists smooth \(g:\overline{\mathbb{D}}\to\mathbb{C}\) such that_
1. \(f(z)=g(z)\) _for_ \(z\) _in a neighbourhood of_ \(\partial\mathbb{D}\)_._
2. \(g(z)=0\) _for_ \(z\) _in some neighbourhood of each_ \(z_{0}\in Z\)_._
3. \(|\mathcal{F}(f)-\mathcal{F}(g)|<\epsilon\).
We're ready for the formal proof of the theorem.
Proof of Theorem C.: Suppose
\[\mathcal{F}_{\alpha}(\varphi):=\sum_{i=1}^{n}\mathcal{F}(v_{i})<0.\]
Let \(\epsilon>0\) be small enough so that
\[\mathcal{F}_{\alpha}(\varphi)+\epsilon<0. \tag{15}\]
Let \(Z_{i}\) be the zero set of \(\frac{\partial}{\partial z}h_{i}\), and apply Proposition 4.11 to \((v_{i},Z_{i})\) to find \(g_{i}:\overline{\mathbb{D}}\to\mathbb{C}\) such that \(g_{i}=(\frac{\partial}{\partial z}h_{i})\cdot\varphi\) on \(\partial\mathbb{D}\), and
\[|\mathcal{F}(v_{i})-\mathcal{F}(g_{i})|<\frac{\epsilon}{n}. \tag{16}\]
Via Proposition 4.5, we can choose \(\dot{\mu_{i}}\) so that \(P(\dot{\mu}_{i})=\alpha_{i}^{-1}g_{i}\). By (15) and (16),
\[\sum_{i=1}^{n}\mathcal{F}(g_{i})<0.\]
Theorem C now follows from Proposition 4.10.
Theorem C can probably also be proved by using the destabilizing strategy mentioned in the introduction of varying the boundary parametrization and taking harmonic extensions. To understand how to relate the two methods, we need to know how to turn \(\varphi\) into a variation of boundary parametrizations. \(\mathbf{T}\) is also the space of quasisymmetric maps of \(\partial\mathbb{D}\) mod Möbius transformations. In this model, the tangent space at the identity identifies with the Zygmund class of vector fields on \(\partial\mathbb{D}\)[14, Section 2]. Nag finds a beautiful identification of the tangent spaces to the different models in [14, Section 3], which explains how to get a Zygmund vector field out of an admissible holomorphic map on \(\mathbb{C}\backslash\mathbb{D}.\) We gave the proof of Theorem C because it is interesting to see it from our angle, and because elements of the proof will be used toward Theorem B.
### The self-maps index
Continuing in our usual setting and keeping the notation from above, we now prove Theorem B and its corollary.
**Definition 4.12**.: The real quadratic form \(\mathbf{L}_{h}:\mathcal{V}\to\mathbb{R}\) is defined by \(\mathbf{L}_{h}(\varphi)=\sum_{i=1}^{n}\mathcal{F}(v_{i})\), where \(v_{i}\) is the harmonic extension of \((\frac{\partial}{\partial z}h_{i})\cdot\varphi|_{\partial\mathbb{D}}.\) The self-maps index is the maximum dimension of a subspace on which \(\mathbf{L}_{h}\) is negative definite.
Noting that taking the Poisson extension is a linear operation, it is routine to check that \(\mathbf{L}_{h}\) is a real quadratic form.
Let \(m\) be the Euclidean metric on \(\mathbb{R}^{n}\), and denote the volume form by \(dV\). The area of a \(C^{2}\) map \(g\) from a domain \(\Omega\subset\mathbb{C}\) to \(\mathbb{R}^{n}\) is the area of the image \(g(\Omega)\subset\mathbb{R}^{n}\),
\[A(\Omega,g):=\int_{\Omega}g^{*}dV.\]
\(h\) may be only a branched immersion, but it is well-understood that the normal bundle, a priori defined where \(h\) is regular, extends real analytically over the branch points (see, for example, [6, Lemma 1.3]). This extension of the normal bundle is denoted \(N_{h}\subset h^{*}T\mathbb{R}^{n}\). Variations of the image surface are elements of \(\Gamma_{0}(N_{h})\), the space of \(C^{\infty}\) sections of \(N_{h}\) that
extend to zero on \(\partial\mathbb{D}\), which we tacitly view as functions \(X:\mathbb{D}\to\mathbb{R}^{n}.\) The second variation of area is defined by a real quadratic form \(\mathbf{Q}_{h}:\Gamma_{0}(N_{h})\to\mathbb{R},\)
\[\mathbf{Q}_{h}(X)=\frac{d^{2}}{dt^{2}}|_{t=0}A(\Omega,h+tX)\]
(see [9, Theorem 32] for the well known formula for the right hand side). The usual index \(\operatorname{Ind}(h)\) is the maximal dimension of a subspace on which \(\mathbf{Q}_{h}\) is negative definite. Theorem B is the statement that \(\operatorname{Ind}(\mathbf{L}_{h})=\operatorname{Ind}(h).\) Before we enter the proof, we recall the following application of the log cut-off trick in its usual form (see [MSS, Section 4.4] for a detailed explanation).
**Proposition 4.13**.: _Let \(\operatorname{\mathit{Ind}}_{0}(h)\) be the index of \(h\) restricted to variations in \(\Gamma_{0}(N_{h})\) that vanish on a neighbourhood of the critical points of every \(h_{i}\). Then \(\operatorname{\mathit{Ind}}(h)=\operatorname{\mathit{Ind}}_{0}(h).\)_
Proof of Theorem B.: It was already explained in Section 4.1 that a destabilizing self-maps variation yields a variation of maps \(h_{t}:\overline{\mathbb{D}}\to\mathbb{R}^{n}\) that decreases area to second order. Pulling back the Euclidean metric from \(T\mathbb{R}^{n}\) to \(h^{*}T\mathbb{R}^{n}\) and orthogonally projecting the induced section of \(h^{*}T\mathbb{R}^{n}\) onto \(N_{h}\), we obtain a section \(X\in\Gamma_{0}(N_{h})\) with \(\mathbf{Q}_{h}(X)<0\).
To prove the theorem, we need to show that if \(X\in\Gamma_{0}(N_{h})\) vanishes in a neighbourhood of the critical point of every \(h_{i}\) and destabilizes the area of \(h\), then we can find a destabilizing self-maps variation in a way that inverts the process above. For then \(\operatorname{Ind}(\mathbf{L}_{h})=\operatorname{Ind}(\mathbf{Q}_{h})\), and we can appeal to Proposition 4.13.
We will apply Theorem C by finding a variation \(\varphi\in\mathcal{V}\) with \(\mathcal{F}_{\alpha}(\varphi)<0\). Set \(h_{t}=h+tX\), and let \(U\) be a neighbourhood of the critical points of the \(h_{i}\) on which \(X\) vanishes. If \(h\) has branch points, then the pullback metric \(h^{*}m\) is degenerate at those points, and regular elsewhere. \(h^{*}m\) is conformal to the flat metric \(\sigma(z)=|dz|^{2}\) on \(\mathbb{D}\) in the sense that there is a bounded and \(C^{\infty}\) function \(u:\mathbb{D}\to[0,\infty)\) with isolated zeros exactly at the branch points of \(h\), and such that \(h^{*}m=u\sigma.\) Since \(X=0\) in \(U\), \(h_{t}^{*}m=h^{*}m\) in \(U\).
There exists \(t_{0}>0\) such that for \(t<t_{0}\), the degenerate locus of \(h_{t}^{*}m\) is equal to that of \(h^{*}m\). We define a family of non-degenerate \(C^{\infty}\) metrics \((\sigma_{t})_{t<t_{0}}\) on \(\mathbb{D}\) by
\[\sigma_{t}(z)=\begin{cases}\sigma(z),\;z\in U\\ u(z)^{-1}h_{t}^{*}m(z),\;z\in\mathbb{D}\backslash U\end{cases}.\]
We emphasize that \(h_{t}^{*}m\) is not necessarily conformally flat. For each \(t\leq t_{0}\), by the measurable Riemann mapping theorem, Theorem 2.4, we can find a Jordan domain \(\Omega_{t}\subset\mathbb{C}\) and a quasiconformal homeomorphism \(f_{t}:\mathbb{D}\to\Omega_{t}\) that takes \(\sigma_{t}\) to a conformally flat metric (this is a classical application). Observe that the Beltrami form \(\mu_{t}\) of each \(f_{t}\) extends to \(0\) on \(\partial\mathbb{D}\), since \(X\) extends to \(0\) on \(\partial\mathbb{D}.\) For each \(t\), we extend \(\mu_{t}\) to \(0\) on \(\mathbb{C}\backslash\mathbb{D}.\) We then take the \(L^{\infty}\) function \(\dot{\mu}=\frac{d}{dt}|_{t=0}\mu_{t}\) and the associated tangent vector \(\varphi=P(\dot{\mu})|_{\mathbb{C}\backslash\mathbb{D}}\in\mathcal{V}.\) This is the desired self-maps variation.
Let's now verify Theorem C for \(\varphi\). By design, for every \(t\) the map \(h_{t}\circ f_{t}^{-1}:\Omega_{t}\to\mathbb{R}^{n}\) is weakly conformal, and the area of \(h_{t}\circ f_{t}^{-1}(\Omega_{t})\) is equal to the area of \(h_{t}(\mathbb{D}).\) Therefore

\[A(\Omega_{t},h_{t}\circ f_{t}^{-1})=\mathcal{E}(\Omega_{t},h_{t}\circ f_{t}^{-1}).\]

Replacing each component \(h_{t,i}\circ f_{t}^{-1}\) with the harmonic extension of its boundary map, say \(v_{i}^{t}\), cannot increase the energy. Hence,

\[\sum_{i=1}^{n}\mathcal{E}(\Omega_{t},v_{i}^{t})\leq\mathcal{E}(\Omega_{t},h_{t}\circ f_{t}^{-1})=A(\Omega_{t},h_{t}\circ f_{t}^{-1})=A(\mathbb{D},h_{t}).\]
Taking the second derivative at time zero, we obtain
\[\mathcal{F}_{\alpha}(\varphi)\leq\mathbf{Q}_{h}(X)<0.\]
As discussed, by Theorem C we are done.
Proof of Corollary B.: By Theorem B, \(h\) is stable if and only if \(\mathrm{Ind}(\mathbf{Q}_{h})=0.\) By Proposition 4.8, \(\mathrm{Ind}(\mathbf{Q}_{h})=0\) if and only if the infinitesimal new main inequality holds for the Hopf differentials of the component maps and all choices of infinitesimally equivalent \(\dot{\mu}_{1},\ldots,\dot{\mu}_{n}.\)
### Explicit destabilizing variations
To conclude the paper, we test out the framework we've developed and prove Theorem D. We compute the functional \(\mathcal{F}_{\alpha}(\varphi)\) for polynomial Weierstrass data \(\alpha=(\alpha_{1},\ldots,\alpha_{n})\) and the variation \(\varphi(z)=\gamma z^{-m}.\) Recall from the introduction that we have defined, for a polynomial \(p(z)=\sum_{j=0}^{r}a_{j}z^{j},\)\(\gamma\in\mathbb{C}^{*},\) and \(m>0,\)
\[C(p,\gamma,m)=\pi\sum_{j=0}^{m-1}\frac{\mathrm{Re}(\gamma^{2}a_{j}a_{2m-j})+| \gamma|^{2}|a_{j}|^{2}}{m-j}. \tag{17}\]
Setting \(\alpha(z)=p(z)dz,\) the harmonic extension of \(p\cdot\varphi|_{\partial\mathbb{D}}\) is
\[f_{p,\gamma,m}(z)=\gamma(a_{0}\overline{z}^{m}+\cdots+a_{m}+a_{m+1}z+\cdots+a_{r}z^{r-m}).\]
**Lemma 4.14**.: _In the setting above, \(\mathcal{F}(f_{p,\gamma,m})=C(p,\gamma,m).\)_
Proof.: For notation's sake, set \(f=f_{p,\gamma,m}\). We compute the integrals individually. First,
\[|f_{\overline{z}}|^{2}=|\gamma|^{2}\sum_{j=0}^{m-1}|a_{j}|^{2}|z|^{2(m-1-j)}+2 |\gamma|^{2}\mathrm{Re}\sum_{j=0}^{m-1}\sum_{k\neq j}a_{j}\overline{a_{k}} \overline{z}^{m-1-j}z^{m-1-k}. \tag{18}\]
Due to \(L^{2}\)-orthogonality of the Fourier basis on \(S^{1}\), the second term on the right in (18) vanishes upon integration:
\[2|\gamma|^{2}\mathrm{Re}\,\sum_{j=0}^{m-1}\sum_{k\neq j}a_{j} \overline{a_{k}}\int_{\mathbb{D}}\overline{z}^{m-1-j}z^{m-1-k}|dz|^{2}\] \[=2|\gamma|^{2}\mathrm{Re}\,\sum_{j=0}^{m-1}\sum_{k\neq j}a_{j} \overline{a_{k}}\int_{0}^{1}r^{2m-1-j-k}dr\int_{0}^{2\pi}e^{i\theta(j-k)}d \theta=0.\]
Hence,
\[\int_{\mathbb{D}}|f_{\overline{z}}|^{2}=2\pi|\gamma|^{2}\sum_{j=0}^{m-1}|a_{j }|^{2}\int_{0}^{1}r^{2m-1-2j}dr=\pi|\gamma|^{2}\sum_{j=0}^{m-1}\frac{|a_{j}|^{ 2}}{m-j}. \tag{19}\]
The term \(f_{z}f_{\overline{z}}\) is a sum of terms of the form \(c_{j,k}\overline{z}^{m-j}z^{r-m-k}\). Again by \(L^{2}\)-orthogonality, the integration over the disk evaluates to a non-zero number if and only if \(0\leq j\leq m-1\), \(m+1\leq k\leq r\), and \((m-1)-j=(r-(m+1))-(r-k)\), i.e., \(k=2m-j\). This returns the formula
\[\mathrm{Re}\gamma^{2}\int_{\mathbb{D}}f_{z}f_{\overline{z}}=\mathrm{Re}\gamma ^{2}\sum_{j=0}^{m-1}a_{j}a_{2m-j}\int_{\mathbb{D}}|z|^{2(m-1-j)}|dz|^{2}=\pi \mathrm{Re}\gamma^{2}\sum_{j=0}^{m-1}\frac{a_{j}a_{2m-j}}{m-j}. \tag{20}\]
Putting (19) and (20) together,
\[\mathcal{F}(f)=\pi\sum_{j=0}^{m-1}\frac{\mathrm{Re}(\gamma^{2}a_{j}a_{2m-j})+ |\gamma|^{2}|a_{j}|^{2}}{m-j}.\]
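As a toy evaluation of (17) (our own example, not from the source), take \(p(z)=1+2z^{2}\), \(m=1\), and \(\gamma=i\). Only the \(j=0\) term contributes, with \(a_{0}=1\) and \(a_{2m-j}=a_{2}=2\):

\[C(1+2z^{2},\,i,\,1)=\pi\big{(}\mathrm{Re}(i^{2}\cdot 1\cdot 2)+|i|^{2}\cdot 1^{2}\big{)}=\pi(-2+1)=-\pi<0.\]

Since \(|\mathrm{Re}(\gamma^{2}a_{j}a_{2m-j})|\leq|\gamma|^{2}|a_{j}||a_{2m-j}|\), making \(C(p,\gamma,m)\) negative requires \(|a_{2m-j}|>|a_{j}|\) for at least one \(j<m\), together with a choice of \(\gamma\) whose square has phase opposing \(a_{j}a_{2m-j}\).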
Proof of Theorem D.: Apply Theorem C with the variation \(\gamma z^{-m}\), using Lemma 4.14 \(n\) times to obtain the value of \(\mathcal{F}_{\alpha}(\varphi)\).
|
2310.00508 | Analytical Modeling of Parameter Imbalance in Permanent Magnet
Synchronous Machines | This paper presents a systematic and comprehensive analysis of the impact of
parameter imbalance in permanent magnet synchronous machines. Analytical models
that reveal the effects of imbalance are obtained for each parameter.
Thereafter, the models are verified for accuracy by comparison with complex
simulations that closely represent true machine behavior. Such models may be
utilized for developing (general) algorithms for detection, learning and
mitigation of the negative effects of parameter imbalance including current
(and thus torque) pulsations during real-time operation. | Prerit Pramod | 2023-09-30T22:07:06Z | http://arxiv.org/abs/2310.00508v1 | # Analytical Modeling of Parameter Imbalance in Permanent Magnet Synchronous Machines
###### Abstract
This paper presents a systematic and comprehensive analysis of the impact of parameter imbalance in permanent magnet synchronous machines. Analytical models that reveal the effects of imbalance are obtained for each parameter. Thereafter, the models are verified for accuracy by comparison with complex simulations that closely represent true machine behavior. Such models may be utilized for developing (general) algorithms for detection, learning and mitigation of the negative effects of parameter imbalance including current (and thus torque) pulsations during real-time operation.
## Background
Industrial applications such as electric power steering (EPS) [1, 2, 3, 4, 5] that involve mass manufacturing of electric machines, including permanent magnet synchronous machines (PMSM) [6, 7], switched reluctance machines (SRM) [8, 9, 10, 11, 12], and permanent magnet DC machines (PMDC) [13] must maintain tight control over the part-to-part variation as well as intra-part balance of machine parameters. However, very tight control of such variations and imbalances is not practical since it results in high volume rejection of manufactured parts and thus unnecessary costs. Imbalance of machine parameters results in non-ideal current and thus torque control, i.e., undesirable current and torque pulsations are observed. This effect is significantly magnified when feedforward current control [14, 15, 16, 17, 18, 19] is employed as opposed to feedback control [20, 21, 22, 23, 24, 25, 26, 27, 28, 29], although even the latter suffers from this situation due to bandwidth and maximum bus voltage limitations. While the effect of parameter imbalance is somewhat understood, a detailed analysis of the same is still lacking.
A systematic and comprehensive analysis of the impact of parameter imbalance in PMSMs is presented here. Analytical (mathematical) models that reveal the effects of imbalance are obtained for each parameter. Such mathematical models expand the ability to mathematically capture non-ideal behaviors that are typically not included in conventional formulations [30, 31]. Thereafter, the models are verified for accuracy by comparison with simulations that closely represent true machine behavior. Such models may be utilized for developing (general) algorithms for detection, learning and mitigation of the negative effects of parameter imbalance including current (and thus torque) pulsations during real-time operation [32, 33, 34, 35]. Note that the focus of this write-up is on modeling of the actual machine. The behavior of the motor drive system during actual operation, where the motor control system interacts with the electric machine, is not presented here.
## Description
The mathematical model of a 3-phase PMSM in the stationary or abc reference frame consists of the electrical and magnetic relationships, i.e., the voltage to current relationship and the current to torque expression respectively. The electrical circuit equations are expressed as follows.
\[\begin{split} V_{a}&=R_{a}I_{a}+\dot{\lambda}_{a}\\ V_{b}&=R_{b}I_{b}+\dot{\lambda}_{b}\\ V_{c}&=R_{c}I_{c}+\dot{\lambda}_{c}\\ \lambda_{a}&=L_{a}I_{a}-M_{ab}I_{b}-M_{ac}I_{c}- \lambda_{am}\cos\theta\\ \lambda_{b}&=L_{b}I_{b}-M_{ba}I_{a}-M_{bc}I_{c}- \lambda_{bm}\cos(\theta-\beta)\\ \lambda_{c}&=L_{c}I_{c}-M_{ca}I_{a}-M_{cb}I_{b}- \lambda_{cm}\cos(\theta-2\beta)\end{split} \tag{1}\]
where \(V_{x}\) and \(I_{x}\) are the phase voltages and currents for phase \(x\); \(R_{x}\), \(L_{x}\) and \(\lambda_{xm}\) are the phase resistance, self-inductance and permanent magnet flux linkage respectively; and \(M_{xy}\) represents the mutual inductance of phase \(x\) due to current in phase \(y\). \(\beta\) is the spatial angle difference between the different phases of the electric machine and is equal to \(\frac{2\pi}{n}\) with \(n\) being the number of phases. The electromagnetic torque is obtained from the current and flux linkages as follows.
\[\begin{split} T_{e}&=\frac{\partial W^{\prime}}{ \partial\theta}\\ W^{\prime}&=\sum_{x=a,b,c}\int\lambda_{x}\,dI_{x} \end{split} \tag{2}\]
where \(T_{e}\) represents the electromagnetic torque, \(W^{\prime}\) is the magnetic co-energy while \(\theta\) is the electrical (phase) position of the motor. Thus, for modeling the mismatch or imbalance between phases, parameters may be written as follows.
\[\begin{split} R_{x}&=R+\Delta R_{x}\\ L_{x}&=L+\Delta L_{x}\\ M_{xy}&=M+\Delta M_{xy}\\ \lambda_{xm}&=\lambda_{m}+\Delta\lambda_{x}\end{split} \tag{3}\]
where the \(\Delta A_{x}\) term represents the deviation of the value of parameter \(A\) for phase \(x\) from the nominal value. For mathematical convenience, the lowest of the parameter values across the phases may be chosen as the nominal value; in this way, one of the error terms is always zero. The individual error terms may then be obtained as the deviations of the individual phase parameters from the nominal value.
In general, the phase voltage equations are converted to the synchronously rotating or dq reference frame using the commonly known Clarke and Park transforms, which are expressed (in combined form) as follows.
\[h_{dq0}=T_{f}h_{abc} \tag{4}\]
\[T_{f}=\frac{2}{3}\begin{bmatrix}\cos\theta&\cos(\theta-\beta)&\cos(\theta-2\beta) \\ \sin\theta&\sin(\theta-\beta)&\sin(\theta-2\beta)\\ \frac{1}{2}&\frac{1}{2}&\frac{1}{2}\end{bmatrix}\]
where \(h\) may represent the voltage or current. The inverse Clarke and Park transforms (again in combined form) are expressed as follows.
\[\begin{split} h_{abc}&=T_{i}h_{dq0}\\ T_{i}&=T_{f}^{-1}=\begin{bmatrix}\cos\theta&\sin\theta&1\\ \cos(\theta-\beta)&\sin(\theta-\beta)&1\\ \cos(\theta-2\beta)&\sin(\theta-2\beta)&1\end{bmatrix}\end{split} \tag{5}\]
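A minimal numeric sketch of (4)-(5) follows (our own addition; numpy is an assumed dependency), which also confirms that \(T_{i}\) inverts \(T_{f}\):

```python
import numpy as np

def T_f(theta, beta=2 * np.pi / 3):
    """Combined Clarke-Park transform of eq. (4): abc -> dq0."""
    a = np.array([theta, theta - beta, theta - 2 * beta])
    return (2.0 / 3.0) * np.vstack([np.cos(a), np.sin(a), 0.5 * np.ones(3)])

def T_i(theta, beta=2 * np.pi / 3):
    """Combined inverse transform of eq. (5): dq0 -> abc."""
    a = np.array([theta, theta - beta, theta - 2 * beta])
    return np.column_stack([np.cos(a), np.sin(a), np.ones(3)])

theta = 0.7  # arbitrary electrical position [rad]
assert np.allclose(T_f(theta) @ T_i(theta), np.eye(3))  # T_i = T_f^{-1}
```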
With matched or equal phase parameters, the Park transform results in machine equations that are independent of position. These ideal equations are commonly used for the purposes of modeling, estimation and control in most industrial motor drive control systems. In order to obtain the analytical model, all the parameters are assumed to be different (as explained above). The general phase voltage equations are then transformed into the dq frame utilizing the transformation matrices. This results in the following voltage equations.
\[\begin{split} V_{d}&=V_{di}+\Delta V_{dR}+\Delta V_{d\lambda}+\Delta V_{dLM}\\ V_{q}&=V_{qi}+\Delta V_{qR}+\Delta V_{q\lambda}+\Delta V_{qLM} \end{split} \tag{6}\]
where the subscript \(i\) represents ideal (position independent) equations. The additional voltage terms, referenced by \(\Delta V\), are obtained by applying the transformation considering the error terms due to the imbalance. The individual voltage terms that arise due to resistance, permanent magnet flux linkage and inductance imbalance are represented by subscripts \(R\), \(\lambda\) and \(LM\) respectively. The derivation for obtaining these terms for each parameter individually is presented in the following description. The ideal dq frame model for non-salient pole machines is specified below.
\[\begin{split} V_{di}=RI_{d}+(L+M)\big{(}\dot{I}_{d}+\omega_{e}I_{q} \big{)}\\ V_{qi}=RI_{q}+(L+M)\big{(}\dot{I}_{q}-\omega_{e}I_{d}\big{)}+\omega_{e} \lambda_{m}\\ T_{e}=\frac{3}{2}\frac{N_{p}}{2}\lambda_{m}I_{q}\end{split} \tag{7}\]
The ideal dq model considering salient pole machines consists of separate d and q axis inductances and is specified here for reference as follows.
\[\begin{split} V_{di}=RI_{d}+\omega_{e}L_{q}I_{q}+L_{d}\dot{I}_{d}\\ V_{qi}=RI_{q}-\omega_{e}L_{d}I_{d}+L_{q}\dot{I}_{q}+\omega_{e}\lambda_{m} \\ T_{e}=\frac{3}{2}\frac{N_{p}}{2}\big{(}\lambda_{m}+\big{(}L_{q}-L_{ d}\big{)}I_{d}\big{)}I_{q}\end{split} \tag{8}\]
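Before moving on, a short symbolic check (our own sketch; sympy is an assumed dependency) that the co-energy expression (2), applied to the permanent-magnet flux terms of (1), reproduces the torque in (7) for balanced parameters. The inductive terms are position independent for a non-salient machine and drop out of \(\partial W^{\prime}/\partial\theta\), and the pole-pair factor \(N_{p}/2\) enters when converting electrical to mechanical angle:

```python
import sympy as sp

theta, Id, Iq, I0, lam = sp.symbols('theta I_d I_q I_0 lambda_m', real=True)
Ia, Ib, Ic = sp.symbols('I_a I_b I_c', real=True)
beta = 2 * sp.pi / 3

# permanent-magnet part of the co-energy, from eqs. (1)-(2)
W_pm = -lam * (sp.cos(theta) * Ia
               + sp.cos(theta - beta) * Ib
               + sp.cos(theta - 2 * beta) * Ic)

# torque: differentiate at constant *phase* currents, then substitute
Te = sp.diff(W_pm, theta)
phase = {Ia: Id * sp.cos(theta) + Iq * sp.sin(theta) + I0,
         Ib: Id * sp.cos(theta - beta) + Iq * sp.sin(theta - beta) + I0,
         Ic: Id * sp.cos(theta - 2 * beta) + Iq * sp.sin(theta - 2 * beta) + I0}
print(sp.simplify(Te.subs(phase)))  # expect: 3*I_q*lambda_m/2
```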
Note that the torque expressions for modeling imbalance of different parameters are not shown here. However, they can be easily obtained by following the same idea as the voltage-current derivations. It is important to understand that the models presented here are general (plant) models that describe the machine behavior and are not influenced by the control strategy whatsoever. Further, the models are valid for all synchronous machines, including wound-rotor machines with field current windings.
**Resistance Imbalance**
The additional voltage terms obtained as a result of resistance imbalance are specified below.
\[\begin{split}&\frac{3}{2}\Delta V_{dR}=\Delta R_{a}I_{a}\cos\theta+ \Delta R_{b}I_{b}\cos(\theta-\beta)+\Delta R_{c}I_{c}\cos(\theta-2\beta)\\ &\frac{3}{2}\Delta V_{qR}=\Delta R_{a}I_{a}\sin\theta+\Delta R_{b }I_{b}\sin(\theta-\beta)+\Delta R_{c}I_{c}\sin(\theta-2\beta)\\ &\Delta V_{dR}=\frac{\Delta R}{3}I_{d}+K_{R}\cos(2\theta+\phi_{R })I_{d}+K_{R}\sin(2\theta+\phi_{R})I_{q}+(...)I_{0}\\ &\Delta V_{qR}=\frac{\Delta R}{3}I_{q}+K_{R}\sin(2\theta+\phi_{R})I_{d}-K_{R}\cos(2\theta +\phi_{R})I_{q}\\ & K_{R}=\frac{1}{3}\sqrt{\Delta R_{a}^{2}+\Delta R_{b}^{2}+\Delta R _{c}^{2}-\Delta R_{a}\Delta R_{b}-\Delta R_{b}\Delta R_{c}-\Delta R_{c}\Delta R _{a}}\\ &\phi_{R}=\tan^{-1}\left(\frac{\sqrt{3}(-\Delta R_{b}+\Delta R_{ c})}{2\Delta R_{a}-\Delta R_{b}-\Delta R_{c}}\right)\end{split} \tag{9}\]

where \(\Delta R=\Delta R_{a}+\Delta R_{b}+\Delta R_{c}\) is the total resistance deviation.
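The average and ripple predicted by (9) can be checked numerically against the direct abc-frame computation; the sketch below (our own, numpy assumed, \(I_{0}=0\)) verifies the \(\frac{\Delta R}{3}\) average term and that the second-harmonic ripple amplitude equals \(K_{R}\sqrt{I_{d}^{2}+I_{q}^{2}}\):

```python
import numpy as np

beta = 2 * np.pi / 3
dR = np.array([0.03, 0.00, 0.01])   # Delta R_a, Delta R_b, Delta R_c [ohm]
Id, Iq = 5.0, 20.0                  # dq currents [A]; I_0 = 0 assumed

theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
ang = theta[:, None] - np.array([0.0, beta, 2 * beta])
I_abc = Id * np.cos(ang) + Iq * np.sin(ang)                # eq. (5), I_0 = 0
dVd = (2 / 3) * np.sum(dR * I_abc * np.cos(ang), axis=1)   # dq projection, eq. (4)

K_R = np.sqrt((dR**2).sum() - dR[0]*dR[1] - dR[1]*dR[2] - dR[2]*dR[0]) / 3
assert np.isclose(dVd.mean(), dR.sum() / 3 * Id)                     # average term
assert np.isclose((dVd.max() - dVd.min()) / 2, K_R * np.hypot(Id, Iq))  # ripple
```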
A block diagram representation of the effect of resistance imbalance is shown in the figure below.
A comparison of the analytical prediction of resistance imbalance with a detailed simulation model having high accuracy for describing true machine behavior is shown in the figure below.
Figure 1: Block diagram representation of analytical model for resistance imbalance.
Figure 2: Results illustrating accuracy of analytical model for resistance imbalance.
### Permanent Magnet Flux Linkage Imbalance
The additional voltage terms obtained as a result of permanent magnet flux linkage imbalance are as follows.
\[\begin{split}&\frac{3}{2}\Delta V_{d\lambda}=\omega_{e}\Delta\lambda_{a}\sin\theta\cos\theta+\omega_{e}\Delta\lambda_{b}\sin(\theta-\beta)\cos(\theta-\beta)+\omega_{e}\Delta\lambda_{c}\sin(\theta-2\beta)\cos(\theta-2\beta)\\ &\frac{3}{2}\Delta V_{q\lambda}=\omega_{e}\Delta\lambda_{a}\sin^{2}\theta+\omega_{e}\Delta\lambda_{b}\sin^{2}(\theta-\beta)+\omega_{e}\Delta\lambda_{c}\sin^{2}(\theta-2\beta)\\ &\Delta V_{d\lambda}=\omega_{e}K_{\lambda}\sin(2\theta+\phi_{\lambda})\\ &\Delta V_{q\lambda}=\frac{\omega_{e}}{3}\big{(}\Delta\lambda_{a}+\Delta\lambda_{b}+\Delta\lambda_{c}\big{)}-\omega_{e}K_{\lambda}\cos(2\theta+\phi_{\lambda})\\ & K_{\lambda}=\frac{1}{3}\sqrt{\Delta\lambda_{a}^{2}+\Delta\lambda_{b}^{2}+\Delta\lambda_{c}^{2}-\Delta\lambda_{a}\Delta\lambda_{b}-\Delta\lambda_{b}\Delta\lambda_{c}-\Delta\lambda_{c}\Delta\lambda_{a}}\\ &\phi_{\lambda}=\tan^{-1}\left(\frac{\sqrt{3}(-\Delta\lambda_{b}+\Delta\lambda_{c})}{2\Delta\lambda_{a}-\Delta\lambda_{b}-\Delta\lambda_{c}}\right)\end{split} \tag{10}\]
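As with the resistance case, (10) admits a quick numeric check (ours, numpy assumed) of the average term and the ripple amplitude of \(\Delta V_{q\lambda}\):

```python
import numpy as np

beta, w_e = 2 * np.pi / 3, 2 * np.pi * 100.0   # electrical speed [rad/s]
dlam = np.array([2e-4, 0.0, 5e-4])             # Delta lambda_a,b,c [Wb]

theta = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
ang = theta[:, None] - np.array([0.0, beta, 2 * beta])
dVq = (2 / 3) * w_e * np.sum(dlam * np.sin(ang) ** 2, axis=1)  # second line of (10)

K = np.sqrt((dlam**2).sum() - dlam[0]*dlam[1] - dlam[1]*dlam[2] - dlam[2]*dlam[0]) / 3
assert np.isclose(dVq.mean(), w_e * dlam.sum() / 3)       # average term
assert np.isclose((dVq.max() - dVq.min()) / 2, w_e * K)   # ripple amplitude
```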
A block diagram representation of the effect of permanent magnet flux linkage imbalance is as follows.
A comparison of the analytical prediction of permanent magnet flux linkage imbalance with a detailed simulation model having high accuracy for describing true machine behavior is shown in the figure below.
Figure 4: Results illustrating accuracy of analytical model for permanent magnet flux linkage imbalance.
Figure 3: Block diagram representation of analytical model for permanent magnet flux linkage imbalance.
**Inductance Imbalance**
The additional voltage terms obtained as a result of inductance (including both self and mutual inductances) imbalance are specified below.
\[\begin{split}\frac{3}{2}\Delta V_{dLM}&=\left(p(L_{a}I_{a}-M_{ab}I_{ b}-M_{ac}I_{c})\right)\cos\theta+\left(p(L_{b}I_{b}-M_{ba}I_{a}-M_{bc}I_{c}) \right)\cos(\theta-\beta)+\left(p(L_{c}I_{c}-M_{ca}I_{a}-M_{cb}I_{b})\right) \cos(\theta-2\beta)\\ \frac{3}{2}\Delta V_{qLM}&=\left(p(L_{a}I_{a}-M_{ab}I_{b}-M_{ac}I_{c })\right)\sin\theta+\left(p(L_{b}I_{b}-M_{ba}I_{a}-M_{bc}I_{c})\right)\sin( \theta-\beta)+\left(p(L_{c}I_{c}-M_{ca}I_{a}-M_{cb}I_{b})\right)\sin(\theta-2 \beta)\end{split} \tag{11}\]
where \(p\) represents the derivative operator. This is the general expression for all permanent magnet synchronous machines (PMSMs). In the case of salient pole PMSMs, both the self and mutual inductance terms are position dependent, and so the derivative operation needs to be carried out accordingly. For non-salient pole machines, the inductances may be assumed to be position independent.
\[\begin{split}\Delta V_{dLM}&=(\Delta L+\Delta M+K_{L}\cos(2\theta+ \phi_{L})-K_{M}\cos(2\theta+\phi_{M}))\big{(}pI_{d}+\omega_{e}I_{q}\big{)}+(K_ {L}\sin(2\theta+\phi_{L})-K_{M}\sin(2\theta+\phi_{M}))\big{(}-\omega_{e}I_{d} +pI_{q}\big{)}\\ \Delta V_{qLM}&=(K_{L}\sin(2\theta+\phi_{L})-K_{M}\sin(2\theta+ \phi_{M}))\big{(}pI_{d}+\omega_{e}I_{q}\big{)}+(\Delta L+\Delta M+K_{L}\cos(2 \theta+\phi_{L})+K_{M}\cos(2\theta+\phi_{M}))\big{(}-\omega_{e}I_{d}+pI_{q} \big{)}\end{split} \tag{12}\]
\[\begin{split} K_{L}=&\frac{1}{3}\sqrt{\Delta L_{a}^ {2}+\Delta L_{b}^{2}+\Delta L_{c}^{2}-\Delta L_{a}\Delta L_{b}-\Delta L_{a} \Delta L_{c}-\Delta L_{b}\Delta L_{c}}\\ \phi_{L}=\tan^{-1}\left(\frac{\sqrt{3}(-\Delta L_{b}+\Delta L_{ c})}{2\Delta L_{a}-\Delta L_{b}-\Delta L_{c}}\right)\end{split} \tag{13}\]
\[\begin{split} K_{M}=&\frac{2}{3}\sqrt{M_{ab}^{2}+M_{ bc}^{2}+M_{ca}^{2}-M_{ab}M_{ac}-M_{ab}M_{cb}-M_{ac}M_{cb}}\\ \phi_{M}=\tan^{-1}\left(\frac{\sqrt{3}(-M_{ab}+M_{ac})}{-M_{ab}- M_{ac}+2M_{cb}}\right)\end{split} \tag{14}\]
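The magnitude/phase pattern shared by (9), (10), (13) and (14) can be packaged in a small helper (our sketch, numpy assumed; (14) carries a factor \(\frac{2}{3}\) rather than \(\frac{1}{3}\)). Note that both the quadratic form under the square root and the \(\tan^{-1}\) argument are invariant when all three inputs are shifted by a common offset, which is why (14) can be written with the mutual inductances \(M_{xy}\) themselves rather than the deviations \(\Delta M_{xy}\):

```python
import numpy as np

def second_harmonic(x_a, x_b, x_c):
    """Magnitude K and phase phi of the second-harmonic coefficient for a
    parameter triple, following the 1/3-scaled pattern of eqs. (9)/(10)/(13)."""
    K = np.sqrt(x_a**2 + x_b**2 + x_c**2 - x_a*x_b - x_b*x_c - x_c*x_a) / 3.0
    phi = np.arctan2(np.sqrt(3.0) * (-x_b + x_c), 2.0 * x_a - x_b - x_c)
    return K, phi

K_L, phi_L = second_harmonic(12e-6, 0.0, 5e-6)   # Delta L_a,b,c [H]

# invariance under a common offset: M_xy and Delta M_xy = M_xy - M agree
M = np.array([1.10e-4, 1.00e-4, 1.05e-4])        # three mutual inductances [H]
assert np.allclose(second_harmonic(*M), second_harmonic(*(M - 1.0e-4)))
```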
A block diagram representation of the effect of inductance imbalance is shown in the figure below.
A comparison of the analytical prediction of inductance imbalance with a detailed simulation model having high accuracy for describing true machine behavior is shown in the figure below.
Figure 5: Block diagram representation of analytical model for inductance imbalance.
Note that the above derivations concerning inductance imbalance are only valid for non-salient pole PMSMs. While the derivations and results for modeling salient pole machines are not shown here, the idea presented above extends to them directly. As mentioned earlier, for salient pole machines, additional terms are introduced by the second-order position-dependent terms in the stationary frame self and mutual inductances, and therefore the derivative operator must be applied appropriately to correctly determine the desired inductance imbalance model for salient pole synchronous machines.
## Conclusions
This paper presented analytical models capturing the effects of imbalance for all the different parameters of PMSMs. These models are not commonly known and may be used to develop algorithms (that may be implemented at the manufacturing end of line or in the controller software for real-time operation) for the detection, identification, learning and mitigation of the negative effects of parameter imbalance in PMSMs.
Figure 6: Results illustrating accuracy of analytical model for inductance imbalance. |
2309.09301 | RenderIH: A Large-scale Synthetic Dataset for 3D Interacting Hand Pose
Estimation | The current interacting hand (IH) datasets are relatively simplistic in terms
of background and texture, with hand joints being annotated by a machine
annotator, which may result in inaccuracies, and the diversity of pose
distribution is limited. However, the variability of background, pose
distribution, and texture can greatly influence the generalization ability.
Therefore, we present a large-scale synthetic dataset RenderIH for interacting
hands with accurate and diverse pose annotations. The dataset contains 1M
photo-realistic images with varied backgrounds, perspectives, and hand
textures. To generate natural and diverse interacting poses, we propose a new
pose optimization algorithm. Additionally, for better pose estimation accuracy,
we introduce a transformer-based pose estimation network, TransHand, to
leverage the correlation between interacting hands and verify the effectiveness
of RenderIH in improving results. Our dataset is model-agnostic and can improve
more accuracy of any hand pose estimation method in comparison to other real or
synthetic datasets. Experiments have shown that pretraining on our synthetic
data can significantly decrease the error from 6.76mm to 5.79mm, and our
Transhand surpasses contemporary methods. Our dataset and code are available at
https://github.com/adwardlee/RenderIH. | Lijun Li, Linrui Tian, Xindi Zhang, Qi Wang, Bang Zhang, Mengyuan Liu, Chen Chen | 2023-09-17T15:30:58Z | http://arxiv.org/abs/2309.09301v3 | # RenderIH: A Large-scale Synthetic Dataset for 3D Interacting Hand Pose Estimation
###### Abstract
The current interacting hand (IH) datasets are relatively simplistic in terms of background and texture, with hand joints being annotated by a machine annotator, which may result in inaccuracies, and the diversity of pose distribution is limited. However, the variability of background, pose distribution, and texture can greatly influence the generalization ability. Therefore, we present a large-scale synthetic dataset, RenderIH, for interacting hands with accurate and diverse pose annotations. The dataset contains 1M photo-realistic images with varied backgrounds, perspectives, and hand textures. To generate natural and diverse interacting poses, we propose a new pose optimization algorithm. Additionally, for better pose estimation accuracy, we introduce a transformer-based pose estimation network, TransHand, to leverage the correlation between interacting hands and verify the effectiveness of RenderIH in improving results. Our dataset is model-agnostic and can further improve the accuracy of any hand pose estimation method in comparison to other real or synthetic datasets. Experiments have shown that pretraining on our synthetic data can significantly decrease the error from 6.76mm to 5.79mm, and our TransHand surpasses contemporary methods. Our dataset and code are available at [https://github.com/adwardlee/RenderIH](https://github.com/adwardlee/RenderIH).
## 1 Introduction
3D interacting hand (IH) pose estimation from a single RGB image is a key task for human action understanding and has many applications, such as human-computer interaction, augmented and virtual reality, and sign language recognition. However, obtaining 3D interacting hand pose annotations from real images is very challenging and time-consuming due to the severe self-occlusion problem. Some previous works [12, 18] have collected real hand interaction data using sophisticated multi-view camera systems and made manual annotations, but the amount of data is limited. Synthetic 3D annotation data has become increasingly popular among researchers because of its easy acquisition and accurate annotation [27, 22, 3, 7, 15, 24, 41]. However, there remain two main challenges: the validity of the generated 3D hand poses and the diversity and realism of the generated images. Therefore, in this paper, we present a high-fidelity synthetic dataset of 3D hand interaction poses for precise monocular hand pose estimation.
Firstly, ensuring the validity of the generated 3D interacting hand poses is a crucial challenge for a synthetic hand system. For example, the poses of Ego3d [22] are randomized, which means a significant portion of the data is not valid. To ensure effective hand interactions, the generated two-hand poses must be proximal to each other, which in turn increases the risk of hand interpenetration. Therefore, we design an optimization process that considers the constraints of hand attraction and anti-penetration at the same time, to ensure the proximity of the two interacting hands and prevent the occurrence
Figure 1: **Randomly selected samples from RenderIH dataset.** The rendered hands are realistic and varied, capturing a variety of poses, textures, backgrounds, and illuminations.
of hand penetration (Section 3.1). In addition, the plausibility of hand poses must also be considered. Hence, we introduce anatomic pose constraints and apply adversarial learning to ensure that the generated hand poses adhere to anatomical constraints and realism. Benefiting from pose optimization, our generated dataset contains a rich set of validated two-hand interaction poses, as shown in Figure 1.
Secondly, most existing 3D synthetic hand images lack diversity in terms of backgrounds, lighting, and texture conditions, which prevents them from capturing the complex distribution of real hand data [22, 3, 15]. Most existing datasets for hand gesture recognition, such as Ego3d [22], Obman [15], and MVHM [3], do not consider the quality and diversity of the images. For instance, Ego3d [22] uses the same texture as the MANO model [29], which is unrealistic and monotonous. In contrast, our rendering system introduces various textures, backgrounds, and lighting effects that can produce vivid and realistic synthetic hand images (see Section 3.2). By combining HDR background, dynamic lighting, and ray-tracing renderer, we obtain 1M high-quality gesture images (see Figure 1).
To assess the performance of our proposed dataset, we carried out comprehensive experiments on it. We demonstrate how much we can reduce the dependency on real data by using our synthetic dataset. Then we contrast our proposed RenderIH with other 3D hand datasets, such as H2O-3D [12] and Ego3d [22], by training a probing model for each of them and testing on a third-party dataset. Finally, we train a transformer-based network on a mixed dataset of RenderIH and InterHand2.6M (IH2.6M) and achieve state-of-the-art (SOTA) results on 3D interacting hand pose estimation. Our main contributions are as follows:
* We propose an optimization method to generate valid and natural hand-interacting poses that are tightly coupled and avoid interpenetration. For image generation, we design a high-quality image synthesis system that combines rich textures, backgrounds, and lighting, which ensures the diversity and realism of the generated images.
* Based on our data generation system, we construct a large-scale high-fidelity synthetic interacting hand dataset called **RenderIH**, which contains 1 million synthetic images and 100K interacting hand poses. To the best of our knowledge, this is the largest and most high-quality synthetic interacting dataset so far.
* We conduct extensive experiments to verify the effectiveness of our proposed dataset-RenderIH. The results show that with the help of our synthetic dataset, using only 10% of real data can achieve comparable accuracy as the models trained on real hand data. We also propose a transformer-based network that leverages our dataset and achieves SOTA results.
## 2 Related work
### Realistic hand dataset
Establishing a realistic hand dataset is a tedious and challenging procedure; most realistic data are collected by different sensors [26, 13, 11, 42, 40, 30, 20], including multiple cameras and depth sensors. The STB dataset [40] obtained 3D annotations of a single hand (SH) via 2D manual labels and depth data. Since manual annotations are time-consuming [26], some researchers [30, 26, 13, 42] utilized semi-automatic methods to make annotations. Moon et al. [26] captured hand interactions with hundreds of cameras. They manually annotated the 2D keypoints of both hands on a few images and utilized a machine detector to help annotate the rest of the data. Other researchers [11, 1, 31] proposed automatic annotation methods; Hampali et al. [11] collected hand-object (HO) interactions and jointly optimized 2D keypoints on multiple RGB-D images to estimate 3D hand poses. Some researchers [8, 38, 9] obtain the 3D annotations of hands via special equipment.
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline
**Dataset** & **Type** & **Data size** & **MT** & **AP** & **background** & **illumination** & **Hand type** & **IH Size** \\ \hline NYU [31] & real & 243K & - & ✗ & lab & uniform & SH & - \\ STB [40] & real & 36K & - & ✗ & lab & uniform & SH & - \\ H2O-3D [12] & real & 76K & - & ✗ & lab & uniform & HO & - \\ H2O [38] & real & 571K & - & ✗ & indoor scenes & uniform & HO & - \\ MVHM [3] & synthetic & 320K & ✗ & ✗ & static scenes & uniform & SH & - \\ ObMan [15] & synthetic & 147K & ✓ & ✗ & static scenes & uniform & HO & - \\ DARTset [7] & synthetic & 800K & ✓ & ✗ & static scenes & manual & SH & - \\ \hline IH2.6M [26] & real & 2.6M & - & ✗ & lab & uniform & **IH** & 628K \\ Ego3d [22] & synthetic & 50K & ✗ & ✗ & static scenes & random & **IH** & 40K \\ \hline
**RenderIH (Ours)** & synthetic & 1M & ✓ & ✓ & HDR scenes & **dynamic** & **IH** & **1M** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Comparison of the related hand datasets. “MT” is short for multi-textures and means whether the hand models in the dataset are assigned with diverse textures, AP is short for anti-penetration, “Hand type” means which interaction type the dataset focus on (SH-single hand, HO-hand to object, IH-hand to hand), and “IH Size” means the proportion of IH poses. “HDR” is short for High Dynamic Range. Static scenes refer to the use of randomly selected images as the background.**
Ye et al. [38] captured hand poses via multiple joint trackers. Due to the limitations of the data collection scene, most realistic datasets are captured in simple scenarios, e.g. a lab [40, 30, 11] or a green screen [42, 26, 38, 1, 32]. Most realistic datasets focus on SH or HO interactions, and very few papers [26, 32] collect interacting hand data.
### Synthetic hand dataset
To obtain precise annotations and increase dataset diversity, several papers [27, 22, 3, 7, 15, 24, 41] established synthetic hand datasets by applying multiple backgrounds [41] or different hand textures [7]. Most datasets [3, 7, 27, 41] focus on SH pose data. DARTset [7] introduced a shaped wrist and rendered hand images with different skins and accessories, but the dataset did not contain IH. To simulate HO interactions, Hasson et al. [15] utilized a physics engine [25] to generate object manipulation poses, but their rendered images are not photo-realistic. Although some datasets [22, 24] provide poses of both hands, the rendered images are not natural enough and lack diversity. The poses of Ego3d [22] were randomized, which leads to severe interpenetration between hands and relatively unnatural poses. Based entirely on the pose annotations of IH2.6M [26], AJH [24] produced a synthetic interacting hand dataset, but only hand masks were created and other annotations were missing.
We summarize some representative hand datasets and compare them to ours in Table 1. Most datasets focus on SH or HO interactions, and they are, to varying extents, deficient in handling mesh collision, maintaining high-quality annotations, and providing pose diversity.
## 3 RenderIH dataset
One of the main contributions of our paper is the interacting hand pose optimization method that can generate valid and natural poses. In our paper, **valid** poses are free of hand penetration and conform to the anatomic constraints outlined in Table 2. **Natural** poses not only conform to the anatomy but also occur frequently in daily life. We uniformly combine generated poses with a variety of hand textures, high dynamic range (HDR) backgrounds, and camera views. All collections are sampled independently to create images as diverse as possible. In Section 3.1, we introduce our new hand pose generation algorithm. Section 3.2 then describes how the synthetic images are rendered. In Section 3.3, we briefly introduce some statistics about our RenderIH dataset.
### Interacting hand pose optimization
**Hand model**. Based on the widely used parametric hand model MANO [29], Yang et al. [37] proposed A-MANO, which assigns a twist-splay-bend Cartesian coordinate frame to each joint along the kinematic tree and fits the natural hand better. Therefore, we adopt A-MANO to make our optimization more biologically plausible.
**Initial pose generation**. To produce massive valid and natural IH interaction poses, we derive raw poses from IH2.6M [26] and then augment them by assigning random rotation offsets to hand joints. The augmented poses are shown in Figure 3. After augmentation, the rotation of the \(j^{th}\) finger joint can be expressed as:
\[\{R_{ji}\in SO(3)\}_{i=1}^{I}=\{R_{j}R_{b}(\theta_{i}^{b})R_{s}(\theta_{i}^{s })\}_{i=1}^{I}, \tag{1}\]
where \(I\) is the number of augmentations, \(R_{b/s}(\theta)\) denotes the rotation about the bend/splay axis, and the angle offsets satisfy \(\theta^{b}\in[-90^{\circ},90^{\circ}]\) and \(\theta^{s}\in[-30^{\circ},30^{\circ}]\). \(SO(3)\) is the group of 3D rotations, and \(\theta^{s}=0\) when the joint is not a finger root joint. To avoid abnormal gestures, each augmented joint is restricted according to Table 2. As the augmented poses are totally random, most of them suffer from serious mesh penetration and unnatural gestures, so it is necessary to optimize the poses.
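A minimal sketch of this augmentation step follows (our own addition; scipy is an assumed dependency, and the bend/splay axes below are placeholders for the per-joint A-MANO frames):

```python
import numpy as np
from scipy.spatial.transform import Rotation as Rot

def augment_joint(R_j, is_root, rng):
    """One random augmentation of a finger-joint rotation per eq. (1).
    Bend offsets are drawn from [-90, 90] deg, splay offsets from
    [-30, 30] deg (root joints only; otherwise theta_s = 0)."""
    theta_b = rng.uniform(-np.pi / 2, np.pi / 2)
    theta_s = rng.uniform(-np.pi / 6, np.pi / 6) if is_root else 0.0
    R_b = Rot.from_rotvec(theta_b * np.array([0.0, 0.0, 1.0]))  # bend axis
    R_s = Rot.from_rotvec(theta_s * np.array([0.0, 1.0, 0.0]))  # splay axis
    return R_j * R_b * R_s  # R_j R_b(theta_b) R_s(theta_s), eq. (1)

rng = np.random.default_rng(0)
R_aug = augment_joint(Rot.identity(), is_root=True, rng=rng)
print(R_aug.as_matrix())
```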
**Anti-penetration**. Inspired by [17], we adapt the multi-person interpenetration loss to interacting hands and propose to divide the hand region into 16 parts. Let \(\Omega\) be the modified Signed Distance Field (SDF) [14] for each hand, defined on a voxel grid of dimensions \(N\times N\times N\) as follows:
\[\Omega(x,y,z)=-\min(\mathrm{SDF}(x,y,z),0), \tag{2}\]
Figure 3: Visualization for the effect of different components in optimization.
Figure 2: The distribution of anchors and hand subdivision. Purple points denote the anchors.
where \(\Omega\) is positive within a hand, proportional to the distance from the surface, and simply zero outside. The penetration loss for a single hand is calculated as follows:
\[L_{p}^{s}=\sum_{v\in\{V\}}\Omega_{\hat{s}}(v). \tag{3}\]
\(V\) denotes the hand vertices, \(s\) is the side of the hand, and \(\hat{s}\) is the side of the other hand. Since the hand is highly articulated, with a complex pose and shape, a single SDF for the whole hand mesh is not accurate enough. We therefore divide the hand into 16 parts based on its joint positions and compute a separate \(\Omega\) function for each hand submesh, following the hand subdivision in Figure 2. After doing so for each submesh, the penetration loss is defined as:
\[L_{p}^{s}=\sum_{i=1}^{N}\sum_{j=1}^{N}(\sum_{v\in\{M_{sj}\}}\Omega_{\hat{s}i}( v)), \tag{4}\]
where \(M_{sj}\) denotes the \(j^{th}\) submesh of the hand. The total loss of this part is \(L_{p}=L_{p}^{right}+L_{p}^{left}\). A detailed visual comparison between the basic SDF loss and our penetration loss is shown in the supplementary material (SM).
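A minimal sketch of how Eqs. (2)-(4) could be evaluated is given below, assuming the 16 per-part SDF voxel grids of the other hand have already been precomputed. The nearest-voxel lookup keeps the sketch short; all names are hypothetical.

```python
import torch

def omega_lookup(sdf_grid, verts, bbox_min, voxel_size):
    """Omega = -min(SDF, 0) of Eq. (2) via nearest-voxel lookup.
    sdf_grid: (N, N, N) signed distances of one submesh of the other hand.
    A real implementation would use trilinear interpolation so that
    gradients flow to the vertex positions."""
    idx = ((verts - bbox_min) / voxel_size).long()
    idx = idx.clamp(0, sdf_grid.shape[0] - 1)
    sdf = sdf_grid[idx[:, 0], idx[:, 1], idx[:, 2]]
    return (-sdf).clamp(min=0)  # positive inside the part, zero outside

def penetration_loss(verts_s, sdf_grids_other, bbox_min, voxel_size):
    """Eq. (4) for one side s, summing over all vertices for brevity
    (the paper additionally groups the vertices by submesh)."""
    loss = verts_s.new_zeros(())
    for grid in sdf_grids_other:  # the 16 per-part SDFs of the other hand
        loss = loss + omega_lookup(grid, verts_s, bbox_min, voxel_size).sum()
    return loss
```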
**Interhand attraction**. When interacting hands are in close contact, severe occlusion may occur, making annotation difficult; in addition, the available close-contact data are limited. To address this problem, we encourage the hands to remain in tight contact.
To create contact between the hands, simply pulling the closest vertices together would suffice. However, to reduce the time complexity of the optimization, we use anchors to guide the position and pose of both hands. As shown in Figure 2, to downsample the hand vertices into anchors, we traverse IH2.6M and measure the contact frequency of each vertex with the other hand. We select the vertices with the highest contact frequency as the initial anchors and then sample the remaining vertices sequentially, skipping the 2-hop neighbors of already selected anchors and continuing with the yet-to-be-selected ones. In this way we obtain 108 anchors.
If the anchor \(a_{j}^{l}\) on the left hand and the anchor \(a_{i}^{r}\) on the right hand are mutually closest, they establish an anchor pair, and the loss of an anchor pair is defined as:
\[L_{ij}^{A}=\frac{1}{2}k_{ij}\Delta{d_{ij}}^{2}, \tag{5}\]
where \(\Delta{d_{ij}}=||a_{i}^{r}-a_{j}^{l}||_{2}\) and \(k_{ij}=0.5\cos(\frac{\pi}{s}\Delta{\bar{d}_{ij}})+0.5\), in which \(\Delta{\bar{d}_{ij}}\) is the initial distance between the anchor pair. By this definition, anchors that are initially close tend to remain in contact. Specifically, the factor \(s\) is set to \(0.02\,m\), and we set \(k_{ij}=0\) if \(\Delta{\bar{d}_{ij}}>s\). The anchor pairs and \(k_{ij}\) are rebuilt during the optimization to adapt to the dynamically changing IH poses.
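The anchor-pair attraction of Eq. (5) can be sketched as follows: each right-hand anchor is paired with its closest left-hand anchor and the spring stiffness is derived from the initial pair distance. A minimal PyTorch sketch with hypothetical names:

```python
import torch

def anchor_attraction_loss(anchors_r, anchors_l, d0, s=0.02):
    """Eq. (5): spring-like attraction between closest anchor pairs.
    anchors_r/l: (A_r, 3)/(A_l, 3) positions; d0: (A_r,) initial pair distances."""
    dists = torch.cdist(anchors_r, anchors_l)        # (A_r, A_l) pairwise distances
    d, _ = dists.min(dim=1)                          # closest left anchor per right anchor
    k = 0.5 * torch.cos(torch.pi / s * d0) + 0.5     # stiffness from the initial distance
    k = torch.where(d0 > s, torch.zeros_like(k), k)  # initially distant pairs are dropped
    return (0.5 * k * d ** 2).sum()
```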
However, these constraints alone cannot keep interacting poses with random joint angles valid, so we further introduce an anatomic optimization.
**Anatomic Optimization.** A finger comprises several joints, namely the Carpometacarpal joint (CMC), the Metacarpophalangeal joint (MP), and the Interphalangeal joint (IP). Following the coordinate systems of A-MANO, each finger has three joints, which we denote as the root (CMC of the thumb, MP of the others), middle (MP of the thumb, Proximal IP of the others), and end joint (IP of the thumb, Distal IP of the others). Each of them theoretically has 3 DOF. We define the hand pose in Figure 2 as the T-pose, in which all rotation angles are zero. The constraints are defined as follows:
* **Available rotation directions.** The middle and end joints can only rotate by \(\theta_{i}^{b}\) around the B (Bend) axis, while the root joint can additionally rotate by \(\theta_{i}^{s}\) around the S (Splay) axis. We always keep \(\theta_{i}^{t}=0\) around the T (Twist) axis.
* **Angle limitations.** According to hand kinematics [10, 19], the joint rotation limitations are presented in Table 2.
The anatomic optimization objective for each hand is defined as:
\[L_{a}=\sum_{i=1}^{15}\sum_{a\in\{b,s,t\}}(\beta(\theta_{i}^{a}))^{2}, \tag{6}\]
where \(\beta(\theta_{i}^{a})=\max(\theta_{i}^{a}-\hat{\theta}_{i}^{a},0)+\min(\theta_{i}^{a}-\check{\theta}_{i}^{a},0)\) is the deviation of the rotation angle from its range, and \(\hat{\theta}_{i}^{a}\)/\(\check{\theta}_{i}^{a}\) is the maximum/minimum value of the range of \(\theta_{i}^{a}\).
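The anatomic penalty of Eq. (6) reduces to a pair of clamps; a minimal sketch with hypothetical input conventions:

```python
import torch

def anatomic_loss(theta, theta_min, theta_max):
    """Eq. (6): squared deviation of each angle from its anatomic range
    (Table 2). theta, theta_min, theta_max: (15, 3) bend/splay/twist
    angles and their per-joint bounds for one hand."""
    beta = (theta - theta_max).clamp(min=0) + (theta - theta_min).clamp(max=0)
    return (beta ** 2).sum()
```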
**Natural discriminator.** After the anatomic optimization, the poses become valid. However, as shown in Figure 3(e), some optimized poses are still not natural enough. To obtain natural poses, we further employ a discriminator \(\mathcal{D}\), whose detailed structure is illustrated in Figure 4. The single-hand pose \(\Theta\) is given as input to the multi-layer discriminator, and the output layer predicts a value in \([0,1]\) representing the probability that the pose is natural. The objective for \(\mathcal{D}\) is:
\[L_{\mathcal{D}}=\mathbb{E}_{\Theta\sim P_{R}}[(\mathcal{D}(\Theta)-1)^{2}]+ \mathbb{E}_{\Theta\sim P_{G}}[\mathcal{D}(\Theta)^{2}], \tag{7}\]
where \(P_{R}\) represents a hand pose from real datasets, such as IH2.6M [26] and Freihand [42], and \(P_{G}\) is a generated pose. The adversarial loss that is backpropagated to the pose optimization is defined as:
\begin{table}
\begin{tabular}{c|c c c} \hline finger\(\backslash\)joint & root (B,S) & middle (B) & end (B) \\ \hline thumb & \([-20,40],[-30,30]\) & \([-8,50]\) & \([-10,100]\) \\ index & \([-25,70],[-25,15]\) & \([-4,110]\) & \([-8,90]\) \\ middle & \([-25,80],[-15,15]\) & \([-7,100]\) & \([-8,90]\) \\ ring & \([-25,70],[-25,15]\) & \([-10,100]\) & \([-8,90]\) \\ pinky & \([-22,70],[-20,30]\) & \([-8,90]\) & \([-8,90]\) \\ \hline \end{tabular}
\end{table}
Table 2: **Joint rotation limitations.** The values are in degrees. ’B’/’S’ denotes whether the joint can bend/splay.
\[L_{adv}=\mathbb{E}_{\Theta\sim P_{G}}[(\mathcal{D}(\Theta)-1)^{2}]. \tag{8}\]
The discriminator is pre-trained before the optimization. We extract 63K natural single-hand poses from Freihand [42], DexYCB [2], and IH2.6M [26], whose "natural" probabilities \(p_{n}\) are labeled as 1. To obtain unnatural poses, we follow the method in "Initial pose generation" and randomly add offsets to the poses, computing their probabilities according to the offsets (the higher the offset, the closer \(p_{n}\) is to 0). The qualitative and quantitative improvements brought by \(\mathcal{D}\) can be seen in the SM. Since the standard of naturalness may vary from person to person, we also conducted a user study to confirm the discriminator's effect in the SM.
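Eqs. (7) and (8) form a least-squares GAN objective; a minimal sketch, where `D` is assumed to be any network mapping a pose vector to a scalar in \([0,1]\):

```python
import torch

def discriminator_loss(D, real_poses, fake_poses):
    """Eq. (7): real poses are pushed towards 1, generated poses towards 0."""
    return ((D(real_poses) - 1) ** 2).mean() + (D(fake_poses) ** 2).mean()

def adversarial_loss(D, fake_poses):
    """Eq. (8): the term backpropagated to the pose parameters,
    pulling generated poses towards the "natural" label."""
    return ((D(fake_poses) - 1) ** 2).mean()
```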
**Poses Optimization.** In the IH optimization, each hand has 15 joint rotations \(\Theta=\{R_{i}\in SO(3)\}_{i=1}^{15}\), a hand root rotation \(R_{r}\in SO(3)\), and a hand root translation \(T_{r}\in\mathbb{R}^{3}\). We take \(\psi=\{\Theta,T_{r}\}\) as the optimization parameters, and the total IH loss is:
\[\underset{\psi^{r},\psi^{l}}{argmin}(w_{1}\sum_{i=1}^{A_{r}}\sum_{j=1}^{A_{l}}L_{ij}^{A}+w_{2}L_{a}+w_{3}L_{adv}+w_{4}L_{p}), \tag{9}\]
where \(A_{r}\)/\(A_{l}\) is the number of anchors of the right/left hand, \(L_{a}=L_{a}^{r}+L_{a}^{l}\), and \(w_{*}\) are the weight hyperparameters. A minimal sketch of this optimization loop is given below.
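The overall optimization of Eq. (9) is a standard first-order loop; a minimal, self-contained sketch, assuming a `total_loss` callable that combines the four weighted terms above (the toy quadratic in the usage line only stands in for the real loss):

```python
import torch

def optimize_pose(total_loss, psi, n_iters=215, lr=0.01):
    """Minimize Eq. (9) over psi = {joint rotations, root translations}
    of both hands with Adam; `total_loss` is assumed to combine the
    four weighted terms defined above."""
    opt = torch.optim.Adam(psi, lr=lr)
    for _ in range(n_iters):
        loss = total_loss(psi)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return psi

# toy usage: minimize a stand-in quadratic "loss" over 6 root parameters
psi = [torch.zeros(6, requires_grad=True)]
optimize_pose(lambda p: (p[0] ** 2).sum(), psi)
```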
### Rendering
Our dataset offers various benefits, including high-resolution hand textures that create a more natural appearance. Additionally, we simulate natural lighting and environments to address limited diversity in studio settings. Furthermore, our dataset covers a wide range of poses and camera positions, bridging the gap between real-world applications and synthetic data.
**Texture.** To enhance the variety of skin textures we present a broad selection of hues as illustrated in Figure 5. Color tones include white, light-skinned European, dark-skinned European, Mediterranean or olive, yellow, dark brown, and black. A total of 30 textures are available. In addition, random skin tone parameters can be superimposed on these base skin tones in the shaders to adjust brightness, contrast, and more. Apart from that, these textures also depict wrinkles, bones, veins, and hand hairs to cope with differences in gender, ethnicity, and age.
**Lighting and background.** It is widely accepted that high-quality synthetic data should resemble real-world scenes as much as possible. For instance, the authors of [22] mixed their synthetic hand images with diverse real-world background photographs when creating IH synthetic data. However, simply pasting the rendered hands onto background images looks unnatural due to differences in lighting conditions and light angles. Since creating a large number of varied synthetic 3D background models is time-consuming, we composite the synthetic hands with panoramic images of real-world scenery. We collected 300 high-dynamic-range (HDR) panoramic photographs of realistic indoor and outdoor scenes with lighting appropriate for rendering. They enable our hand models to blend seamlessly into diverse settings, resulting in highly photorealistic rendered scenes (see Figure 6).
**Camera Settings.** We define a spherical camera arrangement that covers a wide range of viewpoints, enhancing the generalization of models trained on our data to different viewpoints. The center of the two-hand model is first computed and placed at the center of the world, and the camera tracks are placed around this center with every camera pointing at it. Figure 7 shows the layout of our simulation environment. For each pose, we define four 360-degree circular tracks, which can be divided evenly by the number of samples to obtain dense or sparse viewpoints; for sparse sampling, 10 viewpoints are selected per track. A sketch of this sampling is given below.
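The following sketch samples camera positions on one circular track; the radius and the four track elevations are illustrative assumptions, not the dataset's exact values:

```python
import numpy as np

def circular_track_cameras(center, radius, elevation, n_views=10):
    """Sample n_views camera positions evenly on one 360-degree circular
    track around the hand center; every camera looks at the center."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    offsets = np.stack([np.cos(angles) * np.cos(elevation),
                        np.sin(angles) * np.cos(elevation),
                        np.full_like(angles, np.sin(elevation))], axis=1)
    return center + radius * offsets  # (n_views, 3) positions

center = np.zeros(3)
tracks = [circular_track_cameras(center, 0.5, e)
          for e in np.deg2rad([-45.0, -15.0, 15.0, 45.0])]  # four tracks
```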
**Render quality.** Our major objective is to improve the photorealism of the synthetic dataset. Therefore, we render the scene in Blender with the ray-tracing rendering engine Cycles. When creating the hand mesh, we used custom shader settings to adjust the base color, subsurface, and roughness to make the skin more realistic. The resolution of the rendered images is 512\(\times\)334 pixels and the color depth is 8 bits.
Figure 4: The architecture of the discriminator.
Figure 5: Same hand with different hand textures.
Figure 6: Same hand under diverse illumination.
Figure 7: Different viewpoints from the camera track.
### Analysis of RenderIH dataset
For a comparison of distribution diversity, we project the hand poses of IH2.6M and RenderIH into an embedding space using t-SNE [34]. Figure 9 clearly shows that our data has a broader pose distribution than IH2.6M. Examples of synthetic images are depicted in Figure 1, and a rendering video can be found in the SM, together with more visualizations of the different optimization modules and further statistics.
## 4 TransHand
We propose a transformer-based network, TransHand, for 3D interacting hand pose estimation and conduct extensive experiments on it.
As transformer blocks are effective in modeling global interactions among mesh vertices and body joints [23, 35], we propose a transformer-based IH network. Our system contains two parts: the encoder and the decoder. Given an image of size 256\(\times\)256, the encoder outputs a global feature vector \(G_{F}\) and intermediate feature maps \(\{F_{i},i=1,2,3\}\), where \(i\) indicates the feature level. We then map \(G_{F}\) to the left vertex feature \(L_{F}\) and the right vertex feature \(R_{F}\) using fully connected layers. Since the global feature does not contain fine-grained local details, we concatenate the different-level features \(F_{i}\) with the hand vertex features as input to the decoder blocks.
As shown in Figure 8, the decoder consists of 3 identical blocks. Each block is made up of two transformer encoders, each composed of a multi-head attention module and an MLP layer. As there is usually mutual occlusion in IH, it is natural to incorporate the other hand's feature to improve the estimation precision. Inspired by Slowfast [6], we use a symmetric structure that incorporates the other hand's feature by addition, which is the lateral connection in the Correlation Encoder (CE) shown in Figure 8. Each block has three inputs: the left vertex feature, the right vertex feature, and the image feature. The blocks gradually upsample the coarse mesh to a refined mesh and finally to the original resolution of 778 vertices.
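A minimal sketch of one CE block follows; the hidden width, head count, and the exact way the lateral connection and image feature enter are assumptions for illustration, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class CorrelationEncoder(nn.Module):
    """One decoder block: a transformer encoder per hand, with a lateral
    connection that adds the other hand's vertex feature and the image feature."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.enc_l = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.enc_r = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, feat_l, feat_r, img_feat):
        out_l = self.enc_l(feat_l + feat_r + img_feat)  # lateral connection by addition
        out_r = self.enc_r(feat_r + feat_l + img_feat)
        return out_l, out_r

block = CorrelationEncoder()
f = torch.randn(2, 98, 64)  # (batch, coarse vertices, channels), illustrative sizes
out_l, out_r = block(f, f, f)
```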
**Loss Function.** For training, we apply an \(L_{1}\) loss to the 3D mesh vertices and hand joints, and an \(L_{1}\) loss to their 2D projections:
\[L_{joint}=\sum_{s=0}^{1}\sum_{i=0}^{M-1}\sum_{d\in\{3D,2D\}} \|J_{s,i}^{d}-J_{s,i}^{d,GT}\|_{1}, \tag{10}\] \[L_{mesh}=\sum_{s=0}^{1}\sum_{i=0}^{N-1}\sum_{d\in\{3D,2D\}}\|V_ {s,i}^{d}-V_{s,i}^{d,GT}\|_{1}, \tag{11}\]
where \(s\) represents the hand side, \(i\) indexes the joints or vertices, and \(d\) denotes whether the computation is in 3D or 2D. To guarantee the geometric continuity of the predicted vertices, a smoothness loss is applied, which regularizes the consistency of the normal directions between the predicted and the ground-truth mesh:
\[L_{smooth}=\sum_{s=0}^{1}\sum_{f=0}^{F-1}\sum_{j=0}^{2}\|e_{f,j,s} \cdot n_{f,s}^{GT}\|_{1}, \tag{12}\]
Figure 8: Network architecture. We use the global features extracted by the encoder to predict the left-hand features and right-hand features. After that, our model gradually regresses the hand vertices from 3 identical correlation encoder blocks by fusing multi-resolution image features with hand features. Each correlation encoder contains two transformer encoders and lateral connection from the other hand feature.
Figure 9: TSNE visualization for IH poses distribution. Our data not only contain the raw poses of IH2.6M but also fill the vacancy by augmentation, resulting in a broader distribution.
where \(f\) is the face index of the hand mesh, \(j\) indexes the edges of face \(f\), and \(n^{GT}\) is the ground-truth normal vector of this face.
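Eq. (12) can be sketched directly over the face list; a minimal implementation with hypothetical input conventions:

```python
import torch

def smoothness_loss(verts_pred, faces, normals_gt):
    """Eq. (12): every edge of a predicted face should be orthogonal to the
    ground-truth normal of that face. faces: (F, 3) vertex indices;
    normals_gt: (F, 3) unit GT face normals."""
    v0, v1, v2 = (verts_pred[faces[:, i]] for i in range(3))
    edges = torch.stack([v1 - v0, v2 - v1, v0 - v2], dim=1)  # (F, 3, 3): e_{f,j}
    dots = (edges * normals_gt[:, None, :]).sum(dim=-1)      # e_{f,j} . n_f^{GT}
    return dots.abs().sum()
```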
## 5 Experiments
### Experiment setup
**Dataset**. IH2.6M [26] is the largest real dataset with interacting hands (IH), and most of our experiments are conducted on it. As we focus only on IH, we select only the IH data with both human and machine annotations. After discarding single-hand samples and invalid labels, we obtain 366K training samples and 261K testing samples. The Tzionas dataset [33] is a small IH dataset, which we use only to evaluate the generalization ability of models trained on different datasets. H2O-3D [12] is a real dataset with 3D pose annotations for two hands and an object during interactions; it contains 60K samples. Ego3d [22] provides 50K synthetic images and corresponding labels of two hands, of which 40K samples are IH with randomized poses.
**Implementation details**. The input images are resized to \(256\times 256\) and fed to the TransHand encoder to generate the global feature and the image feature maps. ResNet50 [16] is selected as the encoder. For all experiments, the networks are implemented in Pytorch [28]. We train all models with IH images using the Adam optimizer. The initial learning rate is \(1e^{-4}\) and the batch size is 64. All experiments are performed on 1 NVIDIA Ampere A100 GPU. To demonstrate the usefulness of our RenderIH, we train three mainstream IH pose estimation methods on IH2.6M and on a combination of IH2.6M and RenderIH: InterNet [26]1, DIGIT [4]1, and the state-of-the-art method IntagHand [21]2.
Footnote 1: Since InterNet and DIGIT are trained on the IH subset of IH2.6M v0.0, we train them on v1.0 to make fair comparisons.
Footnote 2: All the training codes have been open-sourced by the authors.
**Evaluation metrics**. To evaluate these methods, we report results using two standard metrics: Mean Per Joint Position Error (MPJPE) and Mean Per Joint Position Error with Procrustes Alignment (PAMPJPE), in millimeters (mm). Additionally, to ensure a fair comparison with prior research [21, 39], we select the MCP joint of the middle finger as the root joint and also report SMPJPE, which scales the prediction to the ground-truth bone length. To evaluate the accuracy of the estimated relative position between the left- and right-hand roots during interaction, we use the mean relative-root position error (MRRPE) [4] and the hand-to-hand contact deviation (CDev) [5]. More results, with the wrist as root joint for future comparison, are presented in the SM.
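For reference, the two standard metrics can be sketched as follows; the root joint is assumed to be at index 0 here, and the reflection correction of the Procrustes rotation is omitted for brevity:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error after root alignment; pred/gt: (J, 3)."""
    pred, gt = pred - pred[:1], gt - gt[:1]
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pampjpe(pred, gt):
    """MPJPE after similarity Procrustes alignment (rotation/translation/scale)."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    U, S, Vt = np.linalg.svd(p.T @ g)  # Kabsch: H = p^T g
    R = Vt.T @ U.T                     # optimal rotation (det fix omitted)
    s = S.sum() / (p ** 2).sum()       # optimal isotropic scale
    aligned = s * p @ R.T + mu_g
    return np.linalg.norm(aligned - gt, axis=-1).mean()
```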
### Results and analysis
**User study for naturalness**. Since perceptions of "natural" may differ from person to person, we conducted an experiment to verify the discriminator's effect. We invited 20 participants with and without a technical computer background, aged from 20 to 60, with a male-to-female ratio of approximately 2:1. Each participant was shown 120 pictures of IH poses (30 augmented poses, 30 optimized poses, 30 optimized without the discriminator, and 30 raw poses from IH2.6M) and asked to judge whether the shown poses were natural; we then counted the NR (natural rate) of each category. The results are presented in Table 3. The "Raw poses" are taken from IH2.6M [26]; they are performed by humans and have a high NR, although serious mesh penetration caused by annotation mistakes can make it hard for the testers to judge them as natural. The "Augmented poses" are obtained from the raw poses by assigning random rotation offsets to the hand joints; they respect the joint limits but are random, some of them exhibit mesh penetration, and the NR of this category is low. Optimizing the augmented poses without \(\mathcal{D}\) resolves the penetration and yields valid poses, but they are not natural enough. It is clear that \(\mathcal{D}\) improves the naturalness of the poses.
Figure 10: Qualitative results of our method on IH2.6M test set.
\begin{table}
\begin{tabular}{c c c c} \hline \hline With \(\mathcal{D}\) & No \(\mathcal{D}\) & Raw poses & Augmented poses \\ \hline
81.25\% & 54.68\% & 90.82\% & 32.92\% \\ \hline \hline \end{tabular}
\end{table}
Table 3: User study on natural rate. The higher the number, the more natural it is.
**Effectiveness of the correlation encoder.** Table 4 shows the performance of models with and without the CE. The baseline method fuses the left-hand and right-hand features with the image feature independently through a transformer encoder. The results indicate that the CE improves performance by fusing the correlation between the hands. This model is used as the default in the subsequent experiments.
**Mixing synthetic with real images.** To demonstrate the usefulness of RenderIH, we test InterNet, DIGIT, IntagHand, and our TransHand on the IH2.6M test set, trained with or without the full 1M images of the RenderIH dataset. As shown in Table 5, RenderIH helps to further reduce the estimation error; for example, the error of the SOTA IntagHand method drops from 10.9mm to 9.72mm. The results prove that RenderIH is strongly complementary to real data. Moreover, when hand-hand occlusion is severe, training with our synthetic dataset handles those cases better than training on IH2.6M alone, as shown in Figure 11. To quantify the impact of interaction and occlusion, we use the IoU between the left- and right-hand ground-truth masks, following DIGIT [4]: a higher IoU implies more occlusion, and the half-length of the error bars corresponds to 0.5 times the MPJPE standard deviation. With minimal occlusion, the MPJPE of the mixed-image model is similar to that of the IH2.6M-only model; as occlusion increases, the mixed-image model reduces the MPJPE more substantially. This highlights the value of our RenderIH data.
**Synthetic data size influence.** When training on various combinations of synthetic data and the IH2.6M training set, the error first declines markedly and then decreases only gradually once more than 900K synthetic images are incorporated, as illustrated in Figure 12. This trend indicates that beyond a certain volume of synthetic data, the benefit of adding more becomes marginal. To balance training cost and accuracy, we select 1M as the size of RenderIH.
**Training strategy comparison.** In this section we study the training strategy for synthetic and real data. As shown in Figure 13, both mixed training and pretraining on synthetic data lead to significantly higher accuracy, and pretraining on synthetic data followed by fine-tuning on real images yields better precision than dataset mixing.
**Real data size influence.** We study how the amount of real data affects the estimation precision in Figure 13.
\begin{table}
\begin{tabular}{c|c} \hline \hline method\(\backslash\)metric & PAMPJPE/MPJPE/SMPJPE (mm)\(\downarrow\) \\ \hline Baseline & 7.32/11.12/10.82 \\ Baseline+CE & 6.76/10.6/9.63 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Effect of the correlation encoder (CE) on the IH2.6M test set (PAMPJPE/MPJPE/SMPJPE (mm)\(\downarrow\)). It is shown that CE helps reduce the error by a clear margin.
Figure 11: Comparing MPJPE by the degree of occlusion on the Tzionas dataset. The IoU between ground-truth left/right masks measures the degree of interaction. The left (yellow) and right (blue) hand masks provide interaction examples in each IoU range.
Figure 12: Results of training on IH2.6M combined with different numbers of RenderIH images, in MPJPE (mm)\(\downarrow\).
\begin{table}
\begin{tabular}{c|c c} \hline \hline method\(\backslash\)train set & IH2.6M & Mixed \\ \hline InterNet & 18.28 & 17.19 \\ DIGIT & 15.48 & 14.28 \\ IntagHand & 10.9 & 9.72 \\ \hline Ours & 10.6 & 10.06 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Comparison between models trained on IH2.6M and on a mixture of RenderIH and IH2.6M, in MPJPE (mm)\(\downarrow\). The methods are reproduced using their official training code.
Figure 13: Comparison between training with RenderIH only, with part of IH2.6M only, with the combination of the two, and with pretraining on RenderIH followed by finetuning on IH2.6M.
We use all the samples from RenderIH in this section. For the real data, we sample between 3663 and 366358 examples, i.e., 1%, 5%, 10%, 30%, 50%, 70%, and 100% of the real data. Although training only on RenderIH performs poorly, the MPJPE is greatly reduced from 27.73mm to 12.6mm by finetuning on only 1% of the real data. With finetuning on 10% of the real data, the MPJPE is almost the same as when training on the full real data, and with finetuning on all real data the error is 0.96mm lower than training only on all real data.
**Comparison with the H2O-3D dataset, the Ego3d dataset, and a RenderIH subset.** In Table 7 and Table 8, we compare the generalization ability of these datasets using the same number of 40K samples. In Table 7, the model pretrained on RenderIH reaches a lower error than the models pretrained on H2O-3D and Ego3d, which shows that our synthetic data is realistic and its knowledge transfers more easily. The model trained on RenderIH performs better than the one trained on H2O-3D possibly because, in H2O-3D, every image contains an object that interferes with the two-hand interaction. When training TransHand on RenderIH and IH2.6M, the estimation error is the lowest on both the IH2.6M and Tzionas datasets, as shown in Table 8. In particular, the result on the Tzionas dataset shows that our varied pose distribution, backgrounds, and textures help to improve generalization.
**Comparison with SOTA methods.** As shown in Table 6, our TransHand outperforms the SOTA IntagHand method trained with its official code. Furthermore, their method involves multitask learning and their network comprises complex graph transformer modules; in comparison, our method is simpler yet highly effective. When pretraining on RenderIH and finetuning on the IH2.6M data, our method further reduces the MPJPE by about 1mm, and better hand-hand contact (CDev) and relative root translation (MRRPE) can also be observed in this table. Moreover, Table 9 shows that training on our dataset in addition to IH2.6M leads to a clearly lower error on the Tzionas dataset than training on IH2.6M alone. Results computed with the wrist as root are shown in Section 3.3 of the SM.
**Qualitative results**. Our qualitative results are shown in Figure 10; we can see that our method generates high-quality IH results on IH2.6M images. More in-the-wild results can be found in the SM.
## 6 Conclusion
In this paper, we propose a new large-scale synthetic dataset for 3D IH pose estimation and conduct various experiments to study the effectiveness of RenderIH. With all of the synthetic hand images and only 10% of the real hand images, we achieve precision comparable to the same method trained on all of the real hand images. We hope that this dataset will be a meaningful step towards developing 3D IH pose estimation models that do not depend on real data and that adapt to various scenes.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline Method & PAMPJPE\(\downarrow\) & MPJPE\(\downarrow\) & SMPJPE\(\downarrow\) & MRRPE\(\downarrow\) & CDev\(\downarrow\) \\ \hline InterNet\({}^{*}\)[26] & 11.72 & 18.28 & 16.68 & - & - \\ DIGIT\({}^{*}\)[4] & 9.72 & 15.48 & 13.43 & - & - \\ InterShape [39] & - & - & 13.07 & - & - \\ HDR [24] & - & 13.12 & - & - & - \\ IntagHand [21] & 6.10 & 10.30 & 8.79 & 12.1 & 25.1 \\ IntagHand\({}^{*}\) & 7.16 & 10.90 & 10.47 & 13.6 & 29.6 \\ \hline Ours & 6.76 & 10.66 & 9.63 & 12.98 & 27.9 \\ Ours\({}^{\#}\) & 5.79 & 9.64 & 8.18 & 11.95 & 24.6 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Comparing with SOTA methods on IH2.6M test set (\(*\) means official code reproduction, \(\#\) means RenderIH pretraining)
\begin{table}
\begin{tabular}{c|c c} \hline \hline train set\(\backslash\)test set & IH2.6M & Tzionas \\ \hline H2O-3D+IH2.6M & 11.05/9.91 & 12.03/12.02 \\ Ego3d+IH2.6M & 10.66/9.60 & 11.13/11.06 \\ RenderIH+IH2.6M & 10.58/9.52 & 10.63/10.56 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Generalization ability comparison between H2O-3D, Ego3d, and RenderIH on MPJPE/SMPJPE (mm)\(\downarrow\). The number of samples is 40K and fixed for each dataset.
\begin{table}
\begin{tabular}{c|c c} \hline \hline \multicolumn{1}{c|}{train set\(\backslash\)test set} & IH2.6M & Tzionas \\ \hline H2O-3D+IH2.6M & 11.05/9.91 & 12.03/12.02 \\ Ego3d+IH2.6M & 10.66/9.60 & 11.13/11.06 \\ RenderIH+IH2.6M & 10.58/9.52 & 10.63/10.56 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Training on the mixture of datasets with all IH2.6M data on MPJPE/SMPJPE (mm)\(\downarrow\). The number of samples is 40K for each dataset.
\begin{table}
\begin{tabular}{l|c} \hline \hline Metrics & MPJPE/MRRPE/CDev\(\downarrow\) \\ \hline Training set\(\backslash\)Test set & Tzionas \\ \hline RenderIH & 22.11/25.8/47.7 \\ IH2.6M & 11.38/11.1/19.9 \\ IH2.6M+RenderIH & 10.49/9.37/19.5 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Comparison of training with or without our dataset, tested on the Tzionas dataset.
RenderIH: A large-scale synthetic dataset for 3D interacting hand pose estimation (_Supplementary Material_)
This supplementary material contains additional information that could not be included in the main manuscript due to space limitations. We begin with more detailed information about the dataset. Following that, we briefly discuss the pose optimization details of our approach. We then present additional visualization results from our qualitative experiments. Finally, we discuss the broader impacts and limitations of our dataset.
## 1 More details on RenderIH
RenderIH is composed of 1 million synthetic images obtained by varying the pose, camera view, and environment (texture, lighting, and background). Collecting annotations from IH2.6M and removing samples with similar poses leaves 3680 distinctive poses, and for each distinctive pose we augment \(I=30\) poses. After augmentation and optimization, we filter out those IH poses that still exhibit notable penetration or exceed the joint limits; the remaining data accounts for 93% of the total, yielding approximately 100K natural, non-interpenetrating IH poses. We then apply 10 camera viewpoints to each pose, producing 1M synthetic images in total. For each image, we randomly pick one of 300 HDR images to illuminate the hand and provide the background, together with a hand texture map. The rendering process took more than 200 hours on 4 NVIDIA A100 GPUs. As for the annotations, we provide pose and shape parameters, 3D joint coordinates, 2D joint coordinates, and camera intrinsic and extrinsic parameters. It is worth noting that the synthetic labels can be freely extended according to the user's needs, for example by generating hand-part segmentation masks; the automatically generated annotations are free of noise and more flexible than the labels of traditional real datasets. Some rendering examples illustrating our photorealistic effect are provided in the **video demo**.
hands, making the IH have more contact. As shown in Figure 15 b), to avoid abnormal anchor pairs, a pair can only be established when \(\bar{n_{i}^{a}}\cdot\bar{n_{j}^{a}}<0\), where \(\bar{n^{a}}\) is the mesh face normal vector at the anchor. However, the IH attraction might have a negative influence when the parts overlap severely: as shown in Figure 15 c), conflicts between pairs can make the meshes hard to separate. A simple way to solve this problem is to separate the hands first, so that better anchor pairs can be established.
\[\underset{\psi^{r},\psi^{l}}{argmin}(w_{1}\sum_{i=1}^{A_{r}}\sum_{j=1}^{A_{l}}L_{ij}^{A}+w_{2}L_{a}+w_{3}L_{adv}+w_{4}L_{p}), \tag{13}\]
In our implementation, we optimize the loss function in Equation 13 (defined in the main paper) for 215 iterations. We assign a larger weight \(w_{4}\) to \(L_{p}\) and a smaller weight \(w_{1}\) to \(L^{A}\) at the beginning in order to separate the hands; \(w_{1}\) then increases while \(w_{4}\) decreases during the optimization, until the 165th iteration. The anchor pairs are rebuilt every 40 iterations to adapt to the dynamically changing IH. The learning rate is set to 0.01 and is reduced after 20 iterations without a decrease in the loss. The Adam solver is used for the optimization. A sketch of such a weight schedule is given below.
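The annealing just described can be sketched as a simple linear schedule; the curve shape and maxima are assumptions, and only the direction of the swap (penetration weight down, attraction weight up, until iteration 165) follows the text:

```python
def weight_schedule(it, n_anneal=165, w1_max=1.0, w4_max=1.0):
    """Illustrative Eq. (13) weight annealing: start with strong
    anti-penetration (w4) and weak attraction (w1), then swap emphasis
    linearly until iteration n_anneal."""
    t = min(it / n_anneal, 1.0)
    return w1_max * t, w4_max * (1.0 - t)  # (w1, w4)

for it in (0, 80, 165, 214):
    print(it, weight_schedule(it))
```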
## 3 More visualization results
### Results for different optimization components
**Visualization of the effect of different components.** We define multiple optimization loss functions to obtain valid and natural IH poses. As shown in Figure 16, the "Augmented Pose" is randomly augmented from the raw poses in IH2.6M, with the joint poses restricted according to Table 2 of the main paper. After optimization with the full set of constraints, we obtain natural, non-interpenetrating poses. Comparing Figure 16(b) and Figure 16(c), we can see that using anchors for the IH attraction shows no significant difference from employing all vertices while reducing the time complexity. Furthermore, as demonstrated in Figure 16(d), the natural discriminator \(\mathcal{D}\) makes the IH more natural; **natural** poses, as defined in the main paper, not only conform to the anatomy but also occur frequently in daily life. Additionally, as shown in Figure 16(e), the IH attraction enhances hand contact, which is hard to annotate in reality due to inter-occlusion.
### Qualitative results comparison
**Comparison with IntagHand.** To better demonstrate the superiority of our data and method, we compare our results with the existing state-of-the-art method IntagHand [21] (their model is also trained on the combination of IH2.6M [26] and synthetic images). Some qualitative comparisons with IntagHand are shown in Figure 19. By directly projecting the 3D hand mesh onto the image, we can
\begin{table}
\begin{tabular}{l|c} \hline \hline Training set\(\backslash\)Metrics & PAMPJPE/MPJPE/SMPJPE/MRRPE\(\downarrow\) \\ \hline RenderIH & 13.50/47.73/49.42/32.08 \\ IH2.6M & 6.76/16.78/13.97/14.63 \\ IH2.6M+RenderIH & 5.79/15.78/12.16/14.15 \\ \hline \hline \end{tabular}
\end{table}
Table 10: Comparison of training with or without our dataset, tested on the IH2.6M dataset. The wrist joint is used as root.
Figure 17: Qualitative comparison between our method and IntagHand [21] on InterHand2.6M under a variety of viewpoints and different levels of inter-hand occlusion. Red circles highlight the positions where our method generates better results. In the first row, our result is even better than the ground truth, where the middle, ring, and little fingers of the right hand are curved.
see that our result is closer to the pose in the raw image. Results for these images from various viewpoints are also presented (see Figure 17); in the first row of Figure 17, our result is even better than the ground truth, where the middle, ring, and little fingers of the right hand are curved. To further compare generalization ability, we compare with IntagHand on in-the-wild images (see Figure 18). The results show that our method clearly achieves less interpenetration of the two hands and more accurate finger interactions.
**Impact of synthetic data.** When only RenderIH is used for training, the performance is worse than when only IH2.6M is used, in part because the background variation in Tzionas is limited; this trend is visible in the qualitative results in Figure 20. However, as a synthetic dataset, the purpose of RenderIH is to largely reduce the amount of real data needed for training rather than to replace real data entirely.
### Quantitative results with wrist joint as root joint
For convenient future comparison, we report our model's performance using the wrist joint as the root joint, following common practice. As shown in Table 10, the model trained on a mixture of RenderIH and IH2.6M demonstrates consistent improvements across all metrics compared to training on IH2.6M alone.
## 4 Broader impacts and limitations
**Broader impacts.** In this paper, we introduce a synthetic 3D hand dataset, RenderIH, with accurate and diverse poses. Since there are no other large-scale synthetic interacting-hand datasets, RenderIH will be impactful for the community due to its unprecedented scale, diversity, and rendering quality. Moreover, the dataset can be used not only to improve generalization in real scenes but also for domain adaptation.
**Limitations.** The hyperparameters of the pose optimization, such as the factors \(k\) and \(s\) in the interhand attraction and the weights of the final optimization loss, are chosen on the basis of experimental results. In the future, we may turn them into learnable parameters that are automatically inferred from data.
|
2309.06032 | Explicit formula for the Gamma-convergence homogenized quadratic curvature energy in isotropic Cosserat shell models | We show how to explicitly compute the homogenized curvature energy appearing in the isotropic $\Gamma$-limit for flat and for curved initial configuration Cosserat shell models, when a parental three-dimensional minimization problem on $\Omega \subset \mathbb{R}^3$ for a Cosserat energy based on the second order dislocation density tensor $\alpha:=\overline{R}^T {\rm Curl}\,\overline{R} \in \mathbb{R}^{3\times 3}$, $\overline{R}\in {\rm SO}(3)$ is used. | Maryam Mohammadi Saem, Emilian Bulgariu, Ionel-Dumitrel Ghiba, Patrizio Neff | 2023-09-12T08:05:20Z | http://arxiv.org/abs/2309.06032v1 | Explicit formula for the Gamma-convergence homogenized quadratic curvature energy in isotropic Cosserat shell models
###### Abstract
We show how to explicitly compute the homogenized curvature energy appearing in the isotropic \(\Gamma\)-limit for flat and for curved initial configuration Cosserat shell models, when a parental three-dimensional minimization problem on \(\Omega\subset\mathbb{R}^{3}\) for a Cosserat energy based on the second order dislocation density tensor \(\alpha:=\overline{R}^{T}\mathrm{Curl}\overline{R}\in\mathbb{R}^{3\times 3}\), \(\overline{R}\in\mathrm{SO}(3)\) is used.
###### Contents
* 1 Introduction
* 2 Three dimensional geometrical nonlinear and physical linear Cosserat models
* 2.1 General notation
* 2.2 Geometrical nonlinear and physically linear Cosserat elastic 3D models
* 2.3 More on Cosserat-curvature strain measures
* 3 Homogenized curvature energy for the flat Cosserat-shell model via \(\Gamma\)-convergence
* 4 Homogenized curvature energy for the curved Cosserat-shell model via \(\Gamma\)-convergence
* 4.1 The calculation of the homogenized curvature energy
* 4.2 \(\Gamma\)-convergence result for the curved shell model
* 5 Conclusion
## 1 Introduction
The Cosserat theory introduced by the Cosserat brothers in 1909 [16, 14] represents a generalization of the elasticity theory. While the elasticity theory models each constituent particle of the body as a material point, i.e., it is able to model only the translation of each particle through the classical deformation \(\varphi\colon\Omega\subset\mathbb{R}^{3}\to\mathbb{R}^{3}\), the Cosserat theory also models the micro-rotation of each particle, by attaching to each material point an independent triad of orthogonal directors, the microrotation \(\overline{R}\colon\Omega\subset\mathbb{R}^{3}\to\mathrm{SO}(3)\). Invariance of the energy under superposed rigid body motions (left-invariance under \(\mathrm{SO}(3)\)) allowed them to conclude the suitable form of the energy density as
\(W=W(\overline{U},\mathfrak{K})\), where \(\overline{U}:=\overline{R}^{T}\mathrm{D}\varphi\) is the first Cosserat deformation tensor and \(\mathfrak{K}:=(\overline{R}^{T}\partial_{x_{1}}\overline{R},\overline{R}^{T}\partial_{x_{2}}\overline{R},\overline{R}^{T}\partial_{x_{3}}\overline{R})\) is the second Cosserat deformation tensor. The Cosserat brothers never considered specific forms of the elastic energy, and they never linearised their model to obtain the well-known linear Cosserat (micropolar) model [27]. In the present paper we only consider isotropic materials, i.e., the behaviour of the elastic material is modelled with the help of an energy which is additionally right-invariant under \(\mathrm{SO}(3)\). In addition, we will consider quadratic energies in suitable strains (a physically linear dependence of the stress tensor and of the couple-stress tensor on the strain measures), which allows an explicit and practical [31, 42] representation of the energy.
In [41] we have provided a nonlinear membrane-like Cosserat shell model on a curved reference configuration, starting from a geometrically nonlinear, physically linear three-dimensional isotropic Cosserat model. Beside the change of metric, the obtained membrane-like Cosserat shell model [41] is still capable of capturing the transverse shear deformation and the Cosserat-curvature due to remaining Cosserat effects. The Cosserat-shell model presented in [41] for a curved initial configuration generalizes the Cosserat-shell model constructed in [34] for flat initial configurations. There are many different ways to mathematically model shells [29], e.g., the _derivation approach_ [33, 32, 21, 22, 25, 23, 24], the _intrinsic approach_ [1, 2, 28], the _asymptotic method_, and the _direct approach_ [26, 3, 10, 11, 12, 16, 19, 28, 40, 5, 9, 6, 7]. However, _Gamma-convergence_ methods are preferred in the mathematical community.
When the Cosserat parental three-dimensional energy is considered, in the deduction of the Gamma-limit for the curved initial configuration we have to construct four homogenized energies, while only two appear in the expression of the Gamma-limit: the homogenized membrane energy and the homogenized curvature energy. In the deduction of the Gamma-limit in [41], we have explicitly stated the form of the homogenized membrane energy, while the explicit form of the homogenized curvature energy was only announced and we have used only some of its implicit properties (continuity, convexity, etc.). The same was done in the deduction of the Gamma-limit for a flat initial configuration in [34] (where another form of the Cosserat-curvature energy was considered), and no explicit form of the homogenized curvature energy could be given. In [41] we have announced the form of the homogenized curvature energy without giving details about its deduction. Therefore, the main aim of this paper is to provide the solutions of all optimization problems needed for an explicit approach to the Cosserat shell model for flat (Section 3) and curved (Section 4) initial configurations via the Gamma-limit. The second goal is to point out the advantages, at least from a computational point of view, of the usage of the curvature strain tensor \(\alpha:=\overline{R}^{T}\mathrm{Curl}\,\overline{R}\) in the parental three-dimensional Cosserat-curvature energy, instead of other curvature strain tensors considered in the literature. We mention that, even if \(\alpha\) is controlled by \(\widehat{\mathfrak{K}}:=\left(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1}),\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2}),\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\right)\in\mathbb{R}^{3\times 9}\) used in [32, 34], an explicit expression via Gamma-convergence of the homogenization of the quadratic curvature energy in terms of the third order tensor \(\widehat{\mathfrak{K}}\) is missing in the literature, even for a flat initial configuration. In fact, it turned out that \(\widehat{\mathfrak{K}}\) is frame-indifferent but not itself isotropic, a fact which makes it unsuitable for an isotropic model. Beside these advantages of the usage of the second order dislocation density tensor \(\alpha\), there is a one-to-one relation between \(\alpha\) and the so-called wryness tensor \(\Gamma:=\left(\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{1}}\overline{R})\,|\,\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{2}}\overline{R})\,|\,\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{3}}\overline{R})\,\right)\in\mathbb{R}^{3\times 3}\) (the second order Cosserat deformation tensor [16], a Lagrangian strain measure for curvature-orientation change [17]). This property is not shared with \(\widehat{\mathfrak{K}}\) and \(\mathfrak{K}\). We show that considering \(\mathfrak{K}\) is equivalent to a particular choice of the constitutive coefficients in a model in which \(\alpha\) is used and, therefore, the formulas determined for the homogenized quadratic curvature energy via Gamma-convergence are valid for parental isotropic three-dimensional energies which are quadratic in \(\mathfrak{K}\), too. However, a general quadratic isotropic energy has a more complicated form in terms of \(\mathfrak{K}\) in comparison to the case when we express it in terms of \(\alpha\), see Subsection 2.3. Therefore, from a computational point of view it is more convenient to consider \(\alpha\) in an isotropic Cosserat model.
Moreover, using [36] (see Subsection 2.3), we have that \(\alpha\) controls \(\mathfrak{K}\) in \(L^{2}(\Omega,\mathbb{R}^{3\times 3\times 3})\), which controls \(\widehat{\mathfrak{K}}\) in \(L^{2}(\Omega,\mathbb{R}^{3\times 3\times 3})\), which in turn controls \(\alpha\) in \(L^{2}(\Omega,\mathbb{R}^{3\times 3})\). Therefore, a positive definite quadratic form in one of the three appropriate curvature tensors is energetically controlled by each of the three Cosserat-curvature tensors. This is why, when the problem of existence of the solution or other similar qualitative results are considered, the form of the used Cosserat-curvature strain tensor is irrelevant, in the sense that if such a result is obtained for a Cosserat-curvature energy quadratic in one Cosserat-curvature strain tensor, then it may be immediately extended to Cosserat-curvature energies quadratic in the other two Cosserat-curvature strain tensors considered in the present paper. However, the usage of the Cosserat-curvature strain tensor \(\alpha\) has the following main advantages:

* a quadratic isotropic energy in terms of the wryness tensor \(\Gamma\) is rewritten in a transparent and explicit form as a quadratic energy in terms of the dislocation density tensor \(\alpha\) (and vice versa), see Subsection 2.3;
* the expression of a quadratic isotropic energy in terms of the wryness tensor \(\Gamma\) is very simple and suitable for analytical computations, see Subsection 2.3;
* it admits the explicit analytical calculation of the homogenized quadratic curvature energies in the construction of the Cosserat shell model via \(\Gamma\)-convergence method, see Sections 3 and 4.
## 2 Three dimensional geometrical nonlinear and physical linear Cosserat models
### General notation
Before continuing, let us introduce the notation we will use or have already used in Section 1 and in the abstract. We denote by \(\mathbb{R}^{m\times n}\), \(n,m\in\mathbb{N}\), the set of real \(m\times n\) second order tensors, written with capital letters. We adopt the usual abbreviations of Lie-group theory, i.e., \(\mathrm{GL}(n)=\{X\in\mathbb{R}^{n\times n}\mid\det(X)\neq 0\}\) the general linear group, \(\mathrm{SL}(n)=\{X\in\mathrm{GL}(n)\mid\det(X)=1\}\), \(\mathrm{O}(n)=\{X\in\mathrm{GL}(n)\mid X^{T}X=\mathbb{1}_{n}\}\), \(\mathrm{SO}(n)=\{X\in\mathrm{GL}(n)|X^{T}X=\mathbb{1}_{n},\det(X)=1\}\) with corresponding Lie-algebras \(\mathfrak{so}(n)=\{X\in\mathbb{R}^{n\times n}\mid X^{T}=-X\}\) of skew symmetric tensors and \(\mathfrak{sl}(n)=\{X\in\mathbb{R}^{n\times n}\mid\mathrm{tr}(X)=0\}\) of traceless tensors. Here, for \(a,b\in\mathbb{R}^{n}\) we let \(\big{<}a,b\big{>}_{\mathbb{R}^{n}}\) denote the scalar product on \(\mathbb{R}^{n}\) with associated (squared) vector norm \(\|a\|_{\mathbb{R}^{n}}^{2}=\big{<}a,a\big{>}_{\mathbb{R}^{n}}\). The standard Euclidean scalar product on \(\mathbb{R}^{n\times n}\) is given by \(\big{<}X,Y\big{>}_{\mathbb{R}^{n\times n}}=\mathrm{tr}(XY^{T})\), and thus the (squared) Frobenius tensor norm is \(\|X\|^{2}=\big{<}X,X\big{>}_{\mathbb{R}^{n\times n}}\). In the following we omit the index \(\mathbb{R}^{n},\mathbb{R}^{n\times n}\). The identity tensor on \(\mathbb{R}^{n\times n}\) will be denoted by \(\mathbb{1}_{n}\), so that \(\mathrm{tr}(X)=\big{<}X,\mathbb{1}_{n}\big{>}\). We let \(\mathrm{Sym}(n)\) and \(\mathrm{Sym}^{+}(n)\) denote the symmetric and positive definite symmetric tensors, respectively. For all \(X\in\mathbb{R}^{3\times 3}\) we set \(\mathrm{sym}\,X=\frac{1}{2}(X^{T}+X)\in\mathrm{Sym}(3)\), \(\mathrm{skew}\,X=\frac{1}{2}(X-X^{T})\in\mathfrak{so}(3)\) and the deviatoric part \(\mathrm{dev}\,X=X-\frac{1}{n}\)\(\mathrm{tr}(X)\,\mathbb{1}_{n}\in\mathfrak{sl}(n)\) and we have the orthogonal Cartan-decomposition of the Lie-algebra \(\mathfrak{gl}(3)=\{\mathfrak{sl}(3)\cap\mathrm{Sym}(3)\}\oplus\mathfrak{so}(3 )\oplus\mathbb{R}\cdot\mathbb{1}_{3},\ X=\mathrm{dev}\,\mathrm{sym}\,X+ \mathrm{skew}\,X+\frac{1}{3}\mathrm{tr}(X)\,\mathbb{1}_{3}\,.\) We use the canonical identification of \(\mathbb{R}^{3}\) with \(\mathfrak{so}(3)\), and, for \(A=\begin{pmatrix}0&-a_{3}&a_{2}\\ a_{3}&0&-a_{1}\\ -a_{2}&a_{1}&0\end{pmatrix}\in\mathfrak{so}(3)\) we consider the operators \(\mathrm{axl}\,:\,\mathfrak{so}(3)\to\mathbb{R}^{3}\) and \(\mathrm{anti}:\mathbb{R}^{3}\to\mathfrak{so}(3)\) through \(\mathrm{axl}\,A:=(a_{1},a_{2},a_{3})^{T}\), \(A.\,v=(\mathrm{axl}\,A)\times v\), \((\mathrm{anti}(v))_{ij}=-\epsilon_{ijk}\,v_{k}\ \ \forall\,v\in\mathbb{R}^{3}\), \((\mathrm{axl}\,A)_{k}=-\frac{1}{2}\,\epsilon_{ijk}A_{ij}=\frac{1}{2}\, \epsilon_{kij}A_{ji}\,,\)\(A_{ij}=-\epsilon_{ijk}\,(\mathrm{axl}\,A)_{k}=:\mathrm{anti}(\mathrm{axl}\,A)_{ij}\), where \(\epsilon_{ijk}\) is the totally antisymmetric third order permutation tensor. For \(X\in\mathrm{GL}(n)\), \(\mathrm{Adj}(X)\) denotes the tensor of transposed cofactors, while the \((i,j)\) entry of the cofactor is the \((i,j)\)-minor times a sign factor. Here, given \(z_{1},z_{2},z_{3}\in\mathbb{R}^{n\times k}\), the notation \((z_{1}\,|\,z_{2}\,|\,z_{3})\) means a matrix \(Z\in\mathbb{R}^{n\times 3k}\) obtained by taking \(z_{1},z_{2},z_{3}\) as block matrices. 
A third order tensor \(A=(A_{ijk})\in\mathbb{R}^{3\times 3\times 3}\) will be replaced with an equivalent object, by reordering its components in a \(\mathbb{R}^{3\times 9}\) matrix \(A\equiv(A_{1}\,|\,A_{2}\,|\,A_{3})\in\mathbb{R}^{3\times 9},\ A_{k}:=(A_{ijk})_{ij}=A.\,e_{k}\in\mathbb{R}^{3\times 3},\ k=1,2,3\), and we consider \(\mathrm{sym}A=\big{(}\mathrm{sym}\,A_{1}\,|\,\mathrm{sym}\,A_{2},\,|\,\mathrm{ sym}\,A_{3}\big{)}\in\mathbb{R}^{3\times 9}\), \(\mathrm{skew}A=\big{(}\mathrm{skew}A_{1}\,|\,\mathrm{skew}A_{2}\,|\,\mathrm{ skew}A_{3}\big{)}\in\mathbb{R}^{3\times 9}\), \(\mathrm{tr}(A)=\mathrm{tr}(A_{1})+\mathrm{tr}(A_{2})+\mathrm{tr}(A_{3}).\) Moreover, we define the products of a second order tensor \(B=(B_{ij})_{ij}\in\mathbb{R}^{3\times 3}\) and a third order tensor \(A=(A_{1}\,|\,A_{2}\,|\,A_{3})\in\mathbb{R}^{3\times 9}\) in a natural way as \(B\,A=(B\,A_{1}\,|\,B\,A_{2}\,|\,B\,A_{3})\in\mathbb{R}^{3\times 9}\), \(A\,B=\big{(}\sum_{k=1}^{3}A_{k}\,B_{k1}\,|\,\sum_{k=1}^{3}A_{k}\,B_{k2}\,|\, \sum_{k=1}^{3}A_{k}\,B_{k3}\big{)}\in\mathbb{R}^{3\times 9}.\) Let us remark that for \(B=(B_{ij})_{ij}\in\mathrm{GL}^{+}(3)\) having the inverse \(B=(B^{ij})_{ij}\) and \(A,C\in\mathbb{R}^{3\times 9}\) the following equivalences hold true \(A\,B=C\ \Leftrightarrow\ \sum_{k=1}^{3}A_{k}\,B_{kl}=C_{l}\ \Leftrightarrow\ \sum_{l=1}^{3}(B^{lm}\sum_{k=1}^{3}A_{k}\,B_{kl})=\sum_{l=1}^{3}C_{l}\,B^{lm}\ \Leftrightarrow\ A=C\,B^{-1}.\) We define the norm of a third order tensor \(A=(A_{1}\,|\,A_{2}\,|\,A_{3})\in\mathbb{R}^{3\times 9}\) by \(\|A\|^{2}=\sum_{k=1}^{3}\|A_{k}\|^{2}.\) For \(A_{1},A_{2},A_{3}\in\mathfrak{so}(3)\) we define \(\mathrm{axl}\,A=(\mathrm{axl}\,A_{1}\,|\,\mathrm{axl}\,A_{2}\,|\,\mathrm{axl} \,A_{3})\in\mathbb{R}^{3\times 3}\), while for \(z=(z_{1}\,|\,z_{2}\,|\,z_{3})\in\mathbb{R}^{3\times 3}\) we define \(\mathrm{anti}\,z=(\mathrm{anti}\,z_{1}\,|\,\mathrm{anti}\,z_{2}\,|\,\mathrm{ anti}\,z_{3})\in\mathbb{R}^{3\times 9}\). For a given matrix \(M\in\mathbb{R}^{2\times 2}\) we define the lifted quantity \(M^{\flat}=\begin{pmatrix}M_{11}&M_{12}&0\\ M_{21}&M_{22}&0\\ 0&0&0\end{pmatrix}\in\mathbb{R}^{3\times 3}\).
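The operators axl and anti defined above are easy to realize and test numerically; a minimal Python sketch (not part of the paper) verifying the stated identities:

```python
import numpy as np

def anti(v):
    """anti: R^3 -> so(3), built so that anti(v) @ w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def axl(A):
    """axl: so(3) -> R^3, the inverse of anti on skew-symmetric matrices."""
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

rng = np.random.default_rng(1)
v, w = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(axl(anti(v)), v)              # axl(anti(v)) = v
assert np.allclose(anti(v) @ w, np.cross(v, w))  # A.w = axl(A) x w
```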
Let \(\Omega\) be an open domain of \(\mathbb{R}^{3}\). The usual Lebesgue spaces of square integrable functions, vector or tensor fields on \(\Omega\) with values in \(\mathbb{R}\), \(\mathbb{R}^{3}\) or \(\mathbb{R}^{3\times 3}\), respectively, will be denoted by \(\mathrm{L}^{2}(\Omega)\). Moreover, we introduce the standard Sobolev spaces \(\mathrm{H}^{1}(\Omega)=\{u\in\mathrm{L}^{2}(\Omega)\,|\,\mathrm{D}\,u\in\mathrm{L}^{2}(\Omega)\}\), \(\mathrm{H}(\mathrm{curl};\Omega)=\{v\in\mathrm{L}^{2}(\Omega)\,|\,\mathrm{curl}\,v\in\mathrm{L}^{2}(\Omega)\}\) of functions \(u\) or vector fields \(v\), respectively. For vector fields \(u=(u_{1},u_{2},u_{3})^{T}\) with \(u_{i}\in\mathrm{H}^{1}(\Omega)\), \(i=1,2,3\), we define \(\mathrm{D}\,u:=(\mathrm{D}\,u_{1}\,|\,\mathrm{D}\,u_{2}\,|\,\mathrm{D}\,u_{3})^{T}\), while for tensor fields \(P\) with rows in \(\mathrm{H}(\mathrm{curl};\Omega)\) we define \(\mathrm{Curl}\,P:=(\mathrm{curl}\,(P^{T}.e_{1})\,|\,\mathrm{curl}\,(P^{T}.e_{2})\,|\,\mathrm{curl}\,(P^{T}.e_{3}))^{T}\). The corresponding Sobolev-spaces will be denoted by \(\mathrm{H}^{1}(\Omega)\) and \(\mathrm{H}^{1}(\mathrm{Curl};\Omega)\), respectively. We will use the notations \(\mathrm{D}_{\xi}\), \(\mathrm{D}_{x}\), \(\mathrm{Curl}_{\xi}\), \(\mathrm{Curl}_{x}\), etc. to indicate the variables with respect to which these quantities are calculated.
### Geometrical nonlinear and physically linear Cosserat elastic 3D models
We consider an elastic material which in its reference configuration fills the three-dimensional domain \(\Omega\subset\mathbb{R}^{3}\). In the Cosserat theory, each point of the reference body is endowed with three independent orthogonal directors, i.e., with a matrix field \(\overline{R}:\Omega\to\mathrm{SO}(3)\) called the _microrotation_ tensor. Let us remark that while the tensor \(\mathrm{polar}(\mathrm{D}\varphi)\in\mathrm{SO}(3)\) of the polar decomposition of \(F:=\mathrm{D}\varphi=\mathrm{polar}(\mathrm{D}\varphi)\sqrt{(\mathrm{D}\varphi)^{T}\mathrm{D}\varphi}\) is not independent of \(\varphi\) [38, 8, 37], the tensor \(\overline{R}\) in the Cosserat theory is independent of \(\mathrm{D}\varphi\); in other words, in general, \(\overline{R}\neq\mathrm{polar}(\mathrm{D}\varphi)\). In geometrically nonlinear and physically linear Cosserat elastic 3D models, the deformation \(\varphi\) and the microrotation \(\overline{R}\) are the solutions of the following _nonlinear minimization problem_ on \(\Omega\):
\[I(\varphi,F,\overline{R},\partial_{x_{i}}\overline{R})=\int_{\Omega}\left[W_{\mathrm{strain}}(F,\overline{R})+W_{\mathrm{Cosserat-curv}}(\overline{R},\partial_{x_{i}}\overline{R})\right]\,dV\quad\mapsto\min\quad\text{w.r.t.}\quad(\varphi,\overline{R}), \tag{2.1}\]
where \(F=\mathrm{D}\varphi\) represents the deformation gradient, \(W_{\mathrm{strain}}(F,\overline{R})\) is the strain energy, \(W_{\mathrm{Cosserat-curv}}(\overline{R},\partial_{x_{i}}\overline{R})\) is the Cosserat curvature (bending) energy, and \(dV\) denotes the volume element in the \(\Omega\)-configuration. For simplicity of exposition we assume that external loadings are absent and that we have only Dirichlet-type boundary conditions for \(\varphi\).
In this paper, the strain energy is considered to be a general isotropic quadratic energy (physically linear) in terms of the non-symmetric Biot-type stretch tensor \(\overline{U}:\,=\,\overline{R}^{T}F\in\mathbb{R}^{3\times 3}\) (the first Cosserat deformation tensor), i.e.,
\[W_{\mathrm{strain}}(F,\overline{R})=W_{\mathrm{mp}}(\overline{U}):\,=\,\mu\, \|\mathrm{dev}\,\mathrm{sym}(\overline{U}-\mathbb{1}_{3})\|^{2}+\mu_{\mathrm{ c}}\,\|\mathrm{skew}(\overline{U}-\mathbb{1}_{3})\|^{2}+\frac{\kappa}{2}\,[ \mathrm{tr}(\mathrm{sym}(\overline{U}-\mathbb{1}_{3}))]^{2}\,, \tag{2.2}\]
while the Cosserat curvature (bending) energy \(W_{\mathrm{Cosserat-curv}}(\overline{R},\partial_{x_{i}}\overline{R})\) is considered to be isotropic in terms of \(\overline{R}\) and quadratic in the following curvature strain candidates
\[\mathfrak{K}:= \,\overline{R}^{T}\mathrm{D}\overline{R}=\left(\overline{R}^{T}\partial_{x_{1}}\overline{R},\overline{R}^{T}\partial_{x_{2}}\overline{R},\overline{R}^{T}\partial_{x_{3}}\overline{R}\right)\in\mathbb{R}^{3\times 9},\] \[\widehat{\mathfrak{K}}:= \,\left(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1}),\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2}),\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\right)\in\mathbb{R}^{3\times 9}, \tag{2.3}\] \[\alpha:= \,\overline{R}^{T}\,\mathrm{Curl}\,\overline{R}\in\mathbb{R}^{3\times 3},\] \[\Gamma:= \,\left(\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{1}}\overline{R})\,|\,\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{2}}\overline{R})\,|\,\mathrm{axl}(\overline{R}^{T}\,\partial_{x_{3}}\overline{R})\,\right)\in\mathbb{R}^{3\times 3}\text{ (the wryness tensor)}.\]
The second order Cosserat deformation tensor \(\Gamma\) (the wryness tensor) has been considered as a Lagrangian strain measure for curvature-orientation change [17] since the introduction of the Cosserat model [16]; the second order dislocation density tensor \(\alpha\) is in a direct relation to the wryness tensor via Nye's formulas and energetically controls \(\mathfrak{K}\) (see [36, 30]); the tensor \(\mathfrak{K}\) represents a first-impulse choice, while \(\widehat{\mathfrak{K}}\) is an ad hoc choice which is not suitable for an isotropic quadratic curvature energy. Let us notice that all the mentioned curvature tensors are frame-indifferent by definition, i.e., they remain invariant after the change \(\overline{R}\to\overline{Q}\,\overline{R}\), with \(\overline{Q}\in\mathrm{SO}(3)\) constant. In addition, \(\Gamma\), \(\alpha\) and \(\mathfrak{K}\) are isotropic, a property which is not shared with \(\widehat{\mathfrak{K}}\).
The suitable form of the isotropic Cosserat-curvature energy is discussed in Subsection 2.3. However, let us already announce that, from our point of view, the most suitable expression for analytical computations of the general isotropic energy quadratic in \(\overline{R}\) is
\[W_{\mathrm{Cosserat-curv}}(\overline{R},\partial_{x_{i}}\overline{R})=W_{\mathrm{curv}}(\alpha):= \,\mu\,L_{\mathrm{c}}^{2}\left(b_{1}\,\|\mathrm{sym}\,\alpha\|^{2}+b_{2}\,\|\mathrm{skew}\,\alpha\|^{2}+\frac{b_{3}}{4}\,[\mathrm{tr}(\alpha)]^{2}\right) \tag{2.4}\] \[= \,\mu\,L_{\mathrm{c}}^{2}\left(b_{1}\,\|\mathrm{sym}\,\Gamma\|^{2}+b_{2}\,\|\mathrm{skew}\,\Gamma\|^{2}+b_{3}\,[\mathrm{tr}(\Gamma)]^{2}\right).\]
The parameters \(\mu\) and \(\lambda\) are the elasticity _Lamé-type_ constants, \(\kappa=\frac{2\,\mu+3\,\lambda}{3}\) is the _infinitesimal bulk modulus_, \(\mu_{\mathrm{c}}>0\) is the _Cosserat couple modulus_ and \(L_{\mathrm{c}}>0\) is the _internal length_, which is responsible for _size effects_ in the
sense that smaller samples are relatively stiffer than larger samples. If not stated otherwise, we assume here that \(\mu>0\), \(\kappa>0\), \(\mu_{c}>0\). The Cosserat couple modulus \(\mu_{c}\) controls the deviation of the microrotation \(\overline{R}\) from the continuum rotation \(\mathrm{polar}(\mathrm{D}\varphi)\) in the polar decomposition of \(\mathrm{D}\varphi=\mathrm{polar}(\mathrm{D}\varphi)\cdot\sqrt{\mathrm{D}\varphi^{T}\mathrm{D}\varphi}\). For \(\mu_{c}\to\infty\) the constraint \(\overline{R}=\mathrm{polar}(\mathrm{D}\varphi)\) is generated and the model turns into a Toupin couple stress model. We also assume that \(b_{1}>0\), \(b_{2}>0\) and \(b_{3}>0\), which assures the _coercivity_ and _convexity_ of the curvature energy [34].
### More on Cosserat-curvature strain measures
#### 2.3.1 The curvature tensor \(\mathfrak{K}=\overline{R}^{T}\mathrm{D}\overline{R}\in\mathbb{R}^{3\times 9}\)
A first choice for a curvature strain tensor is the third order elastic Cosserat curvature tensor [14, 16, 13, 15]
\[\mathfrak{K}:= \overline{R}^{T}\mathrm{D}\,\overline{R}=\overline{R}^{T}\,( \partial_{x_{1}}\overline{R}\,|\,\partial_{x_{2}}\overline{R}\,|\,\partial_ {x_{3}}\overline{R})=(\overline{R}^{T}\,\partial_{x_{1}}\overline{R}\,| \,\overline{R}^{T}\,\partial_{x_{2}}\overline{R}\,|\,\overline{R}^{T}\, \partial_{x_{3}}\overline{R})\in\mathbb{R}^{3\times 9}, \tag{2.5}\]
and the curvature energy is then taken as \(\widetilde{W}_{\mathrm{curv}}(\mathfrak{K}):=a_{1}\,\|\mathfrak{K}\|^{2}\). This one-parameter choice is motivated [20] by \(\mathfrak{K}\equiv\mathrm{skew}\,\mathfrak{K}\), since \(\overline{R}^{T}\partial_{x_{i}}\overline{R}\in\mathfrak{so}(3)\), \(i=1,2,3\), and
\[\mathrm{sym}\,\mathfrak{K} =\big(\mathrm{sym}(\overline{R}^{T}\partial_{x_{1}}\overline{R})\,|\,\mathrm{sym}(\overline{R}^{T}\partial_{x_{2}}\overline{R})\,|\,\mathrm{sym}(\overline{R}^{T}\partial_{x_{3}}\overline{R})\big)=(0_{3}\,|\,0_{3}\,|\,0_{3}),\] \[\mathrm{tr}\,\mathfrak{K} =\mathrm{tr}(\overline{R}^{T}\partial_{x_{1}}\overline{R})+\mathrm{tr}(\overline{R}^{T}\partial_{x_{2}}\overline{R})+\mathrm{tr}(\overline{R}^{T}\partial_{x_{3}}\overline{R})=0,\] \[\mathrm{skew}\,\mathfrak{K} =\big(\mathrm{skew}(\overline{R}^{T}\partial_{x_{1}}\overline{R})\,|\,\mathrm{skew}(\overline{R}^{T}\partial_{x_{2}}\overline{R})\,|\,\mathrm{skew}(\overline{R}^{T}\partial_{x_{3}}\overline{R})\big). \tag{2.6}\]
However, this is not the most general form of a quadratic isotropic energy in \(\overline{R}\), as will be seen later.
The third order tensor \(\mathfrak{K}=\left(\overline{R}^{T}\partial_{x_{1}}\overline{R}\,|\,\overline {R}^{T}\partial_{x_{2}}\overline{R}\,|\,\overline{R}^{T}\partial_{x_{3}} \overline{R}\right)\in\mathbb{R}^{3\times 9}\) is usually replaced by the wryness tensor \(\Gamma=\left(\mathrm{axl}(\overline{R}^{T}\partial_{x_{1}}\overline{R})\,|\, \mathrm{axl}(\overline{R}^{T}\,\partial_{x_{2}}\overline{R})\,|\,\mathrm{axl} (\overline{R}^{T}\partial_{x_{3}}\overline{R})\,\right)\), since we have the one-to-one relations
\[\mathfrak{K}=\mathrm{anti}\,\Gamma,\qquad\qquad\Gamma=\mathrm{axl}\,\mathfrak{K}, \tag{2.7}\]
due to the fact that \(\overline{R}^{T}\partial_{x_{i}}\overline{R}\in\mathfrak{so}(3)\), \(i=1,2,3\), which in indices read
\[\mathfrak{K}_{ijk}=\overline{R}_{li}\frac{\partial\overline{R}_{lj}}{ \partial x_{k}},\qquad\qquad\mathfrak{K}_{ijk}=-\epsilon_{ijl}\Gamma_{lk}, \qquad\qquad\Gamma_{ik}=\frac{1}{2}\,\sum_{r,l=1}^{3}\epsilon_{ilr}\mathfrak{K }_{lrk}. \tag{2.8}\]
For a detailed discussion on various strain measures of the non-linear micropolar continua we refer to [39].
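These one-to-one relations are easy to verify symbolically. The following minimal sketch (our addition, not part of the original exposition; the concrete rotation field is an arbitrary illustrative choice) checks \(\mathfrak{K}=\mathrm{anti}\,\Gamma\) and \(\Gamma=\mathrm{axl}\,\mathfrak{K}\) with Python/sympy:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)

# an illustrative microrotation field: rotation about e3 with angle th(x)
th = x1 + 2*x2 + 3*x3
R = sp.Matrix([[sp.cos(th), -sp.sin(th), 0],
               [sp.sin(th),  sp.cos(th), 0],
               [0,           0,          1]])

def axl(A):   # axial vector of a skew-symmetric 3x3 matrix
    return sp.Matrix([A[2, 1], A[0, 2], A[1, 0]])

def anti(v):  # anti = axl^{-1}
    return sp.Matrix([[0, -v[2], v[1]],
                      [v[2], 0, -v[0]],
                      [-v[1], v[0], 0]])

Ks = [sp.simplify(R.T * R.diff(x)) for x in (x1, x2, x3)]  # the blocks K.e_i
Gamma = sp.Matrix.hstack(*[axl(K) for K in Ks])            # wryness tensor

for i, K in enumerate(Ks):
    assert sp.simplify(K + K.T) == sp.zeros(3, 3)                # R^T d_i R is skew
    assert sp.simplify(anti(Gamma[:, i]) - K) == sp.zeros(3, 3)  # K = anti(Gamma)
print("verified: K = anti(Gamma) and Gamma = axl(K) for this field")
```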
**Proposition 2.1**.: _A general isotropic quadratic energy depending on \(\overline{R}^{T}\mathrm{D}\overline{R}\in\mathbb{R}^{3\times 9}\) has the form_
\[\widetilde{W}(\mathfrak{K})= \,b_{1}\,\|\mathrm{sym}\,\mathrm{axl}\,\mathfrak{K}\|^{2}+b_{2}\,\|\mathrm{skew}\,\mathrm{axl}\,\mathfrak{K}\|^{2}+b_{3}\,[\mathrm{tr}(\mathrm{axl}\,\mathfrak{K})]^{2}\] \[= \,b_{1}\,\|\mathrm{sym}\,\big(\,\mathrm{axl}(\mathfrak{K}.e_{1})\,|\,\mathrm{axl}(\mathfrak{K}.e_{2})\,|\,\mathrm{axl}(\mathfrak{K}.e_{3})\big)\|^{2} \tag{2.9}\] \[+b_{2}\,\|\mathrm{skew}\,\big(\,\mathrm{axl}(\mathfrak{K}.e_{1})\,|\,\mathrm{axl}(\mathfrak{K}.e_{2})\,|\,\mathrm{axl}(\mathfrak{K}.e_{3})\big)\|^{2}\] \[+b_{3}\,[\mathrm{tr}\big(\big(\,\mathrm{axl}(\mathfrak{K}.e_{1})\,|\,\mathrm{axl}(\mathfrak{K}.e_{2})\,|\,\mathrm{axl}(\mathfrak{K}.e_{3})\big)\big)]^{2}.\]
Proof.: The proof is based on the result from [18] and on the identities (2.7). Indeed, a quadratic energy in \(\mathfrak{K}\) is a quadratic energy in \(\Gamma\). Due to the results presented in [18], a quadratic isotropic energy written in terms of \(\Gamma\) is given by
\[W(\Gamma)= \,b_{1}\,\|\mathrm{sym}\,\Gamma\|^{2}+b_{2}\,\|\mathrm{skew}\, \Gamma\|^{2}+b_{3}\,[\mathrm{tr}(\Gamma)]^{2}. \tag{2.10}\]
Using (2.7), the proof is complete.
We can express the uni-constant isotropic curvature term as a positive definite quadratic form in terms of \(\Gamma\), i.e.,
\[\|\mathrm{D}\overline{R}\|_{\mathbb{R}^{3\times 3\times 3}}^{2} =\|\overline{R}^{T}\mathrm{D}\overline{R}\,\|_{\mathbb{R}^{3\times 3\times 3}}^{2}=\|\mathfrak{K}\|_{\mathbb{R}^{3\times 3\times 3}}^{2}=\|\overline{R}^{T}\partial_{x_{1}}\overline{R}\|_{\mathbb{R}^{3\times 3}}^{2}+\|\overline{R}^{T}\partial_{x_{2}}\overline{R}\|_{\mathbb{R}^{3\times 3}}^{2}+\|\overline{R}^{T}\partial_{x_{3}}\overline{R}\|_{\mathbb{R}^{3\times 3}}^{2}\] \[=2\,\|\,\mathrm{axl}(\overline{R}^{T}\partial_{x_{1}}\overline{R})\|_{\mathbb{R}^{3}}^{2}+2\,\|\,\mathrm{axl}(\overline{R}^{T}\partial_{x_{2}}\overline{R})\|_{\mathbb{R}^{3}}^{2}+2\,\|\,\mathrm{axl}(\overline{R}^{T}\partial_{x_{3}}\overline{R})\|_{\mathbb{R}^{3}}^{2}=2\,\|\Gamma\|_{\mathbb{R}^{3\times 3}}^{2}. \tag{2.11}\]
Therefore, a general positive definite quadratic isotropic curvature energy (2.9) in \(\Gamma\) is a positive definite quadratic form in \(\overline{R}^{T}\mathrm{D}\overline{R}\), and vice versa. Thus, working with a quadratic isotropic positive definite energy in terms of \(\overline{R}^{T}\mathrm{D}\overline{R}\) is equivalent to working with a quadratic positive definite isotropic energy in terms of \(\Gamma\). Since an isotropic curvature energy has a simpler expression in terms of \(\Gamma\), in order to keep the calculations as simple as possible, we prefer to work with \(\Gamma\).
#### 2.3.2 The curvature tensor \(\alpha=\overline{R}^{T}\mathrm{Curl}\,\overline{R}\in\mathbb{R}^{3\times 3}\)
Another choice of curvature strain is the _second order dislocation density tensor_ \(\alpha\). Compared with \(\overline{R}^{T}\mathrm{D}\,\overline{R}\), it considerably simplifies the representation, since it admits the orthogonal decomposition
\[\overline{R}^{T}\,\mathrm{Curl}\,\overline{R}=\alpha=\mathrm{dev}\,\mathrm{ sym}\,\alpha+\mathrm{skew}\,\alpha+\frac{1}{3}\,\mathrm{tr}(\alpha)\mathbb{1}_{3}. \tag{2.12}\]
Moreover, it yields an equivalent control of spatial derivatives of rotations [36] and allows us to write the curvature energy in a fictitious Cartesian configuration in terms of the wryness tensor [36, 17]\(\Gamma\in\mathbb{R}^{3\times 3}\), since (see [36]) the following close relationship between the _wryness tensor_ and the _dislocation density tensor_ holds
\[\alpha=-\Gamma^{T}+\mathrm{tr}(\Gamma)\,\mathbb{1}_{3},\qquad\text{or equivalently,}\qquad\Gamma=-\alpha^{T}+\frac{1}{2}\mathrm{tr}(\alpha)\,\mathbb{1}_{3}. \tag{2.13}\]
Hence,
\[\mathrm{sym}\,\Gamma= -\mathrm{sym}\,\alpha+\frac{1}{2}\mathrm{tr}(\alpha)\,\mathbb{1} _{3},\qquad\mathrm{dev}\,\mathrm{sym}\,\Gamma=-\mathrm{dev}\,\mathrm{sym}\,\alpha,\] \[\mathrm{skew}\,\Gamma= -\mathrm{skew}(\alpha^{T})=\mathrm{skew}\,\alpha,\qquad\qquad \mathrm{tr}(\Gamma)=-\mathrm{tr}(\alpha)+\frac{3}{2}\,\mathrm{tr}(\alpha)= \frac{1}{2}\mathrm{tr}(\alpha) \tag{2.14}\]
and
\[\mathrm{sym}\,\alpha\,=\,-\mathrm{sym}\,\Gamma+\mathrm{tr}(\Gamma)\, \mathbb{1}_{3},\quad\mathrm{dev}\,\mathrm{sym}\,\alpha\,=\,-\mathrm{dev}\, \mathrm{sym}\,\Gamma,\quad\mathrm{skew}\,\alpha\,=\,\mathrm{skew}\,\Gamma, \quad\mathrm{tr}(\alpha)\,=\,2\,\mathrm{tr}(\Gamma). \tag{2.15}\]
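Since the entire equivalence between \(\alpha\) and \(\Gamma\) rests on these identities, a quick symbolic check may be reassuring. The following sketch (our addition, run on a generic matrix) verifies that the two formulas in (2.13) are mutually inverse and that the relations (2.14)-(2.15) hold:

```python
import sympy as sp

G = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'g{i}{j}'))  # a generic wryness tensor
I3 = sp.eye(3)

alpha = -G.T + G.trace() * I3                            # first formula in (2.13)
G_back = -alpha.T + sp.Rational(1, 2) * alpha.trace() * I3

sym  = lambda M: (M + M.T) / 2
skew = lambda M: (M - M.T) / 2
dev  = lambda M: M - M.trace() / 3 * I3

assert sp.simplify(G_back - G) == sp.zeros(3, 3)         # the formulas invert each other
assert sp.simplify(dev(sym(alpha)) + dev(sym(G))) == sp.zeros(3, 3)  # dev sym alpha = -dev sym Gamma
assert sp.simplify(skew(alpha) - skew(G)) == sp.zeros(3, 3)          # skew alpha = skew Gamma
assert sp.simplify(alpha.trace() - 2 * G.trace()) == 0               # tr(alpha) = 2 tr(Gamma)
print("Nye's formulas (2.13)-(2.15) verified for a generic matrix")
```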
In addition, from [18] we have
**Proposition 2.2**.: _A general quadratic isotropic energy depending on \(\alpha\) has the form_
\[W_{\mathrm{curv}}(\alpha)= \,b_{1}\,\|\mathrm{sym}\,\alpha\|^{2}+b_{2}\,\|\mathrm{skew}\, \alpha\|^{2}+\frac{b_{3}}{4}\,[\mathrm{tr}(\alpha)]^{2}. \tag{2.16}\]
Proof.: We use again that a quadratic energy in \(\alpha\) is a quadratic energy in \(\Gamma\), i.e., due to [18], it is given by (2.10). The proof is complete after using Nye's formulas (2.13).
Since a quadratic isotropic positive definite energy in terms of \(\overline{R}^{T}\mathrm{D}\overline{R}\) is equivalent to a quadratic positive definite isotropic energy in terms of \(\Gamma\), considering \(\alpha\) is equivalent to considering \(\mathfrak{K}\), as long as a quadratic isotropic energy is used. As we will see in the present paper, a quadratic curvature energy in terms of \(\alpha\) is suitable for explicit calculations of the homogenized curvature energy for shell models via the \(\Gamma\)-convergence method.
#### 2.3.3 The curvature (in fact: bending) tensor \(\widehat{\mathfrak{K}}=\big(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\big)\in\mathbb{R}^{3\times 9}\)
The curvature tensor \(\widehat{\mathfrak{K}}=\big(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\big)\in\mathbb{R}^{3\times 9}\) is motivated by the flat Cosserat shell model [33, 32]. Indeed, in this setting, the general bending energy term arising from an engineering ansatz through the thickness of the shell reads
\[\frac{h^{3}}{12}\left(\mu\,\|\,\mathrm{sym}(\overline{R}^{T}\mathrm{D}( \overline{R}.e_{3}))\|^{2}+\mu_{\mathrm{c}}\,\|\mathrm{skew}(\overline{R}^{T} \mathrm{D}(\overline{R}.e_{3}))\|^{2}+\,\frac{\lambda\,\mu}{\lambda+2\mu} \,\big{[}\mathrm{tr}(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3}))\big{]}^{ 2}\right). \tag{2.17}\]
Motivated by this, in earlier papers [32, 33, 35, 34], as a generalization of \(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\), the third order elastic Cosserat curvature tensor was considered in the form
\[\widehat{\mathfrak{K}}=\big{(}\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1}) \,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2})\,|\,\overline{R}^{T} \mathrm{D}(\overline{R}.e_{3})\big{)}=\overline{R}^{T}\big{(}\mathrm{D}( \overline{R}.e_{1}),\mathrm{D}(\overline{R}.e_{2}),\mathrm{D}(\overline{R}.e_{ 3})\big{)}\in\mathbb{R}^{3\times 9}, \tag{2.18}\]
treating the three directions \(e_{1},e_{2},e_{3}\) equally, and the curvature energy is taken to be
\[\widehat{W}_{\mathrm{curv}}(\widehat{\mathfrak{K}})=a_{1}\|\mathrm{sym}\,\widehat{\mathfrak{K}}\|^{2}+a_{2}\|\,\mathrm{skew}\,\widehat{\mathfrak{K}}\|^{2}+a_{3}[\mathrm{tr}(\widehat{\mathfrak{K}})]^{2}. \tag{2.19}\]
Here, \(\widehat{\mathfrak{K}}.e_{i}=\overline{R}^{T}\mathrm{D}(\overline{R}.e_{i})\) and
\[\|\mathrm{sym}\,\widehat{\mathfrak{K}}\|^{2} =\sum_{i=1}^{3}\|\mathrm{sym}\,\widehat{\mathfrak{K}}.e_{i}\|^{2}=\sum_{i=1}^{3}\|\mathrm{sym}\,(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{i}))\|^{2},\] \[\|\mathrm{skew}\,\widehat{\mathfrak{K}}\|^{2} =\sum_{i=1}^{3}\|\mathrm{skew}\,\widehat{\mathfrak{K}}.e_{i}\|^{2}=\sum_{i=1}^{3}\|\mathrm{skew}\,(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{i}))\|^{2}, \tag{2.20}\] \[[\mathrm{tr}(\widehat{\mathfrak{K}})]^{2} =\sum_{i=1}^{3}[\mathrm{tr}(\widehat{\mathfrak{K}}.e_{i})]^{2}=\sum_{i=1}^{3}[\mathrm{tr}(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{i}))]^{2}.\]
However, this curvature energy has now three abstract orthogonal preferred directions, which makes it only cubic and not isotropic, as we will see.
There does not exist an analysis showing that this is the most general form of an isotropic energy depending on \(\widehat{\mathfrak{K}}\). Actually, as we will see in the following, for general positive values of the coefficients \(a_{1},a_{2},a_{3}\) the energies of the form (2.19) are anisotropic. We simplify the discussion by considering only the energy \(\|\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\|^{2}\). After the transformation \(\overline{R}\to\overline{R}\,\overline{Q}\), with \(\overline{Q}=(e_{1}\,|\,e_{3}\,|\,-e_{2})\in\mathrm{SO}(3)\) constant, we have
\[\|\overline{Q}^{T}\overline{R}^{T}\text{D}(\overline{R}\,\overline{Q}.e_{3}) \|^{2}=\|\overline{R}^{T}\text{D}(\overline{R}\,\overline{Q}.e_{3})\|^{2}=\| \overline{R}^{T}\text{D}(\overline{R}.e_{2})\|^{2}\neq\|\overline{R}^{T} \text{D}(\overline{R}.e_{3})\|^{2}. \tag{2.21}\]
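This anisotropy is easy to exhibit on a concrete field. In the following sketch (our illustration; the rotation field is an arbitrary choice), \(\overline{R}\) is a rotation about \(e_{1}\) whose angle depends only on \(x_{1}\); the three one-directional energies \(\|\overline{R}^{T}\mathrm{D}(\overline{R}.e_{i})\|^{2}\) evaluate to \(0,1,1\), so a relabelling of the axes changes the energy:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
th = x1                                        # angle depending on x1 only
R = sp.Matrix([[1, 0, 0],
               [0, sp.cos(th), -sp.sin(th)],
               [0, sp.sin(th),  sp.cos(th)]])  # rotation about e1

D = lambda v: sp.Matrix.hstack(v.diff(x1), v.diff(x2), v.diff(x3))
fro2 = lambda M: sum(entry**2 for entry in M)  # squared Frobenius norm

e = [sp.eye(3)[:, i] for i in range(3)]
norms = [sp.simplify(fro2(R.T * D(R * ei))) for ei in e]
print(norms)  # [0, 1, 1]: the three directions are not treated equally
```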
As regards the direct relation between \(\|\widehat{\mathfrak{K}}\|^{2}=\sum_{i=1}^{3}\left\|\overline{R}^{T}\mathrm{D}(\overline{R}.e_{i})\right\|^{2}\) and \(\|\alpha\|^{2}\), we have
\[\sum_{i=1}^{3}\left\|\overline{R}^{T}\mathrm{D}(\overline{R}.e_{i})\right\|_{\mathbb{R}^{3\times 3}}^{2} =\sum_{i=1}^{3}\left\|\mathrm{D}(\overline{R}.e_{i})\right\|_{\mathbb{R}^{3\times 3}}^{2}=\left\|\mathrm{D}\overline{R}\right\|_{\mathbb{R}^{3\times 3\times 3}}^{2}\] \[=2\,\|\mathrm{dev}\,\mathrm{sym}\,(\overline{R}^{T}\mathrm{Curl}\,\overline{R})\|_{\mathbb{R}^{3\times 3}}^{2}+2\,\|\,\mathrm{skew}\,(\overline{R}^{T}\mathrm{Curl}\,\overline{R})\|_{\mathbb{R}^{3\times 3}}^{2}+\frac{1}{6}\,[\mathrm{tr}(\overline{R}^{T}\mathrm{Curl}\,\overline{R})]^{2} \tag{2.22}\] \[\geq c_{+}\big\|\mathrm{Curl}\,\overline{R}\,\big\|_{\mathbb{R}^{3\times 3}}^{2},\]
where \(c_{+}>0\) is a constant. Since a coercive curvature energy in \(\widehat{\mathfrak{K}}\) is completely controlled by \(\sum_{i=1}^{3}\left\|\overline{R}^{T}\mathrm{D}(\overline{R}.e_{i})\right\|_{\mathbb{R}^{3\times 3}}^{2}\), a positive definite quadratic isotropic energy in terms of \(\alpha=\overline{R}^{T}\mathrm{Curl}\,\overline{R}\) (equivalently, in the wryness tensor \(\Gamma\)) is a positive definite quadratic form in terms of \(\|\overline{R}^{T}\mathrm{D}\overline{R}\|_{\mathbb{R}^{3\times 3\times 3}}^{2}\), and vice versa. Hence, a quadratic positive definite energy in terms of \(\widehat{\mathfrak{K}}\) is energetically equivalent to a quadratic positive definite energy in terms of \(\alpha\) (and \(\Gamma\)).
Let us remark that both \(\left(\partial_{x_{1}}\overline{R}\,|\,\partial_{x_{2}}\overline{R}\,|\,\partial_{x_{3}}\overline{R}\right)\in\mathbb{R}^{3\times 9}\) and \(\left(\mathrm{D}(\overline{R}.e_{1})\,|\,\mathrm{D}(\overline{R}.e_{2})\,|\,\mathrm{D}(\overline{R}.e_{3})\right)\) contain the same terms, \(\frac{\partial\overline{R}_{ij}}{\partial x_{k}}\), \(i,j,k=1,2,3\), but differently ordered. By multiplication with \(\overline{R}^{T}\) of \(\left(\partial_{x_{1}}\overline{R}\,|\,\partial_{x_{2}}\overline{R}\,|\,\partial_{x_{3}}\overline{R}\right)\in\mathbb{R}^{3\times 9}\) and \(\left(\mathrm{D}(\overline{R}.e_{1})\,|\,\mathrm{D}(\overline{R}.e_{2})\,|\,\mathrm{D}(\overline{R}.e_{3})\right)\) we obtain \(\mathfrak{K}\) and \(\widehat{\mathfrak{K}}\), respectively, i.e., \(\mathfrak{K}_{ijk}=\overline{R}_{li}\frac{\partial\overline{R}_{lj}}{\partial x_{k}}\), \(\widehat{\mathfrak{K}}_{ijk}=\overline{R}_{li}\frac{\partial\overline{R}_{lk}}{\partial x_{j}}\), \(i,j,k=1,2,3\), and \(\mathfrak{K}_{ijk}=\widehat{\mathfrak{K}}_{ikj}\), \(i,j,k=1,2,3\). We have the following relation between \(\widehat{\mathfrak{K}}\) and \(\Gamma\)
\[\widehat{\mathfrak{K}}_{ijk}=\mathfrak{K}_{ikj}=-\epsilon_{ikl}\Gamma_{lj},\qquad\Gamma_{ik}=\frac{1}{2}\,\sum_{r,l=1}^{3}\epsilon_{ilr}\mathfrak{K}_{lrk}=\frac{1}{2}\,\sum_{r,l=1}^{3}\epsilon_{ilr}\widehat{\mathfrak{K}}_{lkr}. \tag{2.23}\]
Let us introduce the operator \(\mathcal{A}:\mathbb{R}^{3\times 9}\to\mathbb{R}^{3\times 9}\) by \((\mathcal{A}.\widehat{\mathfrak{K}})_{ijk}=\widehat{\mathfrak{K}}_{ikj}\).
**Proposition 2.3**.: _A general isotropic energy depending on \(\widehat{\mathfrak{K}}=\left(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3})\right)\in\mathbb{R}^{3\times 9}\) has the form_
\[\widehat{W}(\widehat{\mathfrak{K}})= b_{1}\,\|\mathrm{sym}\,\mathrm{axl}(\mathcal{A}.\widehat{\mathfrak{K}})\|^{2}+b_{2}\,\|\mathrm{skew}\,\mathrm{axl}(\mathcal{A}.\widehat{\mathfrak{K}})\|^{2}+b_{3}\,[\mathrm{tr}(\mathrm{axl}(\mathcal{A}.\widehat{\mathfrak{K}}))]^{2}. \tag{2.24}\]
Proof.: Using Proposition 2.1 together with the relations (2.23), we have that a quadratic isotropic energy in \(\widehat{\mathfrak{K}}\) is given by
\[\widehat{W}(\widehat{\mathfrak{K}})= \,b_{1}\,\|\mathrm{sym}\,\mathrm{axl}\,\mathfrak{K}\|^{2}+b_{2}\,\|\mathrm{skew}\,\mathrm{axl}\,\mathfrak{K}\|^{2}+b_{3}\,[\mathrm{tr}(\mathrm{axl}\,\mathfrak{K})]^{2}. \tag{2.25}\]
Since \(\mathcal{A}.\widehat{\mathfrak{K}}=\mathfrak{K}\), the proof is complete.
Let us remark that, compared to (2.17), the general isotropic energy (2.24) depending on \(\widehat{\mathfrak{K}}\) is different, since it sums different products between the entries of \(\overline{R}^{T}\) and \(\mathrm{D}\overline{R}\), due to the combined action of the axial operator and the operator \(\mathcal{A}\).
From (2.17) one could obtain an isotropic energy by setting
\[\int_{\widetilde{Q}\in\mathrm{SO}(3)}\frac{h^{3}}{12}\sum_{i=1}^{3}\left(\mu\,\|\,\mathrm{sym}((\overline{R}\widetilde{Q})^{T}\mathrm{D}((\overline{R}\widetilde{Q}).e_{i}))\|^{2}+\mu_{\mathrm{c}}\,\|\mathrm{skew}((\overline{R}\widetilde{Q})^{T}\mathrm{D}((\overline{R}\widetilde{Q}).e_{i}))\|^{2}+\,\frac{\lambda\,\mu}{\lambda+2\mu}\,\big[\mathrm{tr}((\overline{R}\widetilde{Q})^{T}\mathrm{D}((\overline{R}\widetilde{Q}).e_{i}))\big]^{2}\right)d\widetilde{Q}, \tag{2.26}\]
i.e., averaging over all directions.
## 3 Homogenized curvature energy for the flat Cosserat-shell model via \(\Gamma\)-convergence
Let us consider an elastic material which in its reference configuration fills the three dimensional _flat shell-like thin_ domain \(\Omega_{h}=\omega\times\big{[}-\frac{h}{2},\frac{h}{2}\big{]}\), and \(\omega\subset\mathbb{R}^{2}\) a bounded domain with Lipschitz boundary \(\partial\omega\). The scalar \(0<h\ll 1\) is called _thickness_ of the shell.
Due to the discussion from Subsection 2.3, in this paper we consider the Cosserat-curvature energy in terms of the wryness tensor \(\Gamma\) in the form
\[\widetilde{W}_{\mathrm{curv}}(\Gamma)\,=\,\mu\,L_{\mathrm{c}}^{2}\left(b_{1}\, \|\mathrm{sym}\,\Gamma\|^{2}+b_{2}\,\|\mathrm{skew}\,\,\Gamma\|^{2}+\,b_{3} \,[\mathrm{tr}(\Gamma)]^{2}\right)\,. \tag{3.1}\]
In order to apply the methods of \(\Gamma\)-convergence for constructing the variational problem on \(\omega\) of the flat Cosserat-shell model, the first step is to transform our problem from \(\Omega_{h}\) to a _domain_ with fixed thickness \(\Omega_{1}=\omega\times[-\frac{1}{2},\frac{1}{2}]\subset\mathbb{R}^{3},\;\omega\subset\mathbb{R}^{2}\). To this end, we scale the dependent and independent variables. In all our computations the mark \(\cdot^{\natural}\) indicates the nonlinear scaling and the mark \(\cdot_{h}\) indicates that the assigned quantity depends on the thickness \(h\). We first apply the nonlinear scaling to the deformation. For \(\Omega_{1}=\omega\times\Big[-\frac{1}{2},\frac{1}{2}\Big]\subset\mathbb{R}^{3}\), \(\omega\subset\mathbb{R}^{2}\), we define the scaling transformations
\[\zeta\colon\;\Omega_{1}\to\mathbb{R}^{3}\,,\qquad\zeta(\eta_{1},\eta_{2},\eta_{3}):=(\eta_{1},\eta_{2},h\,\eta_{3})\,,\qquad\zeta^{-1}\colon\;\Omega_{h}\to\mathbb{R}^{3}\,,\qquad\zeta^{-1}(x_{1},x_{2},x_{3}):=(x_{1},x_{2},\frac{x_{3}}{h})\,,\]
with \(\zeta(\Omega_{1})=\Omega_{h}\). By using the above transformations we obtain the formula for the transformed deformation \(\varphi\) as

\[\varphi(x_{1},x_{2},x_{3}) =\varphi^{\natural}(\zeta^{-1}(x_{1},x_{2},x_{3}))\quad\forall x\in\Omega_{h}\,;\qquad\varphi^{\natural}(\eta)=\varphi(\zeta(\eta))\quad\forall\eta\in\Omega_{1}\,,\] \[\mathrm{D}_{x}\varphi(x_{1},x_{2},x_{3}) =\begin{pmatrix}\partial_{\eta_{1}}\varphi_{1}^{\natural}(\eta)&\partial_{\eta_{2}}\varphi_{1}^{\natural}(\eta)&\frac{1}{h}\partial_{\eta_{3}}\varphi_{1}^{\natural}(\eta)\\ \partial_{\eta_{1}}\varphi_{2}^{\natural}(\eta)&\partial_{\eta_{2}}\varphi_{2}^{\natural}(\eta)&\frac{1}{h}\partial_{\eta_{3}}\varphi_{2}^{\natural}(\eta)\\ \partial_{\eta_{1}}\varphi_{3}^{\natural}(\eta)&\partial_{\eta_{2}}\varphi_{3}^{\natural}(\eta)&\frac{1}{h}\partial_{\eta_{3}}\varphi_{3}^{\natural}(\eta)\end{pmatrix}=\mathrm{D}_{\eta}^{h}\varphi^{\natural}(\eta)=:F_{h}^{\natural}\,. \tag{3.2}\]
Now we will do the same process for the microrotation tensor \(\overline{R}_{h}^{\natural}\colon\Omega_{1}\to\mathrm{SO}(3)\)
\[\overline{R}(x_{1},x_{2},x_{3})=\overline{R}_{h}^{\natural}(\zeta^{-1}(x_{1}, x_{2},x_{3}))\qquad\forall x\in\Omega_{h}\,;\,\,\,\overline{R}_{h}^{\natural}( \eta)=\overline{R}(\zeta(\eta))\,,\quad\forall\eta\in\Omega_{1}\,. \tag{3.3}\]
With this, the non-symmetric stretch tensor expressed at a point of \(\Omega_{1}\) is given by
\[\overline{U}_{e}^{\natural}=\overline{R}_{h}^{\natural,T}F_{h}^{\natural}= \overline{R}_{h}^{\natural,T}\mathrm{D}_{\eta}^{\natural}\varphi^{\natural}( \eta)\,. \tag{3.4}\]
and
\[\Gamma^{\natural}_{e,h}=\Big{(}\text{axl}(\overline{R}^{\natural,T}_{h}\,\partial_ {\eta_{1}}\overline{R}^{\natural}_{h})\,|\,\text{axl}(\overline{R}^{\natural,T}_ {h}\,\partial_{\eta_{2}}\overline{R}^{\natural}_{h})\,|\,\frac{1}{h}\text{axl}( \overline{R}^{\natural,T}_{h}\,\partial_{\eta_{3}}\overline{R}^{\natural}_{h}) \,\Big{)}. \tag{3.5}\]
The next step, in order to apply the \(\Gamma\)-convergence technique, is to transform the minimization problem onto the _fixed domain_ \(\Omega_{1}\), which is independent of the thickness \(h\). According to the results from the previous subsection, the original three-dimensional variational problem (2.1) is equivalent to the following minimization problem on \(\Omega_{1}\)
\[I^{\natural}_{h}(\varphi^{\natural},\text{D}^{h}_{\eta}\varphi^{\natural}, \overline{R}^{\natural}_{h},\Gamma^{\natural}_{e,h})=\int_{\Omega_{1}}\;h\, \left[\Big{(}W_{\text{mp}}(U^{\natural}_{e,h})+\widetilde{W}_{\text{curv}}( \Gamma^{\natural}_{e,h})\Big{)}\right]\,dV_{\eta}\quad\mapsto\quad\min\;\text{ w.r.t}\;(\varphi^{\natural},\overline{R}^{\natural}_{h})\,, \tag{3.6}\]
where
\[W_{\text{mp}}(U^{\natural}_{e,h}) = \mu\,\|\text{sym}(U^{\natural}_{e,h}-\mathbb{1}_{3})\|^{2}+\mu_ {c}\,\|\,\text{skew}(U^{\natural}_{e,h}-\mathbb{1}_{3})\|^{2}+\frac{\lambda} {2}[\text{tr}(\text{sym}(U^{\natural}_{e,h}-\mathbb{1}_{3}))]^{2}\,,\] \[\widetilde{W}_{\text{curv}}(\Gamma^{\natural}_{e,h}) = \mu\,L^{2}_{c}\,\Big{(}a_{1}\,\|\text{dev}\,\text{sym}\,\Gamma^{ \natural}_{e,h}\|^{2}+a_{2}\,\|\text{skew}\,\Gamma^{\natural}_{e,h}\|^{2}+\,a _{3}\,[\text{tr}(\Gamma^{\natural}_{e,h})]^{2}\Big{)}\] \[= \mu\,L^{2}_{c}\,\Big{(}b_{1}\,\|\text{sym}\,\Gamma^{\natural}_{ e,h}\|^{2}+b_{2}\,\|\text{skew}\,\,\Gamma^{\natural}_{e,h}\|^{2}+\,b_{3}\,[ \text{tr}(\Gamma^{\natural}_{e,h})]^{2}\Big{)}\;,\]
where \(a_{1}=b_{1}\), \(a_{2}=b_{2}\) and \(a_{3}=b_{3}+\frac{b_{1}}{3}\).
In the article [34] one aim of the authors was to find the \(\Gamma\)-limit of the family of functionals which is related to
\[\mathcal{I}^{\natural}_{h}(\varphi^{\natural},\mathrm{D}^{h}_{\eta}\varphi^{\natural},\overline{R}^{\natural}_{h},\Gamma^{\natural}_{e,h})=\begin{cases}\frac{1}{h}\,I^{\natural}_{h}(\varphi^{\natural},\mathrm{D}^{h}_{\eta}\varphi^{\natural},\overline{R}^{\natural}_{h},\Gamma^{\natural}_{e,h})&\quad\text{if }\;(\varphi^{\natural},\overline{R}^{\natural}_{h})\in\mathcal{S}^{\prime},\\ +\infty&\quad\text{else in }X,\end{cases} \tag{3.8}\]
where
\[X :=\{(\varphi^{\natural},\overline{R}^{\natural}_{h})\in\mathrm{L}^{2}(\Omega_{1},\mathbb{R}^{3})\times\mathrm{L}^{2}(\Omega_{1},\mathrm{SO}(3))\}\,, \tag{3.9}\] \[\mathcal{S}^{\prime} :=\{(\varphi^{\natural},\overline{R}^{\natural}_{h})\in\mathrm{H}^{1}(\Omega_{1},\mathbb{R}^{3})\times\mathrm{H}^{1}(\Omega_{1},\mathrm{SO}(3))\,\big|\;\varphi^{\natural}|_{\partial\Omega_{1}}(\eta)=\varphi^{\natural}_{d}(\eta)\}\,.\]
That means, we seek an energy functional expressed only in terms of the weak limit of a subsequence of \((\varphi^{\natural}_{h_{j}},\overline{R}^{\natural}_{h_{j}})\in X\) as \(h_{j}\) goes to zero; in other words, as we will see, an energy functional depending only on quantities defined on the planar midsurface \(\omega\). However, in [34] the authors have considered a different Cosserat-curvature energy, based on the Cosserat-curvature tensor \(\widehat{\mathfrak{K}}=(\overline{R}^{T}\mathrm{D}(\overline{R}.e_{1})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{2})\,|\,\overline{R}^{T}\mathrm{D}(\overline{R}.e_{3}))\in\mathbb{R}^{3\times 3\times 3}\), which in the simplest form reads
\[\widehat{W}_{\mathrm{curv}}(\widehat{\mathfrak{K}})=\mu\frac{L^{2}_{c}}{12}\Big(\alpha_{1}\|\mathrm{sym}\,\widehat{\mathfrak{K}}\|^{2}+\alpha_{2}\|\,\mathrm{skew}\,\widehat{\mathfrak{K}}\|^{2}+\alpha_{3}[\mathrm{tr}(\widehat{\mathfrak{K}})]^{2}\Big)\,, \tag{3.10}\]
and no explicit form of the homogenized curvature energy has been computed (perhaps it is not even possible to compute it for a curved initial configuration). In fact, \(\widehat{\mathfrak{K}}\) is not isotropic and has to be avoided in an isotropic model, as seen above.
In order to construct the \(\Gamma\)-limit there is the need to solve two auxiliary optimization problems, i.e.,
O1: the optimization problem which for each pair \((m,\overline{R}_{0})\), where \(m:\omega\to\mathbb{R}^{3}\), \(\overline{R}_{0}:\omega\to\mathrm{SO}(3)\), defines the homogenized membrane energy \[W^{\mathrm{hom,plate}}_{\mathrm{mp}}(\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}):=\inf_{\widetilde{d}\in\mathbb{R}^{3}}\,W_{\mathrm{mp}}\Big(\overline{R}^{T}_{0}(\mathrm{D}m|\widetilde{d})\Big)=\inf_{\widetilde{d}\in\mathbb{R}^{3}}\,W_{\mathrm{mp}}\Big(\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}-(0|0|\widetilde{d})\Big), \tag{3.11}\] where \(\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}:=\overline{R}^{T}_{0}(\mathrm{D}m|0)-\mathbb{1}^{\flat}_{2}\,\) denotes the _elastic strain tensor_ for the flat Cosserat-shell model.
O2: the optimization problem which for each \(\overline{R}_{0}:\omega\to\mathrm{SO}(3)\) defines the homogenized curvature energy
\[\widetilde{W}^{\mathrm{hom,plate}}_{\mathrm{curv}}(\mathcal{K}^{ \mathrm{plate}}_{\overline{R}_{0}}): =\widetilde{W}^{*}_{\mathrm{curv}}\Big{(}\mathrm{axl}(\overline{R }_{0}^{T}\,\partial_{\eta_{1}}\overline{R}_{0})\,|\,\mathrm{axl}(\overline{R} _{0}^{T}\,\partial_{\eta_{2}}\overline{R}_{0})\,|\,\,\mathrm{axl}\,(A^{*})\, \Big{)} \tag{3.12}\] \[=\inf_{A\in\mathfrak{so}(3)}\widetilde{W}_{\mathrm{curv}}\Big{(} \mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{1}}\overline{R}_{0})\,|\, \mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{2}}\overline{R}_{0})\,|\, \,\mathrm{axl}\,(A)\,\Big{)}\]
where \(\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}:\,=\,\Big{(}\mathrm{axl}( \overline{R}_{0}^{T}\,\partial_{x_{1}}\overline{R}_{0})\,|\,\mathrm{axl}( \overline{R}_{0}^{T}\,\partial_{x_{2}}\overline{R}_{0})\,|0\Big{)}\not\in \mathrm{Sym}(3)\) denotes the elastic bending-curvature tensor for the flat Cosserat-shell model.
The first optimization problem O1 was solved in [34], giving

\[W^{\mathrm{hom,plate}}_{\mathrm{mp}}(\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}})=W_{\mathrm{shell}}\big([\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\parallel}\big)+\frac{2\,\mu\,\mu_{\mathrm{c}}}{\mu_{\mathrm{c}}+\mu}\,\|[\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\perp}\|^{2},\]
with the orthogonal decomposition in the tangential plane and in the normal direction. Here, for vectors \(\xi,\eta\in\mathbb{R}^{n}\), we consider the tensor product \((\xi\otimes\eta)_{ij}=\xi_{i}\,\eta_{j}\), and we denote by \(\overline{R}_{i}=\overline{R}\,e_{i}\) the columns of the matrix \(\overline{R}=(\overline{R}_{1}\,|\,\overline{R}_{2}\,|\,\overline{R}_{3})\). Since \((\mathbb{1}_{3}-e_{3}\otimes e_{3})\overline{R}^{T}=(\overline{R}_{1}\,|\,\overline{R}_{2}\,|\,0)^{T}\), it follows that \([\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\parallel}=(\overline{R}_{1}\,|\,\overline{R}_{2}\,|\,0)^{T}(\mathrm{D}m|0)-\mathbb{1}_{2}^{\flat}=((\overline{R}_{1}\,|\,\overline{R}_{2})^{T}\,\mathrm{D}m)^{\flat}-\mathbb{1}_{2}^{\flat}\), while

\[[\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\perp}=(0\,|\,0\,|\,\overline{R}_{3})^{T}(\mathrm{D}m|0)=\begin{pmatrix}0&0&0\\ 0&0&0\\ \langle\overline{R}_{3},\partial_{x_{1}}m\rangle&\langle\overline{R}_{3},\partial_{x_{2}}m\rangle&0\end{pmatrix}\,, \tag{3.13}\]
and
\[W_{\mathrm{shell}}\big([\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\parallel}\big)=\,\mu\,\|\mathrm{sym}\,[\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\parallel}\|^{2}+\mu_{\mathrm{c}}\,\|\mathrm{skew}\,[\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\parallel}\|^{2}+\,\frac{\lambda\,\mu}{\lambda+2\,\mu}\,\left[\mathrm{tr}([\mathcal{E}^{\mathrm{plate}}_{m,\overline{R}_{0}}]^{\parallel})\right]^{2}. \tag{3.14}\]
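The elimination of the third column behind this formula can be replayed symbolically. The following sympy sketch (our addition, under the flat-case assumption that \(\mathcal{E}^{\mathrm{plate}}\) has vanishing third column) minimizes over the free column and recovers exactly the homogenized expression:

```python
import sympy as sp

mu, muc, lam = sp.symbols('mu mu_c lambda', positive=True)
E = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'E{i}{j}'))
E[:, 2] = sp.zeros(3, 1)          # E^plate = R0^T (Dm|0) - 1_2^flat: third column vanishes
d = sp.Matrix(sp.symbols('d1 d2 d3'))
e3 = sp.Matrix([0, 0, 1])

sym  = lambda M: (M + M.T) / 2
skew = lambda M: (M - M.T) / 2
fro2 = lambda M: sum(x**2 for x in M)
Wmp  = lambda X: mu*fro2(sym(X)) + muc*fro2(skew(X)) + lam/2*X.trace()**2

W = Wmp(E - d * e3.T)             # W_mp(E^plate - (0|0|d)), cf. (3.11)
sol = sp.solve([W.diff(di) for di in d], list(d), dict=True)[0]
Whom = W.subs(sol)

Epar  = (sp.eye(3) - e3*e3.T) * E   # tangential part (first two rows)
Eperp = e3*e3.T * E                 # normal part (third row)
target = (mu*fro2(sym(Epar)) + muc*fro2(skew(Epar))
          + lam*mu/(lam + 2*mu)*Epar.trace()**2
          + 2*mu*muc/(muc + mu)*fro2(Eperp))
print(sp.simplify(Whom - target))   # -> 0
```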
As regards the second optimization problem O2, in [34] the authors had to solve a similar problem, but corresponding to the curvature energy given by (3.10); i.e., the dimensionally reduced homogenized curvature energy is defined through
\[W^{\mathrm{hom,\,plate}}_{\mathrm{curv}}(\mathcal{A})=\inf_{u,v,w\in\mathbb{R }^{3}}\widehat{W}_{\mathrm{curv}}\Big{(}(\mathcal{A}e_{1}|u),(\mathcal{A}e_{2 }|v),(\mathcal{A}e_{3}|w)\Big{)}\,, \tag{3.15}\]
where \(\mathcal{A}:=\big(\overline{R}_{0}^{T}(\partial_{x_{1}}(\overline{R}_{0}e_{1})|\partial_{x_{2}}(\overline{R}_{0}e_{1}))\,,\,\overline{R}_{0}^{T}(\partial_{x_{1}}(\overline{R}_{0}e_{2})|\partial_{x_{2}}(\overline{R}_{0}e_{2}))\,,\,\overline{R}_{0}^{T}(\partial_{x_{1}}(\overline{R}_{0}e_{3})|\partial_{x_{2}}(\overline{R}_{0}e_{3}))\big)\). In this representation, calculating the homogenized energy looks more difficult, and it was not explicitly done.
In this section we show that, when the curvature energy depends on the Cosserat-curvature tensor \(\alpha\) (equivalently, on the three-dimensional wryness tensor \(\Gamma\)), the calculation of the homogenized curvature energy (i.e., the solution of O2) is easier and analytically achievable.
**Theorem 3.1**.: _The homogenized curvature energy for a flat Cosserat-shell model is given by_
\[W^{\mathrm{hom,plate}}_{\mathrm{curv}}(\Gamma)=\mu L_{\mathrm{c}}^{2}\Big(b_{1}\|\mathrm{sym}\,[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\parallel}\|^{2}+b_{2}\|\,\mathrm{skew}\,[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\parallel}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}\,\mathrm{tr}([\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\parallel})^{2}+\frac{2\,b_{1}b_{2}}{b_{1}+b_{2}}\,\|[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\perp}\|^{2}\Big)\]
_with the orthogonal decomposition in the tangential plane and in the normal direction_
\[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}=[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\parallel}+[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\perp},\qquad[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\parallel}:=(\mathbb{1}_{3}-e_{3}\otimes e_{3})\,\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}},\qquad[\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}]^{\perp}:=e_{3}\otimes e_{3}\,\mathcal{K}^{\mathrm{plate}}_{\overline{R}_{0}}\,. \tag{3.16}\]
Proof.: Let us define \(\Gamma_{0}=(\Gamma_{1}^{0}\,|\,\Gamma_{2}^{0}\,|\,c):=\Big(\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{1}}\overline{R}_{0})\,|\,\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{2}}\overline{R}_{0})\,|\,\mathrm{axl}\,(A)\Big)\), with \(c:=\mathrm{axl}(A)\in\mathbb{R}^{3}\). Then the homogenized curvature energy turns out to be
\[W^{\mathrm{hom}}_{\mathrm{curv}}((\Gamma_{1}^{0}\,|\,\Gamma_{2}^{0}))= \widetilde{W}_{\mathrm{curv}}(\Gamma_{1}^{0}\,|\,\Gamma_{2}^{0}|c^{*})=\inf_{c \in\mathbb{R}^{3}}W_{\mathrm{curv}}((\Gamma_{1}^{0}\,|\,\Gamma_{2}^{0}|c))\,. \tag{3.17}\]
By using the relation (3.1), we compute the sym, skew and trace parts:
\[\mathrm{sym}\,\Gamma^{0}=\begin{pmatrix}\Gamma^{0}_{11}&\frac{\Gamma^{0}_{12}+\Gamma^{0}_{21}}{2}&\frac{c_{1}+\Gamma^{0}_{31}}{2}\\ \frac{\Gamma^{0}_{12}+\Gamma^{0}_{21}}{2}&\Gamma^{0}_{22}&\frac{c_{2}+\Gamma^{0}_{32}}{2}\\ \frac{\Gamma^{0}_{31}+c_{1}}{2}&\frac{\Gamma^{0}_{32}+c_{2}}{2}&c_{3}\end{pmatrix}\,,\qquad\mathrm{skew}\,\Gamma^{0}=\begin{pmatrix}0&\frac{\Gamma^{0}_{12}-\Gamma^{0}_{21}}{2}&\frac{c_{1}-\Gamma^{0}_{31}}{2}\\ \frac{\Gamma^{0}_{21}-\Gamma^{0}_{12}}{2}&0&\frac{c_{2}-\Gamma^{0}_{32}}{2}\\ \frac{\Gamma^{0}_{31}-c_{1}}{2}&\frac{\Gamma^{0}_{32}-c_{2}}{2}&0\end{pmatrix}\,, \tag{3.19}\]
and \(\mathrm{tr}(\Gamma_{0})=(\Gamma^{0}_{11}+\Gamma^{0}_{22}+c_{3})\,.\) We have
\[W_{\mathrm{curv}}(\Gamma_{0}) =\mu L^{2}_{c}\Big{(}b_{1}\big{(}(\Gamma^{0}_{11})^{2}+\frac{1}{2 }(\Gamma^{0}_{12}+\Gamma^{0}_{21})^{2}+\frac{1}{2}(c_{1}+\Gamma^{0}_{31})^{2}+ (\Gamma^{0}_{22})^{2}+\frac{1}{2}(c_{2}+\Gamma^{0}_{32})^{2}+c_{3}^{2}\big{)} \tag{3.20}\] \[\qquad\qquad+b_{2}\big{(}\frac{1}{2}(\Gamma^{0}_{12}-\Gamma^{0} _{21})^{2}+\frac{1}{2}(c_{1}-\Gamma^{0}_{31})^{2}+\frac{1}{2}(c_{2}-\Gamma^{0 }_{32})^{2}\big{)}+b_{3}(\Gamma^{0}_{11}+\Gamma^{0}_{22}+c_{3})^{2}\Big{)}\,.\]
But this is an easy optimization problem in \(\mathbb{R}^{3}\). Indeed, the stationary points are
\[0 =\frac{\partial W_{\mathrm{curv}}(\Gamma_{0})}{\partial c_{1}}=b_ {1}(c_{1}+\Gamma^{0}_{31})+b_{2}(c_{1}-\Gamma^{0}_{31})=(b_{1}+b_{2})c_{1}+(b _{1}-b_{2})\Gamma^{0}_{31}\quad\Rightarrow\quad c_{1}=\frac{b_{2}-b_{1}}{b_{ 1}+b_{2}}\Gamma^{0}_{31}\,,\] \[0 =\frac{\partial W_{\mathrm{curv}}(\Gamma_{0})}{\partial c_{2}}=b _{1}(c_{2}+\Gamma^{0}_{32})+b_{2}(c_{2}-\Gamma^{0}_{32})=(b_{1}+b_{2})c_{2}+( b_{1}-b_{2})\Gamma^{0}_{32}\quad\Rightarrow\quad c_{2}=\frac{b_{2}-b_{1}}{b_{ 1}+b_{2}}\Gamma^{0}_{32}\,, \tag{3.21}\] \[0 =\frac{\partial W_{\mathrm{curv}}(\Gamma_{0})}{\partial c_{3}}=b _{1}c_{3}+b_{3}(\Gamma^{0}_{11}+\Gamma^{0}_{22}+c_{3})\quad\Rightarrow\quad c _{3}=\frac{-b_{3}}{b_{1}+b_{3}}(\Gamma^{0}_{11}+\Gamma^{0}_{22})\,,\]
and since the matrix defining the quadratic function in \(c_{1},c_{2},c_{3}\) is positive definite, this stationary point is also the minimizer. Inserting these values into \(W_{\mathrm{curv}}\), we find \(W_{\mathrm{curv}}^{\mathrm{hom,\;plate}}\) given by
\[W_{\mathrm{curv}}^{\mathrm{hom,\;plate}}(\Gamma) =\mu L^{2}_{c}\Big(b_{1}\big((\Gamma^{0}_{11})^{2}+(\Gamma^{0}_{22})^{2}+\big(\tfrac{-b_{3}}{b_{1}+b_{3}}(\Gamma^{0}_{11}+\Gamma^{0}_{22})\big)^{2}+\tfrac{1}{2}(\Gamma^{0}_{21}+\Gamma^{0}_{12})^{2}+\tfrac{1}{2}\big(\tfrac{b_{2}-b_{1}}{b_{1}+b_{2}}\Gamma^{0}_{31}+\Gamma^{0}_{31}\big)^{2}+\tfrac{1}{2}\big(\tfrac{b_{2}-b_{1}}{b_{1}+b_{2}}\Gamma^{0}_{32}+\Gamma^{0}_{32}\big)^{2}\big)\] \[\qquad\qquad+b_{2}\big(\tfrac{1}{2}(\Gamma^{0}_{12}-\Gamma^{0}_{21})^{2}+\tfrac{1}{2}\big(\tfrac{b_{2}-b_{1}}{b_{1}+b_{2}}\Gamma^{0}_{31}-\Gamma^{0}_{31}\big)^{2}+\tfrac{1}{2}\big(\tfrac{b_{2}-b_{1}}{b_{1}+b_{2}}\Gamma^{0}_{32}-\Gamma^{0}_{32}\big)^{2}\big)+b_{3}\big((\Gamma^{0}_{11}+\Gamma^{0}_{22})-\tfrac{b_{3}}{b_{1}+b_{3}}(\Gamma^{0}_{11}+\Gamma^{0}_{22})\big)^{2}\Big)\] \[=\mu L^{2}_{c}\Big(b_{1}\big((\Gamma^{0}_{11})^{2}+(\Gamma^{0}_{22})^{2}+\tfrac{b_{3}^{2}}{(b_{1}+b_{3})^{2}}(\Gamma^{0}_{11}+\Gamma^{0}_{22})^{2}+\tfrac{1}{2}(\Gamma^{0}_{21}+\Gamma^{0}_{12})^{2}+2\tfrac{b_{2}^{2}}{(b_{1}+b_{2})^{2}}(\Gamma^{0}_{31})^{2}+2\tfrac{b_{2}^{2}}{(b_{1}+b_{2})^{2}}(\Gamma^{0}_{32})^{2}\big)\] \[\qquad\qquad+b_{2}\big(\tfrac{1}{2}(\Gamma^{0}_{12}-\Gamma^{0}_{21})^{2}+2\tfrac{b_{1}^{2}}{(b_{1}+b_{2})^{2}}(\Gamma^{0}_{31})^{2}+2\tfrac{b_{1}^{2}}{(b_{1}+b_{2})^{2}}(\Gamma^{0}_{32})^{2}\big)+b_{3}\tfrac{b_{1}^{2}}{(b_{1}+b_{3})^{2}}(\Gamma^{0}_{11}+\Gamma^{0}_{22})^{2}\Big) \tag{3.22}\] \[=\mu L^{2}_{c}\Big(b_{1}\big((\Gamma^{0}_{11})^{2}+(\Gamma^{0}_{22})^{2}\big)+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}(\Gamma^{0}_{11}+\Gamma^{0}_{22})^{2}+\frac{b_{1}}{2}(\Gamma^{0}_{21}+\Gamma^{0}_{12})^{2}+2\frac{b_{1}b_{2}}{(b_{1}+b_{2})}(\Gamma^{0}_{31})^{2}+2\frac{b_{1}b_{2}}{(b_{1}+b_{2})}(\Gamma^{0}_{32})^{2}+\frac{b_{2}}{2}(\Gamma^{0}_{21}-\Gamma^{0}_{12})^{2}\Big)\] \[=\mu L^{2}_{c}\Big(b_{1}\|\mathrm{sym}\,\Gamma_{\square}\|^{2}+b_{2}\|\,\mathrm{skew}\,\Gamma_{\square}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}\,\mathrm{tr}(\Gamma_{\square})^{2}+\frac{2b_{1}b_{2}}{(b_{1}+b_{2})}\,\Big\|\begin{pmatrix}\Gamma^{0}_{31}\\ \Gamma^{0}_{32}\end{pmatrix}\Big\|^{2}\Big)\,,\]
where \(\Gamma_{\square}=\begin{pmatrix}\Gamma^{0}_{11}&\Gamma^{0}_{12}\\ \Gamma^{0}_{21}&\Gamma^{0}_{22}\end{pmatrix}\).
Therefore, the homogenized curvature energy for the flat Cosserat-shell model is
\[W_{\mathrm{curv}}^{\mathrm{hom,\;plate}}(\Gamma)=\mu L^{2}_{c}\Big(b_{1}\|\mathrm{sym}\,\Gamma_{\square}\|^{2}+b_{2}\|\,\mathrm{skew}\,\Gamma_{\square}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}\,\mathrm{tr}(\Gamma_{\square})^{2}+\frac{2b_{1}b_{2}}{(b_{1}+b_{2})}\,\Big\|\begin{pmatrix}\Gamma^{0}_{31}\\ \Gamma^{0}_{32}\end{pmatrix}\Big\|^{2}\Big)\,.\]
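The minimization above is elementary but error-prone by hand; the following sympy sketch (our addition) repeats it symbolically and confirms the closed form of Theorem 3.1:

```python
import sympy as sp

b1, b2, b3, mu, Lc = sp.symbols('b1 b2 b3 mu L_c', positive=True)
G = sp.Matrix(3, 3, lambda i, j: sp.Symbol(f'G{i}{j}'))
c = sp.Matrix(sp.symbols('c1 c2 c3'))
G0 = G.copy()
G0[:, 2] = c                        # third column = axl(A) = c, as in the proof

sym  = lambda M: (M + M.T) / 2
skew = lambda M: (M - M.T) / 2
fro2 = lambda M: sum(x**2 for x in M)
W = mu*Lc**2*(b1*fro2(sym(G0)) + b2*fro2(skew(G0)) + b3*G0.trace()**2)

sol  = sp.solve([W.diff(ci) for ci in c], list(c), dict=True)[0]
Whom = W.subs(sol)

Gsq = G[0:2, 0:2]                   # Gamma_square, the in-plane 2x2 block
target = mu*Lc**2*(b1*fro2(sym(Gsq)) + b2*fro2(skew(Gsq))
                   + b1*b3/(b1 + b3)*Gsq.trace()**2
                   + 2*b1*b2/(b1 + b2)*(G[2, 0]**2 + G[2, 1]**2))
print(sp.simplify(Whom - target))   # -> 0
```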
Since we now have the explicit form of both homogenized energies (membrane and curvature), we are ready to indicate the exact form of the \(\Gamma\)-limit of the sequence of functionals \(\mathcal{I}_{h_{j}}\colon X\to\overline{\mathbb{R}}\) and to provide the following theorem, see [41].
**Theorem 3.2**.: _Assume the boundary data satisfy the conditions_
\[\varphi_{d}^{\natural}=\varphi_{d}\big|_{\partial\Omega_{1}}\ \text{(in the sense of traces) for}\ \varphi_{d}\in\mathrm{H}^{1}(\Omega_{1};\mathbb{R}^{3}),\qquad\Gamma_{1}\subset\partial\omega, \tag{3.24}\]
_and let the constitutive parameters satisfy_
\[\mu\,>0,\qquad\quad\kappa>0,\qquad\quad\mu_{\mathrm{c}}>0,\qquad\quad a_{1}>0, \qquad a_{2}>0,\qquad\quad a_{3}>0\,. \tag{3.25}\]
_Then, for any sequence \((\varphi_{h_{j}}^{\natural},\overline{R}_{h_{j}}^{\natural})\in X\) such that \((\varphi_{h_{j}}^{\natural},\overline{R}_{h_{j}}^{\natural})\to(\varphi_{0}, \overline{R}_{0})\) as \(h_{j}\to 0\), the sequence of functionals \(\mathcal{I}_{h_{j}}\colon X\to\overline{\mathbb{R}}\) from (3.8) \(\Gamma\)-converges to the limit energy functional \(\mathcal{I}_{0}\colon X\to\overline{\mathbb{R}}\) defined by_
\[\mathcal{I}_{0}(m,\overline{R}_{0})=\begin{cases}\int_{\omega}[W_{\mathrm{mp} }^{\mathrm{hom,plate}}(\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}})+ \widetilde{W}_{\mathrm{curv}}^{\mathrm{hom,plate}}(\mathcal{K}_{\overline{R}_ {0}}^{\mathrm{plate}})]\;d\omega&\text{if}\quad(m,\overline{R}_{0})\in \mathcal{S}_{\omega}^{\prime}\,,\\ +\infty&\text{else in }X,\end{cases} \tag{3.26}\]
_where_
\[m(x_{1},x_{2}) :=\varphi_{0}(x_{1},x_{2})=\lim_{h_{j}\to 0}\varphi_{h_{j}}^{\natural}(x_{1},x_{2},\frac{1}{h_{j}}x_{3}),\qquad\overline{R}_{0}(x_{1},x_{2})=\lim_{h_{j}\to 0}\overline{R}_{h_{j}}^{\natural}(x_{1},x_{2},\frac{1}{h_{j}}x_{3}),\] \[\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}} =\overline{R}_{0}^{T}(\mathrm{D}m|0)-\mathbb{1}_{2}^{\flat}\,,\qquad\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}}=\Big(\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{x_{1}}\overline{R}_{0})\,|\,\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{x_{2}}\overline{R}_{0})\,|0\Big)\not\in\mathrm{Sym}(3)\,,\]
_and_
\[W_{\mathrm{mp}}^{\mathrm{hom,plate}}(\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}) =\,\mu\,\|\mathrm{sym}\,[\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel}\|^{2}+\mu_{\mathrm{c}}\,\|\mathrm{skew}\,[\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel}\|^{2}+\frac{\lambda\,\mu}{\lambda+2\,\mu}\,\big[\mathrm{tr}([\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel})\big]^{2}+\frac{2\,\mu\,\mu_{\mathrm{c}}}{\mu_{\mathrm{c}}+\mu}\,\|[\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}]^{\perp}\|^{2} \tag{3.27}\] \[=W_{\mathrm{shell}}\big([\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel}\big)+\frac{2\,\mu\,\mu_{\mathrm{c}}}{\mu_{\mathrm{c}}+\mu}\,\|[\mathcal{E}_{m,\overline{R}_{0}}^{\mathrm{plate}}]^{\perp}\|^{2},\] \[\widetilde{W}_{\mathrm{curv}}^{\mathrm{hom,plate}}(\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}}) =\inf_{A\in\mathfrak{so}(3)}\widetilde{W}_{\mathrm{curv}}\Big(\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{1}}\overline{R}_{0})\,|\,\mathrm{axl}(\overline{R}_{0}^{T}\,\partial_{\eta_{2}}\overline{R}_{0})\,|\,\mathrm{axl}(A)\Big)\] \[=\mu L_{c}^{2}\Big(b_{1}\|\mathrm{sym}\,[\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel}\|^{2}+b_{2}\|\,\mathrm{skew}\,[\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}\,\mathrm{tr}([\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}}]^{\parallel})^{2}+\frac{2\,b_{1}b_{2}}{b_{1}+b_{2}}\,\|[\mathcal{K}_{\overline{R}_{0}}^{\mathrm{plate}}]^{\perp}\|^{2}\Big)\,.\]
## 4 Homogenized curvature energy for the curved Cosserat-shell model via \(\Gamma\)-convergence
In this section we consider the case of a curved Cosserat-shell model and we give the explicit form and the detailed calculation of the homogenized curvature energy. In comparison to the flat Cosserat-shell model, the calculations are more complicated. Hence, let us consider an elastic material which in its reference configuration fills the three dimensional _shell-like thin_ domain \(\Omega_{\xi}\subset\mathbb{R}^{3}\), i.e., we assume that there exists a \(C^{1}\)-diffeomorphism \(\Theta\colon\mathbb{R}^{3}\to\mathbb{R}^{3}\) with \(\Theta(x_{1},x_{2},x_{3}):=(\xi_{1},\xi_{2},\xi_{3})\) such that \(\Theta(\Omega_{h})=\Omega_{\xi}\) and \(\omega_{\xi}=\Theta(\omega\times\{0\})\), where \(\Omega_{h}=\omega\times\big[-\frac{h}{2},\frac{h}{2}\big]\subset\mathbb{R}^{3}\), with \(\omega\subset\mathbb{R}^{2}\) a bounded domain with Lipschitz boundary \(\partial\omega\). The scalar \(0<h\ll 1\) is called the _thickness_ of the shell, while the domain \(\Omega_{h}\) is called the _fictitious Cartesian configuration_ of the body. In fact, in this paper we consider the following diffeomorphism \(\Theta\colon\mathbb{R}^{3}\to\mathbb{R}^{3}\), which describes the curved surface of the shell,
\[\Theta(x_{1},x_{2},x_{3})=y_{0}(x_{1},x_{2})+x_{3}\,n_{0}(x_{1},x_{2})\,, \tag{4.1}\]
where \(y_{0}\colon\omega\to R^{3}\) is a \(C^{2}(\omega)\)-function and \(n_{0}=\frac{\partial_{x_{1}}y_{0}\times\partial_{x_{2}}y_{0}}{\|\partial_{x_{1}}y_ {0}\times\partial_{x_{2}}y_{0}\|}\) is the unit normal vector on \(\omega_{\xi}\). Remark that
\[\mathrm{D}_{x}\Theta(x_{3})\,=\,(\mathrm{D}y_{0}|n_{0})+x_{3}(\mathrm{D}n_{0}|0 )\ \,\forall\,x_{3}\in\left(-\frac{h}{2},\frac{h}{2}\right),\ \,\mathrm{D}_{x}\Theta(0)\,=\,(\mathrm{D}y_{0}|\,n_{0}),\ \ [ \mathrm{D}_{x}\Theta(0)]^{-T}\,e_{3}\,=n_{0}, \tag{4.2}\]
and \(\det\mathrm{D}_{x}\Theta(0)=\det(\mathrm{D}y_{0}|n_{0})=\sqrt{\det[(\mathrm{D}y_{0} )^{T}\mathrm{D}y_{0}]}\) represents the surface element. We also have the polar decomposition \(\mathrm{D}_{x}\Theta(0)=Q_{0}\,U_{0}\), where
\[Q_{0}=\mathrm{polar}(\mathrm{D}_{x}\Theta(0))=\mathrm{polar}([\mathrm{D}_{x} \Theta(0)]^{-T})\in\mathrm{SO}(3)\quad\text{and}\quad U_{0}\in\mathrm{Sym}^{+ }(3)\,. \tag{4.3}\]
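To make these objects concrete, here is a small sympy sketch (our addition; the cylindrical midsurface is an assumed illustrative choice) that builds \(\Theta\), computes \(n_{0}\), and checks that \([\mathrm{D}_{x}\Theta(x_{3})]^{-T}e_{3}=n_{0}\) holds for every \(x_{3}\), not only at \(x_{3}=0\), because the first two columns of \(\mathrm{D}_{x}\Theta\) remain tangential:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)

# an assumed midsurface: a cylinder patch y0(x1, x2) = (cos x1, sin x1, x2)
y0 = sp.Matrix([sp.cos(x1), sp.sin(x1), x2])
t1, t2 = y0.diff(x1), y0.diff(x2)
n = t1.cross(t2)
n0 = n / sp.sqrt(n.dot(n))                     # unit normal on the midsurface

Theta  = y0 + x3 * n0                          # the diffeomorphism (4.1)
DTheta = sp.Matrix.hstack(Theta.diff(x1), Theta.diff(x2), Theta.diff(x3))
e3 = sp.Matrix([0, 0, 1])

print(sp.simplify(DTheta.subs(x3, 0) - sp.Matrix.hstack(t1, t2, n0)))  # = (Dy0|n0)
print(sp.simplify(DTheta.T.inv() * e3 - n0))   # -> zero vector, for every x3
```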
The first step in our shell model is to transform the problem to a variational problem defined on the fictitious flat configuration \(\Omega_{h}=\omega\times\big{[}-\frac{h}{2},\frac{h}{2}\big{]}\). The next step, in order to apply the \(\Gamma\)-convergence technique, is to transform the minimization problem onto the _fixed domain_\(\Omega_{1}\), which is independent of the thickness \(h\). These two steps were done in [41], the three-dimensional problem (2.1) (corresponding to the Cosserat-curvature tensor \(\alpha\)) being equivalent to the following minimization problem on \(\Omega_{1}\)
\[I_{h}^{\natural}(\varphi^{\natural},\mathrm{D}_{\eta}^{h} \varphi^{\natural},\overline{Q}_{e,h}^{\natural},\Gamma_{e,h}^{\natural})= \int_{\Omega_{1}}\Big{(}W_{\mathrm{mp}}(U_{e,h}^{\natural})+\widetilde{W}_{ \mathrm{curv}}(\Gamma_{e,h}^{\natural})\Big{)}\det(\mathrm{D}_{\eta}\zeta( \eta))\det((\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3}))\;dV_{\eta}\] \[=\int_{\Omega_{1}}\;h\;\Big{[}\Big{(}W_{\mathrm{mp}}(U_{e,h}^{ \natural})+\widetilde{W}_{\mathrm{curv}}(\Gamma_{e,h}^{\natural})\Big{)}\det ((\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3}))\Big{]}\;dV_{\eta}\mapsto\min\; \mathrm{w.r.t}\;(\varphi^{\natural},\overline{Q}_{e,h}^{\natural})\,, \tag{4.4}\]
where
\[W_{\mathrm{mp}}(U_{e,h}^{\natural}) =\;\mu\,\|\mathrm{sym}(U_{e,h}^{\natural}-\mathbb{1}_{3})\|^{2}+\mu_{c}\,\|\,\mathrm{skew}(U_{e,h}^{\natural}-\mathbb{1}_{3})\|^{2}+\frac{\lambda}{2}[\mathrm{tr}(\mathrm{sym}(U_{e,h}^{\natural}-\mathbb{1}_{3}))]^{2}\,,\] \[\widetilde{W}_{\mathrm{curv}}(\Gamma_{e,h}^{\natural}) =\;\mu\,L_{c}^{2}\,\Big(b_{1}\,\|\,\mathrm{sym}\,\Gamma_{e,h}^{\natural}\|^{2}+b_{2}\,\|\mathrm{skew}\,\Gamma_{e,h}^{\natural}\|^{2}+\,b_{3}\,[\mathrm{tr}(\Gamma_{e,h}^{\natural})]^{2}\Big)\;, \tag{4.5}\] \[U_{e,h}^{\natural} =\overline{Q}_{e,h}^{\natural,T}F_{h}^{\natural}[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}=\overline{Q}_{e,h}^{\natural,T}\mathrm{D}_{\eta}^{h}\varphi^{\natural}(\eta)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\,,\] \[\Gamma_{e,h}^{\natural} =\Big(\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{1}}\overline{Q}_{e,h}^{\natural})\,|\,\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{2}}\overline{Q}_{e,h}^{\natural})\,|\,\frac{1}{h}\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{3}}\overline{Q}_{e,h}^{\natural})\,\Big)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1},\]
with \((\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})\) the nonlinear scaling (see (3.2)) of \(\mathrm{D}_{x}\Theta\), \(F_{h}^{\natural}=\mathrm{D}_{\eta}^{h}\varphi^{\natural}\) the nonlinear scaling of the gradient of the mapping \(\varphi\colon\Omega_{h}\to\Omega_{\xi}\,,\;\varphi(x_{1},x_{2},x_{3})=\varphi_{\xi}(\Theta(x_{1},x_{2},x_{3}))\), and \(\overline{Q}_{e,h}^{\natural}\) the nonlinear scaling of the _elastic microrotation_ \(\overline{Q}_{e}\colon\Omega_{h}\to\mathrm{SO}(3)\) defined by \(\overline{Q}_{e}(x_{1},x_{2},x_{3}):=\overline{R}_{\xi}(\Theta(x_{1},x_{2},x_{3}))\,.\) Since for \(\eta_{3}=0\) the values of \(\mathrm{D}_{x}\Theta\), \(Q_{0}\), \(U_{0}\) expressed in terms of \((\eta_{1},\eta_{2},0)\) and \((x_{1},x_{2},0)\) coincide, we will omit the sign \(\cdot^{\natural}\) and we will understand from the context the variables under discussion, i.e.,
\[(\mathrm{D}_{x}\Theta)(0) :=(\mathrm{D}y_{0}\,|n_{0})=(\mathrm{D}_{x}\Theta)^{\natural}( \eta_{1},\eta_{2},0)\equiv(\mathrm{D}_{x}\Theta)(x_{1},x_{2},0), \tag{4.6}\] \[Q_{0}(0) :=Q_{0}^{\natural}(\eta_{1},\eta_{2},0)\equiv Q_{0}(x_{1},x_{2},0), \qquad\qquad U_{0}(0):=U_{0}^{\natural}(\eta_{1},\eta_{2},0)\equiv U_{0}(x_{1},x _{2},0).\]
In order to construct the \(\Gamma\)-limit of the rescaled energies
\[\mathcal{I}_{h}^{\natural}(\varphi^{\natural},\mathrm{D}_{\eta}^{h}\varphi^{\natural},\overline{Q}_{e,h}^{\natural},\Gamma_{e,h}^{\natural})=\begin{cases}\frac{1}{h}\,I_{h}^{\natural}(\varphi^{\natural},\mathrm{D}_{\eta}^{h}\varphi^{\natural},\overline{Q}_{e,h}^{\natural},\Gamma_{e,h}^{\natural})&\text{if }\;(\varphi^{\natural},\overline{Q}_{e,h}^{\natural})\in\mathcal{S}^{\prime},\\ +\infty&\text{else in }X,\end{cases} \tag{4.7}\]
for the curved Cosserat-shell model we have to solve the following **four (not only two, as for the flat Cosserat-shell model)** auxiliary optimization problems.
O1: For each \(\varphi^{\natural}:\Omega_{1}\to\mathbb{R}^{3}\) and \(\overline{Q}_{e,h}^{\natural}:\Omega_{1}\to\mathrm{SO}(3)\) we determine a vector \(d^{*}\in\mathbb{R}^{3}\) through
\[W_{\mathrm{mp}}^{\mathrm{hom},\natural}(\mathcal{E}_{\varphi^{ \natural},\overline{Q}_{e,h}^{\natural}}) :=W_{\mathrm{mp}}\Big{(}\overline{Q}_{e,h}^{\natural,T}(\mathrm{D}_{( \eta_{1},\eta_{2})}\varphi^{\natural}|d^{*})[(\mathrm{D}_{x}\Theta)^{\natural} (\eta_{3})]^{-1}\Big{)}\] \[=\inf_{c\in\mathbb{R}^{3}}W_{\mathrm{mp}}\Big{(}\overline{Q}_{e,h}^ {\natural,T}(\mathrm{D}_{(\eta_{1},\eta_{2})}\varphi^{\natural}|c)[(\mathrm{D}_{x }\Theta)^{\natural}(\eta_{3})]^{-1}\Big{)}, \tag{4.8}\]
where \(\mathcal{E}_{\varphi^{\natural},\overline{Q}_{e,h}^{\natural}}:=(\overline{Q}_{e,h}^{\natural,T}\mathrm{D}_{(\eta_{1},\eta_{2})}\varphi^{\natural}-(\mathrm{D}y_{0})^{\natural}|0)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\) represents the non-fully dimensionally reduced elastic shell strain tensor; here, "non-fully" means that the introduced quantities still depend on \(\eta_{3}\) and \(h\), because the elements \(\mathrm{D}_{(\eta_{1},\eta_{2})}\varphi^{\natural}\) still depend on \(\eta_{3}\) and \(\overline{Q}_{e,h}^{\natural,T}\) depends on \(h\).
O2: For each pair \((m,\overline{Q}_{e,0})\), where \(m:\omega\to\mathbb{R}^{3}\), \(\overline{Q}_{e,0}:\omega\to\mathrm{SO}(3)\), we determine the vector \(\vec{d}^{*}\in\mathbb{R}^{3}\) through \[W_{\mathrm{mp}}^{\mathrm{hom}}(\mathcal{E}_{m,\overline{Q}_{e,0}}):=W_{\mathrm{mp}}\Big(\overline{Q}_{e,0}^{T}(\mathrm{D}m|\vec{d}^{*})[(\mathrm{D}_{x}\Theta)(0)]^{-1}\Big) =\inf_{\widetilde{c}\in\mathbb{R}^{3}}W_{\mathrm{mp}}\Big(\overline{Q}_{e,0}^{T}(\mathrm{D}m|\widetilde{c})[(\mathrm{D}_{x}\Theta)(0)]^{-1}\Big) \tag{4.9}\] \[=\inf_{\widetilde{c}\in\mathbb{R}^{3}}W_{\mathrm{mp}}\Big(\mathcal{E}_{m,\overline{Q}_{e,0}}-(0|0|\widetilde{c})[(\mathrm{D}_{x}\Theta)(0)]^{-1}\Big),\] where \(\mathcal{E}_{m,\overline{Q}_{e,0}}:=(\overline{Q}_{e,0}^{T}\mathrm{D}m-\mathrm{D}y_{0}|0)[\mathrm{D}_{x}\Theta(0)]^{-1}\) represents the _elastic shell strain tensor_. O3: For each \(\overline{Q}_{e,h}^{\natural}:\Omega_{1}\to\mathrm{SO}(3)\) we determine the skew-symmetric matrix \(A^{*}\in\mathfrak{so}(3)\), i.e. its axial vector \(\mathrm{axl}\,A^{*}\in\mathbb{R}^{3}\), through \[\widetilde{W}_{\mathrm{curv}}^{\mathrm{hom},\natural}(\mathcal{K}_{\overline{Q}_{e,h}^{\natural}}): =\widetilde{W}_{\mathrm{curv}}\Big(\big(\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{1}}\overline{Q}_{e,h}^{\natural})\,|\,\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{2}}\overline{Q}_{e,h}^{\natural})\,|\,\mathrm{axl}\,(A^{*})\,\big)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\Big) \tag{4.10}\] \[=\inf_{A\in\mathfrak{so}(3)}\widetilde{W}_{\mathrm{curv}}\Big(\big(\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{1}}\overline{Q}_{e,h}^{\natural})\,|\,\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{2}}\overline{Q}_{e,h}^{\natural})\,|\,\mathrm{axl}\,(A)\,\big)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\Big),\] where \(\mathcal{K}_{\overline{Q}_{e,h}^{\natural}}:=\,\Big(\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{1}}\overline{Q}_{e,h}^{\natural})\,|\,\mathrm{axl}(\overline{Q}_{e,h}^{\natural,T}\partial_{\eta_{2}}\overline{Q}_{e,h}^{\natural})\,|0\Big)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\) represents a not fully reduced elastic shell bending-curvature tensor, in the sense that it still depends on \(\eta_{3}\) and \(h\), since \(\overline{Q}_{e,h}^{\natural}=\overline{Q}_{e,h}^{\natural}(\eta_{1},\eta_{2},\eta_{3})\). Therefore, \(\widetilde{W}_{\mathrm{curv}}^{\mathrm{hom},\natural}(\mathcal{K}_{\overline{Q}_{e,h}^{\natural}})\) given by the above definition still depends on \(\eta_{3}\) and \(h\). O4: For each \(\overline{Q}_{e,0}:\omega\to\mathrm{SO}(3)\) we determine the skew-symmetric matrix \(A^{*}\in\mathfrak{so}(3)\), i.e.
its axial vector \(\mathrm{axl}\,A^{*}\in\mathbb{R}^{3}\), through \[\widetilde{W}_{\mathrm{curv}}^{\mathrm{hom}}(\mathcal{K}_{\overline{Q}_{e,0}}): =\widetilde{W}_{\mathrm{curv}}^{*}\Big(\big(\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{\eta_{1}}\overline{Q}_{e,0})\,|\,\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{\eta_{2}}\overline{Q}_{e,0})\,|\,\,\mathrm{axl}\,(A^{*})\,\big)[(\mathrm{D}_{x}\Theta)^{\natural}(0)]^{-1}\Big) \tag{4.11}\] \[=\inf_{A\in\mathfrak{so}(3)}\widetilde{W}_{\mathrm{curv}}\Big(\big(\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{\eta_{1}}\overline{Q}_{e,0})\,|\,\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{\eta_{2}}\overline{Q}_{e,0})\,|\,\mathrm{axl}\,(A)\,\big)[(\mathrm{D}_{x}\Theta)^{\natural}(0)]^{-1}\Big)\,,\] where \(\mathcal{K}_{\overline{Q}_{e,0}}:=\,\Big(\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{x_{1}}\overline{Q}_{e,0})\,|\,\mathrm{axl}(\overline{Q}_{e,0}^{T}\,\partial_{x_{2}}\overline{Q}_{e,0})\,|0\Big)[\mathrm{D}_{x}\Theta(0)\,]^{-1}\not\in\mathrm{Sym}(3)\) represents _the elastic shell bending-curvature tensor_.
Let us remark that, having the solutions of the optimization problems O1 and O3, the solutions of the optimization problems O2 and O4, respectively, follow immediately. However, we cannot skip solving O1 and O3 and use only the solutions of O2 and O4, since the knowledge of \(W_{\mathrm{mp}}^{\mathrm{hom}}\) and \(W_{\mathrm{curv}}^{\mathrm{hom}}\) is important in the proof of the \(\Gamma\)-convergence result. This is the first major difference between \(\Gamma\)-convergence for curved initial configurations and for flat initial configurations.
The solutions of the first two optimization problems and the complete calculations were given in [41], while the analytical calculations for the last two optimization problems were left open until now.
For the completeness of the exposition we recall the following result
**Theorem 4.1**.: _[_41_]_ _The solution of the optimization problem O2 is_
\[\vec{d}^{*}=\Big{(}1-\frac{\lambda}{2\,\mu+\lambda}\langle\mathcal{E}_{m, \overline{Q}_{e,0}},\mathbbm{1}_{3}\rangle\Big{)}\overline{Q}_{e,0}n_{0}+ \frac{\mu_{c}-\mu}{\mu_{c}+\mu}\ \overline{Q}_{e,0}\mathcal{E}_{m,\overline{Q}_{e,0}}^{T}n_{0}\,, \tag{4.12}\]
_and_
\[\begin{split} W_{\mathrm{mp}}^{\mathrm{hom}}(\mathcal{E}_{m, \overline{Q}_{e,0}})&=\,\mu\,\|\mathrm{sym}\ \,\mathcal{E}_{m,\overline{Q}_{e,0}}^{\parallel}\|^{2}+\mu_{c}\,\|\mathrm{skew}\ \mathcal{E}_{m,\overline{Q}_{e,0}}^{\parallel}\|^{2}+\,\frac{\lambda\,\mu}{ \lambda+2\,\mu}\,\big{[}\mathrm{tr}(\mathcal{E}_{m,\overline{Q}_{e,0}}^{ \parallel})\big{]}^{2}+\frac{2\,\mu\ \mu_{c}}{\mu_{c}+\mu}\|\mathcal{E}_{m,\overline{Q}_{e,0}}^{T}n_{0}\|^{2}\\ &=W_{\mathrm{shell}}\big{(}\mathcal{E}_{m,\overline{Q}_{e,0}}^{ \parallel}\big{)}+\frac{2\,\mu\ \mu_{c}}{\mu_{c}+\mu}\|\mathcal{E}_{m,\overline{Q}_{e,0}}^{\perp}\|^{2}, \end{split} \tag{4.13}\]
_where \(W_{\rm shell}\big{(}\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\big{)}=\ \mu\,\|{\rm sym}\ \mathcal{E}^{ \parallel}_{m,\overline{Q}_{e,0}}\|^{2}+\mu_{c}\,\|{\rm skew}\ \mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\|^{2}+\ \frac{\lambda\,\mu}{ \lambda+2\,\mu}\,\big{[}{\rm tr}(\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0} })\big{]}^{2}\) with the orthogonal decomposition in the tangential plane and in the normal direction_
\[\mathcal{E}_{m,\overline{Q}_{e,0}}=\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}+\mathcal{E}^{\perp}_{m,\overline{Q}_{e,0}},\ \ \ \ \ \ \ \ \ \ \mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}:={\rm A}_{y_{0}}\,\mathcal{E} _{m,\overline{Q}_{e,0}},\ \ \ \ \ \ \ \ \ \ \mathcal{E}^{\perp}_{m,\overline{Q}_{e,0}}\coloneqq(\mathbb{1}_{3}-{\rm A}_{ y_{0}})\,\mathcal{E}_{m,\overline{Q}_{e,0}}, \tag{4.14}\]
_and \(\mathrm{A}_{y_{0}}:=(\mathrm{D}y_{0}|0)\,[\mathrm{D}_{x}\Theta(0)\,]^{-1}\in\mathbb{R}^{3\times 3}\)._
In the remainder of this section we provide the explicit solutions of the optimization problems O3 and O4. We remark that, while in the case of a flat initial configuration the solution of O4 is very easy to find, in the case of a curved initial configuration the calculations are more difficult. Besides this, for curved initial configurations one also needs to solve the optimization problem O3. Notice that for a flat initial configuration the optimization problems O3 and O4 coincide.
### 4.1 The calculation of the homogenized curvature energy
We have the following isotropic curvature energy formula for a curved configuration
\[\widetilde{W}_{\rm curv}(\Gamma^{\natural}_{e,h})=\mu L_{c}^{2}\Big{(}b_{1}\|{ \rm sym}\,\Gamma^{\natural}_{e,h}\|^{2}+b_{2}\,\|\,{\rm skew}\,\Gamma^{\natural }_{e,h}\|^{2}+b_{3}{\rm tr}(\Gamma^{\natural}_{e,h})^{2}\Big{)}\,. \tag{4.15}\]
**Theorem 4.2**.: _The solution of the optimization problem O3 given by (4.10) is_
\[c^{*}=\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}-\frac{b_{3}}{b_{1}+b_{3}}\,\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0} \tag{4.16}\]
_and the corresponding homogenized curvature energy is_
\[W^{\mathrm{hom}}_{\mathrm{curv}}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})=\mu L_{c}^{2}\Big(b_{1}\|\mathrm{sym}\,\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+b_{2}\|\,\mathrm{skew}\,\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}\,\mathrm{tr}(\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}})^{2}+\frac{2b_{1}b_{2}}{b_{1}+b_{2}}\,\|\mathcal{K}^{\perp}_{\overline{Q}^{\natural}_{e,h}}\|^{2}\Big)\,,\]
_where \(\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\) and \(\mathcal{K}^{\perp}_{\overline{Q}^{\natural}_{e,h}}\) represent the orthogonal decomposition of the not fully reduced elastic shell bending-curvature tensor \(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\) in the tangential plane and in the normal direction, respectively._
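Before turning to the proof, here is a numeric spot-check of this formula (our addition; the matrix \(M\), which plays the role of \(\mathrm{D}_{x}\Theta\), and all data are arbitrary rational choices satisfying the structural assumptions: \(M e_{3}=n_{0}\) with \(n_{0}\) unit and the first two columns of \(M\) tangential):

```python
import sympy as sp

b1, b2, b3 = sp.Rational(2), sp.Rational(3), sp.Rational(5)
n0 = sp.Matrix([1, 2, 2]) / 3                       # unit normal
t1 = sp.Matrix([2, -2, 1]) / 3                      # tangential: <t1, n0> = 0
t2 = sp.Matrix([2, 1, -2]) / 3                      # tangential: <t2, n0> = 0
M  = sp.Matrix.hstack(t1 + t2, 2*t2, n0)            # plays D_x Theta: M e3 = n0

g1 = sp.Matrix([1, -2, 4]); g2 = sp.Matrix([0, 3, -1])
K = sp.Matrix.hstack(g1, g2, sp.zeros(3, 1)) * M.inv()   # K n0 = 0, K^T n0 != 0

sym  = lambda A: (A + A.T) / 2
skew = lambda A: (A - A.T) / 2
fro2 = lambda A: sum(x**2 for x in A)

c = sp.Matrix(sp.symbols('c1 c2 c3'))
Kc = K + c * n0.T                                   # K + (0|0|c) M^{-1} = K + c (x) n0
W = b1*fro2(sym(Kc)) + b2*fro2(skew(Kc)) + b3*Kc.trace()**2
sol = sp.solve([W.diff(ci) for ci in c], list(c), dict=True)[0]

P = sp.eye(3) - n0*n0.T
Kpar, Kperp = P*K, (n0*n0.T)*K                      # tangential / normal parts
target = (b1*fro2(sym(Kpar)) + b2*fro2(skew(Kpar))
          + b1*b3/(b1 + b3)*Kpar.trace()**2 + 2*b1*b2/(b1 + b2)*fro2(Kperp))
print(sp.simplify(W.subs(sol) - target))            # -> 0
```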
Proof.: We need to find
\[\widetilde{W}^{\mathrm{hom},\natural}_{\mathrm{curv}}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})=\widetilde{W}_{\mathrm{curv}}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}+(0|0|c^{*})[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})=\inf_{c\in\mathbb{R}^{3}}\widetilde{W}_{\mathrm{curv}}(\underbrace{\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}+(0|0|c)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}}_{=:\mathcal{K}^{c}_{\overline{Q}^{\natural}_{e,h}}})\,. \tag{4.17}\]
The Euler-Lagrange equations appear from variations with respect to arbitrary increments \(\delta c\in\mathbb{R}^{3}\).
\[\langle D\widetilde{W}_{\mathrm{curv}}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}),(0|0|\delta c)[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\rangle=0 \Leftrightarrow \langle[D\widetilde{W}_{\mathrm{curv}}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})]\,[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T},\delta c\otimes e_{3}\rangle=0 \tag{4.18}\] \[\Leftrightarrow \langle[D\widetilde{W}_{\mathrm{curv}}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})]\,[(\mathrm{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}\,e_{3},\delta c\rangle=0\] \[\Leftrightarrow \langle[D\widetilde{W}_{\mathrm{curv}}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})]\,n_{0},\delta c\rangle=0\quad\forall\,\delta c\in\mathbb{R}^{3}.\]
Therefore, we deduce that if \(c^{*}\) is a minimum then
\[[D\widetilde{W}_{\rm curv}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})]\,n_{0}=0\quad\Leftrightarrow\quad\Big{(}2b_{1}{\rm sym}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})+2\,b_{2}\,{\rm skew}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})+2b_{3}{\rm tr}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})\,\mathbb{1}_{3}\Big{)}n_{0}=0\,. \tag{4.19}\]
Since \(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}=\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}+(0|0|c^{*})[({\rm D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\,,\) we have
\[\begin{split}2{\rm sym}\big{(}\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}\big{)}n_{0}&=2\Big{(}{\rm sym}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})+{\rm sym}((0|0|c^{*})[({\rm D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})\Big{)}n_{0}\\&=\Big{(}{\rm axl}(\overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{1}}\overline{Q}^{\natural}_{e,h})\,|\,{\rm axl}(\overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{2}}\overline{Q}^{\natural}_{e,h})\,|0\Big{)}\underbrace{[({\rm D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\,n_{0}}_{=e_{3}}+\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\\&\qquad+(0|0|c^{*})[({\rm D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}n_{0}+((0|0|c^{*})[({\rm D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})^{T}n_{0}\\&=\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}+c^{*}+((0|0|c^{*})[({\rm D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})^{T}n_{0}\,.\end{split} \tag{4.20}\]
Similar calculations show that
\[2\,\text{skew}\,\big{(}\mathcal{K}_{\overline{Q}_{e,h}^{\natural}}^{c^ {*}}\big{)}n_{0} =2\Big{(}\,\text{skew}(\mathcal{K}_{\overline{Q}_{e,h}^{\natural}})+ \text{skew}((0|0|c^{*})[(\text{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})\Big{)}n _{0}\] \[=-\mathcal{K}_{\overline{Q}_{e,h}^{\natural}}^{T}n_{0}+c^{*}-((0| 0|c^{*})[(\text{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})^{T}n_{0}\,, \tag{4.21}\]
while the trace term is calculated to be
\[\begin{split}2\,\text{tr}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})n_{0}&=2\Big{(}\text{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})+\text{tr}((0|0|c^{*})[(\text{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})\Big{)}n_{0}\\&=2\,\text{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})n_{0}+2((0|0|c^{*})[(\text{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1},\mathbb{1}\,_{3})_{\mathbb{R}^{3\times 3}}\,n_{0}\\&=2\,\text{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})n_{0}+2(c^{*},\underbrace{[(\text{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}e_{3}}_{=n_{0}})\,n_{0}=2\,\text{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})n_{0}+2\,c^{*}\,n_{0}\otimes n_{0}\,.\end{split} \tag{4.22}\]
By using (4.19), we obtain
\[b_{1}\mathcal{K}_{\overline{Q}_{e,h}^{\natural}}^{T}n_{0}+b_{1} c^{*}+b_{1}((0|0|c^{*})[(\text{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})^{T}n_{0}-b_{2} \mathcal{K}_{\overline{Q}_{e,h}^{\natural}}^{T}n_{0}+b_{2}c^{*}\] \[\quad-b_{2}((0|0|c^{*})[(\text{D}_{x}\Theta)^{\natural}(\eta_{3} )]^{-1})^{T}n_{0}+2b_{3}\text{tr}(\mathcal{K}_{\overline{Q}_{e,h}^{\natural}} )n_{0}+2b_{3}\,c^{*}\,n_{0}\otimes n_{0}=0\,. \tag{4.23}\]
Gathering similar terms gives us
\[(b_{1}-b_{2})\mathcal{K}_{\overline{Q}_{e,h}^{\natural}}^{T}n_{0 }+(b_{1}+b_{2})c^{*}+(b_{1}-b_{2})((0|0|c^{*})[(\text{D}_{x}\Theta)^{\natural }(\eta_{3})]^{-1})^{T}n_{0}\] \[\quad+2b_{3}\text{tr}(\mathcal{K}_{\overline{Q}_{e,h}^{\natural} })n_{0}+2b_{3}\,c^{*}\,n_{0}\otimes n_{0}=0\,. \tag{4.24}\]
We have
\[((0|0|c^{*})[(\text{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})^{T}n_{0}=[(\text{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}(0|0|c^{*})^{T}n_{0}=\underbrace{[(\text{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}e_{3}}_{=n_{0}}\,\langle c^{*},n_{0}\rangle=n_{0}\langle n_{0},c^{*}\rangle=n_{0}\otimes n_{0}\,c^{*}=c^{*}\,n_{0}\otimes n_{0}\,, \tag{4.25}\]
and by using the decomposition [5, 6, 7]\(\mathbb{1}\,_{3}\,c^{*}=A_{y_{0}}\,c^{*}+n_{0}\otimes n_{0}\,c^{*}\,,\) we obtain
\[(b_{1}-b_{2})\mathcal{K}_{\overline{Q}_{e,h}^{\natural}}^{T}n_{0 }+(b_{1}+b_{2})(A_{y_{0}}\,c^{*}+n_{0}\otimes n_{0}\,c^{*})+(b_{1}-b_{2})n_{0} \otimes n_{0}\,c^{*}\] \[\quad+2b_{3}\text{tr}(\mathcal{K}_{\overline{Q}_{e,h}^{\natural} })n_{0}+2b_{3}\,n_{0}\otimes n_{0}\,c^{*}=0\,, \tag{4.26}\]
and
\[[(b_{1}+b_{2})A_{y_{0}}+2(b_{1}+b_{3})n_{0}\otimes n_{0}]\,c^{*}=-(b_{1}-b_{2} )\mathcal{K}_{\overline{Q}_{e,h}^{\natural}}^{T}n_{0}-2b_{3}\text{tr}( \mathcal{K}_{\overline{Q}_{e,h}^{\natural}})\,n_{0}\,. \tag{4.27}\]
Since \(A_{y_{0}}\) is orthogonal to \(n_{0}\otimes n_{0}\) and \(A_{y_{0}}^{2}=A_{y_{0}}\),
\[\bigg{[}\frac{1}{b_{1}+b_{2}}A_{y_{0}}+\frac{1}{2(b_{1}+b_{3})}n_{0}\otimes n _{0}\bigg{]}\,[(b_{1}+b_{2})A_{y_{0}}+2(b_{1}+b_{3})n_{0}\otimes n_{0}]= \mathbb{1}_{3} \tag{4.28}\]
(see [6]), we have
\[[(b_{1}+b_{2})A_{y_{0}}+2(b_{1}+b_{3})n_{0}\otimes n_{0}]^{-1}= \frac{1}{b_{1}+b_{2}}A_{y_{0}}+\frac{1}{2(b_{1}+b_{3})}n_{0}\otimes n_{0} \tag{4.29}\]
and we find
\[c^{*}=(b_{2}-b_{1})\Big{[}\frac{1}{b_{1}+b_{2}}A_{y_{0}}+\frac{1}{2(b_{1}+b_{3}) }n_{0}\otimes n_{0}\Big{]}\mathcal{K}_{\overline{Q}_{e,h}^{\natural}}^{T}n_{0} -2b_{3}\text{tr}(\mathcal{K}_{\overline{Q}_{e,h}^{\natural}})\Big{[}\frac{1}{b_{1 }+b_{2}}A_{y_{0}}+\frac{1}{2(b_{1}+b_{3})}n_{0}\otimes n_{0}\Big{]}n_{0}\,.\]
Since
\[\begin{split} A_{y_{0}}\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h} }&=\mathbb{1}_{3}\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}-n_ {0}\otimes n_{0}\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\\ &=\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}-(0|0|n_{0})(0| 0|n_{0})^{T}[(\mathsf{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}\Big{(}\text{ axl}(\overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{1}}\overline{Q}^{\natural}_{e,h}) \,|\,\text{axl}(\overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{2}}\overline{ Q}^{\natural}_{e,h})\,|0\Big{)}^{T}n_{0}\\ &=\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}-(0|0|n_{0}) \big{(}\underbrace{[(\mathsf{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}(0|0|n_{ 0})}_{(0|0|e_{3})}\big{)}^{T}\Big{(}\text{axl}(\overline{Q}^{\natural,T}_{e,h }\,\partial_{\eta_{1}}\overline{Q}^{\natural}_{e,h})\,|\,\text{axl}( \overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{2}}\overline{Q}^{\natural}_{ e,h})\,|0\Big{)}^{T}n_{0}\\ &=\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}-(0|0|n_{0}) \begin{pmatrix}0&0&0\\ 0&0&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}\ast&\ast&\ast\\ \ast&\ast&\ast\\ 0&0&0\end{pmatrix}n_{0}=\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,, \end{split} \tag{4.30}\]
we obtain the unique minimizer
\[c^{*}=\frac{(b_{2}-b_{1})}{b_{1}+b_{2}}\mathcal{K}^{T}_{\overline{Q}^{\natural }_{e,h}}n_{0}-\frac{2b_{3}}{2(b_{1}+b_{3})}\text{tr}(\mathcal{K}_{\overline{Q }^{\natural}_{e,h}})\,n_{0}\,. \tag{4.31}\]
Next, we insert the minimizer \(c^{*}\) given by (4.31) into the curvature energy. We have
\[\begin{split}\|\text{sym}\,\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}\|^{2}&=\|\text{sym}\big{(}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\big{)}\|^{2}+\|\text{sym}\big{(}(0|0|c^{*})[(\mathsf{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big{)}\|^{2}+2\,\Big{\langle}\text{sym}\big{(}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\big{)},\text{sym}\big{(}(0|0|c^{*})[(\mathsf{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big{)}\Big{\rangle}\\&=\|\text{sym}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\|^{2}+\Big{\|}\text{sym}\Big{(}\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\otimes n_{0}-\frac{b_{3}}{(b_{1}+b_{3})}\text{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}\otimes n_{0}\Big{)}\Big{\|}^{2}\\&\qquad+2\,\Big{\langle}\text{sym}\big{(}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\big{)},\text{sym}\Big{(}\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\otimes n_{0}-\frac{b_{3}}{(b_{1}+b_{3})}\text{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\,n_{0}\otimes n_{0}\Big{)}\Big{\rangle}\,,\end{split} \tag{4.32}\]
and
\[\begin{split}\|\text{sym}&\Big{(}\frac{b_{2}-b_{1}}{b_{1}+b_{ 2}}\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}(0|0|n_{0})[(\mathsf{D}_{x} \Theta)^{\natural}(\eta_{3})]^{-1}-\frac{b_{3}}{(b_{1}+b_{3})}\text{tr}( \mathcal{K}_{\overline{Q}^{\natural}_{e,h}})(0|0|n_{0})[(\mathsf{D}_{x}\Theta) ^{\natural}(\eta_{3})]^{-1}\Big{)}\|^{2}\\ &=\frac{(b_{2}-b_{1})^{2}}{(b_{1}+b_{2})^{2}}\|\text{sym}( \mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\otimes n_{0})\|^{2}+ \frac{b_{3}^{2}}{(b_{1}+b_{3})^{2}}\text{tr}(\mathcal{K}_{\overline{Q}^{\natural }_{e,h}})^{2}\|n_{0}\otimes n_{0}\|^{2}\\ &\qquad-2\,\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\frac{b_{3}}{(b_{1}+ b_{3})}\text{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})(\text{sym}( \mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\otimes n_{0}),n_{0} \otimes n_{0})\\ &=\frac{(b_{2}-b_{1})^{2}}{(b_{1}+b_{2})^{2}}\,\bigg{\langle} \text{sym}(\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\otimes n_{0}), \text{sym}(\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\otimes n_{0}) \bigg{\rangle}+\frac{b_{3}^{2}}{(b_{1}+b_{3})^{2}}\text{tr}(\mathcal{K}_{ \overline{Q}^{\natural}_{e,h}})^{2}\\ &\quad-\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\frac{b_{3}}{(b_{1}+b_{3} )}\text{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\langle\text{$\mathcal{K} ^{T}_{\overline{Q}^{\natural}_{e,h}}$}n_{0}\otimes n_{0},n_{0}\otimes n_{0} \rangle\\ &\quad-\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\,\frac{b_{3}}{(b_{1}+b_{3} )}\text{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\langle n_{0}\otimes n_{0} \,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}},n_{0}\otimes n_{0}\rangle\\ &=\frac{(b_{2}-b_{1})^{2}}{4(b_{1}+b_{2})^{2}}\langle\mathcal{K}^ {T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\otimes n_{0},\mathcal{K}^{T}_{ \overline{Q}^{\natural}_{e,h}}n_{0}\otimes n_{0}\rangle+\frac{(b_{2}-b_{1})^{2}} {4(b_{1}+b_{2})^{2}}\langle\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0} \otimes n_{0},n_{0}\otimes n_{0}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\rangle \\ &\quad+\frac{(b_{2}-b_{1})^{2}}{4(b_{1}+b_{2})^{2}} \langle n_{0}\otimes n_{0}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}, \mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\otimes n_{0}\rangle+ \frac{(b_{2}-b_{1})^{2}}{4(b_{1}+b_{2})^{2}}\langle n_{0}\otimes n_{0}\, \mathcal{K}_{\overline{Q}^{\natural}_{e,h}},n_{0}\otimes n_{0}\,\mathcal{K}_{ \overline{Q}^{\natural}_{e,h}}\rangle\\ &\quad+\frac{b_{3}^{2}}{(b_{1}+b_{3})^{2}}\text{tr}(\mathcal{K}_{ \overline{Q}^{\natural}_{e,h}})^{2}=\frac{(b_{2}-b_{1})^{2}}{2(b_{1}+b_{2})^{2}}\| \mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2}+\frac{b_{3}^{2}}{(b_{1}+ b_{3})^{2}}\text{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}.\end{split} \tag{4.33}\]
Note that
\[\begin{split}\langle\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}},n_{0}\otimes n_{0}\rangle&=\langle\mathcal{K}_{\overline{Q}^{\natural}_{e,h}},n_{0}\otimes n_{0}\rangle\\&=\Big{\langle}\Big{(}\operatorname{axl}(\overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{1}}\overline{Q}^{\natural}_{e,h})\,|\operatorname{axl}(\overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{2}}\overline{Q}^{\natural}_{e,h})\,|0\Big{)}[(\operatorname{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1},(0|0|n_{0})[(\operatorname{D}_{x}\Theta)^{\natural}(0)]^{-1}\Big{\rangle}\\&=\Big{\langle}(0|0|n_{0})^{T}\Big{(}\operatorname{axl}(\overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{1}}\overline{Q}^{\natural}_{e,h})\,|\operatorname{axl}(\overline{Q}^{\natural,T}_{e,h}\,\partial_{\eta_{2}}\overline{Q}^{\natural}_{e,h})\,|0\Big{)},[(\operatorname{D}_{x}\Theta)^{\natural}(0)]^{-1}[(\operatorname{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}\Big{\rangle}\\&=\left\langle\begin{pmatrix}0&0&0\\ 0&0&0\\ \ast&\ast&0\end{pmatrix},\begin{pmatrix}\ast&\ast&0\\ \ast&\ast&0\\ 0&0&1\end{pmatrix}\right\rangle=0\,,\end{split} \tag{4.34}\]
where the _Weingarten map (or shape operator)_ is defined by \(\operatorname{L}_{y_{0}}=\operatorname{I}_{y_{0}}^{-1}\Pi_{y_{0}}\in\mathbb{ R}^{2\times 2}\), where \(\operatorname{I}_{y_{0}}:=[\operatorname{Dy}_{0}]^{T}\operatorname{Dy}_{0}\in \mathbb{R}^{2\times 2}\) and \(\Pi_{y_{0}}:=\,-[\operatorname{Dy}_{0}]^{T}\operatorname{D}n_{0}\in\mathbb{R} ^{2\times 2}\) are the matrix representations of the _first fundamental form (metric)_ and the _second fundamental form_ of the surface, respectively. We also observe that
\[\begin{split}n_{0}\otimes n_{0}\,[(\operatorname{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}&=(0|0|n_{0})[(\operatorname{D}_{x}\Theta)^{\natural}(0)]^{-1}[(\operatorname{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-T}\\&=(0|0|n_{0})[(\operatorname{D}_{x}\Theta)^{\natural}(0)]^{-1}[(\operatorname{D}_{x}\Theta)^{\natural}(0)]^{-T}\begin{pmatrix}\mathbb{1}_{2}-x_{3}\mathrm{L}_{y_{0}}&0\\ 0&1\end{pmatrix}^{-T}\\&=(0|0|n_{0})\begin{pmatrix}\mathrm{I}_{y_{0}}^{-1}&0\\ 0&1\end{pmatrix}\begin{pmatrix}\mathbb{1}_{2}-x_{3}\mathrm{L}_{y_{0}}&0\\ 0&1\end{pmatrix}^{-T}=(0|0|n_{0})\begin{pmatrix}\ast&\ast&0\\ \ast&\ast&0\\ 0&0&1\end{pmatrix}=(0|0|n_{0})\,.\end{split} \tag{4.35}\]
Since for all vectors \(\widehat{u},v\in\mathbb{R}^{3}\) we have
\[\langle\widehat{u}\otimes n_{0},v\otimes n_{0}\rangle=\langle(v\otimes n_{0})^{T}\,\widehat{u}\otimes n_{0},\mathbb{1}_{3}\rangle=\langle(n_{0}\otimes v)\,\widehat{u}\otimes n_{0},\mathbb{1}_{3}\rangle=\langle n_{0}\otimes n_{0}\,\langle v,\widehat{u}\rangle,\mathbb{1}_{3}\rangle=\langle v,\widehat{u}\rangle\cdot\underbrace{\langle n_{0},n_{0}\rangle}_{=1}=\langle v,\widehat{u}\rangle\,,\]
and \(n_{0}\otimes n_{0}=(0|0|n_{0})[(\operatorname{D}_{x}\Theta)^{\natural}(0)]^{-1}\), we deduce
\[\langle\mathcal{K}^{T}_{\overline{Q}_{e,h}^{*}}n_{0}\otimes n_{0},\mathcal{K} ^{T}_{\overline{Q}_{e,h}^{*}}n_{0}\otimes n_{0}\rangle=\langle\mathcal{K}^{T} _{\overline{Q}_{e,h}^{*}}n_{0},\mathcal{K}^{T}_{\overline{Q}_{e,h}^{*}}n_{0} \rangle=\|\mathcal{K}^{T}_{\overline{Q}_{e,h}^{*}}n_{0}\|^{2}\,. \tag{4.36}\]
On the other hand,
\[2\left\langle\operatorname{sym}\!\mathcal{K}_{\overline{Q}_{e,h}^{*}}, \operatorname{sym}\!\left(\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\mathcal{K}^{T}_{ \overline{Q}_{e,h}^{*}}n_{0}\otimes n_{0}-\frac{b_{3}}{(b_{1}+b_{3})} \mathrm{tr}(\mathcal{K}_{\overline{Q}_{e,h}^{*}})n_{0}\otimes n_{0})\right\rangle =\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\|\mathcal{K}^{T}_{\overline{Q}_{e,h}^{*}}n_{0} \|^{2}\,. \tag{4.37}\]
Therefore
\[\|\operatorname{sym}\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}\|^{2}=\|\operatorname{sym}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{(b_{1}-b_{2})^{2}}{2(b_{1}+b_{2})^{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2}+\frac{b_{3}^{2}}{(b_{1}+b_{3})^{2}}\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}+\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2}\,. \tag{4.38}\]
Now we continue the calculations for the skew-symmetric part,
\[\|\operatorname{skew}\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}\|^{2}=\|\operatorname{skew}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\|\operatorname{skew}((0|0|c^{*})[(\operatorname{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})\|^{2}+2\langle\operatorname{skew}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}},\operatorname{skew}((0|0|c^{*})[(\operatorname{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})\rangle\,. \tag{4.39}\]
In a similar manner, we calculate the terms separately. Since \(n_{0}\otimes n_{0}\) is symmetric, we obtain
\[\|\operatorname{skew}((0|0|c)[(\operatorname{D}_{x}\Theta)^{ \natural}\!(\eta_{3})]^{-1})\|^{2} =\|\operatorname{skew}(\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\mathcal{K}^{T}_{ \overline{Q}_{e,h}^{*}}n_{0}\otimes n_{0}-\frac{b_{3}}{(b_{1}+b_{3})}\mathrm{ tr}(\mathcal{K}_{\overline{Q}_{e,h}^{*}})\,n_{0}\otimes n_{0})\|^{2} \tag{4.40}\] \[=\frac{(b_{1}-b_{2})^{2}}{(b_{1}+b_{2})^{2}}\|\operatorname{ skew}(\mathcal{K}^{T}_{\overline{Q}_{e,h}^{*}}n_{0}\otimes n_{0})\|^{2}.\]
Using that \((n_{0}\otimes n_{0})^{2}=(n_{0}\otimes n_{0})\) we deduce
\[\|\operatorname{skew}(\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h} }\,n_{0}\otimes n_{0})\|^{2} =\frac{1}{4}\left\langle\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0},\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0 }\otimes n_{0}\right\rangle-\frac{1}{4}\left\langle\mathcal{K}^{T}_{ \overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0},n_{0}\otimes n_{0}\, \mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\right\rangle\] \[\quad-\frac{1}{4}\left\langle n_{0}\otimes n_{0}\,\mathcal{K}_{ \overline{Q}^{\natural}_{e,h}},\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h} }\,n_{0}\otimes n_{0}\right\rangle+\frac{1}{4}\left\langle n_{0}\otimes n_{0 }\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}},n_{0}\otimes n_{0}\,\mathcal{ K}_{\overline{Q}^{\natural}_{e,h}}\right\rangle \tag{4.41}\] \[=\frac{1}{2}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_ {0}\|^{2}\,.\]
We have as well
\[2(\operatorname{skew}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}, \operatorname{skew}((0|0|c^{*})[(\operatorname{D}_{x}\Theta)^{\natural}(\eta_ {3})]^{-1}))=2\,\frac{(b_{2}-b_{1})}{(b_{1}+b_{2})}\left\langle\operatorname{ skew}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}},\operatorname{skew}(\mathcal{K}^{T}_{ \overline{Q}^{\natural}_{e,h}}\,n_{0}\otimes n_{0})\right\rangle \tag{4.42}\] \[\quad-\frac{(b_{2}-b_{1})}{2(b_{1}+b_{2})}\langle\mathcal{K}^{T}_ {\overline{Q}^{\natural}_{e,h}},\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h }}\,n_{0}\otimes n_{0}\rangle+\frac{(b_{2}-b_{1})}{2(b_{1}+b_{2})}\langle \mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}},n_{0}\otimes n_{0}\,\mathcal{ K}_{\overline{Q}^{\natural}_{e,h}}\rangle=-\frac{(b_{2}-b_{1})}{(b_{1}+b_{2})}\| \mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2}\,,\]
and we obtain
\[\|\operatorname{skew}\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}}\|^{2}=\|\operatorname{skew}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{(b_{2}-b_{1})^{2}}{2(b_{1}+b_{2})^{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\|^{2}-\frac{(b_{2}-b_{1})}{(b_{1}+b_{2})}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2}\,. \tag{4.43}\]
Finally, we compute the trace term:
\[\begin{split}\left[\operatorname{tr}(\mathcal{K}^{c^{*}}_{\overline{Q}^{\natural}_{e,h}})\right]^{2}&=\left(\operatorname{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})+\operatorname{tr}\big{(}(0|0|c^{*})[(\operatorname{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1}\big{)}\right)^{2}\\&=\left(\operatorname{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})+\frac{(b_{2}-b_{1})}{(b_{1}+b_{2})}\underbrace{\langle\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0},n_{0}\rangle}_{=0}-\frac{b_{3}}{(b_{1}+b_{3})}\operatorname{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})\underbrace{\langle n_{0}\otimes n_{0},\mathbb{1}_{3}\rangle}_{\langle n_{0},n_{0}\rangle=1}\right)^{2}\\&=\frac{b_{1}^{2}}{(b_{1}+b_{3})^{2}}\operatorname{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}\,.\end{split} \tag{4.44}\]
Now we insert the above calculations in \(\widetilde{W}_{\operatorname{curv}}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h }}+(0|0|c^{*})[(\operatorname{D}_{x}\Theta)^{\natural}(\eta_{3})]^{-1})\), and obtain
\[W^{\hom}_{\operatorname{curv}}=\mu L^{2}_{c} \Big{(}b_{1}(\|\mathrm{sym}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{(b_{1}-b_{2})^{2}}{2(b_{1}+b_{2})^{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2}+\frac{b_{3}^{2}}{(b_{1}+b_{3})^{2}}\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}+\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2})\] \[\quad+b_{2}(\|\operatorname{skew}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{(b_{2}-b_{1})^{2}}{2(b_{1}+b_{2})^{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}\,n_{0}\|^{2}-\frac{b_{2}-b_{1}}{b_{1}+b_{2}}\|\mathcal{K}^{T}_{\overline{Q}^{\natural}_{e,h}}n_{0}\|^{2})\] \[\quad+b_{3}\frac{b_{1}^{2}}{(b_{1}+b_{3})^{2}}\mathrm{tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}\Big{)}\,, \tag{4.45}\]
which reduces to
\[W^{\hom}_{\operatorname{curv}}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})=\mu L ^{2}_{c}\Big{(}b_{1}\|\mathrm{sym}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2 }+b_{2}\|\operatorname{skew}\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}- \frac{(b_{1}-b_{2})^{2}}{2(b_{1}+b_{2})}\|\mathcal{K}^{T}_{\overline{Q}^{ \natural}_{e,h}}n_{0}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}\mathrm{tr}( \mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}\Big{)}\,. \tag{4.46}\]
One may apply the orthogonal decomposition of a matrix \(X\)
\[X=X^{\|}+X^{\perp},\qquad\qquad X^{\|}\coloneqq\mathrm{A}_{y_{0}}\,X,\qquad \qquad X^{\perp}\coloneqq(\mathbb{1}_{3}-\mathrm{A}_{y_{0}})\,X, \tag{4.47}\]
for the matrix \(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\), where \(A_{y_{0}}=(\operatorname{D}\!y_{0}|0)[\operatorname{D}_{x}\!\Theta(0)]^{-1}\). After inserting the decomposition in the homogenized
curvature energy, we get
\[\begin{split}W_{\rm curv}^{\rm hom}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})&=\mu L_{c}^{2}\Big{(}b_{1}\|{\rm sym}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+b_{2}\|\,{\rm skew}\,\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}\|^{2}-\frac{(b_{1}-b_{2})^{2}}{2(b_{1}+b_{2})}\|\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}^{T}n_{0}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}{\rm tr}(\mathcal{K}_{\overline{Q}^{\natural}_{e,h}})^{2}\Big{)}\\&=\mu L_{c}^{2}\Big{(}b_{1}\|{\rm sym}\,\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+b_{2}\|\,{\rm skew}\,\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{b_{1}+b_{2}}{2}\|\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}^{T}n_{0}\|^{2}-\frac{(b_{1}-b_{2})^{2}}{2(b_{1}+b_{2})}\|\mathcal{K}_{\overline{Q}^{\natural}_{e,h}}^{T}n_{0}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}{\rm tr}(\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}})^{2}\Big{)}\\&=\mu L_{c}^{2}\Big{(}b_{1}\|{\rm sym}\,\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+b_{2}\|\,{\rm skew}\,\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}{\rm tr}(\mathcal{K}^{\parallel}_{\overline{Q}^{\natural}_{e,h}})^{2}+\frac{2b_{1}b_{2}}{b_{1}+b_{2}}\|\mathcal{K}^{\perp}_{\overline{Q}^{\natural}_{e,h}}\|^{2}\Big{)}\,.\qquad\blacksquare\end{split} \tag{4.48}\]
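The key algebraic step of the above proof, the inverse formula (4.29), can also be checked numerically. The following is a small illustrative sketch of ours (not part of the original calculations), using the fact that \(A_{y_{0}}=\mathbb{1}_{3}-n_{0}\otimes n_{0}\) is the orthogonal projector onto the tangent plane.

```python
# Illustrative numerical check of the inverse formula (4.29); not part of
# the paper. A is the projector onto the tangent plane, N = n0 x n0 the
# complementary projector onto the normal direction.
import numpy as np

rng = np.random.default_rng(0)
b1, b2, b3 = rng.uniform(0.1, 2.0, size=3)   # arbitrary positive moduli

n0 = rng.standard_normal(3)
n0 /= np.linalg.norm(n0)                     # unit normal
N = np.outer(n0, n0)
A = np.eye(3) - N                            # A_{y0}: A @ A = A, A @ N = 0

M = (b1 + b2) * A + 2 * (b1 + b3) * N
M_inv = A / (b1 + b2) + N / (2 * (b1 + b3))
print(np.allclose(M @ M_inv, np.eye(3)))     # True, confirming (4.29)
```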
As regards the homogenized curvature energy given by the optimization problem O4, calculations similar to (but simpler than) those for the optimization problem O3 lead us to the following result.
**Theorem 4.3**.: _The solution of the optimization problem O4 is given by_
\[c^{*}=\frac{(b_{2}-b_{1})}{b_{1}+b_{2}}\mathcal{K}_{\overline{Q}_{e,0}}^{T}n_ {0}-\frac{2b_{3}}{2(b_{1}+b_{3})}{\rm tr}(\mathcal{K}_{\overline{Q}_{e,0}})n_ {0} \tag{4.49}\]
_and the corresponding homogenized curvature energy is_
\[W_{\rm curv}^{hom}(\mathcal{K}_{\overline{Q}_{e,0}})=\mu L_{c}^{2}\Big{(}b_{1}\|{\rm sym}\mathcal{K}_{\overline{Q}_{e,0}}^{\parallel}\|^{2}+b_{2}\|\,{\rm skew}\,\mathcal{K}_{\overline{Q}_{e,0}}^{\parallel}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}{\rm tr}(\mathcal{K}_{\overline{Q}_{e,0}}^{\parallel})^{2}+\frac{2b_{1}b_{2}}{b_{1}+b_{2}}\|\mathcal{K}_{\overline{Q}_{e,0}}^{\perp}\|^{2}\Big{)}\,, \tag{4.50}\]
_where \(\mathcal{K}_{\overline{Q}_{e,0}}^{\parallel}\) and \(\mathcal{K}_{\overline{Q}_{e,0}}^{\perp}\) represent the orthogonal decomposition of the fully reduced elastic shell bending-curvature tensor \(\mathcal{K}_{\overline{Q}_{e,0}}\) in the tangential plane and in the normal direction, respectively._
### \(\Gamma\)-convergence result for the curved shell model
Together with the calculations provided in [41], we obtain, for the first time in the literature, the explicit form of the Cosserat shell model derived via the \(\Gamma\)-convergence method, given in the following theorem.
**Theorem 4.4**.: _Assume that the initial configuration of the curved shell is defined by a continuous injective mapping \(\,y_{0}:\omega\subset\mathbb{R}^{2}\to\mathbb{R}^{3}\) which admits an extension to \(\overline{\omega}\) into \(C^{2}(\overline{\omega};\mathbb{R}^{3})\) such that for_
\[\Theta(x_{1},x_{2},x_{3})=y_{0}(x_{1},x_{2})+x_{3}\,n_{0}(x_{1},x_{2})\]
_we have \(\det[{\rm D}_{x}\Theta(0)]\geq\,a_{0}>0\) on \(\overline{\omega}\), where \(a_{0}\) is a constant, and assume that the boundary data satisfy the conditions_
\[\varphi_{d}^{\natural}=\varphi_{d}\big{|}_{\Gamma_{1}}\text{(in the sense of traces) for }\ \varphi_{d}\in\mathrm{H}^{1}(\Omega_{1};\mathbb{R}^{3}). \tag{4.51}\]
_Let the constitutive parameters satisfy_
\[\mu\,>0,\qquad\quad\kappa>0,\qquad\quad\mu_{\rm c}>0,\qquad\quad a_{1}>0, \qquad\quad a_{2}>0,\qquad\quad a_{3}>0\,. \tag{4.52}\]
_Then, for any sequence \((\varphi_{h_{j}}^{\natural},\overline{Q}_{e,h_{j}}^{\natural})\in X\) such that \((\varphi_{h_{j}}^{\natural},\overline{Q}_{e,h_{j}}^{\natural})\to(\varphi_{0}, \overline{Q}_{e,0})\) as \(h_{j}\to 0\), the sequence of functionals \(\mathcal{J}_{h_{j}}\colon X\to\overline{\mathbb{R}}\) from (4.7) \(\,\,\Gamma\)-converges to the limit energy functional \(\mathcal{J}_{0}\colon X\to\overline{\mathbb{R}}\) defined by_
\[\mathcal{J}_{0}(m,\overline{Q}_{e,0})=\begin{cases}\int_{\omega}[W_{\rm mp}^{ \rm hom}(\mathcal{E}_{m,\overline{Q}_{e,0}})+\widetilde{W}_{\rm curv}^{\rm hom }(\mathcal{K}_{\overline{Q}_{e,0}})]\det({\rm D}y_{0}|n_{0})\ d\omega&\text{if} \quad(m,\overline{Q}_{e,0})\in\mathcal{S}_{\omega}^{\prime}\,,\\ +\infty&\text{else in}\,X,\end{cases} \tag{4.53}\]
_where_
\[m(x_{1},x_{2}) :=\varphi_{0}(x_{1},x_{2})=\lim_{h_{j}\to 0}\varphi_{h_{j}}^{ \natural}(x_{1},x_{2},\frac{1}{h_{j}}x_{3}),\qquad\overline{Q}_{e,0}(x_{1},x_{ 2})=\lim_{h_{j}\to 0}\overline{Q}_{e,h_{j}}^{\natural}(x_{1},x_{2},\frac{1}{h_{j}}x_{3}), \tag{4.54}\] \[\mathcal{E}_{m,\overline{Q}_{e,0}} =(\overline{Q}_{e,0}^{T}{\rm D}m-{\rm D}y_{0}|0)[{\rm D}_{x}\Theta( 0)]^{-1},\] \[\mathcal{K}_{\overline{Q}_{e,0}} =\Big{(}{\rm axl}(\overline{Q}_{e,0}^{T}\,\partial_{x_{1}} \overline{Q}_{e,0})\,|\,{\rm axl}(\overline{Q}_{e,0}^{T}\,\partial_{x_{2}} \overline{Q}_{e,0})\,|0\Big{)}[{\rm D}_{x}\Theta(0)\,]^{-1}\not\in{\rm Sym}(3)\,,\]
_and_
\[W_{\rm mp}^{\rm hom}(\mathcal{E}_{m,\overline{Q}_{e,0}}) =\,\mu\,\|{\rm sym}\ \mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\|^{2}+\mu_{c}\,\|{\rm skew}\ \mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\|^{2}+\,\frac{\lambda\,\mu}{\lambda+2\,\mu}\,\left[{\rm tr}(\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}})\right]^{2}+\frac{2\,\mu\,\mu_{c}}{\mu_{c}+\mu}\|\mathcal{E}^{T}_{m,\overline{Q}_{e,0}}\,n_{0}\|^{2}\] \[=W_{\rm shell}\big{(}\mathcal{E}^{\parallel}_{m,\overline{Q}_{e,0}}\big{)}+\frac{2\,\mu\,\mu_{c}}{\mu_{c}+\mu}\|\mathcal{E}^{\perp}_{m,\overline{Q}_{e,0}}\|^{2}, \tag{4.55}\] \[\widetilde{W}_{\rm curv}^{\rm hom}(\mathcal{K}_{\overline{Q}_{e,0}}) =\inf_{A\in\mathfrak{so}(3)}\widetilde{W}_{\rm curv}\Big{(}\big{(}{\rm axl}(\overline{Q}_{e,0}^{T}\,\partial_{\eta_{1}}\overline{Q}_{e,0})\,|\,{\rm axl}(\overline{Q}_{e,0}^{T}\,\partial_{\eta_{2}}\overline{Q}_{e,0})\,|\ {\rm axl}(A)\big{)}[({\rm D}_{x}\Theta)^{\natural}(0)]^{-1}\Big{)}\] \[=\mu L_{c}^{2}\Big{(}b_{1}\|{\rm sym}\mathcal{K}^{\parallel}_{\overline{Q}_{e,0}}\|^{2}+b_{2}\|\,{\rm skew}\,\mathcal{K}^{\parallel}_{\overline{Q}_{e,0}}\|^{2}+\frac{b_{1}b_{3}}{(b_{1}+b_{3})}{\rm tr}(\mathcal{K}^{\parallel}_{\overline{Q}_{e,0}})^{2}+\frac{2\,b_{1}b_{2}}{b_{1}+b_{2}}\|\mathcal{K}^{\perp}_{\overline{Q}_{e,0}}\|^{2}\Big{)}\,.\]
Proof.: The proof is completely similar to the proof provided in [41], where only some implicit properties of the homogenized curvature energy were used and not its explicit form.
## 5 Conclusion
The present paper gives the explicit calculation of the homogenized curvature energy. This explicit form was not strictly necessary in order to prove the \(\Gamma\)-convergence result, since some qualitative properties of \(\widetilde{W}_{\rm curv}^{\rm hom,\natural}(\mathcal{K}_{\overline{Q}_{e,h}^{\natural}})\) and \(\widetilde{W}_{\rm curv}^{\rm hom}(\mathcal{K}_{\overline{Q}_{e,0}})\) suffice for the proof. However, the final \(\Gamma\)-convergence model has to be written in an explicit form, and all the explicit calculations are provided in this paper.
A comparison between (3.23) and (5.2) shows that the homogenized flat curvature energy can thus be obtained from the curved one, and that Theorem 3.2 may be seen as a corollary of Theorem 4.4. Indeed, let us assume that in the homogenized energy obtained in (4.48) we have \({\rm D}\Theta=\mathbb{1}\,_{3}\), \({\rm D}y_{0}=(e_{1}|e_{2})\) and \(n_{0}=[{\rm D}_{x}\Theta(0)]e_{3}=e_{3}\), which corresponds to the flat shell case. Then \(\overline{Q}_{e,0}=\overline{R}_{0}\), \(\mathcal{K}_{\overline{Q}_{e,0}}=\mathcal{K}_{\overline{R}_{0}}^{\rm plate}\) and
\[\mathcal{K}_{\overline{Q}_{e,0}}=\begin{pmatrix}\Gamma_{11}&\Gamma_{12}&0\\ \Gamma_{21}&\Gamma_{22}&0\\ \Gamma_{31}&\Gamma_{32}&0\end{pmatrix}\left[({\rm D}_{x}\Theta)^{\natural}\right]^{-1}=\mathcal{K}_{\overline{R}_{0}}^{\rm plate},\quad{\rm with}\quad\Gamma_{\square}=\begin{pmatrix}\Gamma_{11}&\Gamma_{12}\\ \Gamma_{21}&\Gamma_{22}\end{pmatrix} \tag{5.1}\]
and we have
\[W_{\rm curv}^{\rm hom}(\Gamma)=\mu L_{c}^{2}\Big{(}b_{1}\|{\rm sym}\Gamma_{ \square}\|^{2}+b_{2}\|\,{\rm skew}\,\Gamma_{\square}\|^{2}+\frac{b_{1}b_{3}}{( b_{1}+b_{3})}{\rm tr}(\Gamma_{\square})^{2}+\frac{2b_{1}b_{2}}{b_{1}+b_{2}}\left\| \begin{pmatrix}\Gamma_{31}\\ \Gamma_{32}\end{pmatrix}\right\|^{2}\Big{)}\,, \tag{5.2}\]
and we rediscover the homogenized curvature energy for an initially flat configuration from Theorem 3.2.
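The reduction just described is also easy to verify numerically; the following small sketch (ours, purely illustrative) evaluates the curved-shell formula (4.48) for a tensor of the plate form (5.1) with \(n_{0}=e_{3}\) and compares it with the flat formula (5.2).

```python
# Illustrative check (not from the paper) that the homogenized curved-shell
# curvature energy (4.48) reduces to the flat formula (5.2) when n0 = e3 and
# the bending-curvature tensor has the plate form (5.1), i.e. zero third column.
import numpy as np

rng = np.random.default_rng(1)
b1, b2, b3 = rng.uniform(0.1, 2.0, size=3)
mu, Lc = 1.0, 1.0                             # overall factor mu * Lc**2

G = rng.standard_normal((3, 3))
G[:, 2] = 0.0                                 # K = (Gamma | 0): third column 0
n0 = np.array([0.0, 0.0, 1.0])

sym = lambda X: 0.5 * (X + X.T)
skew = lambda X: 0.5 * (X - X.T)
N = np.outer(n0, n0)
Kpar, Kperp = (np.eye(3) - N) @ G, N @ G      # tangential / normal parts

curved = mu * Lc**2 * (b1 * np.sum(sym(Kpar)**2) + b2 * np.sum(skew(Kpar)**2)
         + b1 * b3 / (b1 + b3) * np.trace(Kpar)**2
         + 2 * b1 * b2 / (b1 + b2) * np.sum(Kperp**2))

Gs = G[:2, :2]                                # the Gamma_square block
flat = mu * Lc**2 * (b1 * np.sum(sym(Gs)**2) + b2 * np.sum(skew(Gs)**2)
       + b1 * b3 / (b1 + b3) * np.trace(Gs)**2
       + 2 * b1 * b2 / (b1 + b2) * (G[2, 0]**2 + G[2, 1]**2))

print(np.allclose(curved, flat))              # True: (4.48) reduces to (5.2)
```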
In conclusion, the present paper completes the calculations of the membrane-like model constructed via \(\Gamma\)-convergence for flat and curved initial configurations of the shell, giving for the first time in the literature the explicit form of the \(\Gamma\)-limit for both situations.
In [6], by using a method which extends the reduction procedure from classical elasticity to the case of Cosserat shells, Birsan obtained a Cosserat shell model by considering a general ansatz. For the particular case of a quadratic ansatz for the deformation map, and skipping higher order terms, the membrane term of order \(O(h)\) from Birsan's model [6] coincides with the homogenized membrane energy determined by us in [41], i.e., in both models the harmonic mean \(\frac{2\mu\,\mu_{c}}{\mu+\mu_{c}}\) of \(\mu\) and \(\mu_{c}\) is present. We note that in the model constructed in [21], the algebraic mean of \(\mu\) and \(\mu_{c}\) plays the role of the harmonic mean appearing in the model given in [6] and in the \(\Gamma\)-convergence model in [41].
However, a comparison between the curvature energy obtained in the current paper as part of the \(\Gamma\)-limit and the curvature energy obtained using other methods [21, 6] shows that the weight of the energy term \(\|\mathcal{K}_{e,\natural}^{\perp}\|^{2}\) differs, as follows (a short worked comparison is given after this list):
* derivation approach [21] as well as in the model given in [6]: the algebraic mean of \(b_{1}\) and \(b_{2}\), i.e., \(\frac{b_{1}+b_{2}}{2}\,\);
* \(\Gamma\)-convergence: the harmonic mean of \(b_{1}\) and \(b_{2}\), i.e., \(\frac{2\,b_{1}b_{2}}{b_{1}+b_{2}}\,\).
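The two weights are easily compared; the following short remark is ours, added for illustration. By the arithmetic-harmonic mean inequality,
\[\frac{2\,b_{1}b_{2}}{b_{1}+b_{2}}\;\leq\;\frac{b_{1}+b_{2}}{2}\,,\]
with equality if and only if \(b_{1}=b_{2}\) (e.g., \(b_{1}=1\), \(b_{2}=4\) gives \(8/5\) versus \(5/2\)). Hence the \(\Gamma\)-convergence model always assigns a smaller or equal weight to this energy term than the models obtained in [21, 6]. |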
2309.10888 | Effect of interatomic repulsion on Majorana zero modes in a coupled
quantum-dot-superconducting-nanowire hybrid system | We study the low-energy eigenstates of a topological superconductor wire
modeled by a Kitaev chain, which is connected at one of its ends to a quantum
dot through nearest-neighbor (NN) hopping and NN Coulomb repulsion. Using an
unrestricted Hartree-Fock approximation to decouple the Coulomb term, we obtain
that the quality of the Majorana end states is seriously affected by this term
only when the dependence of the low-lying energies with the energy of the
quantum dot shows a "diamond" shape, characteristic of short wires. We discuss
limitations of the simplest effective models to describe the physics. We expect
the same behavior in more realistic models for topological superconducting
wires. | R. Kenyi Takagui Perez, A. A. Aligia | 2023-09-19T19:29:12Z | http://arxiv.org/abs/2309.10888v3 | Effect of interatomic repulsion on Majorana zero modes in a coupled quantum-dot-superconducting-nanowire hybrid system
###### Abstract
We study the low-energy eigenstates of a topological superconductor wire modeled by a Kitaev chain coupled at one of its ends to a quantum dot by nearest-neighbor (NN) hopping and NN Coulomb repulsion. Using an unrestricted Hartree-Fock approximation to decouple the Coulomb term, we obtain that the quality of the Majorana end states is seriously affected by this term only when the dependence of the low-lying energies with the energy of the quantum dot shows a "diamond" shape, characteristic of short wires.
## I Introduction
In recent years, topological superconducting wires have been a field of intense research in condensed matter physics, because of both the interesting basic physics involved [1] and possible applications in decoherence-free quantum computing based on the Majorana zero modes (MZMs) at their ends. [2; 3; 4; 5; 6; 7]
The simplest model that presents MZMs at the ends is the Kitaev chain for p-wave superconductors [8]. Lutchyn _et al._[9] and Oreg _et al._[10] proposed a model for topological superconducting wires that includes spin-orbit coupling (SOC), proximity-induced s-wave superconductivity, and an applied magnetic field perpendicular to the direction of the SOC. The phase diagram of the lattice version of this model has been calculated recently [11]. For reasonable parameters the model has a topological phase with MZMs localized at its ends, as in the Kitaev chain. MZMs of similar wires were found experimentally [12; 13; 14; 15].
A difficulty of these experiments is to identify unambiguously that the zero modes are of topological origin, which implies that they remain at zero energy and localized at the end of the nanowire if small perturbations are applied to the system. Using the model mentioned above for s-wave topological superconducting wires, Prada _et al._ proposed that a quantum dot (QD) at the end of the nanowire may be used as a powerful spectroscopic tool to quantify the degree of Majorana nonlocality through a local transport measurement [16]. This proposal has been confirmed experimentally [17]. A similar procedure has been also proposed for the Kitaev spinless model [18], and further theoretical studies have been made recently for the spinfull model [19] and a minimal Kitaev chain [20].
In general, the energy of the dot level is varied by changing the gate potential, and the low-energy levels detected by the conductance show either a crossing ("bowtie" shape, like in Fig. 4) or a "diamond" pattern (like in Fig. 6, top) [16; 20; 22].
Compared to the large number of theoretical works studying non-interacting superconducting wires, studies of the effects of interactions are rare [21]. Recently, Ricco _et al._ pointed out that the Coulomb repulsion between the electrons of the dot and the nanowire might spoil the quality of the MZMs due to an effective increase of the coupling between the MZMs localized at the left and at the right of the nanowire [22]. They considered a spinless model consisting of a Kitaev chain with a QD at its left end. There is hopping between the QD and the chain and also an interaction
\[H_{V}=Vn_{d}n_{w}, \tag{1}\]
where \(n_{d}\) is the number of electrons in the dot and \(n_{w}\) is the total number of electrons in the superconducting wire. The authors replaced this operator by the parity operator at low energies \(n_{w}\sim i\gamma_{L}\gamma_{R}+1/2\) (neglecting the excited states), where \(\gamma_{\nu}\) is the Majorana at the end \(\nu\) (left or right) of the wire _at a given chemical potential_. Neglecting the states at higher energy, the authors solve exactly the effective low-energy model and show that \(H_{V}\) contributes to the displacement of the MZMs from zero energy, spoiling the Majorana quality and the topological properties.
Usually the low-energy effective Hamiltonian is a very good approximation of the full one. For example, quantitative agreement has been found between both descriptions for the Josephson current between two topological superconducting wires [23]. However, a simple argument suggests that this might not be the case for the interaction given by Eq. (1). A simple mean field decoupling of this term gives
\[Vn_{d}n_{w}\simeq V\left(\langle n_{d}\rangle n_{w}+n_{d}\langle n_{w}\rangle -\langle n_{d}\rangle\langle n_{w}\rangle\right). \tag{2}\]
The first term on the right-hand side is a correction to the chemical potential, and the second is a correction to the on-site energy of the dot. We find that even in the presence of hopping between the dot and the wire, the MZMs persist under these changes. In other words, the states described by \(\gamma_{\nu}\)_change their form_ and accommodate to the new situation, as might be expected from the robustness of end states of topological character. This change is not captured by the low-energy effective Hamiltonian.
In this work we calculate the low-energy spectrum of a Kitaev chain whose leftmost site has a hopping and
also a repulsion to a QD state. The latter is treated in the unrestricted Hartree-Fock approximation. In Section II we describe the model and the approximation. In Section III we show the numerical results. We summarize the results in Section IV.
## II Model and approximation
The Hamiltonian of the Kitaev chain interacting with a QD is
\[H = \sum_{j=1}^{N-1}(-tc_{j+1}^{\dagger}c_{j}+\Delta c_{j+1}^{\dagger} c_{j}^{\dagger}+\text{H.c.})-\mu\sum_{j=1}^{N}c_{j}^{\dagger}c_{j} \tag{3}\] \[+\epsilon_{d}d^{\dagger}d-t^{\prime}(d^{\dagger}c_{1}+\text{H.c.})\] \[+V\left(n_{d}-\frac{1}{2}\right)\left(n_{1}-\frac{1}{2}\right),\]
where \(n_{d}=d^{\dagger}d\) and \(n_{1}=c_{1}^{\dagger}c_{1}\). The first two terms of Eq. (3) describe the Kitaev chain with hopping \(t\), p-wave superconducting order parameter \(\Delta\) and chemical potential \(\mu\). The third term describes the QD. The fourth term is the hopping between the QD and the Kitaev chain and the last term is the Coulomb repulsion between the electrons in the QD and the ones at the leftmost site of the chain. We treat this term in the unrestricted Hartree-Fock approximation:
\[n_{d}n_{1} \simeq \left\langle n_{d}\right\rangle n_{1}+n_{d}\left\langle n_{1} \right\rangle-\left\langle n_{d}\right\rangle\left\langle n_{1}\right\rangle \tag{4}\] \[-\left\langle d^{\dagger}c_{1}\right\rangle c_{1}^{\dagger}d-d^{ \dagger}c_{1}\left\langle c_{1}^{\dagger}d\right\rangle+\left\langle d^{ \dagger}c_{1}\right\rangle\left\langle c_{1}^{\dagger}d\right\rangle\] \[+\left\langle d^{\dagger}c_{1}^{\dagger}\right\rangle c_{1}d+d^{ \dagger}c_{1}^{\dagger}\left\langle c_{1}d\right\rangle-\left\langle d^{ \dagger}c_{1}^{\dagger}\right\rangle\left\langle c_{1}d\right\rangle.\]
We note that our model is different from that of Ricco _et al._[22], because they considered a repulsion with all the sites of the wire with the same intensity. We believe that our model is more realistic. Another difference is that they treated the repulsion exactly in an effective model within a low-energy subspace. We include all states but treat the repulsion using the approximation of Eq. (4).
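To make the procedure concrete, the following is a minimal illustrative sketch of the self-consistent loop (our own pseudo-implementation, not the authors' code; the Bogoliubov-de Gennes sign conventions and the convergence tolerance are our assumptions). It builds the mean-field BdG matrix for the Hamiltonian (3) with the decoupling (4) and iterates the four independent expectation values until convergence.

```python
# Minimal illustrative sketch (not the authors' code) of the unrestricted
# Hartree-Fock self-consistency loop for Eqs. (3) and (4).
# Basis: Psi = (d, c_1, ..., c_N, d^dag, c_1^dag, ..., c_N^dag).
import numpy as np

def bdg_matrix(N, t, Delta, mu, eps_d, tp, V, n_d, n_1, hop, pair):
    """BdG matrix with the mean fields of Eq. (4); conventions are ours."""
    M = N + 1
    h = np.zeros((M, M))                        # normal part
    D = np.zeros((M, M))                        # pairing part (antisymmetric)
    h[0, 0] = eps_d + V * (n_1 - 0.5)           # Hartree shift of the dot level
    h[1, 1] = -mu + V * (n_d - 0.5)             # Hartree shift of site 1
    h[0, 1] = h[1, 0] = -(tp + V * hop)         # Fock-renormalized hopping t'_eff
    D[0, 1], D[1, 0] = V * pair, -V * pair      # induced dot-chain pairing
    for j in range(1, N):                       # Kitaev chain, sites 1..N
        h[j, j + 1] = h[j + 1, j] = -t
        h[j + 1, j + 1] = -mu
        D[j, j + 1], D[j + 1, j] = Delta, -Delta
    return np.block([[h, D], [-D.conj(), -h.conj()]])

def solve_hf(N=50, t=1.0, Delta=0.2, mu=0.0, eps_d=0.0, tp=0.2, V=1.0,
             tol=1e-8, max_iter=500):
    n_d, n_1, hop, pair = 0.5, 0.5, 0.0, 0.0    # initial guess
    for _ in range(max_iter):
        H = bdg_matrix(N, t, Delta, mu, eps_d, tp, V, n_d, n_1, hop, pair)
        w, U = np.linalg.eigh(H)
        occ = U[:, w < 0]                        # filled quasiparticle states
        rho = occ @ occ.conj().T                 # rho[b, a] = <Psi_a^dag Psi_b>
        M = N + 1
        new = (rho[0, 0].real,                   # <n_d>
               rho[1, 1].real,                   # <n_1>
               rho[1, 0].real,                   # <d^dag c_1>
               rho[M + 1, 0].real)               # <d^dag c_1^dag>
        if max(abs(a - b) for a, b in zip(new, (n_d, n_1, hop, pair))) < tol:
            break
        n_d, n_1, hop, pair = new
    return new, np.sort(w)

fields, energies = solve_hf()
print("mean fields <n_d>, <n_1>, <d+c1>, <d+c1+>:", fields)
```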
## III Results
We take \(t=1\) as the unit of energy and choose \(\Delta=0.2\). For later comparison, we first discuss the isolated Kitaev chain (without the quantum dot) for two different lengths of the chain. The resulting energies are shown in Fig. 1 as a function of the chemical potential \(\mu\). The curve is symmetric under the change of sign of \(\mu\), and therefore only positive values of \(\mu\) are displayed in the figure. As is known, the system is topological for \(|\mu|<2t\). In this region, there are two low-energy states at energies near zero. For the infinite chain, these states correspond to the left and right MZMs \(\gamma_{L}\) and \(\gamma_{R}\) localized at the ends of the chain. In a finite chain, these modes are mixed by an effective term \(\lambda i\gamma_{L}\gamma_{R}\) and the energies are split into \(\pm\lambda\). As expected, \(\lambda\) decays exponentially with increasing system size. From Fig. 1, one can see that \(\lambda\) decreases by almost four orders of magnitude when the length of the chain is increased from 20 to 50 sites.
One can also see from the figures that \(\lambda\) oscillates as the chemical potential is varied. The period of oscillation is more than two times smaller for 50 sites in comparison with 20 sites, and it is also smaller for larger \(|\mu|\), near the topological transition to the trivial phase.
For the future discussion of the effects of the nearest-neighbor repulsion \(V\), we represent in Fig. 2 the independent expectation values that enter the unrestricted Hartree-Fock approximation, Eq. (4), determined self-consistently. We have chosen \(t^{\prime}=0.2\), \(V=1\), \(\mu=0\) and a chain of 50 sites excluding the quantum dot. The results are rather insensitive to system size.
As expected, the occupancy of the dot is near 1, when its energy is negative and large in magnitude compared to \(t^{\prime}\) (\(-\epsilon_{d}\gg t^{\prime}\)), it is equal to 1/2 for \(\epsilon_{d}=0\) and it is near 0 for \(\epsilon_{d}\gg t^{\prime}\).
In contrast, the occupancy of the first site of the chain \(\left\langle n_{1}\right\rangle\) follows qualitatively the opposite behavior: when \(\left\langle n_{d}\right\rangle>1/2\), the first site feels the repulsion with the electrons in the dot and its occupancy decreases, but its hopping with the rest of the chain moderates this effect and the occupancy deviates from 0.5 by less than 0.2.

Figure 1: (Color online) Eigenvalues of the Kitaev chain for 20 sites (top) and 50 sites (middle and bottom) as a function of the chemical potential.
The expectation value of the hopping \(\langle d^{\dagger}c_{1}\rangle\) follows qualitatively the behavior expected for a diatomic heteronuclear molecule with a single orbital per atom, in which the two atomic states are hybridized. The expectation value is maximum when both atomic levels coincide (\(\epsilon_{d}=0\)) and decreases symmetrically with the difference between the atomic levels. The half-width of the curve is expected to be of the order of the effective hopping, which in this case is \(t^{\prime}_{\rm eff}=t^{\prime}+V\langle d^{\dagger}c_{1}\rangle\). For \(\epsilon_{d}=0\), this value is near 0.5, considerably larger than the bare value \(t^{\prime}=0.2\).
The pairing expectation value \(\langle d^{\dagger}c_{1}^{\dagger}\rangle\) follows qualitatively a similar dependence with the dot energy as the hopping contribution discussed above, but with smaller values. Its dependence with \(\epsilon_{d}\) is also narrower. Its physical origin is a proximity induced \(p\)-wave superconductivity, which is larger when the energy of the dot is nearer to the chemical potential of the wire.
The resulting eigenvalues of the system as a function of the dot energy for \(V=0\) and \(V=1\) are compared in Fig. 3 for a chain of 20 sites. For 50 sites the discussion below is practically the same, but the results are displayed more clearly in the smaller system. For the sake of brevity we omit displaying the results for 50 sites. We discuss first the case \(V=0\). For large \(|\epsilon_{d}|\), the eigenvalues at small energies (of absolute value less than 1) are practically the same as those of the isolated Kitaev chain shown in Fig. 1. For \(\mu=0\), the results are symmetric under interchange of the sign of \(\epsilon_{d}\). In addition to the states of the isolated chain, there are, roughly speaking, two other symmetric states at energies \(\pm E_{m}\), which to a first approximation correspond to the levels of larger absolute value of a heteronuclear molecule (as that mentioned above) that mixes two states with energies \(\epsilon_{d}\) and zero. For large \(|\epsilon_{d}|\), \(E_{m}\sim\epsilon_{d}\), and for \(\epsilon_{d}=0\), \(E_{m}\sim t^{\prime}\). These states actually hybridize with the states of the isolated Kitaev chain, showing several anticrossings that are evident in Fig. 3.
When \(V\) is included, the higher-energy eigenvalues are modified, particularly those related to the mixing with the dot state near \(\epsilon_{d}=0\). Since, as explained above, the effective hopping between the dot and the first site of the chain increases from \(t^{\prime}=0.2\) to \(t^{\prime}_{\rm eff}\sim 0.5\) when \(V\) is increased from 0 to 1, a similar change takes place for the energies that are near \(\pm t^{\prime}\) in Fig. 3. However, the two energies with the lowest absolute value, related to the splitting of the MZMs, are very little modified by \(V\).
In Fig. 4 we display the energies related to the MZMs for two values of \(\mu\) and a chain of 50 sites. One can see that for \(\mu=0\), the inclusion of the nearest-neighbor repulsion \(V\), at least within our unrestricted Hartree-Fock approximation, slightly _decreases_ the splitting of the two low-energy states, indicating that the quality of the MZMs is actually _improved_ when the repulsion is added. For \(\mu\neq 0\), the symmetry under change of sign of the dot energy \(\epsilon_{d}\) is lost, and the asymmetry increases with \(V\). In any case, the effect of \(V\) on the quality of the MZMs remains very small. The shape of the curve is similar to that found in previous experiments [17] and theory [16; 22].
In Fig. 5, we show the coefficients of the lowest eigenstate with positive energy for the parameters indicated inside the figure. The fermion is written as \(\sum_{i}\alpha_{i}f_{i}\), where \(\alpha_{i}\) are the 102 coefficients and the order of the corresponding fermions \(f_{i}\) is \(f_{1}=d^{\dagger}\), \(f_{2}=d\), \(f_{3}=c_{1}^{\dagger}\), \(f_{4}=c_{1}\),... \(f_{102}=c_{50}\). As expected, the state is a mixture of the MZMs at the ends, with negligible weight in the middle of the chain. However, in contrast to the isolated Kitaev chain, there is a significant weight of the state also at the dot, with a probability which is about \(1/10\) compared to that of the first site in the chain. This probability increases with decreasing \(|\epsilon_{d}|\).

Figure 3: (Color online) Energies of the system as a function of the energy of the dot for \(V=0\) (top) and \(V=1\) (bottom).

Figure 2: (Color online) Expectation values entering Eq. (4) as a function of dot energy.
Finally, in Fig. 6 we display the energies for a short chain of 5 sites, with a significant mixing of both MZMs at the ends of the chain. In this case, the MZM at the right end has significant weight at the left end, and therefore it also feels the repulsion with the quantum dot. For \(V=0\), the shape is the characteristic "diamond" observed in experiment [17] and in calculations [16; 22] when the hopping between the quantum dot and the MZM at the right end \(\gamma_{R}\) is important [16; 22].
In contrast to the previous cases, now the effect of adding the Coulomb repulsion is significant, leading to a strong further splitting of the MZMs, of the order of a fraction of \(t^{\prime}\).
## IV Summary and discussion
We have solved a model for a Kitaev chain on a lattice, connected to a quantum dot at one of its ends by a hopping term and a Coulomb repulsion between the relevant state of the quantum dot and the end site of the chain.
As the energy of the state of the quantum dot is varied, the energies of the two eigenstates of the system nearest to zero display one of the two characteristic shapes seen in experiment and previous theories, signaling the presence of Majorana zero modes (MZMs) at the ends of the wire, coupled between them. In one of them, the energies of the two states cross when the energy of the quantum dot is near the Fermi energy. In this case, the coupling between the MZMs is weak and, analyzing the wave function of these eigenstates, one sees that one of the MZMs has a substantial weight at the quantum dot. Treating the Coulomb repulsion in the unrestricted Hartree-Fock approximation, we find that it does not essentially affect the quality of the MZMs.
In contrast, in the other case, in which the energies of the low-lying states as a function of the dot level have a "diamond" shape, signaling a stronger coupling between the MZMs (shorter chains), the effect of the interatomic Coulomb repulsion is significant, splitting the MZMs further.

Figure 5: (Color online) Coefficients of the lowest eigenstate of the system.

Figure 4: (Color online) Comparison of the two energies nearer to zero in the system as a function of the energy of the dot between \(V=0\) and \(V=1\) for \(\mu=0\) (top) and \(\mu=0.5\) (bottom).

Figure 6: (Color online) Energies of the system as a function of the energy of the dot for \(V=0\) (top) and \(V=1\) (bottom).
###### Acknowledgements.
R. K. T. P. has a scholarship of Instituto Balseiro. A. A. A. acknowledges financial support provided by PICT 2017-2726 and PICT 2018-01546 of the ANPCyT, Argentina.
|
2302.14744 | Tightness of prescriptive tree-based mixed-integer optimization
formulations | We focus on modeling the relationship between an input feature vector and the
predicted outcome of a trained decision tree using mixed-integer optimization.
This can be used in many practical applications where a decision tree or tree
ensemble is incorporated into an optimization problem to model the predicted
outcomes of a decision. We propose tighter mixed-integer optimization
formulations than those previously introduced. Existing formulations can be
shown to have linear relaxations that have fractional extreme points, even for
the simple case of modeling a single decision tree. A formulation we propose,
based on a projected union of polyhedra approach, is ideal for a single
decision tree. While the formulation is generally not ideal for tree ensembles
or if additional constraints are added, it generally has fewer extreme points,
leading to a faster time to solve, particularly if the formulation has
relatively few trees. However, previous work has shown that formulations based
on a binary representation of the feature vector perform well computationally
and hence are attractive for use in practical applications. We present multiple
approaches to tighten existing formulations with binary vectors, and show that
fractional extreme points are removed when there are multiple splits on the
same feature. At an extreme, we prove that this results in ideal formulations
for tree ensembles modeling a one-dimensional feature vector. Building on this
result, we also show via numerical simulations that these additional
constraints result in significantly tighter linear relaxations when the feature
vector is low dimensional. We also present instances where the time to solve to
optimality is significantly improved using these formulations. | Max Biggs, Georgia Perakis | 2023-02-28T16:44:10Z | http://arxiv.org/abs/2302.14744v1 | # Tightness of prescriptive tree-based mixed-integer optimization formulations
###### Abstract
We focus on modeling the relationship between an input feature vector and the predicted outcome of a trained decision tree using mixed-integer optimization. This can be used in many practical applications where a decision tree or tree ensemble is incorporated into an optimization problem to model the predicted outcomes of a decision. We propose tighter mixed-integer optimization formulations than those previously introduced. Existing formulations can be shown to have linear relaxations that have fractional extreme points, even for the simple case of modeling a single decision tree. A formulation we propose, based on a projected union of polyhedra approach, is ideal for a single decision tree. While the formulation is generally not ideal for tree ensembles or if additional constraints are added, it generally has fewer extreme points, leading to a faster time to solve, particularly if the formulation has relatively few trees. However, previous work has shown that formulations based on a binary representation of the feature vector perform well computationally and hence are attractive for use in practical applications. We present multiple approaches to tighten existing formulations with binary vectors, and show that fractional extreme points are removed when there are multiple splits on the same feature. At an extreme, we prove that this results in ideal formulations for tree ensembles modeling a one-dimensional feature vector. Building on this result, we also show via numerical simulations that these additional constraints result in significantly tighter linear relaxations when the feature vector is low dimensional. We also present instances where the time to solve to optimality is significantly improved using these formulations.
_Key words_ : Tree ensembles, Prescriptive analytics, Mixed-integer optimization
## 1 Introduction
A fundamental problem in operations research and management science is decision-making under uncertainty. Recently, attention has been given to modeling uncertain outcomes using machine learning functions, trained from previous decisions made under a variety of circumstances (Bertsimas et al. 2016, Cheng et al. 2017, Tjeng et al. 2017, Boob et al. 2022, Anderson et al. 2018, Bunel et al. 2018, Fischetti and Jo 2018, Kumar et al. 2019, Misic 2020, Biggs et al. 2022, Bergman et al. 2022). Due to the complex nature of real-world decision-making, often the model that best represents the outcomes observed is nonlinear, such as a neural network or a tree ensemble. This leads to a potentially complex optimization problem for the decision-maker to find the best decision, as predicted by the machine learning function.
An example of this occurs in reinforcement learning, where the future reward resulting from a decision is uncertain but can be approximated using machine learning models, such as decision trees or tree ensembles. In some applications, such as playing Atari video games (Mnih et al. 2015), the decision set is small so all the decisions can be enumerated and evaluated. In comparison, in many real-world operational problems - for example, dynamic vehicle routing problems (Bent and Van Hentenryck 2007, Pillac et al. 2011) or kidney transplantation (Sonmez and Unver 2017, Ashlagi et al. 2018) - complex decisions whose outcomes are uncertain need to be made at every stage of an online process. These decisions are often high dimensional or combinatorial in nature and subject to constraints on what is feasible. This can result in a very large action space. As a result, enumeration is no longer a tractable option, and a more disciplined optimization approach must be taken. Furthermore, the selection of the best action is further complicated by the nonlinear value function approximation.
One approach to finding optimal decisions when the outcome is estimated using a complex machine learning method is to use mixed-integer optimization (MIO) to model this relationship. In particular, there has recently been significant interest in modeling trained neural networks, by encoding these relationships using auxiliary binary variables and constraints (Cheng et al. 2017,
Tjeng et al. 2017, Anderson et al. 2018, Bunel et al. 2018, Fischetti and Jo 2018, Kumar et al. 2019, Wang et al. 2021). Another popular and powerful approach for supervised learning, yet one that is less studied in the prescriptive setting, is tree ensemble methods. Misic (2020) provides unconstrained optimization examples in drug discovery, where a tree ensemble predicts a measure of the activity of a proposed compound, and customized price optimization, where a tree ensemble predicts the profit as a function of prices and store-level attributes. Biggs et al. (2022) provide examples in real estate development of maximizing the sale price of a new house that is predicted as a function of construction decisions and location features, and a method for creating fair juries based on jurors' predicted a priori propensities to vote guilty or not due to their demographics and beliefs. These applications have nontrivial constraints, but can be represented as polyhedra with integer variables. Additional applications of trained decision trees or tree ensembles embedded in an optimization problem include retail pricing (Ferreira et al. 2015), assortment optimization (Chen et al. 2019, Chen and Misic 2022), last-mile delivery (Liu et al. 2021), optimal power flow (Halilbasic et al. 2018), auction design (Verwer et al. 2017), constraint learning (Maragno et al. 2021) and Bayesian optimization (Thebelt et al. 2021).
The goal in these works is often to propose tractable optimization formulations, which allow large problem instances to be solved in a reasonable amount of time. An important consideration when formulating these mixed-integer optimization formulations is how _tight_, or strong, the formulation is. Most methods for optimizing mixed-integer formulations involve relaxing the integrality requirements on variables and solving a continuous optimization problem. In the popular branch and bound algorithm, if the optimal solution is fractional for integer variables, then multiple subproblems are created with added constraints to exclude the fractional solution. If there are fewer fractional solutions for the relaxed problem, corresponding to a tighter formulation, this can result in a significantly faster time to solve. Furthermore, some problems can be formulated in such a way that the linear relaxation doesn't have any fractional extreme points, known as an _ideal_ formulation. Oftentimes these ideal formulations can be solved extremely quickly.
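As a small illustration of how two formulations of the same mixed-integer set can differ in tightness (this example is ours, added for concreteness), consider the integer-feasible set \(S=\{(0,0)\}\cup\{(x,1):0\le x\le 1\}\). Both
\[P_{1}=\{(x,z)\in\mathbb{R}\times[0,1]:0\le x\le z\}\quad\text{and}\quad P_{2}=\{(x,z)\in\mathbb{R}\times[0,1]:0\le x\le 10z,\;x\le 1\}\]
have the same feasible points for \(z\in\{0,1\}\), but \(P_{1}\) is exactly the convex hull of \(S\) (an ideal formulation), while \(P_{2}\) has the fractional extreme point \((x,z)=(1,1/10)\). Maximizing \(x-0.9z\), the relaxation of \(P_{1}\) gives the true optimum \(0.1\), whereas the relaxation of \(P_{2}\) gives only the much weaker bound \(0.91\).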
Another benefit of stronger formulations is that the linear programming (LP) relaxations provide tighter upper bounds, which are also useful in many applications. An example of this is evaluating the robustness of a machine learning model (Carlini and Wagner 2017, Dvijotham et al. 2018). If an input can be perturbed by a practically insignificant amount and result in a significantly different prediction, this suggests that the model is not robust. Evaluating robustness can be formulated as a constrained optimization problem over local inputs to find the maximally different output. As finding the exact optimal bound can be time-consuming, often an upper bound on how much the solution could change is sufficient.
### 1.1 Contributions
We model the relationship between the input feature vector and the predicted output for a trained decision tree. This can be used in a range of optimization applications involving decision trees or tree ensembles. We present a novel mixed-integer optimization formulation based on a projected _union of polyhedra_ approach, which we prove is ideal for a single tree. We show that existing mixed-integer optimization formulations for modeling trees, such as Biggs et al. (2022) or Misic (2020) do not have this property. We also show that the constraints in our model are facet-defining. While this formulation is generally not ideal when we impose polyhedral constraints on the decision, or when multiple trees are used in an ensemble model, the formulation generally excludes fractional extreme points present in Biggs et al. (2022) and Misic (2020), leading to tighter formulations.
We also present new formulations that use a binary representation of the feature vector as proposed in Misic (2020). While these variables are more difficult to incorporate into a constrained optimization formulation, they do have some advantages when it comes to the branching behavior in the MIO solver, leading to a faster time to solve in some instances. We propose different constraints that can be added to tighten the formulation from Misic (2020). The _expset_ formulation is based on exploiting the greater than or equal to representation of the feature vector from Misic (2020), leading to larger groups of leaf variables being turned off when a split is made. The _elbow_ formulation removes specific fractional solutions that arise when there are nested branches on the
same feature in a tree. We characterize the conditions in which each of these constraints removes fractional solutions, which generally occurs in scenarios where there are multiple splits on the same feature. Extending this, we show that the _expset_ formulation leads to an ideal formulation when all the splits are on the same feature, which occurs for tree ensembles when the feature vector is one-dimensional. This property doesn't hold for the formulation in Misic (2020). In conjunction with the _union of polyhedra_ formulation being ideal for a single tree with multiple features, this result provides insights for the practitioner on when different formulations might be tighter. While not directly comparable due to the use of different variables, when there are many trees in the ensemble but relatively few variables, the _expset_ formulation is likely to be tighter. When there are few trees but many variables, the _union of polyhedra_ formulation is likely to be tighter.
We explore the performance of these approaches through extensive simulations. In partial agreement with our theoretical findings, we show that in some instances, the _union of polyhedra_ formulation offers significant solve time improvements for tree ensembles with few trees. Similarly, the _elbow_ formulation offers improvements for problems with few features. While the _expset_ formulation generally doesn't offer faster solve times, we show that the linear relaxations it provides can be significantly stronger, which is useful in many applications where a bound on the optimal solution is desired, particularly for trees with few features.
## 2 Preliminaries
Given a feature vector \(\boldsymbol{w}\in D\subseteq\mathbb{R}^{d}\), our goal is to model the output of a decision tree \(f^{(t)}(\boldsymbol{w})\) using a mixed-integer optimization formulation. More formally, we model the graph, \(gr(f^{(t)};D)=\{\boldsymbol{w},y_{t}|\boldsymbol{w}\in D,y_{t}=f^{(t)}( \boldsymbol{w})\}\). With such a formulation, we can easily model a range of practical applications, such as finding the optimal feature vector to maximize the predicted outcome of a tree ensemble \(\sum_{t=1}^{T}y_{t}\), or solving a reinforcement learning subproblem with complex constraints where the value function is given by a decision tree.
### 2.1 Decision trees
A decision tree \(f^{(t)}(\boldsymbol{w})\) with \(p\) leaves is a piecewise constant function, where a constant outcome \(s_{l}\) is predicted if feature vector \(\boldsymbol{w}\) falls within a particular leaf \(\mathcal{L}_{l},l\in[p]\), so that \(f^{(t)}(\boldsymbol{w})=s_{l}\) if \(\boldsymbol{w}\in\mathcal{L}_{l}\).
Each leaf, \(\mathcal{L}_{l}\), is a hyperrectangular set defined by an upper bound \(u_{li}\) and a lower (bottom) bound \(b_{li}\) for each feature dimension \(w_{i},i\in[d]\). Throughout, we assume \(w_{i}\) is bounded. A leaf is defined as:

\[\mathcal{L}_{l}=\{\boldsymbol{w},y\ |\ w_{i}\leq u_{li}\qquad\forall\ i\in[d], \tag{1a}\]
\[w_{i}\geq b_{li}\qquad\forall\ i\in[d], \tag{1b}\]
\[y=s_{l}\} \tag{1c}\]
The upper bounds and lower bounds associated with each leaf are defined by a hierarchy of axis-aligned splits. We use the often-used convention that the splits in the tree are of the form \(w_{i}\leq\theta\)(Pedregosa et al., 2011). These splits define the tree and partition the feature space into leaves. We denote \(\mathbf{splits}(t)\) as the set of splits corresponding to tree \(t\in T\), \(\mathbf{left}(s)\) as the set of leaves to the left of split \(s\) in the tree (i.e., those that satisfy the split condition \(w_{i}\leq\theta\)), and \(\mathbf{right}(s)\) as the set of leaves to the right for which \(w_{i}>\theta\). The upper bounds \(u_{il}\) are defined by the threshold of the left splits that lead to the leaf, while the lower bounds \(b_{il}\) are defined by the thresholds of the right splits. In the case where there are multiple axis-aligned splits along a dimension leading to a leaf (i.e., \(w_{1}\leq 5\) then \(w_{1}\leq 2\)), the upper bound will be the minimum of all less than splits, while the lower bound will be the maximum. When there are no splits on a feature, the upper and lower bounds on the leaf are the upper and lower bounds on the feature vector.
Figure 1: Examples of decision tree with corresponding notation and partition of the feature space
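To make this bound bookkeeping concrete, the following sketch (our own, assuming a trained scikit-learn `DecisionTreeRegressor` and box bounds `lo`, `hi` on the features; the helper name is ours) recovers the triple \((b_{l},u_{l},s_{l})\) for every leaf by walking the tree:

```python
# A minimal sketch: extract each leaf's hyperrectangle and score from a
# trained scikit-learn DecisionTreeRegressor.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def leaf_bounds(tree: DecisionTreeRegressor, lo, hi):
    """Return a list of (lower, upper, score) triples, one per leaf."""
    t = tree.tree_
    leaves = []

    def walk(node, b, u):
        if t.children_left[node] == -1:           # node is a leaf
            leaves.append((b.copy(), u.copy(), float(t.value[node][0][0])))
            return
        i, theta = t.feature[node], t.threshold[node]
        u_left = u.copy()                          # left branch: w_i <= theta
        u_left[i] = min(u_left[i], theta)          # tightens the upper bound
        walk(t.children_left[node], b.copy(), u_left)
        b_right = b.copy()                         # right branch: w_i > theta
        b_right[i] = max(b_right[i], theta)        # tightens the lower bound
        walk(t.children_right[node], b_right, u.copy())

    walk(0, np.asarray(lo, dtype=float), np.asarray(hi, dtype=float))
    return leaves
```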
### 2.2 Mixed-integer optimization
Our goal is to model the graph \(gr(f;D)\) using mixed-integer optimization. To facilitate this, often auxiliary continuous \(\boldsymbol{q}\in\mathbb{R}^{n}\) and integer variables are introduced to help model the complex relationships between variables, although the formulations we study require only binary variables \(\boldsymbol{z}\in\{0,1\}^{m}\). A mixed-integer optimization formulation consists of linear constraints on \((\boldsymbol{w},y,\boldsymbol{q},\boldsymbol{z})\in\mathbb{R}^{d+1+n+m}\) which define a polyhedron \(Q\), combined with binary constraints on \(z\in\{0,1\}^{m}\). For a valid formulation, the set \((\boldsymbol{w},y)\) associated with a feasible solution \((\boldsymbol{w},y,\boldsymbol{q},\boldsymbol{z})\in Q\cap\mathbb{R}^{d+1+n} \times\{0,1\}^{m}\) must be the same as the graph we desire to model \((\boldsymbol{w},y)\in gr(f;D)\). More formally, the auxiliary variables \((\boldsymbol{q},\boldsymbol{z})\) are removed via an orthogonal projection \(Proj_{\boldsymbol{w},y}(Q)=\{\boldsymbol{w},y\mid\exists\ \boldsymbol{q}, \boldsymbol{z}\ s.t.\ \boldsymbol{w},y,\boldsymbol{q},\boldsymbol{z}\in Q\}\), to leave a set of feasible \((\boldsymbol{w},y)\). Therefore, a valid mixed-integer optimization formulation may be defined as:
Definition 1 (Valid mixed-integer optimization formulation): \[gr(f;D)=Proj_{\boldsymbol{w},y}(Q\cap\mathbb{R}^{d+1+n}\times\{0,1\}^{m})\]
We will refer to \(Q\) as the linear relaxation of the formulation, which is the MIO formulation with the integrality requirements removed. An MIO formulation is ideal if the extreme points of the polyhedron are binary for those variables that are required to be:
Definition 2 (Ideal formulation): \[\operatorname{ext}(Q)\subseteq\mathbb{R}^{d+1+n}\times\{0,1\}^{m}\]
where \(\operatorname{ext}(Q)\) is the extreme points of the polyhedron \(Q\).
## 3 Further relevant literature
Modeling trained tree ensembles using mixed-integer optimization is studied in Biggs et al. (2022) and Misic (2020). Misic (2020) proved this problem is NP-hard and proposed formulations for unconstrained optimization problems or problems with simple box constraints on each variable.
Mistry et al. (2021) provide a customized branch and bound algorithm for optimizing gradient-boosted tree ensembles based on the MIO formulation in Misic (2020), while Perakis and Thayanaran (2021) also propose a customized branching procedure. Biggs et al. (2022) propose formulations that include polyhedral constraints; this approach uses big-M constraints to linearize the nonlinear behavior of the trees. To optimize large tree ensembles in a reasonable amount of time, both Misic (2020) and Biggs et al. (2022) offer ways to decompose a large tree ensemble and propose heuristic approaches that involve truncating trees to a limited depth (Misic 2020) or sampling a subset of the trees (Biggs et al. 2022). All of these approaches involve solving a mixed-integer optimization formulation of an ensemble of trees.
We follow a "Predict then Optimize" approach, where we study formulations based on an already trained decision tree or tree ensemble, but there has also been significant recent interest in the joint estimation and optimization problem using trees to prescribe actions directly from data (Kallus 2017, Zhou et al. 2018, Bertsimas et al. 2019, Elmachtoub et al. 2020, Biggs et al. 2021, Jo et al. 2021, Amram et al. 2022).
### 3.1 Formulation from Misic (2020)
We review the formulation from Misic (2020) both as a benchmark, and to motivate the formulations we propose. Rather than linking the feature vector \(\mathbf{w}\) directly to the output \(f(\mathbf{w})\), Misic (2020) uses a binary representation of the feature vector \(\mathbf{w}\), which represents whether the feature falls below each split in the tree. Specifically, binary variables are introduced with
\[x_{ij}=\begin{cases}1&\text{if }w_{i}\leq\theta_{ij}\\ 0&\text{if }w_{i}>\theta_{ij}\end{cases}\]
where \(\theta_{ij}\) is the \(j^{th}\) smallest split threshold associated with dimension \(i\). As a result, the \(\mathbf{x}_{i}\) vector has the structure of consecutive 0's, followed by consecutive 1's. For example, \(\mathbf{x}_{i}=\{0,1,1\}\) would correspond to a solution that falls between the first and second thresholds. A drawback of this approach is that additional constraints are needed to incorporate the binary split representation \(\mathbf{x}\) into a constrained optimization problem for \(\mathbf{w}\).
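As a small illustration (our own sketch, not the paper's code), the binary representation for one feature can be computed as:

```python
# Binary split representation for a single feature: thresholds sorted
# ascending, x_j = 1 iff w_i <= theta_ij.
def binarize(w_i, thresholds):
    return [1 if w_i <= theta else 0 for theta in thresholds]

binarize(3.0, [2.0, 5.0, 7.0])  # -> [0, 1, 1]: between the 1st and 2nd thresholds
```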
To introduce the formulation from Misic (2020), we need to introduce some additional notation. \(C(s)\) corresponds to the ranking of the threshold of \(s\) relative to the size of other thresholds for that feature, and \(V(s)\) corresponds to the feature involved in the split. For example, if \(\theta_{ij}\) is the \(j^{th}\) smallest threshold for feature \(i\) associated with split \(s\), then \(C(s)=j\) and \(V(s)=i\). \(K_{i}\) denotes the number of thresholds for feature \(i\). Auxiliary variables \(\mathbf{z}\) are introduced, where \(z_{l}=1\) if the feature vector falls in leaf \(l\). The polyhedron \(Q^{misic}\), which links the binary representation \(\mathbf{x}\) to the predicted outcome \(y\), is:
\[Q^{misic}=\{\mathbf{x},y,\mathbf{z}\ |\ \sum_{l\in\mathbf{left}(s)}z_{l}\leq x_{V(s)C(s)}\qquad\forall s\;\in\;\mathbf{splits}(t) \tag{2a}\]
\[\sum_{l\in\mathbf{right}(s)}z_{l}\leq 1-x_{V(s)C(s)}\qquad\forall s\;\in\;\mathbf{splits}(t) \tag{2b}\]
\[x_{ij}\leq x_{ij+1}\qquad\forall i\;\in\;[d],\;\forall j\;\in\;[K_{i}-1] \tag{2c}\]
\[\sum_{l=1}^{p}z_{l}=1,\quad y=\sum_{l=1}^{p}s_{l}z_{l} \tag{2d}\]
\[\mathbf{x}_{i}\in[0,1]^{K_{i}}\qquad\forall i\in[d],\;\mathbf{z}\geq 0\} \tag{2e}\]
The corresponding MIO formulation imposes binary constraints on \(\mathbf{x}_{i}\in\{0,1\}^{K_{i}}\) for all \(i\in[d]\); integrality is not necessary for \(\mathbf{z}\). Constraint (2a) enforces that if the condition at a split is not satisfied, \(x_{V(s)C(s)}=0\), then the solution does not fall within a leaf to the left of that split in the tree, so \(z_{l}=0\;\forall l\;\in\mathbf{left}(s)\). Conversely in constraint (2b), if the split is satisfied, \(x_{V(s)C(s)}=1\), then all leaves to the right are set to 0. Constraint (2c) enforces consistency among the split variables for the same feature (these variables are shared across all trees in an ensemble): if the solution is below the \(j^{th}\) threshold, \(x_{ij}=1\), then it must also be below all larger thresholds, so \(x_{ik}=1\;\forall\, j<k\leq K_{i}\), and the vector has the structure of consecutive zeros followed by consecutive ones.
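For concreteness, a sketch of formulation (2) in gurobipy follows. The containers `splits`, `left_leaves`, `right_leaves`, `V`, `C`, `K`, and `score` are hypothetical (0-indexed) descriptions of a trained tree, not part of Misic (2020):

```python
# A sketch of Q^misic as a gurobipy model (variable/container names are ours).
import gurobipy as gp
from gurobipy import GRB

def build_misic(p, d, K, splits, V, C, left_leaves, right_leaves, score):
    m = gp.Model("misic")
    x = {(i, j): m.addVar(vtype=GRB.BINARY, name=f"x[{i},{j}]")
         for i in range(d) for j in range(K[i])}
    z = m.addVars(p, lb=0.0, name="z")        # leaf indicators; integrality not needed
    y = m.addVar(lb=-GRB.INFINITY, name="y")  # predicted output

    for s in splits:
        m.addConstr(gp.quicksum(z[l] for l in left_leaves[s]) <= x[V[s], C[s]])       # (2a)
        m.addConstr(gp.quicksum(z[l] for l in right_leaves[s]) <= 1 - x[V[s], C[s]])  # (2b)
    for i in range(d):
        for j in range(K[i] - 1):
            m.addConstr(x[i, j] <= x[i, j + 1])                                       # (2c)
    m.addConstr(z.sum() == 1)                                                         # (2d)
    m.addConstr(y == gp.quicksum(score[l] * z[l] for l in range(p)))                  # (2d)
    return m, x, z, y
```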
An issue with the formulations presented in both Misic (2020) and Biggs et al. (2022) is that the linear relaxation can have many fractional solutions. This can make the MIO slow to solve. In fact, neither formulation is ideal even for the simple case of modeling a single decision tree without any additional constraints on a feasible decision, as we show in the following example.
Example 1 (Misic (2020) not ideal for a single tree): Suppose there is a tree that first branches on the condition \(w\leq 5\) and then on \(w\leq 2\), as shown in Figure 2(a). In this example, \(x_{1}=1\) if \(w\leq 5\), and \(0\) otherwise, while \(x_{2}=1\) if \(w\leq 2\). The variables \(z_{l}=1\) if the solution is in leaf \(l\). The resulting linear relaxation from Misic (2020) is:
\[\{\boldsymbol{x},\boldsymbol{z}\ |\ z_{2}\leq 1-x_{2},\quad z_{3}\leq 1-x_{1},\quad x_{2}\leq x_{1},\quad 0\leq\boldsymbol{x}\leq 1,\]
\[z_{1}\leq x_{2},\quad z_{1}+z_{2}\leq x_{1},\quad z_{1}+z_{2}+z_{3}=1,\quad 0\leq\boldsymbol{z}\}\]
This has an extreme point at \(z_{1}=0,\ z_{2}=0.5,\ z_{3}=0.5,\ x_{1}=0.5,\ x_{2}=0.5\), when constraints \(z_{2}\leq 1-x_{2},\ z_{3}\leq 1-x_{1},\ x_{2}\leq x_{1},\ z_{1}+z_{2}+z_{3}=1, \ z_{1}\geq 0\) are active.
Example 2 (Biggs et al. (2022) not ideal for a single tree): Again, suppose there is a tree that first branches on the condition \(w\leq 5\) and then on \(w\leq 2\), as shown in Figure 2(b). This formulation uses a slightly different notation, where \(x_{ij}=1\) if the arc is on the path to the active leaf, \(i\) corresponds to the parent node, \(j=1\) refers to the left branch, and \(j=2\) refers to the right branch. For example, if \(w\leq 2\), then \(x_{11},x_{21}=1\), while \(x_{12},x_{22}=0\). We also assume \(w\) is bounded, \(0\leq w\leq 10\), and following guidance in Biggs et al. (2022) for choosing the big-M value, we set \(M=15\). The resulting formulation from Biggs et al. (2022) is:
\[\{\boldsymbol{x},w\ |\ w-15(1-x_{11})\leq 5,\quad w-15(1-x_{21})\leq 2,\quad x_{21}+x_{22}=x_{11},\quad 0\leq\boldsymbol{x}\leq 1,\]
\[w+15(1-x_{12})\geq 5,\quad w+15(1-x_{22})\geq 2,\quad x_{12}+x_{21}+x_{22}=1,\quad 0\leq w\leq 10\}\]

Figure 2: Examples of trees with fractional solutions and notation
This has an extreme point at \(x_{11}=1/3,\ x_{12}=2/3,\ x_{21}=1/3,\ x_{22}=0,\ w=0\), when constraints \(w+15(1-x_{12})\geq 5,\ x_{21}+x_{22}=x_{11},\ x_{12}+x_{21}+x_{22}=1,\ w\geq 0,\ x_{22}\geq 0\) are active. Furthermore, this is not merely a consequence of the choice of \(M\): the issue persists regardless of this choice.
## 4 Union of polyhedra formulation
We propose an alternative MIO formulation for decision trees, which is tighter in the sense that it is ideal for modeling a single tree, unlike those presented in Examples 1 and 2. In contrast with the formulation in Misic (2020), our proposed formulation directly relates the feature vector \(\mathbf{w}\) to the output \(f^{(t)}(\mathbf{w})\), instead of using a binary representation of the feature vector. This has the advantage that constraints can be placed directly on the feature vector \(\mathbf{w}\) for problems with additional constraints that need to be modeled.
We can formulate a tree as a union of polyhedra, since the solution will always fall into one of the leaves (hyperrectangles) that partition the feature space. This can be achieved using the classical extended formulation from Jeroslow (1987), which introduces many auxiliary variables to model the set. This is also known as a "multiple choice" formulation (Vielma and Nemhauser 2011):
\[Q^{ext}=\{\mathbf{w},y,\mathbf{w}^{l},y^{l},\mathbf{z}\ |\ u_{li}z_{l}\geq w_{i}^{l}\qquad\forall i\in[d],\ \forall l\in[p] \tag{3a}\]
\[b_{li}z_{l}\leq w_{i}^{l}\qquad\forall i\in[d],\ \forall l\in[p] \tag{3b}\]
\[y^{l}=s_{l}z_{l}\qquad\forall l\in[p] \tag{3c}\]
\[\sum_{l=1}^{p}z_{l}=1, \tag{3d}\]
\[w_{i}=\sum_{l=1}^{p}w_{i}^{l}\qquad\forall i\in[d] \tag{3e}\]
\[y=\sum_{l=1}^{p}y^{l} \tag{3f}\]
\[z_{l}\in[0,1]\qquad\forall l\in[p]\} \tag{3g}\]
The formulation works by creating \(p\) auxiliary copies of each variable, \(\mathbf{w}^{l}\in\mathbb{R}^{d},y^{l}\in\mathbb{R}\), corresponding to each leaf to make the MIO formulation. Auxiliary binary variables \(z_{l}\in\{0,1\}\), \(l\in[p]\), are also introduced, which indicate which leaf the solution falls into. When \(z_{l}=1\), constraints (3a), (3b), and (3c) define the feasible region and score for that leaf. When \(z_{l}=0\), these constraints enforce that \(\mathbf{w}^{l}\) is set to be a vector of zeros. Constraint (3d) ensures only one leaf is chosen. Constraints (3e) and (3f) in turn define \(\mathbf{w}\) and \(y\) according to which leaf is active.
This formulation is ideal, as proved in Jeroslow and Lowe (1984) and Balas (1985), so the linear relaxation is guaranteed to have integer extreme points. However, such formulations often run into computational issues in practice: formulation (3) introduces a large number of auxiliary variables (\((p+1)(d+2)\) in total) as well as many constraints (\(2pd+3p+d+1\)), and it is well known that these formulations suffer from degeneracy, as many of the auxiliary variables are forced to 0, often resulting in poor performance (Vielma 2019).
We can improve upon this formulation by projecting onto \(\mathbf{w}\). This eliminates the variables \(\mathbf{w}^{l}\) and thus results in a significantly smaller formulation.
\[Q^{proj}=\{\mathbf{w},y,\mathbf{z}| \sum_{l=1}^{p}u_{li}z_{l}\geq w_{i} \forall i\in[d], \tag{4a}\] \[\sum_{l=1}^{p}b_{li}z_{l}\leq w_{i} \forall i\in[d],\] (4b) \[y=\sum_{l=1}^{p}z_{l}s_{l}\] (4c) \[\sum_{l=1}^{p}z_{l}=1\] (4d) \[z_{l}\in[0,1] \forall l\in[p]\} \tag{4e}\]
We can prove this formulation is still ideal for a single tree after this projection.
Theorem 1 (Ideal formulation for a tree): _The polyhedron \(Q^{proj}\) is ideal._
This is proved in Appendix A.1. The main idea behind this proof is that the _union of polyhedra_ formulation (3) is ideal, and therefore the projection onto variables \(\mathbf{w}\) is also ideal. These ideal projected formulations always exist, but in general, the projection is not a tractable operation and can result in a formulation with exponentially many constraints. In this special case, the resulting
formulation (4) has only \(2d+1\) constraints (in addition to binary constraints) and \(p+d+1\) variables. Compared to formulation (3), this has significantly fewer variables and therefore does not suffer from degeneracy to the same extent. We also note that this formulation has considerably fewer constraints than that in Misic (2020), which has approximately \(3p\) constraints and \(2p\) variables, since typically \(d\ll p\).
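A sketch of formulation (4) in gurobipy follows, consuming leaf data in the \((b_{l},u_{l},s_{l})\) form discussed in Section 2.1 (function and argument names are ours, not the paper's):

```python
# A sketch of Q^proj as a gurobipy model; `leaves` is a list of
# (lower bound vector, upper bound vector, score) triples, one per leaf.
import gurobipy as gp
from gurobipy import GRB

def build_projected(leaves, d, w_lo, w_hi):
    m = gp.Model("projected")
    p = len(leaves)
    z = m.addVars(p, vtype=GRB.BINARY, name="z")   # leaf indicators
    w = m.addVars(d, lb=w_lo, ub=w_hi, name="w")   # feature vector
    y = m.addVar(lb=-GRB.INFINITY, name="y")       # predicted output

    for i in range(d):
        m.addConstr(gp.quicksum(leaves[l][1][i] * z[l] for l in range(p)) >= w[i])  # (4a)
        m.addConstr(gp.quicksum(leaves[l][0][i] * z[l] for l in range(p)) <= w[i])  # (4b)
    m.addConstr(y == gp.quicksum(leaves[l][2] * z[l] for l in range(p)))            # (4c)
    m.addConstr(z.sum() == 1)                                                       # (4d)
    return m, w, y, z
```

Maximizing the predicted outcome then amounts to `m.setObjective(y, GRB.MAXIMIZE)` followed by `m.optimize()`; polyhedral constraints on \(\boldsymbol{w}\) can be added directly with further `addConstr` calls.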
The significance of this result is that it suggests that tree-based optimization approaches that use formulation (4) will be tighter than those used in Biggs et al. (2022) or Misic (2020). Specifically, the fractional solutions exhibited for each tree in Examples 1 and 2 do not exist in formulation (4). In general, however, the intersection of different tree polytopes, as occurs in tree ensemble optimization, introduces additional fractional solutions, as does the intersection of a tree polytope with additional polyhedral constraints. Nonetheless, in practice this formulation often results in a faster time to solve, particularly for forests with relatively few trees.
If formulation (4) is reformulated slightly, we can prove some additional favorable properties, including, in particular, that the constraints are facet-defining.
Definition 3 (Facet): A face \(\mathcal{F}\) of a polyhedron \(\mathcal{P}\), represented by the inequality \(\boldsymbol{a}^{\prime}\boldsymbol{x}\geq b\), is called a facet of \(\mathcal{P}\) if \(dim(\mathcal{F})=dim(\mathcal{P})-1\).
One of the variables \(z_{p}\) can be eliminated through the substitution \(z_{p}=1-\sum_{l=1}^{p-1}z_{l}\). Consequently, \(\boldsymbol{z}\in\{0,1\}^{p-1}\) and as a result, \(\boldsymbol{z}=0\) implies \(\boldsymbol{w}\in\mathcal{L}_{p}\). This leads to the following formulation:
\[Q^{facet}=\{\boldsymbol{w},y,\boldsymbol{z}| \ u_{pi}+\sum_{l=1}^{p-1}(u_{li}-u_{pi})z_{l}\geq w_{i}\qquad \forall i\in[d], \tag{5a}\] \[b_{pi}+\sum_{l=1}^{p-1}(b_{li}-b_{pi})z_{l}\leq w_{i}\qquad \forall i\in[d],\] (5b) \[y=s_{p}+\sum_{l=1}^{p-1}z_{l}(s_{l}-s_{p})\] (5c) \[z_{l}\in[0,1]\qquad\forall l\in[p-1]\} \tag{5d}\]
We can show that under mild assumptions, (5a) and (5b) are facet-defining.
Lemma 1: _For all \(l\in[p]\), assume \(\mathcal{L}_{l}\) is non-empty. Furthermore, assume that for some \(k\in[p]\), \(\mathcal{L}_{k}\) is full dimensional, i.e., \(dim(\mathcal{L}_{k})=d\). Then constraints (5a) and (5b) are facet-defining for leaf \(k\)._
This is proved in Appendix A.2 with a proof technique similar to that in Anderson et al. (2018). This result is significant because it suggests there is no redundancy in formulation (5). MIO formulations generally take longer to solve when there are redundant variables and constraints.
### 4.1 Extensions to tree ensembles and additional constraints
The formulation can be applied to tree ensembles such as random forests or gradient-boosted tree ensembles. While the polyhedron modeling an individual tree is ideal, the resulting ensemble formulation is not ideal in general, as shown in this section. An alternative, but weaker, notion of tightness is whether a formulation is _sharp_. For a sharp formulation, the projection of the polyhedron \(Q\) onto the original variables \(\boldsymbol{w},y\) is equal to the convex hull (\(\operatorname{conv}(\cdot)\)) of the graph \(gr(f;D)\). This is formalized as follows:
Definition 4 (Sharp formulation): \[\operatorname{conv}(gr(f;D))=Proj_{\boldsymbol{w},y}(Q)\]
An ideal formulation is also sharp, but a sharp formulation isn't necessarily ideal. In Example 3 we give a simple tree ensemble that illustrates that the _union of polyhedra_ formulation is not ideal and not sharp.
Example 3 (Intersection of trees is not ideal or sharp): Suppose we have the following two trees in an ensemble:
\[f^{(1)}(w)=\begin{cases}1&0\leq w\leq 1\\ 4&1<w\leq 3\end{cases}\qquad\quad f^{(2)}(w)=\begin{cases}2&0\leq w\leq 2\\ 3&2<w\leq 3\end{cases}\]
This leads to a tree ensemble:
\[0.5(f^{(1)}(w)+f^{(2)}(w))=\begin{cases}1.5&0\leq w\leq 1\\ 3&1<w\leq 2\\ 3.5&2<w\leq 3\end{cases}\]
This is visualized in Figure 3, where \(f^{(1)}(w)\) is the blue line, \(f^{(2)}(w)\) is the red line and the ensemble \(0.5(f^{(1)}(w)+f^{(2)}(w))\) is the purple dashed line. The _union of polyhedra_ formulation for this is as follows:
\[\{w,y,\boldsymbol{z}\ |\ z_{2}^{(1)}\leq w,\quad 2z_{2}^{(2)}\leq w,\quad y=0.5\left(z_{1}^{(1)}+4z_{2}^{(1)}+2z_{1}^{(2)}+3z_{2}^{(2)}\right),\]
\[z_{1}^{(1)}+3z_{2}^{(1)}\geq w,\quad 2z_{1}^{(2)}+3z_{2}^{(2)}\geq w,\quad z_{1}^{(1)}+z_{2}^{(1)}=1,\quad z_{1}^{(2)}+z_{2}^{(2)}=1,\quad \boldsymbol{z}\geq 0,\ w\geq 0\}\]
A basic feasible solution for this formulation is \(w=1,\ z_{1}^{(1)}=0,\ z_{2}^{(1)}=1,\ z_{1}^{(2)}=0.5,\ z_{2}^{(2)}=0.5,\ y=3.25\), which is not integral, so the formulation is not ideal. Furthermore, the projected solution, \(w=1,\ y=3.25\), is not in the convex hull of \(0.5(f^{(1)}(w)+f^{(2)}(w))\), so the formulation is not sharp.
Figure 3: Tree ensemble formulation is not ideal or sharp. Extreme points of \(Q^{proj}\) are shown with hollow circles, while the convex hull of the tree ensemble graph is shown in shaded purple.
This can be observed in Figure 3c, where the convex hull of the graph of the tree ensemble is shown in shaded purple and the extreme points of \(Q^{proj}\) projected into \(w,y\) space are shown with hollow circles. Two of the extreme points of \(Q^{proj}\) lie outside the convex hull of the graph. \(\quad\Box\)
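This can also be checked numerically. A small sketch (ours, using scipy) fixes \(w=1\) and maximizes \(y\) over the relaxation, recovering the non-sharp bound of 3.25:

```python
# Verifying Example 3 numerically: with w fixed at 1, the LP relaxation
# allows y = 3.25, above the convex hull value of 3 at w = 1.
import numpy as np
from scipy.optimize import linprog

# variables: [z1_(1), z2_(1), z1_(2), z2_(2)]
c = -0.5 * np.array([1.0, 4.0, 2.0, 3.0])      # maximize y (linprog minimizes)
A_ub = np.array([[0.0, 1.0, 0.0, 0.0],         # z2_(1) <= w = 1
                 [0.0, 0.0, 0.0, 2.0],         # 2 z2_(2) <= w = 1
                 [-1.0, -3.0, 0.0, 0.0],       # z1_(1) + 3 z2_(1) >= w = 1
                 [0.0, 0.0, -2.0, -3.0]])      # 2 z1_(2) + 3 z2_(2) >= w = 1
b_ub = np.array([1.0, 1.0, -1.0, -1.0])
A_eq = np.array([[1.0, 1.0, 0.0, 0.0],         # each tree selects one leaf
                 [0.0, 0.0, 1.0, 1.0]])
b_eq = np.array([1.0, 1.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
print(-res.fun)  # 3.25
```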
We also provide an example illustrating that the formulation is no longer ideal when additional constraints, which may be useful in many practical applications, are placed on the feature vector.
Example 4 (Adding additional constraints to a tree is not ideal): Take the tree from Figure 1. Suppose that we add a simple constraint that \(w_{1}+w_{2}\leq 3\). Suppose additionally that there are upper and lower bounds on each feature, such that \(0\leq w_{1},w_{2}\leq 3\). The _union of polyhedra_ formulation is:
\[\{w_{1},w_{2},\boldsymbol{z}\ |\qquad 2(z_{1}+z_{2})+3z_{3} \geq w_{1},\qquad 2z_{1}+3(z_{2}+z_{3}) \geq w_{2},\qquad\qquad z_{1}+z_{2}+z_{3}=1\] \[2z_{3} \leq w_{1},\qquad\qquad\qquad 2z_{2} \leq w_{2},\qquad w_{1}+w_{2}\leq 3,\ \boldsymbol{z}\geq 0\}\]
This has a fractional solution \(w_{1}=2/3,\ w_{2}=7/3,\ z_{1}=2/3,\ z_{2}=0.0,\ z_{3}=1/3\), so it is not ideal. \(\quad\Box\)
While the intersection of trees is not ideal or sharp, it still removes a significant number of fractional solutions from the linear relaxation compared to using formulations from Misic (2020) or Biggs et al. (2022) leading to faster solve times as explored empirically in Section 6.
## 5 Strengthening formulations with binary split variables
We next present formulations that build upon the formulation from Misic (2020). In particular, these formulations use the binary variables from Misic (2020), which denote whether the feature vector is below each threshold in the tree. An advantage of this approach is its favorable branching behavior - setting a variable \(x_{ij}=1\) will force all variables with a split threshold above this to also be 1, due to the ordering constraints \(x_{ij}\leq x_{ij+1}\) (2c). In some cases, this results in a faster time to solve than the formulation in the previous section. We propose two ways to tighten this formulation to remove some of the fractional solutions, resulting in tighter linear relaxations and a faster time to solve in certain situations.
### 5.1 Tighter formulation from variable structure
To tighten the formulation from Misic (2020), we exploit the greater than or equal to representation of \(\mathbf{x}\), which leads to larger groups of leaf variables being turned off when a split is made. In Misic (2020), the \(\mathbf{x}\) variables have consecutive \(0\)'s followed by consecutive \(1\)'s, and if \(x_{ij}=0\), all variables \(z_{l}\) to the left of the split are equal to \(0\) (constraint (2a)). However, a stronger statement can be made. Due to the structure of \(\mathbf{x}\), all split variables with lower thresholds are also equal to \(0\), i.e., \(x_{ik}=0\)\ \(\forall k<j\). This implies that the variables \(z_{l}\) to the left of splits with lower thresholds must also equal \(0\).
As an illustrative example, we examine the tree in Figure 4a. If \(w_{2}>5\) (\(x_{22}=0\)), then not only is the variable to the left of this split equal to \(0\), \(z_{3}=0\), but also \(z_{1}=0\), due to the constraint \(x_{21}\leq x_{22}\) (constraint (2c) from Misic (2020)). Rather than enforcing the relatively weak constraint from Misic (2020) that \(z_{3}\leq x_{22}\), it is tighter to directly enforce \(z_{1}+z_{3}\leq x_{22}\). Similarly, if \(x_{ij}=1\), this implies that the variables \(z_{l}\) to the right of any splits greater than the \(j^{th}\) split are also set to \(0\). For example, in Figure 4a, if \(w_{2}\leq 2\) (\(x_{21}=1\)), then not only is the variable to the right of this split equal to \(0\) (\(z_{2}=0\)), but also \(z_{4}=0\), since the structure of \(\mathbf{x}\) implies that \(w_{2}\leq 5\) (\(x_{22}=1\)).
To formalize this logic, we introduce new sets \(\mathbf{below}(s)\) and \(\mathbf{above}(s)\). The set \(\mathbf{below}(s)\) contains all leaves to the left of splits (on the same feature, in a given tree) with thresholds less than or equal to the threshold at split \(s\). The set \(\mathbf{above}(s)\) contains all leaves to the right of splits with a threshold greater than or equal to the threshold at split \(s\). As such, for adjacent splits on the same feature, \(s_{ij}\) and \(s_{ij+1}\), we can define \(\mathbf{below}(s_{ij+1})=\mathbf{below}(s_{ij})\cup\mathbf{left}(s_{ij+1})\) and \(\mathbf{above}(s_{ij})=\mathbf{above}(s_{ij+1})\cup\mathbf{right}(s_{ij})\). For the smallest and largest splits, we have the initial conditions \(\mathbf{below}(s_{i1})=\mathbf{left}(s_{i1})\) and \(\mathbf{above}(s_{iK_{i}})=\mathbf{right}(s_{iK_{i}})\). An equivalent pair of definitions is \(\mathbf{below}(s_{ij})=\bigcup_{k\leq j}\mathbf{left}(s_{ik})\) and \(\mathbf{above}(s_{ij})=\bigcup_{k\geq j}\mathbf{right}(s_{ik})\). An example of these sets is illustrated in Figure 4a. As a result, we can introduce a new formulation, \(Q^{expset}\), named after the notion of _expanded sets_, by replacing (2a) and (2b) with the following constraints:
\[Q^{expset}=\{\boldsymbol{x},y,\boldsymbol{z}\ |\ \sum_{l\in\mathbf{below}(s)}z_{l}\leq x_{V(s)C(s)}\qquad\forall s\ \in\ \mathbf{splits}(t) \tag{8a}\]
\[\sum_{l\in\mathbf{above}(s)}z_{l}\leq 1-x_{V(s)C(s)}\qquad\forall s\ \in\ \mathbf{splits}(t) \tag{8b}\]
\[x_{ij}\leq x_{ij+1}\qquad\forall i\ \in\ [d],\ \forall j\ \in\ [K_{i}-1] \tag{8c}\]
\[\sum_{l=1}^{p}z_{l}=1,\quad y=\sum_{l=1}^{p}s_{l}z_{l} \tag{8d}\]
\[\boldsymbol{x}_{i}\in[0,1]^{K_{i}}\qquad\forall i\in[d],\ \boldsymbol{z}\geq 0\} \tag{8e}\]
Constraints (8a) and (8b) are the counterparts of (2a) and (2b). Constraint (8a) enforces that when the condition at the split is not satisfied, \(x_{V(s)C(s)}=0\), the solution does not fall within a leaf to the left of any split in the tree with a lower threshold for the same feature, while constraint (8b) enforces that all leaves to the right of greater splits are set to 0 if \(x_{V(s)C(s)}=1\), as discussed previously. It can be shown that when intersected with a binary lattice on \(\boldsymbol{x}\in\{0,1\}^{p}\), the feasible sets of the MIO formulations (2) and (8) are the same. However, the linear relaxation \(Q^{expset}\) is generally a subset of \(Q^{misic}\). This is shown in Proposition 1, which formalizes the rationale given above.
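Before stating the proposition, here is a small sketch (ours, assuming per-feature split ids sorted by ascending threshold and the left/right leaf sets of each split) of how the expanded sets can be accumulated directly from the recursive definitions:

```python
# Accumulate below(s) and above(s) from left(s)/right(s), per feature.
def expanded_sets(splits_by_feature, left_leaves, right_leaves):
    below, above = {}, {}
    for feat_splits in splits_by_feature.values():
        acc = set()
        for s in feat_splits:              # ascending thresholds
            acc |= set(left_leaves[s])
            below[s] = set(acc)            # below(s_ij) = union of left(s_ik), k <= j
        acc = set()
        for s in reversed(feat_splits):    # descending thresholds
            acc |= set(right_leaves[s])
            above[s] = set(acc)            # above(s_ij) = union of right(s_ik), k >= j
    return below, above
```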
**Proposition 1**: _The feasible sets associated with the MIO formulations of \(Q^{expset}\) and \(Q^{misic}\) are equivalent, but the linear relaxation \(Q^{expset}\) is a subset of \(Q^{misic}\). Formally,_

\[Q^{expset}\cap(\{0,1\}^{p}\times\mathbb{R}^{1+p})=Q^{misic}\cap(\{0,1\}^{p}\times\mathbb{R}^{1+p}),\ \text{but}\ Q^{expset}\subseteq Q^{misic}\]
We provide a formal proof in Appendix B. It can be shown that this formulation removes some fractional solutions from the LP relaxation of (2). In particular, this will occur when there are multiple splits on the same feature within the tree. To illustrate this, suppose we have two splits on the same variable, \(s\) and \(s^{\prime}\), where without loss of generality split \(s^{\prime}\) has the larger threshold. Define reduced polyhedra that include only the constraints related to these splits, as follows:
\[\tilde{Q}^{expset}(s,s^{\prime})=\{\boldsymbol{x},\boldsymbol{z} \ |\ \sum_{l\in\text{below}(s)}z_{l}\leq x_{V(s)C(s)},\ \sum_{l\in\text{above}(s)}z_{l}\leq 1-x_{V(s)C(s)},\] \[\sum_{l\in\text{below}(s^{\prime})}z_{l}\leq x_{V(s^{\prime})C(s^{\prime})},\ \sum_{l\in\text{above}(s^{\prime})}z_{l}\leq 1-x_{V(s^{ \prime})C(s^{\prime})},\ x_{V(s)C(s)}\leq x_{V(s^{\prime})C(s^{\prime})}\}\] \[\tilde{Q}^{misic}(s,s^{\prime})=\{\boldsymbol{x},\boldsymbol{z} \ |\ \sum_{l\in\text{left}(s)}z_{l}\leq x_{V(s)C(s)},\ \sum_{l\in\text{right}(s)}z_{l}\leq 1-x_{V(s)C(s)},\] \[\sum_{l\in\text{left}(s^{\prime})}z_{l}\leq x_{V(s^{\prime})C(s^{ \prime})},\ \sum_{l\in\text{right}(s^{\prime})}z_{l}\leq 1-x_{V(s^{\prime})C(s^{ \prime})},\ x_{V(s)C(s)}\leq x_{V(s^{\prime})C(s^{\prime})}\}\]
If we examine these polyhedra, we see that \(\tilde{Q}^{expset}(s,s^{\prime})\) is a strict subset of \(\tilde{Q}^{misic}(s,s^{\prime})\) when there are multiple splits on the same variable.
Proposition 2: _Suppose we have two splits on the same variable, \(s\) and \(s^{\prime}\), where \(s^{\prime}\) corresponds to the split with the larger threshold. Then_
\[\tilde{Q}^{expset}(s,s^{\prime})\subset\tilde{Q}^{misic}(s,s^{\prime})\]
This is proved in Appendix C. This proof involves exploring the potential relationships between splits \(s\) and \(s^{\prime}\) (where split \(s\) is a child of \(s^{\prime}\) in the tree, where \(s^{\prime}\) is a child of \(s\), and where neither is a child of the other) and finding solutions \((\boldsymbol{x,z})\) that are in \(\tilde{Q}^{misic}(s,s^{\prime})\) but not in \(\tilde{Q}^{expset}(s,s^{\prime})\). An example that illustrates the strict subset is given in Example 6 from Section 5.3. In this example, we see that formulation (2) has fractional solutions, while formulation (8) has only integer solutions.
Generally, the more splits there are on the same feature in the tree, the more these constraints will tighten the formulation. At an extreme, we have the scenario where all splits in the tree are on the same feature. In the one-dimensional setting, it can be shown that the above formulation is ideal even for tree ensembles.
Theorem 2 (Ideal formulation for one-dimensional tree ensembles): _The polyhedron defining a tree ensemble, \(\bigcap_{t=1}^{T}Q_{t}^{expset}\), is ideal if the feature is one-dimensional (\(d=1\))._
This result is proved in Appendix E. It follows by proving that the matrix representation of the polyhedron is totally unimodular. In particular, the matrix has a special structure whereby it is possible to provide a bi-coloring of the columns such that the difference in row sums between the two groups is in \(\{-1,0,1\}\); a result from Ghouila-Houri (1962) proves that such a matrix is totally unimodular. A linear program \(\{\max\boldsymbol{c}^{\prime}\boldsymbol{x}\mid A\boldsymbol{x}\leq\boldsymbol{b}\}\) has an integral optimal solution if \(\boldsymbol{b}\) is integral and \(A\) is a totally unimodular matrix (Schrijver 1998).
The significance of this result is that it emphasizes the tightness of this formulation, relative to other formulations that are not ideal in this situation and have fractional solutions. In particular, in Example 1, we show that formulation (2) is not ideal even if the problem is one-dimensional with a single tree. Furthermore, although the formulation isn't ideal when the input vector has multiple dimensions, we empirically show in Section 6.1.1 that the relaxation is tighter when the input vector is low dimensional.
It is interesting to contrast this result with Theorem 1, which states that the _union of polyhedra_ formulation is ideal for a single tree even with many features, whereas Theorem 2 shows that the _expset_ formulation is ideal for many trees but only if the ensemble has a single feature. While it is difficult to directly compare the tightness of these formulations since they use different variables, this gives practitioners insight into the relative tightness of the different formulations. When there are many trees in the ensemble but relatively few variables, the _expset_ formulation is likely to be tighter. When there are few trees but many variables, the _union of polyhedra_ formulation is likely to be tighter.
### 5.2 Tighter formulation from nested branches
The relaxation of the formulation in the previous section still has some fractional extreme solutions, even in the case where a single tree is being modeled over multiple features. These fractional extreme solutions often arise when there are nested splits on the same feature, where one split follows another on the same branch. This is highlighted in the following example.
Example 5 (Nested branches that can be tightened): Consider a path to a leaf which has two splits on the same variable in opposing directions, as shown in Figure 5(a). Suppose we model this using formulation (2) from Misic (2020):
\[\{x_{1},x_{2},z\ |\ z\leq x_{1},\ z\leq 1-x_{2},\ x_{2}\leq x_{1},\ 0\leq x_{1},x_{2}\leq 1,\ 0\leq z\}\]
This has an extreme point \(z=0.5,\ x_{1}=0.5,\ x_{2}=0.5\), as shown in Figure 5(b). Consider the following reformulation:
\[\{x_{1},x_{2},z\ |z\leq x_{1}-x_{2},\ 0\leq x_{1},x_{2}\leq 1,\ 0\leq z\}\]
This is shown in Figure 5(c). As can be observed, this has removed the fractional extreme point, leaving only integer extreme points.
These fractional extreme points generally occur when a split to the left is followed by a split to the right for the same feature, or vice versa. More formally, we can characterize a valid set of constraints as follows: We define \(\textbf{right\_parent}(s)\) as the set of splits that are above and to the right of split \(s\) in the tree, with the additional requirement that these splits be on the same feature. That is, the split \(s\) is a left child of another split on the same feature in the tree. For the splits in this set, the thresholds are necessarily larger. We can also define \(\textbf{left\_parent}(s)\) as the set of splits that are above and to the left of split \(s\) for the same feature, for which the threshold is smaller.
Figure 5: Example: cuts removing extreme point
To illustrate this notation, in Figure 4b the split \(w_{2}\leq 2\) is the **left_parent** of the split \(w_{2}\leq 4\). We can generalize the constraints from Example 5 as follows:
\[\sum_{l\in\textbf{right}(s)}z_{l} \leq x_{V(s^{\prime})C(s^{\prime})}-x_{V(s)C(s)}\qquad\forall s\; \in\;\textbf{splits}(t),\;\;s^{\prime}\in\textbf{right\_parent}(s) \tag{9a}\] \[\sum_{l\in\textbf{left}(s)}z_{l} \leq x_{V(s)C(s)}-x_{V(s^{\prime})C(s^{\prime})}\qquad\forall s\; \in\;\textbf{splits}(t),\;\;s^{\prime}\in\textbf{left\_parent}(s) \tag{9b}\]
If we define \(Q^{elbow}\) as the polyhedron created by adding constraints (9a) and (9b) to formulation (2) from Misic (2020), we can show that the relaxation of this formulation is tighter, while still having the same feasible region when \(\mathbf{x}\) is restricted to a binary lattice, as shown in Proposition 3.
Proposition 3: _The feasible set associated with MIO formulations \(Q^{elbow}\)and \(Q^{misic}\) are equivalent, but linear relaxation \(Q^{elbow}\) is a subset of \(Q^{misic}\). Formally,_
\[Q^{elbow}\cap(\{0,1\}^{p}\times\mathbb{R}^{1+p})=Q^{misic}\cap(\{0,1\}^{p} \times\mathbb{R}^{1+p}),\;\text{but}\;Q^{elbow}\subseteq Q^{misic}\]
This is proved formally in Appendix D. As illustrated in Example 5, the feasible region is often a strict subset when there are nested splits on the same feature (\(Q^{elbow}\subset Q^{misic}\)). This suggests that when there are more splits on the same features in the tree, there will be more of an improvement using the _elbow_ formulation over Misic (2020). This also often occurs if the tree has fewer features. This is explored empirically in Section 6. However, simulation results suggest that the formulation is not ideal for tree ensembles with a single feature, unlike the _expset_ formulation.
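The parent sets needed for constraints (9a) and (9b) can be gathered in a single pass down the tree. A sketch follows; the `node` objects with `.id`, `.feature`, `.left`, and `.right` attributes are hypothetical:

```python
# Collect right_parent(s) / left_parent(s): ancestors on the same feature
# from which split s is reached via a left (resp. right) branch.
def parent_sets(root):
    right_parent, left_parent = {}, {}

    def walk(node, lefts, rights):
        if node.left is None:              # leaf: nothing to record
            return
        f = node.feature
        right_parent[node.id] = [a for (g, a) in lefts if g == f]
        left_parent[node.id] = [a for (g, a) in rights if g == f]
        walk(node.left, lefts + [(f, node.id)], rights)
        walk(node.right, lefts, rights + [(f, node.id)])

    walk(root, [], [])
    return right_parent, left_parent
```

Constraints (9a) and (9b) can then be added for each pair \((s,s^{\prime})\) returned here.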
### 5.3 Comparison of tightening constraints
In this section, we compare the relative tightness of the _expset_ and _elbow_ formulations (8 and 9, respectively). We will show that when these constraints are added separately to formulation (2) from Misic (2020), neither formulation is strictly tighter than the other. Rather, there are certain situations where one formulation is tighter than the other and vice versa, which we illustrate with examples.
A simple example where formulation (8) is tighter than formulation (9) is when there are multiple splits on the same variable, but they do not have a nested structure. For example, in the tree in
Figure 4a, there are two splits on \(w_{2}\), but these occur in different branches of the tree. In this situation, formulations (2) and (9) are the same, since the elbow constraints are added only for nested pairs on the same feature. Furthermore, formulation (9) retains fractional extreme points here, while formulation (8) does not.
Example 6 (Expset formulation is tighter than elbow formulation): For the tree given in Figure 4a, formulation (9) (and formulation (2)) is:
\[\{\boldsymbol{x},\boldsymbol{z}\ |\ x_{11}\geq z_{1}+z_{2},\qquad x_{21}\geq z_{1},\qquad x_{22}\geq z_{3},\qquad x_{21}\leq x_{22},\qquad 0\leq\boldsymbol{x}\leq 1,\]
\[1-x_{11}\geq z_{3}+z_{4},\quad 1-x_{21}\geq z_{2},\quad 1-x_{22}\geq z_{4},\quad z_{1}+z_{2}+z_{3}+z_{4}=1,\quad 0\leq\boldsymbol{z}\}\]
On the other hand formulation (8) is:
\[\{\boldsymbol{x},\boldsymbol{z}\ |\ x_{11}\geq z_{1}+z_{2},\qquad x_{21}\geq z_{1},\qquad x_{22}\geq\boxed{z_{1}}+z_{3},\qquad x_{21}\leq x_{22},\qquad 0\leq\boldsymbol{x}\leq 1,\]
\[1-x_{11}\geq z_{3}+z_{4},\quad 1-x_{21}\geq z_{2}+\boxed{z_{4}},\quad 1-x_{22}\geq z_{4},\quad z_{1}+z_{2}+z_{3}+z_{4}=1,\quad 0\leq\boldsymbol{z}\}\]
For convenience, the differences between the formulations have been highlighted. Formulation (9) has fractional solutions, for example \(x_{11}=0.5,\ x_{21}=0.5,\ x_{22}=0.5,\ z_{1}=0,\ z_{2}=0.5,\ z_{3}=0,\ z_{4}=0.5\) and \(x_{11}=0.5,\ x_{21}=0.5,\ x_{22}=0.5,\ z_{1}=0.5,\ z_{2}=0,\ z_{3}=0.5,\ z_{4}=0\), while formulation (8) has only integer solutions, since both fractional solutions violate the added constraints. \(\square\)
To further understand the difference between the constraints from formulations (9) and (8), it is useful to examine situations in which they are the same. In particular, suppose we have two nested splits on the same feature, such that \(s^{\prime}\in\mathbf{right\_parent}(s)\), as in the tree in Figure 5a. We will examine constraints (8a) and (8b) and see when they imply the alternative constraint (9a). Specifically, we require that \(\mathbf{above}(s)\) and \(\mathbf{below}(s^{\prime})\) cover the whole set of leaves, that is, \(\mathbf{below}(s^{\prime})\cup\mathbf{above}(s)=[p]\). This is formally stated in Lemma 2.
Lemma 2: _Suppose \(s^{\prime}\in\mathbf{right\_parent}(s)\). If \(\mathbf{below}(s^{\prime})\cup\mathbf{above}(s)=[p]\),_

\[Q^{misic}\ \bigcap\ \sum_{l\in\mathbf{below}(s^{\prime})}z_{l}\leq x_{V(s^{\prime})C(s^{\prime})}\ \bigcap\ \sum_{l\in\mathbf{above}(s)}z_{l}\leq 1-x_{V(s)C(s)}\]
\[\implies Q^{misic}\ \bigcap\ \sum_{l\in\mathbf{right}(s)}z_{l}\leq x_{V(s^{\prime})C(s^{\prime})}-x_{V(s)C(s)}\]

_Similarly, suppose \(s^{\prime}\in\mathbf{left\_parent}(s)\). If \(\mathbf{above}(s^{\prime})\cup\mathbf{below}(s)=[p]\),_

\[Q^{misic}\ \bigcap\ \sum_{l\in\mathbf{below}(s)}z_{l}\leq x_{V(s)C(s)}\ \bigcap\ \sum_{l\in\mathbf{above}(s^{\prime})}z_{l}\leq 1-x_{V(s^{\prime})C(s^{\prime})}\]
\[\implies Q^{misic}\ \bigcap\ \sum_{l\in\mathbf{left}(s)}z_{l}\leq x_{V(s)C(s)}-x_{V(s^{\prime})C(s^{\prime})}\]
This is proved in Appendix F. The condition \(\mathbf{below}(s^{\prime})\cup\mathbf{above}(s)=[p]\) is satisfied when all splits above \(s\) are on the same feature, or, as an extreme case, when the tree contains only one feature (the same condition as Theorem 2). When these conditions are not met, including constraint (9a) will tighten the formulation. An example where this condition is not met and formulation (9) is tighter than formulation (8) occurs in Figure 4b.
Example 7 (Elbow formulation is tighter than expset formulation): For the tree from Figure 4b, formulation (8) is:
\[\{\boldsymbol{x},\boldsymbol{z}\ |\ x_{11}\geq z_{1},\quad x_{21}\geq z_{2},\quad x_{22}\geq\boxed{z_{2}}+z_{3},\quad x_{21}\leq x_{22},\quad 0\leq\boldsymbol{x}\leq 1,\]
\[1-x_{11}\geq z_{2}+z_{3}+z_{4},\quad 1-x_{21}\geq z_{3}+z_{4},\quad 1-x_{22}\geq z_{4},\quad z_{1}+z_{2}+z_{3}+z_{4}=1,\quad 0\leq\boldsymbol{z}\}\]
Formulation (9) is:
\[\{\boldsymbol{x},\boldsymbol{z}\ |\ x_{11}\geq z_{1},\quad x_{21}\geq z_{2},\quad x_{22}\geq z_{3},\quad x_{21}\leq x_{22},\quad z_{1}+z_{2}+z_{3}+z_{4}=1,\]
\[1-x_{11}\geq z_{2}+z_{3}+z_{4},\quad 1-x_{21}\geq z_{3}+z_{4},\quad 1-x_{22}\geq z_{4},\quad\boxed{x_{22}-x_{21}\geq z_{3}},\quad 0\leq\boldsymbol{x}\leq 1,\ 0\leq\boldsymbol{z}\}\]
For convenience, the difference in the formulations has again been highlighted. Formulation (8) has a fractional solution \(x_{11}=0.5,\ x_{21}=0.5,\ x_{22}=0.5,\ z_{1}=0.5,\ z_{2}=0,\ z_{3}=0.5,\ z_{4}=0\), while formulation (9) has only integer solutions.
Since each formulation has the advantage of removing different fractional solutions, including both sets of constraints can tighten the formulation further. We empirically explore how much these additional constraints tighten the LP relaxation for various datasets in Section 6.1.1.
## 6 Numerical experiments
In this section, we study the numerical performance of the formulations on both simulated and real-world data. We study two scenarios of practical interest. The first involves the time taken to solve to optimality for an objective estimated by a tree ensemble. We then focus on finding tight upper bounds to this problem, obtained by solving the linear relaxation.
### 6.1 Experiments with tree ensembles
In this section, we examine the time taken to solve to optimality for a problem where the objective function is estimated using a random forest. We compare formulation (4), denoted projected, and formulation (9), denoted elbow, to formulation (2) from Misic (2020), denoted misc, and a formulation that uses the big-M method from Biggs et al. (2022), denoted bigM.
The random forest is trained on previous decisions where the reward is generated from a simple triangle-shaped function, where observed samples have added noise:
\[r_{i}=\sum_{j=1}^{d}(1-|w_{ij}|)+d\cdot\epsilon_{i}\]
For this problem, \(r_{i}\) is a sampled reward, \(w_{i}\sim U(-1,1)^{d}\) is a random decision vector with \(d\) features, and \(\epsilon_{i}\sim U(0,1)\) is added noise. There are no additional constraints placed on the variables other than those used to model the tree. We train a random forest from this data using scikit-learn (Pedregosa et al. 2011). We calculate the solve time to optimality with an increasing number of trees in the forest and an increasing number of features: the number of trees ranges over \(\{1,2,4,8,16,32\}\), and the number of features from \(1\) to \(5\). We repeat the experiment for \(10\) randomly generated datasets for each forest size and number of features. We use default parameters and a maximum depth for each tree of \(20\); for these parameters, each tree has an average of \(2893\) leaves. We show example problem sizes of the formulations when there are \(5\) features in Table 1, which reports the number of constraints and binary variables, as well as the sparsity of the constraint matrix via the number of nonzero entries. As noted earlier, the
number of constraints in the projected formulation is substantially smaller, while the number of binary variables is also less than in the other formulations. The MIO formulations were solved using the Gurobi solver (Gurobi Optimization 2019), with a time limit of 30 minutes (1800s) for each trial but otherwise default parameters. The experiments were run on a MacBook Pro with an Intel 8-Core [email protected] with 32GB RAM.
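A minimal sketch of the data-generating and training step described above (the sizes below are illustrative, not the exact experimental settings):

```python
# Synthetic triangle-shaped reward data and a random forest fit (a sketch).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, d = 5000, 5
W = rng.uniform(-1.0, 1.0, size=(n, d))    # decisions w_i ~ U(-1,1)^d
r = (1.0 - np.abs(W)).sum(axis=1) + d * rng.uniform(0.0, 1.0, size=n)

# one of the ensemble sizes tested, with the paper's maximum depth of 20
forest = RandomForestRegressor(n_estimators=8, max_depth=20, random_state=0)
forest.fit(W, r)
```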
In Table 2 we observe the time taken to solve to optimality for forests of varying size. Each result is averaged over 50 trials: 10 trials for each input vector dimension from 1 to 5. We note that the
| # trees | method | constraints | binary variables | nonzeros |
|---|---|---|---|---|
| 1 | projected | 11 | 2766 | 27709 |
| | misc | 8276 | 5521 | 54917 |
| | bigM | 16560 | 8287 | 41398 |
| | elbow | 8865 | 5521 | 61927 |
| 2 | projected | 22 | 5627 | 56857 |
| | misc | 16873 | 11242 | 112953 |
| | bigM | 33720 | 16869 | 84296 |
| | elbow | 18038 | 11242 | 128593 |
| 4 | projected | 44 | 11312 | 114060 |
| | misc | 34003 | 22610 | 225842 |
| | bigM | 67818 | 33922 | 169537 |
| | elbow | 36404 | 22610 | 254902 |
| 8 | projected | 88 | 22832 | 227507 |
| | misc | 68964 | 45646 | 453909 |
| | bigM | 136914 | 68478 | 342269 |
| | elbow | 73692 | 45646 | 520911 |
| 16 | projected | 176 | 45206 | 455015 |
| | misc | 137322 | 90386 | 911007 |
| | bigM | 271110 | 135592 | 677743 |
| | elbow | 146789 | 90386 | 1032816 |
| 32 | projected | 352 | 91990 | 924083 |
| | misc | 282939 | 183938 | 1847111 |
| | bigM | 551718 | 275928 | 1379231 |
| | elbow | 302335 | 183938 | 2097640 |

Table 1: Problem sizes for instances with 5 features
Figure 6: Time taken to solve to optimality for random forests of varying sizes
average time taken includes instances that didn't reach optimality, recorded as the maximum time allocated (1800s), so it is in fact a truncated mean. The fraction of instances that didn't reach optimality is recorded in the last four columns. As can be seen, the projected formulation is on average three to four times faster, and it finds an optimal solution more often within the given time.
Figure 6 shows the results further broken down by the number of features, plotted on a log-log axis for clarity. We observe that the elbow formulation is often faster for tree ensembles with few trees. This might be useful in applications where many MIO problems need to be solved rapidly, such as policy iteration in reinforcement learning with tree-based value function approximations. We also observe a substantial solve time improvement using the elbow formulation when there is one feature, which agrees with the results presented in Section 5.2.
We omitted the expset formulation (8) from these results because, despite having a tighter linear relaxation (which is studied further in the following section), the solve time in practice was significantly slower. We conjecture that this is due to the increased density of the constraints, which contain many more variables, although it could also be due to other idiosyncrasies of MIO solvers.
#### 6.1.1 Tighter linear relaxations
A problem of practical interest is finding tight upper bounds for maximization problems over an objective estimated by a tree ensemble. For large problem instances, finding the optimal solution can be prohibitively slow, considering that MIO
| trees | projected (s) | misc (s) | bigM (s) | elbow (s) | projected (frac. >1800s) | misc (frac. >1800s) | bigM (frac. >1800s) | elbow (frac. >1800s) |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.47 | 0.98 | 1.00 | 0.75 | 0 | 0 | 0 | 0 |
| 2 | 0.92 | 2.09 | 1.96 | 1.67 | 0 | 0 | 0 | 0 |
| 4 | 2.16 | 6.83 | 6.15 | 5.82 | 0 | 0 | 0 | 0 |
| 8 | 8.50 | 49.14 | 56.16 | 36.82 | 0 | 0 | 0 | 0 |
| 16 | 103.30 | 1111.25 | 628.49 | 914.28 | 0 | 0.42 | 0.14 | 0.38 |
| 32 | 983.29 | 1552.09 | 1477.53 | 1363.65 | 0.32 | 0.76 | 0.66 | 0.7 |
| geometric mean | 9.67 | 32.52 | 29.27 | 26.35 | | | | |

Table 2: Time taken to solve to optimality (truncated mean in seconds; the last four columns give the fraction of instances exceeding the 1800s limit)
formulations often exhibit exponential solve times. The relative quality of a fast heuristic solution can be assessed if an upper bound on the objective can be found. Another application of upper bounds is the verification of the robustness of a machine learning model (Carlini and Wagner 2017, Dvijotham et al. 2018), whereby an optimization problem is solved over local inputs to find a maximally different output. Since finding the exact worst case can be prohibitively slow for large instances, a tight upper bound is often used instead (Carlini and Wagner 2017, Dvijotham et al. 2018).
We analyze the formulations from Section 5 by examining the tightness of their linear relaxations. We compare formulations that use the same variables, specifically formulation (8, expset), formulation (2, misc), and formulation (9, elbow). Additionally, we test a formulation that includes both sets of tightening constraints (expset+elbow). We use the same data-generating process as in Section 6.1, except rather than solving to find the optimal integer solution, we solve only the linear relaxation. For these experiments, we use forests with \(\{2,4,6,8,10\}\) trees, and increase the number of features according to \(\{1,2,4,8,12\}\). Again, we repeat each experiment with 10 randomly generated datasets.
Figure 7 shows the optimality gap percentage, calculated from the difference between the objective of the linear relaxation and the optimal integer solution, as the number of features increases. We observe the effect of Theorem 2, whereby for tree ensembles with one feature, formulations based on expset are ideal. Moreover, for problems with relatively few features, the formulation is significantly tighter than formulation misc, whereas when the number of features is larger, the improvement is smaller. This is likely due to more features being associated with fewer splits per feature. We note that in isolation, the constraints introduced in expset have a greater effect in tightening the formulation than those introduced in elbow, although combining both results in the tightest formulations. We also observe empirically that the elbow formulation is not ideal even in the single feature case.
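In gurobipy, the gap reported in Figure 7 can be computed by relaxing a solved model. A sketch (ours, assuming a model `m` with objective \(y\) maximized, built as in Section 4):

```python
# Optimality gap of the LP relaxation relative to the integer optimum (a sketch).
m.optimize()                  # integer optimum
relaxed = m.relax()           # same model with integrality dropped
relaxed.optimize()
gap_pct = 100.0 * (relaxed.ObjVal - m.ObjVal) / abs(m.ObjVal)
```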
### 6.2 Real-world data
We also study some datasets used to benchmark tree ensemble solve times in Misic (2020). In particular, we study the concrete dataset (Yeh 1998), with 1030 observations. The dependent
Figure 7: Tightness of linear relaxation
variable is the compressive strength of concrete, with independent variables being the characteristics of the concrete mix. 1 Optimization aims to find the concrete with the highest compressive strength. We also study the winequalityred dataset (Cortez et al. 2009), with 1599 observations. The dependent variable is the quality of the wine, while the independent variables are characteristics
of the wine. 2 As such, the optimization problem is to choose characteristics of the wine such that the quality is maximized.
Footnote 2: fixed acidity, volatile acidity, citric acid, residual sugar, chlorides, free sulfur dioxide, total sulfur dioxide, density, pH, sulfates, alcohol.
#### 6.2.1 Solve time
We explore the solve time of the different formulations for random forest ensembles of varying size \(\{10,20,40,80,160\}\) and feature vector dimension \(\{1,3,5,7\}\) for
Figure 9: Tightness of linear relaxation for random forests of varying sizes (concrete data)
concrete and \(\{1,5,10\}\) for winequalityred. To test the effect of dimension, we use the first \(k\) features to predict the output. As in the previous section, we set the maximum solve time to be 30 minutes (1800s).
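A minimal sketch of this experimental loop (assuming scikit-learn for training; the MIO build-and-solve step is abstracted behind a hypothetical `solve_ensemble_mio` helper, since the formulations themselves are the ones developed earlier in the paper):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical helper (not a real package): builds one of the MIO
# formulations (misc, expset, elbow+expset, projected) for a trained
# ensemble, maximizes the predicted output, and returns the objective
# value together with the solver runtime.
from ensemble_mio import solve_ensemble_mio

X = np.loadtxt("concrete_X.csv", delimiter=",")  # 1030 observations
y = np.loadtxt("concrete_y.csv")                 # compressive strength

for k in [1, 3, 5, 7]:                   # use only the first k features
    for n_trees in [10, 20, 40, 80, 160]:
        rf = RandomForestRegressor(n_estimators=n_trees, random_state=0)
        rf.fit(X[:, :k], y)
        for form in ["misc", "expset", "elbow+expset", "projected"]:
            obj, runtime = solve_ensemble_mio(
                rf, formulation=form, time_limit=1800)  # 30-minute cap
            print(k, n_trees, form, obj, runtime)
```

The same loop is repeated for winequalityred with \(k\in\{1,5,10\}\).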
The results for concrete and winequalityred are in Figures 8 and 10, respectively. We observe that for both datasets, the projected formulation performs relatively better than the formulation from Misic (2020) for instances where the feature vector has a lower dimension (fewer features). On the other hand, for instances with a larger number of features, the formulation Misic
Figure 10: Time taken to solve to optimality for random forests of varying sizes (winequalityred data)
(2020) can be faster to solve. Furthermore, the projected formulation (4) appears to be relatively faster for instances with a small number of trees, which is particularly pronounced in Figures 8(c) and 10(c). This is potentially an extension of Theorem 1; if (4) is ideal for a single tree, it is also potentially relatively tighter for a small number of trees. Again, this might have applications where many smaller problems need to be solved quickly, such as in reinforcement learning. For these datasets, the performance of the elbow formulation is generally comparable to Misic (2020), although there are improvements on the concrete dataset when there are few features.
Figure 11: Tightness of linear relaxation for random forests of varying sizes (winequalityred data)
#### 6.2.2 Tightness of linear relaxation
We also compare the tightness of the linear relaxations for the concrete and winequalityred datasets in Figures 9 and 11, respectively. Across both datasets, we observe a similar outcome to the synthetic data experiments, whereby elbow+expset is generally the tightest, followed by expset, and finally the original misc formulation. We also observe that generally, the difference diminishes when there are more features in the data, potentially because there are fewer splits per feature, which is typically where the new formulations remove fractional points.
## 7 Conclusions and future work
In this paper, we have proposed a variety of new mixed-integer optimization formulations for modeling the relationship between an input feature vector and the predicted output of a trained decision tree. We have introduced formulations that build on the variable structure from Misic (2020) and formulations that use the input feature directly. We have shown these formulations are provably tighter than existing formulations in some scenarios and have also characterized when some are tighter than others. We have shown conditions where these formulations are ideal, which gives further practical insight into when different formulations might be advantageous depending on the number of trees in the ensemble and the number of features the problem has. In addition to these theoretical insights, we have given experimental conditions where the different formulations succeed both in terms of the time taken to solve to optimality and the tightness of the corresponding linear relaxations. While the experimental results do not always fully agree with the theoretical findings or intuition due to the complex operations of commercial MIO solvers, we have identified situations where each different formulation has advantages and laid the groundwork for future computational studies.
For future work, an interesting avenue is exploring the relationship between the formulations we provide and different polyhedral constraints. While in general, the formulations we provide are not ideal when combined with additional constraints, there may be special cases when they are or at least cuts that can be introduced to remove some of the fractional solutions. |
2310.02337 | Hilbert Expansion of Boltzmann Equation with Soft Potentials and
Specular Boundary Condition in Half-space | Boundary effects play an important role in the study of hydrodynamic limits
in the Boltzmann theory. We justify rigorously the validity of the hydrodynamic
limit from the Boltzmann equation of soft potentials to the compressible Euler
equations by the Hilbert expansion with multi-scales. Specifically, the
Boltzmann solutions are expanded into three parts: interior part, viscous
boundary layer and Knudsen boundary layer. Due to the weak effect of collision
frequency of soft potentials, new difficulty arises when tackling the existence
of Knudsen layer solutions with space decay rate, which has been overcome under
some constraint conditions and losing velocity weight arguments. | Jing Ouyang, Yong Wang | 2023-09-19T01:44:53Z | http://arxiv.org/abs/2310.02337v1 | Hilbert expansion of Boltzmann equation with soft potentials and specular boundary condition in half-space
###### Abstract.
Boundary effects play an important role in the study of hydrodynamic limits in the Boltzmann theory. We justify rigorously the validity of the hydrodynamic limit from the Boltzmann equation of soft potentials to the compressible Euler equations by the Hilbert expansion with multi-scales. Specifically, the Boltzmann solutions are expanded into three parts: interior part, viscous boundary layer and Knudsen boundary layer. Due to the weak effect of collision frequency of soft potentials, new difficulty arises when tackling the existence of Knudsen layer solutions with space decay rate, which has been overcome under some constraint conditions and losing velocity weight arguments.
Key words and phrases:Boltzmann equation, compressible Euler equations, hydrodynamic limit, Hilbert expansion, viscous boundary layer, Knudsen boundary layer
###### Contents
* 1 Introduction and Main Results
* 1.1 Introduction
* 1.2 Asymptotic expansion
* 1.3 Hilbert expansion
* 2 Some Estimates for Soft Boltzmann Operators
* 2.1 Preliminaries
* 2.2 Estimate for \(\mathbf{L}^{-1}\)
* 3 Existence of a Steady Linear Boltzmann Equation
* 3.1 Approximate solutions and uniform estimate
* 3.2 Proof of Theorem 3.1
* 4 Hilbert Expansions for Boltzmann Equation of Soft Potentials
* 4.1 Linear parts of Hilbert expansion
* 4.2 Estimates on the remainder
* 4.3 Proof of Theorem 1.1
## 1. Introduction and Main Results
### Introduction
It is well-known that the Boltzmann equation is closely related to the fluid dynamical systems for both compressible and incompressible flows since the founding work of Maxwell [36] and Boltzmann [6]. In 1912, Hilbert proposed a systematic formal asymptotic expansion for the Boltzmann equation with respect to the Knudsen number \(\mathscr{K}_{n}\ll 1\). In 1916 and 1917, Enskog and Chapman independently proposed a different formal expansion. Based on the Hilbert or Chapman-Enskog expansions, the standard fluid theory can be derived formally, for instance the compressible Euler and Navier-Stokes equations and the incompressible Euler and Navier-Stokes (Fourier) equations.
In the past decades, great effort has been devoted to the study of the hydrodynamic limit from the Boltzmann equation to the fluid systems. When the solutions of the compressible Euler equations are smooth, Caflisch [7] rigorously justified the hydrodynamic limit of the Boltzmann equation to
the compressible Euler equations by a truncated Hilbert expansion; see also [14, 33, 38, 41], and [17, 18] via a recent \(L^{2}\)-\(L^{\infty}\) framework. When the solutions of the compressible Euler equations consist of the basic wave patterns (singularities), the convergence has been established in [25, 26, 27, 47, 48] in the one-dimensional case, and in [43] for the multi-dimensional planar rarefaction wave. There is also a large literature on the hydrodynamic limit of the Boltzmann equation to the incompressible fluid equations; see [2, 3, 12, 5, 15, 21, 29, 35, 44] for the incompressible Navier-Stokes equations, [28, 23] for the incompressible Euler equations, and the references cited therein.
All of the above-mentioned works on the compressible Euler limit were carried out either in a spatially periodic domain or in the whole space. However, in many important physical models, physical boundaries occur naturally, and the boundary effects play an important role in the study of hydrodynamic limits in the Boltzmann theory. For the initial boundary value problem, by a formal analysis, Sone [40] showed that the solution consists of three parts, i.e., the interior part, the viscous boundary layer and the Knudsen boundary layer. Recently, based on a systematic study of the viscous and Knudsen layers and the \(L^{2}\)-\(L^{\infty}\) framework, Guo-Huang-Wang [20] first justified rigorously the validity of the Hilbert expansion for the hard sphere Boltzmann equation with specular reflection boundary condition in half-space, which leads to derivations of both the compressible Euler equations and the acoustic equations; see [31] for the Maxwell reflection boundary condition of hard potentials and [32] for the diffuse reflection boundary condition of hard sphere.
In the present paper, we aim to justify the hydrodynamic limit to the compressible Euler equations for the Boltzmann equation of soft potentials. The new difficulty for soft potentials is that it is hard to establish the existence of solutions of the Knudsen boundary layer with sufficient spatial decay, which is crucial to close the Hilbert expansion.
To our knowledge, for the specular reflection boundary condition, the known results [13, 22] on the existence of the Knudsen boundary layer are for the hard sphere model, where exponential spatial decay was also obtained due to the strong effect of the collision frequency \(\nu\cong 1+|v|\). For other boundary conditions, we refer the readers to [1, 8, 42, 45] for hard potentials and [46] for soft potentials with the in-flow boundary condition, [24] for hard sphere with the diffuse reflection boundary condition, [4] for hard sphere with phase transition, and the references therein.
We consider the scaled Boltzmann equation
\[F_{t}+v\cdot\nabla_{x}F=\frac{1}{\mathscr{K}_{n}}Q(F,F), \tag{1.1}\]
where \(F(t,x,v)\geq 0\) is the density distribution function for the gas particles with position \(x\in\mathbb{R}^{3}_{+}=\{x\in\mathbb{R}^{3}:x_{3}>0\}\) and velocity \(v\in\mathbb{R}^{3}\) at time \(t>0\), and \(\mathscr{K}_{n}>0\) is the Knudsen number, which is proportional to the mean free path. The Boltzmann collision term \(Q(F_{1},F_{2})\) on the right is defined in terms of the following bilinear form
\[Q(F_{1},F_{2}) \equiv\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}B(v-u,\omega)F_{ 1}(u^{\prime})F_{2}(v^{\prime})\,d\omega du\] \[\qquad-\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}B(v-u,\omega)F_ {1}(u)F_{2}(v)\,d\omega du\] \[:=Q_{+}(F_{1},F_{2})-Q_{-}(F_{1},F_{2}), \tag{1.2}\]
where the relationship between the post-collision velocity \((v^{\prime},u^{\prime})\) of two particles with the pre-collision velocity \((v,u)\) is given by
\[u^{\prime}=u+[(v-u)\cdot\omega]\omega,\quad v^{\prime}=v-[(v-u)\cdot\omega]\omega,\]
for \(\omega\in\mathbb{S}^{2}\), which can be determined by conservation laws of momentum and energy
\[u^{\prime}+v^{\prime}=u+v,\quad|u^{\prime}|^{2}+|v^{\prime}|^{2}=|u|^{2}+|v|^{ 2}.\]
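Both identities can be checked directly from the reflection formula: the momentum identity is immediate, and for the energy,

\[|u^{\prime}|^{2}+|v^{\prime}|^{2}=|u|^{2}+|v|^{2}+2[(v-u)\cdot\omega]\big((u-v)\cdot\omega\big)+2[(v-u)\cdot\omega]^{2}=|u|^{2}+|v|^{2}.\]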
The Boltzmann collision kernel \(B=B(v-u,\omega)\) in (1.2) depends only on \(|v-u|\) and \(\theta\) with \(\cos\theta=(v-u)\cdot\omega/|v-u|\). Throughout this paper, we consider the cutoff soft potential model, i.e.,
\[B(v-u,\omega)=|v-u|^{\kappa}\cdot\beta(\theta),\quad\kappa\in(-3,0),\]
where we assume the Grad cutoff condition holds, i.e.,
\[0\leq\beta(\theta)\leq\beta_{0}|\cos\theta|,\]
for some constant \(\beta_{0}>0\).
Denote \(\vec{n}=(0,0,-1)\) to be the outward normal of \(\mathbb{R}_{+}^{3}\) and the phase boundary in the space \(\mathbb{R}_{+}^{3}\times\mathbb{R}^{3}\) as \(\gamma:=\partial\mathbb{R}_{+}^{3}\times\mathbb{R}^{3}\). We split \(\gamma\) into outgoing boundary \(\gamma_{+}\), incoming boundary \(\gamma_{-}\), and grazing boundary \(\gamma_{0}\):
\[\gamma_{+} =\{(x,v):x\in\partial\mathbb{R}_{+}^{3},v\cdot\vec{n}=-v_{3}>0\},\] \[\gamma_{-} =\{(x,v):x\in\partial\mathbb{R}_{+}^{3},v\cdot\vec{n}=-v_{3}<0\},\] \[\gamma_{0} =\{(x,v):x\in\partial\mathbb{R}_{+}^{3},v\cdot\vec{n}=-v_{3}=0\}.\]
In the present paper, we consider the Boltzmann equation with specular reflection boundary conditions, i.e.,
\[F(t,x,v)|_{\gamma_{-}}=F(t,x,R_{x}v), \tag{1.3}\]
where
\[R_{x}v=v-2\{v\cdot\vec{n}\}\vec{n}=(v_{1},v_{2},-v_{3})^{t}. \tag{1.4}\]
### Asymptotic expansion
From the formal analysis in [40], we know that the thickness of the viscous boundary layer is \(\sqrt{\mathscr{K}_{n}}\). For simplicity, we use the new parameter \(\varepsilon=\sqrt{\mathscr{K}_{n}}\) and denote the Boltzmann solution by \(F^{\varepsilon}\); then the Boltzmann equation (1.1) is rewritten as
\[\partial_{t}F^{\varepsilon}+v\cdot\nabla_{x}F^{\varepsilon}=\frac{1}{ \varepsilon^{2}}Q(F^{\varepsilon},F^{\varepsilon}). \tag{1.5}\]
#### 1.2.1. Interior expansion
We define the interior expansion
\[F^{\varepsilon}(t,x,v)\sim\sum_{k=0}^{\infty}\varepsilon^{k}F_{k}(t,x,v). \tag{1.6}\]
Substituting (1.6) into (1.5) and comparing the order of \(\varepsilon\), one obtains
\[\begin{split}\frac{1}{\varepsilon^{2}}:& 0=Q(F_{0},F_{0}),\\ \frac{1}{\varepsilon}:& 0=Q(F_{0},F_{1})+Q(F_{1},F_{0}), \\ \varepsilon^{0}:&\{\partial_{t}+v\cdot\nabla_{x}\}F_{0 }=Q(F_{0},F_{2})+Q(F_{2},F_{0})+Q(F_{1},F_{1}),\\ \varepsilon:&\{\partial_{t}+v\cdot\nabla_{x}\}F_{1 }=Q(F_{0},F_{3})+Q(F_{3},F_{0})+Q(F_{1},F_{2})+Q(F_{2},F_{1}),\\ &\vdots\\ \varepsilon^{k}:&\{\partial_{t}+v\cdot\nabla_{x}\}F_{k }=Q(F_{0},F_{k+2})+Q(F_{k+2},F_{0})+\sum_{\begin{subarray}{c}i+j=k+2\\ i,j\geq 1\end{subarray}}Q(F_{i},F_{j}).\end{split} \tag{1.7}\]
It follows from \((1.7)_{1}\) and the celebrated H-theorem that \(F_{0}\) should be a local Maxwellian, i.e.,
\[\mu(t,x,v):=F_{0}(t,x,v)\equiv\frac{\rho(t,x)}{[2\pi T(t,x)]^{3/2}}\exp\bigg{\{} -\frac{|v-\mathfrak{u}(t,x)|^{2}}{2T(t,x)}\bigg{\}}, \tag{1.8}\]
where \(\rho(t,x)\), \(\mathfrak{u}(t,x)=(\mathfrak{u}_{1},\mathfrak{u}_{2},\mathfrak{u}_{3})(t,x)\), and \(T(t,x)\) are defined by
\[\int_{\mathbb{R}^{3}}F_{0}dv=\rho,\quad\int_{\mathbb{R}^{3}}vF_{0}dv=\rho \mathfrak{u},\quad\int_{\mathbb{R}^{3}}|v|^{2}F_{0}dv=\rho|\mathfrak{u}|^{2}+ 3\rho T,\]
which represent the macroscopic density, velocity and temperature, respectively. Multiplying \((1.7)_{3}\) by \(1,v_{i},|v|^{2}\) and integrating on \(\mathbb{R}^{3}\), one obtains that \((\rho,\mathfrak{u},T)\) satisfies the compressible Euler system
\[\begin{cases}\partial_{t}\rho+\operatorname{div}(\rho\mathfrak{u})=0,\\ \partial_{t}(\rho\mathfrak{u})+\operatorname{div}(\rho\mathfrak{u}\otimes \mathfrak{u})+\nabla p=0,\\ \partial_{t}[\rho(\frac{3T}{2}+\frac{|\mathfrak{u}|^{2}}{2})]+ \operatorname{div}[\rho\mathfrak{u}(\frac{3T}{2}+\frac{|\mathfrak{u}|^{2}}{2 })]+\operatorname{div}(p\mathfrak{u})=0,\end{cases} \tag{1.9}\]
where \(p=\rho T\) is the pressure function. For the compressible Euler equations (1.9), we impose the slip boundary condition
\[\mathfrak{u}\cdot\vec{n}|_{x_{3}=0}=\mathfrak{u}_{3}|_{x_{3}=0}=0. \tag{1.10}\]
and the initial data
\[(\rho,\mathfrak{u},T)(0,x)=(1+\delta\varphi_{0},\delta\Phi_{0},1+\delta\vartheta _{0})(x), \tag{1.11}\]
with \(\|(\varphi_{0},\Phi_{0},\vartheta_{0})\|_{H^{s_{0}}}\leq 1\) where \(\delta>0\) is a parameter and \(s_{0}\geq 3\) is some given positive number. Choose \(\delta_{1}>0\) so that for any \(\delta\in(0,\delta_{1}]\), the positivity of \(1+\delta\varphi_{0}\) and \(1+\delta\vartheta_{0}\) is guaranteed. Then for each \(\delta\in(0,\delta_{1}]\), there is a family of classical solutions \((\rho^{\delta},\mathfrak{u}^{\delta},T^{\delta})\in C([0,\tau^{\delta}];H^{s_{ 0}}(\mathbb{R}^{3}_{+}))\cap C^{1}([0,\tau^{\delta}];H^{s_{0}-1}(\mathbb{R}^{3 }_{+}))\) of the compressible Euler equations (1.9)-(1.11) such that \(\rho^{\delta}>0\) and \(T^{\delta}>0\).
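As an illustration of how (1.9) arises, the first equation is just the zeroth moment of \((1.7)_{3}\): since \(\int_{\mathbb{R}^{3}}Q(F,G)\,dv=0\) for any \(F,G\),

\[0=\int_{\mathbb{R}^{3}}\{\partial_{t}+v\cdot\nabla_{x}\}F_{0}\,dv=\partial_{t}\rho+\operatorname{div}(\rho\mathfrak{u}),\]

and the momentum and energy equations follow in the same way from the \(v_{i}\) and \(|v|^{2}\) moments, using the collision invariance of \(1,v,|v|^{2}\).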
For later use, we define the linearized collision operator \(\mathbf{L}\) by
\[\mathbf{L}\mathfrak{h}=-\frac{1}{\sqrt{\mu}}\Big{\{}Q(\mu,\sqrt{\mu} \mathfrak{h})+Q(\sqrt{\mu}\mathfrak{h},\mu)\Big{\}}. \tag{1.12}\]
Denote the null space of \(\mathbf{L}\) as \(\mathcal{N}\), it is clear that
\[\mathcal{N}=\operatorname{span}\{\chi_{0},\chi_{1},\chi_{2},\chi_{3},\chi_{4 }\},\]
where
\[\chi_{0}=\frac{1}{\sqrt{\rho}}\sqrt{\mu},\quad\chi_{i}=\frac{1}{\sqrt{\rho T} }(v_{i}-\mathfrak{u}_{i})\sqrt{\mu},\quad\chi_{4}=\frac{1}{\sqrt{6\rho}}( \frac{|v-\mathfrak{u}|^{2}}{T}-3)\sqrt{\mu}.\]
For each \(k\geq 1\), decompose \(f_{k}:=\frac{F_{k}}{\sqrt{\mu}}\) as
\[f_{k} =\mathbf{P}f_{k}+\{\mathbf{I}-\mathbf{P}\}f_{k}\] \[\equiv\left\{\frac{\rho_{k}}{\sqrt{\rho}}\chi_{0}+\sum_{j=1}^{3} \sqrt{\frac{\rho}{T}}u_{k,j}\cdot\chi_{j}+\sqrt{\frac{\rho}{6}}\frac{\theta_{ k}}{T}\chi_{4}\right\}+\{\mathbf{I}-\mathbf{P}\}f_{k}\] \[\equiv\left\{\frac{\rho_{k}}{\rho}+u_{k}\cdot\frac{v-\mathfrak{u }}{T}+\frac{\theta_{k}}{6T}(\frac{|v-\mathfrak{u}|^{2}}{T}-3)\right\}\sqrt{ \mu}+\{\mathbf{I}-\mathbf{P}\}f_{k}, \tag{1.13}\]
where \(\mathbf{P}\) is the macroscopic projection onto \(\mathcal{N}\).
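The coefficients in (1.13) follow from the orthonormality of \(\{\chi_{j}\}_{j=0}^{4}\) in \(L^{2}_{v}\); for instance, by the Gaussian moment identities,

\[\langle\chi_{0},\chi_{0}\rangle=\frac{1}{\rho}\int_{\mathbb{R}^{3}}\mu\,dv=1,\qquad\langle\chi_{i},\chi_{j}\rangle=\frac{1}{\rho T}\int_{\mathbb{R}^{3}}(v_{i}-\mathfrak{u}_{i})(v_{j}-\mathfrak{u}_{j})\mu\,dv=\delta_{ij},\quad 1\leq i,j\leq 3,\]

and similarly \(\langle\chi_{4},\chi_{4}\rangle=1\) since \(\int_{\mathbb{R}^{3}}(\frac{|v-\mathfrak{u}|^{2}}{T}-3)^{2}\mu\,dv=6\rho\).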
#### 1.2.2. Viscous boundary layer expansion
Generally, the solutions of the interior expansion \(F_{i},i=1,2,\cdots\) do not satisfy the specular reflection boundary condition. To overcome the difficulty coming from the boundary condition, the boundary layer expansion is needed; see [20] and [39, 40].
We define the scaled normal coordinate:
\[y:=\frac{x_{3}}{\varepsilon}. \tag{1.14}\]
For simplicity of presentation, we denote
\[x_{{}_{\shortparallel}}=(x_{1},x_{2}),\quad\nabla_{{}_{\shortparallel}}=(\partial_{x_{1}},\partial_{x_{2}})\quad\text{and}\quad v_{{}_{\shortparallel}}=(v_{1},v_{2}). \tag{1.15}\]
Motivated by [40, Section 3.4.1], we define the viscous boundary layer expansion as
\[\bar{F}^{\varepsilon}(t,x_{{}_{\shortparallel}},y,v)\sim\sum_{k=1}^{\infty}\varepsilon^{k}\bar{F}_{k}(t,x_{{}_{\shortparallel}},y,v).\]
Plugging \(F^{\varepsilon}+\bar{F}^{\varepsilon}\) into the Boltzmann equation (1.5), comparing the orders of \(\varepsilon\) and using (1.7), in a neighborhood of the physical boundary we have
\[\begin{split}\frac{1}{\varepsilon}:&\qquad 0=Q(\mu_{0}, \bar{F}_{1})+Q(\bar{F}_{1},\mu_{0}),\\ \varepsilon^{0}:&\quad v_{3}\frac{\partial\bar{F}_{1 }}{\partial y}=[Q(\mu_{0},\bar{F}_{2})+Q(\bar{F}_{2},\mu_{0})]+y[Q(\partial_{ 3}\mu_{0},\bar{F}_{1})+Q(\bar{F}_{1},\partial_{3}\mu_{0})]\\ &\qquad\qquad+Q(F_{1}^{0},\bar{F}_{1})+Q(\bar{F}_{1},F_{1}^{0})+ Q(\bar{F}_{1},\bar{F}_{1}),\\ &\qquad\qquad\vdots\\ \varepsilon^{k}:&\quad\{\partial_{t}+v_{{}_{\shortparallel}} \cdot\nabla_{{}_{\shortparallel}}\}\bar{F}_{k}+v_{3}\frac{\partial\bar{F}_{ k+1}}{\partial y}=Q(\mu_{0},\bar{F}_{k+2})+Q(\bar{F}_{k+2},\mu_{0})\\ &\qquad\qquad+\sum_{\begin{subarray}{c}l+j=k+2\\ 1\leq l\leq b,\,j\geq 1\end{subarray}}\frac{y^{l}}{l!}\big{[}Q(\partial_{3}^{l} \mu_{0},\bar{F}_{j})+Q(\bar{F}_{j},\partial_{3}^{l}\mu_{0})\big{]}\\ &\qquad+\sum_{\begin{subarray}{c}i+j=k+2\\ i,j\geq 1\end{subarray}}\big{[}Q(F_{i}^{0},\bar{F}_{j})+Q(\bar{F}_{j},F_{i}^{0})+ Q(\bar{F}_{i},\bar{F}_{j})\big{]}\\ &\qquad+\sum_{\begin{subarray}{c}i+j+l=k+2\\ 1\leq l\leq b,\,i,j\geq 1\end{subarray}}\frac{y^{l}}{l!}\big{[}Q(\partial_{3}^{l}F_{i}^ {0},\bar{F}_{j})+Q(\bar{F}_{j},\partial_{3}^{l}F_{i}^{0})\big{]},\quad\text{ for }k\geq 1,\end{split} \tag{1.16}\]
where we have used the Taylor expansions of \(\mu\) and \(F_{i}\) at \(x_{3}=0\), i.e.,
\[\mu(t,x_{1},x_{2},x_{3},v)=\mu_{0}+\sum_{l=1}^{\mathfrak{b}}\frac{1}{l!} \partial_{3}^{l}\mu_{0}\cdot x_{3}^{l}+\frac{x_{3}^{\mathfrak{b}+1}}{( \mathfrak{b}+1)!}\partial_{3}^{\mathfrak{b}+1}\tilde{\mu}, \tag{1.17}\]
\[F_{i}(t,x_{1},x_{2},x_{3},v)=F_{i}^{0}+\sum_{l=1}^{\mathfrak{b}}\frac{1}{l!} \partial_{3}^{l}F_{i}^{0}\cdot x_{3}^{l}+\frac{x_{3}^{\mathfrak{b}+1}}{( \mathfrak{b}+1)!}\partial_{3}^{\mathfrak{b}+1}\mathfrak{F}_{i},\quad i\geq 1. \tag{1.18}\]
Here we have used the simplified notations
\[\begin{split}\partial_{3}^{l}\mu_{0}:&=(\partial_{3 }^{l}\mu)(t,x_{1},x_{2},0,v),\quad\partial_{3}^{\mathfrak{b}+1}\tilde{\mu}:=( \partial_{3}^{\mathfrak{b}+1}\mu)(t,x_{1},x_{2},\xi_{0},v),\\ \partial_{3}^{l}F_{i}^{0}:&=(\partial_{3}^{l}F_{i}) (t,x_{1},x_{2},0,v),\quad\partial_{3}^{\mathfrak{b}+1}\mathfrak{F}_{i}:=( \partial_{3}^{\mathfrak{b}+1}F_{i})(t,x_{1},x_{2},\xi_{i},v),\end{split} \tag{1.19}\]
for some \(\xi_{i}\in(0,x_{3})\) with \(i\geq 0\). The number \(\mathfrak{b}\in\mathbb{N}_{+}\) will be chosen later.
For the macro-micro decomposition of viscous and Knudsen boundary layers, we denote the corresponding linearized operator, macroscopic projection, and null space as
\[\mathbf{L}_{0}=\mathbf{L}(t,x_{{}_{\shortparallel}},0,v),\qquad\mathbf{P}_{0 }=\mathbf{P}(t,x_{{}_{\shortparallel}},0,v),\qquad\mathcal{N}_{0}=\mathcal{N} (t,x_{{}_{\shortparallel}},0,v).\]
It is noted that \(\mathbf{L}_{0},\mathbf{P}_{0}\) and \(\mathcal{N}_{0}\) are independent of normal variables. We define
\[\bar{f}_{k}:=\frac{\bar{F}_{k}}{\sqrt{\mu_{0}}}, \tag{1.20}\]
then it holds that
\[\begin{split}\bar{f}_{k}&=\mathbf{P}_{0}\bar{f}_{k }+\{\mathbf{I}-\mathbf{P}_{\mathbf{0}}\}\bar{f}_{k}\\ &=\left\{\frac{\bar{\rho}_{k}}{\rho^{0}}+\bar{u}_{k}\cdot\frac{v- \mathfrak{u}^{0}}{T^{0}}+\frac{\bar{\theta}_{k}}{6T^{0}}(\frac{|v-\mathfrak{u} ^{0}|^{2}}{T^{0}}-3)\right\}\sqrt{\mu_{0}}+\{\mathbf{I}-\mathbf{P}_{\mathbf{0 }}\}\bar{f}_{k},\end{split}\]
where and whereafter we always use the notation \((\rho^{0},\mathfrak{u}^{0},T^{0}):=(\rho,\mathfrak{u},T)(t,x_{{}_{\shortparallel}},0)\).
Throughout the present paper, we always assume the far-field condition
\[\bar{f}_{k}(t,x_{{}_{\shortparallel}},y,v)\to 0,\quad\text{as }y\to+\infty. \tag{1.21}\]
#### 1.2.3. Knudsen boundary layer expansion
To construct the solution satisfying the boundary condition at higher orders, we still need the Knudsen boundary layer. We define the new scaled normal coordinate:
\[\eta:=\frac{x_{3}}{\varepsilon^{2}}.\]
The Knudsen boundary layer expansion is defined as
\[\hat{F}^{\varepsilon}(t,x_{{}_{\shortparallel}},\eta,v)\sim\sum_{k=1}^{\infty}\varepsilon^{k}\hat{F}_{k}(t,x_{{}_{\shortparallel}},\eta,v).\]
Using (1.7), (1.16) and (1.22), one can obtain the equation of \(F_{R}^{\varepsilon}\)
\[\partial_{t}F_{R}^{\varepsilon}+v\cdot\nabla_{x}F_{R}^{ \varepsilon}-\frac{1}{\varepsilon^{2}}\{Q(\mu,F_{R}^{\varepsilon})+Q(F_{R}^{ \varepsilon},\mu)\}\] \[=\varepsilon^{3}Q(F_{R}^{\varepsilon},F_{R}^{\varepsilon})+\sum_ {i=1}^{N}\varepsilon^{i-2}\{Q(F_{i}+\bar{F}_{i}+\hat{F}_{i},F_{R}^{\varepsilon })+Q(F_{R}^{\varepsilon},F_{i}+\bar{F}_{i}+\hat{F}_{i})\}\] \[\quad+R^{\varepsilon}+\bar{R}^{\varepsilon}+\hat{R}^{\varepsilon}, \tag{1.25}\]
where \(R^{\varepsilon},\bar{R}^{\varepsilon}\) and \(\hat{R}^{\varepsilon}\) are defined in (4.8)-(4.10).
The main purpose of the present paper is to establish the validity of the Hilbert expansion for the Boltzmann equation around the local Maxwellian \(\mu\) determined by compressible Euler equations (1.9). For later use, we define
\[F_{R}^{\varepsilon}=\sqrt{\mu}f_{R}^{\varepsilon}. \tag{1.26}\]
To use the \(L^{2}\)-\(L^{\infty}\) framework [17, 16], we also introduce a global Maxwellian
\[\mu_{M}:=\frac{1}{(2\pi T_{M})^{3/2}}\exp\bigg{\{}-\frac{|v|^{2}}{2T_{M}} \bigg{\}},\]
where \(T_{M}>0\) satisfies the condition
\[T_{M}<\min_{x\in\mathbb{R}_{+}^{3}}T(t,x)\leq\max_{x\in\mathbb{R}_{+}^{3}}T(t,x)<2T_{M}. \tag{1.27}\]
Using (1.27), one can easily deduce that there exists a positive constant \(C>0\) such that for some \(\frac{1}{2}<\alpha<1\), the following holds:
\[\frac{1}{C}\mu_{M}\leq\mu(t,x,v)\leq C\mu_{M}^{\alpha}. \tag{1.28}\]
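One way to see the upper bound in (1.28): fix a small \(\epsilon>0\) with \(\max_{t,x}T<2(1-\epsilon)T_{M}\), which is possible by (1.27), and set \(\alpha=(1-\epsilon)T_{M}/\max_{t,x}T\in(\frac{1}{2},1)\). Then, since \(|v-\mathfrak{u}|^{2}\geq(1-\epsilon)|v|^{2}-C_{\epsilon}|\mathfrak{u}|^{2}\) and \(\rho,\mathfrak{u},T\) are bounded,

\[\mu(t,x,v)\leq C\exp\Big\{-\frac{(1-\epsilon)|v|^{2}}{2T}\Big\}\leq C\exp\Big\{-\frac{\alpha|v|^{2}}{2T_{M}}\Big\}\leq C\mu_{M}^{\alpha}(v);\]

the lower bound follows similarly from \(T_{M}<\min T\).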
We further define
\[F_{R}^{\varepsilon}=\{1+|v|^{2}\}^{-\frac{\mathfrak{k}}{2}}\sqrt{\mu_{M}}h_{R}^{\varepsilon}\equiv\frac{1}{\varpi_{\mathfrak{k}}(v)}\sqrt{\mu_{M}}h_{R}^{\varepsilon}, \tag{1.29}\]
with \(\mathfrak{k}\geq 0\) and \(\varpi_{\mathfrak{k}}:=(1+|v|^{2})^{\frac{\mathfrak{k}}{2}}\).
**Theorem 1.1**.: _Let \(\tau^{\delta}>0\) be the life-span of smooth solution of compressible Euler equations (1.9). Let \(\mathfrak{k}\geq 16\), \(N\geq 6\) and \(\mathfrak{b}\geq 5\). We assume the initial data_
\[F^{\varepsilon}(0,x,v) =\mu(0,x,v)+\sum_{i=1}^{N}\varepsilon^{i}\left\{F_{i}(0,x,v)+ \bar{F}_{i}(0,x_{\text{\tiny{\rm{i}}}},\frac{x_{3}}{\varepsilon},v)+\hat{F}_ {i}(0,x_{\text{\tiny{\rm{i}}}},\frac{x_{3}}{\varepsilon^{2}},v)\right\}\] \[\quad+\varepsilon^{5}F_{R}^{\varepsilon}(0,x,v)\geq 0,\]
_and \(F_{i}(0),\bar{F}_{i}(0),i=1,\cdots,N\) satisfy the regularity and compatibility conditions described in Proposition 4.1, and_
\[\Big{\|}(\frac{F_{R}^{\varepsilon}}{\sqrt{\mu}})(0)\Big{\|}_{L^{2}_{x,v}}+\varepsilon^{3}\Big{\|}(\varpi_{\mathfrak{k}}\frac{F_{R}^{\varepsilon}}{\sqrt{\mu_{M}}})(0)\Big{\|}_{L^{\infty}_{x,v}}<\infty.\]
_Then there exists a small positive constant \(\varepsilon_{0}>0\) such that the initial-boundary value problem (1.5), (1.3) has a unique solution for \(\varepsilon\in(0,\varepsilon_{0}]\) over the time interval \(t\in[0,\tau^{\delta}]\) in the following form of expansion_
\[F^{\varepsilon}(t,x,v) =\mu(t,x,v)+\sum_{i=1}^{N}\varepsilon^{i}\left\{F_{i}(t,x,v)+\bar {F}_{i}(t,x_{\text{\tiny{\rm{i}}}},\frac{x_{3}}{\varepsilon},v)+\hat{F}_{i}( t,x_{\text{\tiny{\rm{i}}}},\frac{x_{3}}{\varepsilon^{2}},v)\right\}\] \[\quad+\varepsilon^{5}F_{R}^{\varepsilon}(t,x,v)\geq 0, \tag{1.30}\]
_with_
\[\sup_{t\in[0,\tau^{\delta}]}\left\{\left\|\frac{F_{R}^{\varepsilon}(t)}{\sqrt{\mu}}\right\|_{L^{2}_{x,v}}+\varepsilon^{3}\Big{\|}\varpi_{\mathfrak{k}}(v)\frac{F_{R}^{\varepsilon}(t)}{\sqrt{\mu_{M}}}\Big{\|}_{L^{\infty}_{x,v}}\right\}\leq C(\tau^{\delta})<\infty. \tag{1.31}\]
_Here the functions \(F_{i}(t,x,v),\bar{F}_{i}(t,x_{{}_{\shortparallel}},y,v)\) and \(\hat{F}_{i}(t,x_{{}_{\shortparallel}},\eta,v)\) are the interior expansion, viscous and Knudsen boundary layers, respectively, constructed in Proposition 4.1._
**Remark 1.2**.: _From (1.30)-(1.31) and the uniform estimates in Proposition 4.1, it is direct to check that_
\[\sup_{t\in[0,\tau^{\delta}]}\left\{\Big{\|}\Big{(}\frac{F^{\varepsilon}-\mu}{\sqrt{\mu}}\Big{)}(t)\Big{\|}_{L^{2}(\mathbb{R}^{3}_{+}\times\mathbb{R}^{3})}+\Big{\|}\varpi_{\mathfrak{k}}\Big{(}\frac{F^{\varepsilon}-\mu}{\sqrt{\mu_{M}}}\Big{)}(t)\Big{\|}_{L^{\infty}(\mathbb{R}^{3}_{+}\times\mathbb{R}^{3})}\right\}\leq C\varepsilon\to 0.\]
_Hence we have established the hydrodynamic limit from the Boltzmann equation to the compressible Euler system for the half-space problem._
**Remark 1.3**.: _For simplicity of presentation, we only give the details of the proof for the Boltzmann equation of soft potentials in the present paper. We point out that the result is also valid for the cases of hard potentials by similar arguments._
Now we briefly comment on the key points of the present paper. To estimate the microscopic parts of the interior expansions and viscous boundary layers, we need some decay properties of the pseudo-inverse linear operators \(\mathbf{L}^{-1}\) and \(\mathbf{L}^{-1}_{0}\). For \(-\frac{3}{2}<\kappa\leq 1\), the authors of [30] obtained
\[|\mu^{-\frac{q}{2}}\mathbf{L}^{-1}\mathfrak{g}(v)|\lesssim\|\mu^{-\frac{q^{\prime}}{2}}\mathfrak{g}\|_{L^{\infty}_{v}},\quad 0<q<q^{\prime}<1.\]
Due to the strong singularity, it is hard to establish the above estimate for \(-3<\kappa\leq-\frac{3}{2}\). In this paper, by observing the feature of the Hilbert expansion on the interior parts and viscous boundary layers, we can get the following control
\[|\mu^{-\frac{q}{2}}\mathbf{L}^{-1}\mathfrak{g}(v)|^{2}\lesssim\sum_{0\leq\alpha\leq N}\|\partial_{v}^{\alpha}\{\mu^{-\frac{q}{2}}\mathbf{L}^{-1}\mathfrak{g}\}\|_{L^{2}}^{2}\lesssim\sum_{0\leq\alpha\leq N}\|\nu^{-1}\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{\alpha}\mathfrak{g}\|_{L^{2}}^{2},\ N\geq 2,\,0<q<q^{\prime}<1, \tag{1.32}\]
by losing velocity derivatives; see Section 2.2 for details. We point out that the loss of velocity derivatives is natural since the interior parts and viscous boundary layers always possess enough regularity with respect to \(v\in\mathbb{R}^{3}\).
The construction of the Knudsen layers is more delicate for the Boltzmann equation of soft potentials. Noting (1.22), solving the Knudsen layer is equivalent to studying the following linear boundary value problem
\[\begin{cases}v_{3}\partial_{\eta}f+\nu^{0}(v)f-K^{0}f=\mathfrak{g},\\ f(0,v)|_{v_{3}>0}=f(0,R_{\eta}v),\\ \lim_{\eta\to\infty}f(\eta,v)=0.\end{cases} \tag{1.33}\]
Especially, noting the right hand side (RHS) of (1.22), we have to obtain at least polynomial space decay for the solution of (1.33) to continue the construction of the Hilbert expansion. For the hard sphere case, one can obtain even exponential decay with the help of the strong effect of the collision frequency \(\nu(v)\cong 1+|v|\). However, it is hard for the cases of soft potentials since the effect of the collision frequency \(\nu(v)\cong(1+|v|)^{\kappa}\to 0\) as \(|v|\to\infty\) is very weak.
To solve (1.33), we first establish an _a priori_ uniform \(L^{\infty}\) estimate for an approximate problem (see (3.17)), i.e.,
\[\|w_{l}f^{\lambda}\|_{L^{\infty}_{\eta,v}}+|w_{l}f^{\lambda}|_{L^ {\infty}(\gamma_{+})}\leq C\Big{(}\|\sqrt{\nu^{0}}f^{\lambda}\|_{L^{2}_{\eta,v }}+\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}+|w_{l+4}r|_{L^{\infty}( \gamma_{+})}\Big{)}, \tag{1.34}\]
see Lemma 3.3 for details. Here we point out that the constant in (1.34) is independent of the length of domain \(\Omega=[0,d]\). For soft potentials, since the collision effect is weak, the key point is to take the number of collisions with the boundaries to depend on \(v\), that is \(k=\tilde{k}_{0}|v_{3}|(1+|v|)^{|\kappa|}\) with \(\tilde{k}_{0}\gg 1\).
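Heuristically, a particle with velocity \(v\) crosses \(\Omega=[0,d]\) in time \(d/|v_{3}|\), so after \(k\) such crossings the accumulated damping exponent is

\[\nu^{0}(v)\cdot k\cdot\frac{d}{|v_{3}|}\cong(1+|v|)^{\kappa}\cdot\tilde{k}_{0}|v_{3}|(1+|v|)^{|\kappa|}\cdot\frac{d}{|v_{3}|}=\tilde{k}_{0}\,d,\]

which is uniformly large in \(v\) once \(\tilde{k}_{0}\gg 1\); this is why the number of boundary collisions must be taken velocity-dependent for soft potentials.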
Under some constraint conditions, we can obtain the following \(L^{2}_{\eta,v}\) decay estimate by losing velocity weight arguments
\[\int_{0}^{d}(1+\eta)^{n}\|w_{l}f\|_{\nu}^{2}d\eta\leq C_{n}\int_{0}^{d}(1+\eta)^{2p_{n}}\|w_{l+2n+2}g\|_{L^{2}_{v}}^{2}d\eta,\quad p_{n}>\frac{n}{2}+1, \tag{1.35}\]
see Lemmas 3.9-3.10 for details. For the space decay rate in \(L^{\infty}_{\eta,v}\), we multiply (1.33) by \((1+\eta)^{n}\) to obtain
\[v_{3}\partial_{\eta}\{(1+\eta)^{n}f\}+\nu^{0}(v)\{(1+\eta)^{n}f\}-K^{0}\{(1+ \eta)^{n}f\}=(1+\eta)^{n}\mathfrak{g}+nv_{3}(1+\eta)^{n-1}f,\]
which yields that
\[\|w_{l}\,(1+\eta)^{n}f\|_{L^{\infty}_{\eta,v}} \lesssim\|w_{l+4}\,(1+\eta)^{n-1}f\|_{L^{\infty}_{\eta,v}}+\|( \nu^{0})^{\frac{1}{2}}(1+\eta)^{n}f\|_{L^{2}_{\eta,v}}\] \[+\|(\nu^{0})^{-1}w_{l}\,(1+\eta)^{n}\mathfrak{g}\|_{L^{\infty}_{ \eta,v}}.\]
Then, using (1.35) and an induction argument on \(n\), we finally obtain that
\[\|w_{l}\,(1+\eta)^{n}f\|_{L^{\infty}_{\eta,v}} \lesssim\|w_{l+4n+4}\,(1+\eta)^{q_{n}}\mathfrak{g}\|_{L^{\infty}_ {\eta,v}},\quad\text{for }q_{n}>n+\frac{3}{2}.\]
With the above estimates, we obtain the existence of solutions to the Knudsen boundary layer problem with sufficient space decay in \(L^{\infty}_{\eta,v}\).
With the help of the above estimates on \(\mathbf{L}^{-1}\) and the Knudsen boundary layer, by the same arguments as in [20], we can establish the Hilbert expansion of the Boltzmann equation of soft potentials with multi-scales in half-space.
The paper is organized as follows. In Section 2, we give some basic estimates on the collision operator and establish the decay estimate of \(\mathbf{L}^{-1}\) for soft potentials. Section 3 is devoted to the existence of the Knudsen boundary layer of soft potentials with enough space decay rate. In Section 4, we construct the Hilbert expansion of the soft Boltzmann equation and prove Theorem 1.1.
**Notations.** Throughout the present paper, \(C\) denotes a generic positive constant which may vary from line to line. And \(C(a),C(b),\cdots\) denote generic positive constants depending on \(a,\ b,\cdots\), respectively, which may also vary from line to line. We use \(\langle\cdot,\cdot\rangle\) to denote the standard \(L^{2}\) inner product in \(\mathbb{R}^{3}_{v}\). \(\|\cdot\|_{L^{2}}\) denotes the standard \(L^{2}(\mathbb{R}^{3}_{+}\times\mathbb{R}^{3}_{v})\)-norm, \(\|\cdot\|_{L^{\infty}}\) denotes the \(L^{\infty}(\mathbb{R}^{3}_{+}\times\mathbb{R}^{3}_{v})\)-norm and \(\|\cdot\|_{\nu}\) denotes \(\langle\nu\cdot,\cdot\rangle^{\frac{1}{2}}\).
## 2. Some Estimates for Soft Boltzmann Operators
### Preliminaries
It follows from (1.12) that \(\mathbf{L}\mathfrak{h}=\nu(v)\mathfrak{h}-K\mathfrak{h}\) with
\[\nu(v)=\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}B(v-u,\omega)\mu(u)\,d \omega du\cong(1+|v|)^{\kappa}\quad\text{and}\quad K\mathfrak{h}=K_{1} \mathfrak{h}-K_{2}\mathfrak{h}, \tag{2.1}\]
where
\[(K_{1}\mathfrak{h})(v)=\mu^{\frac{1}{2}}(v)\iint_{\mathbb{R}^{3}\times\mathbb{ S}^{2}}\mathfrak{h}(u)\mu^{\frac{1}{2}}(u)B(v-u,\omega)\,d\omega du, \tag{2.2}\]
and
\[(K_{2}\mathfrak{h})(v) =\frac{1}{\sqrt{\mu}}\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}B(v-u,\omega)[\mu(v^{\prime})\sqrt{\mu(u^{\prime})}\mathfrak{h}(u^{\prime})+\mu(u^{\prime})\sqrt{\mu(v^{\prime})}\mathfrak{h}(v^{\prime})]dud\omega\] \[=\mu^{\frac{1}{2}}(v)\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}[\mu^{-\frac{1}{2}}(v^{\prime})\mathfrak{h}(v^{\prime})+\mu^{-\frac{1}{2}}(u^{\prime})\mathfrak{h}(u^{\prime})]\mu(u)B(v-u,\omega)\,d\omega du\] \[=2\mu^{\frac{1}{2}}(v)\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}\mu^{-\frac{1}{2}}(v^{\prime})\mathfrak{h}(v^{\prime})\mu(u)B(v-u,\omega)\,d\omega du. \tag{2.3}\]
By standard arguments as in [11], we can rewrite \(K_{i}\mathfrak{h}=\int_{\mathbb{R}^{3}}k_{i}(u,v)\mathfrak{h}(u)du\) with
\[k_{1}(u,v)=C|v-u|^{\kappa}\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(u),\]
\[0\leq k_{2}(u,v)\leq\frac{C_{\kappa}}{|u-v|^{\frac{3-\kappa}{2}}}\exp\Big{\{}-\frac{1}{8}|v-u|^{2}-\frac{1}{8}\frac{(|v-\mathfrak{u}|^{2}-|u-\mathfrak{u}|^{2})^{2}}{|u-v|^{2}}\Big{\}}.\]
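The formula for \(k_{1}\) simply records that the \(\omega\)-integral in (2.2) contributes a constant: since \(\theta\) is the angle between \(v-u\) and \(\omega\), by rotational symmetry

\[(K_{1}\mathfrak{h})(v)=\Big(\int_{\mathbb{S}^{2}}\beta(\theta)\,d\omega\Big)\int_{\mathbb{R}^{3}}|v-u|^{\kappa}\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(u)\mathfrak{h}(u)\,du,\]

so that \(C=\int_{\mathbb{S}^{2}}\beta(\theta)\,d\omega\leq 2\pi\beta_{0}\) by the Grad cutoff condition.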
It is well-known that there is a positive constant \(c_{0}>0\) such that
\[\langle\mathbf{L}\mathfrak{h},\mathfrak{h}\rangle\geq c_{0}\| \{\mathbf{I}-\mathbf{P}\}\mathfrak{h}\|_{\nu}^{2}.\]
For soft potentials, motivated by [37], we define a monotone cutoff function \(\chi_{z}(s)\in C^{\infty}(0,\infty)\) satisfying
\[\chi_{z}(s)\equiv 0\text{ for }0\leq s\leq z,\quad\chi_{z}(s)\equiv 1\text{ for }s\geq 2z,\quad 0\leq\chi_{z}(s)\leq 1\text{ for all }s>0, \tag{2.4}\]
where \(z\) is a parameter. Define
\[(K^{m}\mathfrak{h})(v) =\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}B(|v-u|,\theta)\tilde{ \chi}_{m}(|v-u|)\sqrt{\mu(u)\mu(v)}\mathfrak{h}(u)d\omega du\] \[-\int_{\mathbb{R}^{3}}\int_{\mathbb{S}^{2}}B(v-u,\theta)\tilde{ \chi}_{m}(|v-u|)\sqrt{\mu(u)\mu(u^{\prime})}\mathfrak{h}(v^{\prime})d\omega du\] \[-\int_{\mathbb{R}^{3}}B(|v-u|,\theta)\tilde{\chi}_{m}(|v-u|) \sqrt{\mu(u)\mu(v^{\prime})}\mathfrak{h}(u^{\prime})d\omega du\] \[=:K_{1}^{m}\mathfrak{h}(v)-K_{2}^{m}\mathfrak{h}(v),\]
and \(K^{c}=K-K^{m}\), where \(\tilde{\chi}_{m}=1-\chi_{m}\). We denote
\[(K^{m}\mathfrak{h})(v)=\int_{\mathbb{R}^{3}}k^{m}(v,u)\mathfrak{ h}(u)du,\quad(K^{c}\mathfrak{h})(v)=\int_{\mathbb{R}^{3}}k^{c}(v,u)\mathfrak{ h}(u)du. \tag{2.5}\]
**Lemma 2.1** ([9]).: _For any \(0<m\leq 1\), it holds that_
\[|(K^{m}\mathfrak{h})(v)|\leq Cm^{3+\kappa}e^{-\frac{|v-\mathfrak{u}|^{2}}{10}}\|\mathfrak{h}\|_{L^{\infty}_{v}}, \tag{2.6}\]
_where \(C>0\) is independent of \(m\). The kernels \(k^{m}(v,u)\) and \(k^{c}(v,u)\) satisfy_
\[|k^{m}(v,u)|\leq C_{\kappa}\{|v-u|^{\kappa}+|v-u|^{-\frac{3-\kappa}{2}}\}e^{-\frac{|v-\mathfrak{u}|^{2}+|u-\mathfrak{u}|^{2}}{16}}, \tag{2.7}\]
_and_
\[|k^{c}(v,u)|\leq \frac{C_{\kappa}m^{a(\kappa-1)}}{|v-u|^{1+\frac{(1-a)}{2}(1-\kappa)}}\frac{1}{(1+|v-\mathfrak{u}|+|u-\mathfrak{u}|)^{a(1-\kappa)}}e^{-\frac{(|v-\mathfrak{u}|^{2}-|u-\mathfrak{u}|^{2})^{2}}{16|v-u|^{2}}}\] \[+C|v-u|^{\kappa}e^{-\frac{|v-\mathfrak{u}|^{2}}{4}}e^{-\frac{|u-\mathfrak{u}|^{2}}{4}}, \tag{2.8}\]
_where \(a\in[0,1]\) is an arbitrary constant and \(C_{\kappa}\) depends only on \(\kappa\)._
**Remark 2.2**.: _The original version of Lemma 2.1 was proved in [9] for the global Maxwellian. It is direct to check that it is still valid for the local Maxwellian. We omit the details for simplicity of presentation._
Denote
\[\tilde{w}(v)=(1+|v|^{2})^{\frac{1}{2}}\mu^{-\mathfrak{a}}\quad \text{with }l\geq 0,0\leq\mathfrak{a}<\frac{1}{2},\]
and
\[K^{c}_{\tilde{w}}h\equiv\tilde{w}K^{c}(\frac{h}{\tilde{w}})= \int_{\mathbb{R}^{3}}k^{c}_{\tilde{w}}(v,u)h(u)du.\]
Then, from Lemma 2.1, it is clear that
\[\int_{\mathbb{R}^{3}}|k^{c}_{\tilde{w}}(v,u)|e^{\frac{|v-u|^{2}}{ 32}}du \leq Cm^{\kappa-1}(1+|v|)^{\kappa-2}, \tag{2.9}\] \[\int_{\mathbb{R}^{3}}|k^{c}_{\tilde{w}}(v,u)|e^{\frac{|v-u|^{2}}{ 32}}du \leq C(1+|v|)^{-1}, \tag{2.10}\]
where \(C\) is a constant independent of \(m\).
**Lemma 2.3** ([34]).: _Let \(\Gamma(\mathfrak{h},\mathfrak{g})=\frac{1}{\sqrt{\mu}}Q(\sqrt{\mu}\mathfrak{h}, \sqrt{\mu}\mathfrak{g})\). For \(\kappa\in(-3,0)\), it holds that_
\[\Big{|}\int_{\mathbb{R}^{3}}\Gamma(\mathfrak{g}_{1},\mathfrak{g}_{2}) \mathfrak{g}_{3}dv\Big{|}\leq C\{\|\mathfrak{g}_{3}\|_{\nu}\|\mathfrak{g}_{2 }\|_{\nu}\|\varpi_{k}\mathfrak{g}_{1}\|_{L^{\infty}}+\|\mathfrak{g}_{3}\|_{ \nu}\|\mathfrak{g}_{1}\|_{\nu}\|\varpi_{k}\mathfrak{g}_{2}\|_{L^{\infty}}\}, \quad k>\frac{3}{2}. \tag{2.11}\]
### Estimate for \(\mathbf{L}^{-1}\)
To consider the derivatives for operators \(K_{1},K_{2}\), we denote \(\xi:=u-v\). Then one can rewrite \(K_{1}\mathfrak{h}\) as
\[K_{1}\mathfrak{h}(v) =\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|u-v|^{\kappa}\beta( \theta)\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(u)\mathfrak{h}(u)\,d\omega du\] \[=\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|\xi|^{\kappa}\beta( \theta)\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(v+\xi)\mathfrak{h}(v+\xi)\,d \omega d\xi,\]
which yields that
\[\partial_{v}^{\alpha}(K_{1}\mathfrak{h}) =\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|\xi|^{\kappa}\beta(\theta)\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(v+\xi)\partial_{v}^{\alpha}\mathfrak{h}(v+\xi)\,d\omega d\xi\] \[\quad+\sum_{0\leq\alpha^{\prime}<\alpha}C_{\alpha}^{\alpha^{\prime}}\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|\xi|^{\kappa}\partial_{v}^{\alpha-\alpha^{\prime}}\big{(}\beta(\theta)\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(v+\xi)\big{)}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}(v+\xi)\,d\omega d\xi,\]
where \(\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})\) is a multi-index, and \(\partial_{v}^{\alpha}:=\partial_{v_{1}}^{\alpha_{1}}\partial_{v_{2}}^{\alpha_{2}}\partial_{v_{3}}^{\alpha_{3}}\). For a small positive number \(\epsilon\), it is direct to check that
\[|\partial_{v}^{\alpha-\alpha^{\prime}}\big{(}\beta(\theta)\mu^{\frac{1}{2}}( v)\mu^{\frac{1}{2}}(v+\xi)\big{)}|\leq C_{\epsilon,N}\mu^{\frac{1}{2}-\epsilon}(v) \mu^{\frac{1}{2}-\epsilon}(v+\xi).\]
Hence one obtains
\[|\partial_{v}^{\alpha}(K_{1}\mathfrak{h})| \leq\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|\xi|^{\kappa}\beta (\theta)\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(v+\xi)|\partial_{v}^{\alpha} \mathfrak{h}(v+\xi)|\,d\omega d\xi \tag{2.12}\] \[\quad+C_{\epsilon,N}\sum_{0\leq\alpha^{\prime}<\alpha}\int_{ \mathbb{R}^{3}}|\xi|^{\kappa}\mu^{\frac{1}{2}-\epsilon}(v)\mu^{\frac{1}{2}- \epsilon}(v+\xi)|\partial_{v}^{\alpha^{\prime}}\mathfrak{h}(v+\xi)|\,d\xi\] \[\leq\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|v-u|^{\kappa}\beta (\theta)\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(u)|\partial_{u}^{\alpha} \mathfrak{h}(u)|\,d\omega du\] \[\quad+C_{\epsilon,N}\sum_{0\leq\alpha^{\prime}<\alpha}\int_{ \mathbb{R}^{3}}|v-u|^{\kappa}\mu^{\frac{1}{2}-\epsilon}(v)\mu^{\frac{1}{2}- \epsilon}(u)|\partial_{u}^{\alpha^{\prime}}\mathfrak{h}(u)|\,du\] \[=:I_{1}+I_{2}.\]
It follows from \(\mu^{\frac{1}{2}}(v)\mu^{\frac{1}{2}}(u)=\mu^{\frac{1}{2}}(v^{\prime})\mu^{ \frac{1}{2}}(u^{\prime})\) and (2.3) that
\[K_{2}\mathfrak{h} =2\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}\mu^{\frac{1}{2}}(u^ {\prime})\mu^{\frac{1}{2}}(u)\mathfrak{h}(v^{\prime})|v-u|^{\kappa}\beta( \theta)\,d\omega du\] \[=2\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}\mu^{\frac{1}{2}}(v+ \xi_{\perp})\mu^{\frac{1}{2}}(v+\xi)\mathfrak{h}(v+\xi_{\parallel})|\xi|^{ \kappa}\beta(\theta)\,d\omega d\xi,\]
which implies that
\[\partial_{v}^{\alpha}K_{2}\mathfrak{h}=2\iint_{\mathbb{R}^{3} \times\mathbb{S}^{2}}\mu^{\frac{1}{2}}(v+\xi_{\perp})\mu^{\frac{1}{2}}(v+\xi) \partial_{v}^{\alpha}\mathfrak{h}(v+\xi_{\parallel})|\xi|^{\kappa}\beta( \theta)\,d\omega d\xi\] \[\quad+2\sum_{0\leq\alpha^{\prime}<\alpha}C_{\alpha}^{\alpha^{ \prime}}\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}\partial_{v}^{\alpha-\alpha^{ \prime}}\big{(}\mu^{\frac{1}{2}}(v+\xi_{\perp})\mu^{\frac{1}{2}}(v+\xi)\big{)} \partial_{v}^{\alpha^{\prime}}\mathfrak{h}(v+\xi_{\parallel})|\xi|^{\kappa} \beta(\theta)\,d\omega d\xi,\]
where \(\xi_{\shortparallel}:=[(u-v)\cdot\omega]\omega\) and \(\xi_{\perp}:=\xi-\xi_{\shortparallel}\). It is clear that
\[|\partial_{v}^{\alpha-\alpha^{\prime}}\big{(}\mu^{\frac{1}{2}}(v+\xi_{\perp})\mu^{\frac{1}{2}}(v+\xi)\big{)}|\leq C_{\epsilon,N}\mu^{\frac{1}{2}-\epsilon}(v+\xi_{\perp})\mu^{\frac{1}{2}-\epsilon}(v+\xi).\]
Then we can obtain
\[|\partial_{v}^{\alpha}(K_{2}\mathfrak{h})| \leq 2\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|\xi|^{\kappa}\beta(\theta)\mu^{\frac{1}{2}}(v+\xi_{\perp})\mu^{\frac{1}{2}}(v+\xi)|\partial_{v}^{\alpha}\mathfrak{h}(v+\xi_{\parallel})|\,d\omega d\xi\] \[\quad+C_{\epsilon,N}\sum_{0\leq\alpha^{\prime}<\alpha}\iint_{\mathbb{R}^{3}\times\mathbb{S}^{2}}|\xi|^{\kappa}\beta(\theta)\mu^{\frac{1}{2}-\epsilon}(v+\xi_{\perp})\mu^{\frac{1}{2}-\epsilon}(v+\xi)|\partial_{v}^{\alpha^{\prime}}\mathfrak{h}(v+\xi_{\parallel})|\,d\omega d\xi\] \[=:J_{1}+J_{2}. \tag{2.13}\]
and
\[\begin{split}|\langle\nu^{-1}\mu^{-\frac{q}{2}}I_{2}^{1-\chi_{r}},\mu^ {-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle|&\leq C \sum_{0\leq\alpha^{\prime}<\alpha}\left(\int_{\mathbb{R}^{3}}|\partial_{u}^{ \alpha^{\prime}}\mathfrak{h}(u)|^{2}\nu(u)\,du\right)^{\frac{1}{2}}\\ &\qquad\qquad\times\left(\int_{\mathbb{R}^{3}}|\partial_{v}^{ \alpha}\mathfrak{h}(v)|^{2}\nu(v)\,dv\right)^{\frac{1}{2}}\\ &\leq\frac{1}{2}\|\partial_{v}^{\alpha}\mathfrak{h}\|_{\nu}^{2}+C \sum_{0\leq\alpha^{\prime}<\alpha}\|\partial_{v}^{\alpha^{\prime}}\mathfrak{h} \|_{\nu}^{2},\end{split} \tag{2.18}\]
where \(C\) depends on \(\rho,\mathfrak{u},T,r,q\) and we have chosen \(0<\epsilon<\frac{1-q}{2}\) in the expression of \(I_{2}^{1-\chi_{r}}\). For \(|\langle\nu^{-1}\mu^{-\frac{q}{2}}(J_{i})^{1-\chi_{r}},\mu^{-\frac{q}{2}} \partial_{v}^{\alpha}\mathfrak{h}\rangle|\), using similar arguments as in [30, Lemma 3.2], one can get
\[|\langle\nu^{-1}\mu^{-\frac{q}{2}}J_{i}^{1-\chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle|\leq C\sum_{0\leq\alpha^{\prime}\leq\alpha}\|\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{\nu}^{2},\quad i=1,2,\]
which, together with (2.17) and (2.18), yields (2.16). Therefore the proof of Lemma 2.6 is completed.
**Lemma 2.7**.: _Let \(N\in\mathbb{N}\), \(|\alpha|\leq N\), \(0<q<q^{\prime}<1\), \(-3<\kappa<0\). For any \(r>0\), there exists a constant \(C>0\) such that the following estimates hold:_
\[\big{|}\langle\nu^{-1}\mu^{-\frac{q}{2}}(\partial_{v}^{\alpha}K_{1}\mathfrak{ h})^{\chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle \big{|}\leq C\exp\big{(}-\frac{(1-q)r^{2}}{32T}\big{)}\sum_{0\leq\alpha^{ \prime}\leq\alpha}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{ h}\|_{L_{v}^{2}}^{2}, \tag{2.19}\]
_and_
\[|\langle\nu^{-1}\mu^{-\frac{q}{2}}(\partial_{v}^{\alpha}K_{2}\mathfrak{h})^{ \chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle|\leq C \Big{\{}\frac{1}{1+r}\sum_{0\leq\alpha^{\prime}\leq\alpha}\|\mu^{-\frac{q}{2} }\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}^{2}+\exp(\frac{2qr^{ 2}}{T})\sum_{0\leq\alpha^{\prime}\leq\alpha}\|\partial_{v}^{\alpha^{\prime}} \mathfrak{h}\|_{\nu}^{2}\,\Big{\}}. \tag{2.20}\]
_The constant \(C\) depends only on \(\rho,\mathfrak{u},T,q,N\)._
**Proof.** We divide it into several steps. We point out that the constants \(C\) in the proof do not depend on \(r\).
_Step 1._ Estimates on \(|\langle\nu^{-1}\mu^{-\frac{q}{2}}(\partial_{v}^{\alpha}K_{1}\mathfrak{h})^{ \chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle|\). Noting (2.12) and the definition of \(\chi_{r}(s)\), we have
\[\begin{split}&\big{|}\langle\nu^{-1}\mu^{-\frac{q}{2}}I_{1}^{ \chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle\big{|}\\ &\leq C\Big{(}\iint_{|u-v|\geq r}|\partial_{v}^{\alpha}\mathfrak{h} (v)\mu^{-\frac{q}{2}}(v)|^{2}\mu^{\frac{3(1-q)}{8}}(v)\mu^{\frac{1+q}{2}}(u)|u -v|^{\kappa}dudv\Big{)}^{\frac{1}{2}}\\ &\quad\times\Big{(}\iint_{|u-v|\geq r}|\partial_{u}^{\alpha} \mathfrak{h}(u)\mu^{-\frac{q}{2}}(u)|^{2}\mu^{\frac{3(1-q)}{8}}(v)\mu^{\frac{1+q }{2}}(u)|u-v|^{\kappa}dudv\Big{)}^{\frac{1}{2}}\,,\end{split}\]
where we have used \(|\nu^{-1}(v)\mu^{\frac{1-q}{8}}(v)|\leq C(\rho,\mathfrak{u},T,q)<\infty\). Then it is direct to obtain
\[\big{|}\langle\nu^{-1}\mu^{-\frac{q}{2}}I_{1}^{\chi_{r}},\mu^{-\frac{q}{2}} \partial_{v}^{\alpha}\mathfrak{h}\rangle\big{|}\leq C\exp\left(-\frac{(1-q)r^{ 2}}{32T}\right)\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{ 2}}^{2}. \tag{2.21}\]
Taking \(0<\epsilon<\frac{1-q}{8}\), one has
\[\begin{split}\big{|}\langle\nu^{-1}\mu^{-\frac{q}{2}}I_{2}^{\chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle\big{|}&\leq C\exp\left(-\frac{(1-q)r^{2}}{32T}\right)\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}\sum_{0\leq\alpha^{\prime}<\alpha}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}\\ &\leq C\exp\left(-\frac{(1-q)r^{2}}{32T}\right)\sum_{0\leq\alpha^{\prime}\leq\alpha}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}^{2},\end{split}\]
which, together with (2.21), yields (2.19).
_Step 2._ Recalling (2.13) and [30, Lemma 3.3], it is direct to have
\[|\langle\nu^{-1}\mu^{-\frac{q}{2}}J_{1}^{\chi_{r}},\mu^{-\frac{q}{2}}\partial_{v }^{\alpha}\mathfrak{h}\rangle|\leq C\Big{\{}\frac{1}{1+r}\|\mu^{-\frac{q}{2}} \partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}^{2}+\exp(\frac{2qr^{2}}{T})\| \partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}^{2}\Big{\}}.\]
For \(|\langle\nu^{-1}\mu^{-\frac{q}{2}}J_{2}^{\chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{ \alpha}\mathfrak{h}\rangle|\), taking \(0<\epsilon<\frac{1-q}{2}\), we can obtain
\[|\langle\nu^{-1}\mu^{-\frac{q}{2}}J_{2}^{\chi_{r}},\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle|\] \[\leq\frac{C}{1+r}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}\sum_{0\leq\alpha^{\prime}<\alpha}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}+C\exp(\frac{2qr^{2}}{T})\|\partial_{v}^{\alpha}\mathfrak{h}\|_{\nu}\sum_{0\leq\alpha^{\prime}<\alpha}\|\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{\nu}\] \[\leq\frac{C}{1+r}\sum_{0\leq\alpha^{\prime}\leq\alpha}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}^{2}+C\exp(\frac{2qr^{2}}{T})\sum_{0\leq\alpha^{\prime}\leq\alpha}\|\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{\nu}^{2}.\]
Therefore the proof of Lemma 2.7 is completed.
**Lemma 2.8** (Weighted hypocoercivity of \(\partial_{v}^{\alpha}\mathbf{L}\)).: _Let \(N\in\mathbb{N}\), \(|\alpha|\leq N\), \(0<q<1\) and \(-3<\kappa<0\). Then there is a constant \(C=C(\rho,\mathfrak{u},T,q,N)>0\) such that_
\[\langle\nu^{-1}\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}(\mathbf{L}\mathfrak{h}),\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle\geq\frac{1}{2}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}^{2}-C\sum_{0\leq\alpha^{\prime}<\alpha}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}^{2}-C\sum_{0\leq\alpha^{\prime}\leq\alpha}\|\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{\nu}^{2}. \tag{2.22}\]
Proof.: A direct calculation shows that
\[\nu^{-1}(\partial_{v}^{\alpha}\mathbf{L}\mathfrak{h})=\nu^{-1} \partial_{v}^{\alpha}(\nu\mathfrak{h})-\nu^{-1}(\partial_{v}^{\alpha}K)^{1- \chi_{r}}\mathfrak{h}-\nu^{-1}(\partial_{v}^{\alpha}K_{1}\mathfrak{h})^{\chi _{r}}+\nu^{-1}(\partial_{v}^{\alpha}K_{2}\mathfrak{h})^{\chi_{r}}.\]
Noting \(\nu^{-1}(v)\partial_{v}^{\alpha}(\nu(v))\leq C_{N}\), we have
\[\langle\nu^{-1}\mu^{-\frac{q}{2}}(\partial_{v}^{\alpha}(\nu \mathfrak{h})),\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle\geq \frac{7}{8}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}^ {2}-C_{N}\sum_{0\leq\alpha^{\prime}<\alpha}\|\mu^{-\frac{q}{2}}\partial_{v}^{ \alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}^{2}. \tag{2.23}\]
Then it follows from (2.23) and Lemmas 2.6-2.7 that
\[\langle\nu^{-1}\mu^{-\frac{q}{2}}(\partial_{v}^{\alpha}\mathbf{L}\mathfrak{h}),\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\rangle\] \[\geq\frac{7}{8}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}^{2}-C\big{\{}\sum_{0\leq\alpha^{\prime}<\alpha}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}^{2}+\sum_{0\leq\alpha^{\prime}\leq\alpha}\|\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{\nu}^{2}\big{\}}\] \[\quad-C(\rho,\mathfrak{u},T,q,N)[\exp(-\frac{(1-q)r^{2}}{32T})+\frac{1}{1+r}]\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}^{2}.\]
Taking \(r\) large enough, one gets (2.22). Therefore the proof of Lemma 2.8 is completed.
For later use, we recall a result on the hypocoercivity from [15].
**Lemma 2.9** ([15]).: _Let \(-3<\kappa<0\) and \(|\alpha|\leq N\). Then there exists a constant \(C(\rho,\mathfrak{u},T,N)>0\) such that_
\[\langle\partial_{v}^{\alpha}(\mathbf{L}\mathfrak{h}),\partial_{v}^{\alpha} \mathfrak{h}\rangle\geq\frac{1}{2}\|\partial_{v}^{\alpha}\mathfrak{h}\|_{\nu}^ {2}-C\|\mathfrak{h}\|_{\nu}^{2}.\]
Proof of Proposition 2.5.: Let \(\mathfrak{g}\in\mathcal{N}^{\perp}\), we denote \(\mathfrak{h}:=\mathbf{L}^{-1}\mathfrak{g}\), that is, \(\mathbf{L}\mathfrak{h}=\nu\mathfrak{h}-K\mathfrak{h}=\mathfrak{g}\). By Sobolev's embedding theorem, we have for \(N\geq 2\) that
\[|\mu^{-\frac{q}{2}}\mathbf{L}^{-1}\mathfrak{g}| \leq C\sum_{|\alpha|\leq N}\|\partial_{v}^{\alpha}(\mu^{-\frac{q}{2}}\mathfrak{h})\|_{L_{v}^{2}} \tag{2.24}\] \[\leq C\sum_{|\alpha|\leq N}\|\mu^{-\frac{q}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}+C\sum_{|\alpha|\leq N}\sum_{0\leq\alpha^{\prime}<\alpha}C_{\alpha}^{\alpha^{\prime}}\|(\partial_{v}^{\alpha-\alpha^{\prime}}\mu^{-\frac{q}{2}})(\partial_{v}^{\alpha^{\prime}}\mathfrak{h})\|_{L_{v}^{2}}\] \[\leq C\sum_{|\alpha|\leq N}\|\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}},\]
where we have used the fact that
\[\mu^{\frac{q^{\prime}}{2}}\sum_{0\leq\alpha^{\prime}<\alpha}|\partial_{v}^{\alpha-\alpha^{\prime}}\mu^{-\frac{q}{2}}|\leq C\qquad\text{for any }0<q<q^{\prime}<1,\]
in the last inequality.
It follows from Lemmas 2.8-2.9 and \(\|\mathfrak{h}\|_{\nu}^{2}\lesssim\langle\mathbf{L}\mathfrak{h},\mathfrak{h}\rangle\) that
\[\|\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{\alpha}\mathfrak{h}\|_{ L_{v}^{2}}^{2} \leq 2\langle\nu^{-1}\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{ \alpha}(\mathbf{L}\mathfrak{h}),\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{ \alpha}\mathfrak{h}\rangle+C\sum_{0\leq\alpha^{\prime}<\alpha}\|\mu^{-\frac{q ^{\prime}}{2}}\partial_{v}^{\alpha^{\prime}}\mathfrak{h}\|_{L_{v}^{2}}^{2}\] \[\quad+C\|\partial_{v}^{\alpha}\mathfrak{h}\|_{\nu}^{2}\] \[\leq 16\|\nu^{-1}\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{\alpha} \mathfrak{g}\|_{L_{v}^{2}}^{2}+\frac{1}{4}\|\mu^{-\frac{q^{\prime}}{2}} \partial_{v}^{\alpha}\mathfrak{h}\|_{L_{v}^{2}}^{2}+C\sum_{0\leq\alpha^{ \prime}<\alpha}\|\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{\alpha^{\prime}} \mathfrak{h}\|_{L_{v}^{2}}^{2}\] \[\quad+C\|\nu^{-1}\partial_{v}^{\alpha}\mathfrak{g}\|_{L_{v}^{2}} ^{2}+\frac{1}{4}\|\mathfrak{h}\|_{\nu}^{2},\]
which, together with (2.24), yields that
\[|\mu^{-\frac{q}{2}}\mathbf{L}^{-1}\mathfrak{g}|\leq C\sum_{| \alpha|\leq N}\|\mu^{-\frac{q^{\prime}}{2}}\partial_{v}^{\alpha}\mathfrak{h} \|_{L_{v}^{2}}\leq C\sum_{|\alpha|\leq N}\|\nu^{-1}\mu^{-\frac{q^{\prime}}{2}} \partial_{v}^{\alpha}\mathfrak{g}\|_{L_{v}^{2}},\]
where the constant depend only \(\rho,\mathfrak{u},T\) and \(q^{\prime}\). Thus the proof of Proposition 2.5 is finished.
**Remark 2.10**.: _Denote \(\mathbf{L}_{0}\mathfrak{h}=\nu^{0}(v)\mathfrak{h}-K^{0}\mathfrak{h}=\mathbf{ L}\mathfrak{h}|_{x_{3}=0}\). Similarly, we have \(K^{0}\mathfrak{h}=K^{0,c}\mathfrak{h}+K^{0,m}\mathfrak{h}\). Define_
\[w_{l}(v)=(1+|v|^{2})^{\frac{l}{2}}\mu_{0}^{-\mathfrak{a}}, \tag{2.25}\]
_then we can define \(k_{w}^{0,m},k_{w}^{0,c}\) similarly as in section 2.1. It is obvious that we can have similar results as in (2.6)-(2.10) for \(K^{0,m},K^{0,c},k_{w}^{0,m},k_{w}^{0,c}\)._
_For \(\mathbf{L}_{0}\), one also has_
\[\langle\mathbf{L}_{0}\mathfrak{h},\mathfrak{h}\rangle\geq c_{1} \|\{\mathbf{I}-\mathbf{P}_{0}\}\mathfrak{h}\|_{\nu}^{2}, \tag{2.26}\]
_since \(\nu^{0}\cong\nu\cong(1+|v|)^{\kappa}\). \(\mathbf{L}_{0}^{-1}\) can be defined as_
\[(\mathbf{L}_{0}|_{\mathcal{N}_{0}^{\perp}})^{-1}:\mathcal{N}_{0}^{\perp} \rightarrow\mathcal{N}_{0}^{\perp}.\]
_Let \(\mathfrak{g}\in\mathcal{N}_{0}^{\perp}\), from Proposition 2.5, it is direct to know that_
\[|\mu_{0}^{-\frac{q}{2}}\mathbf{L}_{0}^{-1}\mathfrak{g}(v)|\leq C \sum_{|\alpha|\leq N}\|\partial_{v}^{\alpha}(\mu_{0}^{-\frac{q}{2}}\mathbf{L}_ {0}^{-1}\mathfrak{g})\|_{L_{v}^{2}}\leq C\sum_{|\alpha|\leq N}\|(\nu^{0})^{-1} \mu_{0}^{-\frac{q^{\prime}}{2}}\partial_{v}^{\alpha}\mathfrak{g}\|_{L_{v}^{2}},\quad\text{for }v\in\mathbb{R}^{3}. \tag{2.27}\]
_where the constant \(C=C(\rho^{0},\mathfrak{u}^{0},T^{0},q^{\prime},N)>0\). These estimates will be used to study the viscous and Knudsen boundary layers._
**Remark 2.11**.: _We point out that all above results for soft potentials in this section are also valid for hard potentials. The proofs are very similar._
## 3. Existence of a Steady Linear Boltzmann Equation
To construct the Knudsen layer solutions, we study the following equation:
\[\begin{cases}v_{3}\partial_{\eta}\mathfrak{f}+\mathbf{L}_{0}\mathfrak{f}=S, \quad(\eta,v)\in[0,\infty)\times\mathbb{R}^{3},\\ \mathfrak{f}(0,v)|_{v_{3}>0}=\mathfrak{f}(0,R_{x}v)+f_{b}(v),\\ \lim_{\eta\rightarrow\infty}\mathfrak{f}(\eta,v)=0.\end{cases} \tag{3.1}\]
where \(S\) is a given function and \(f_{b}(v)\) is defined only for \(v_{3}<0\); we always assume that it is extended by \(0\) for \(v_{3}>0\). For soft potentials, there has been no previous work on Knudsen layer solutions with the specular reflection boundary condition.
**Theorem 3.1**.: _Recall \(w_{l}\) in (2.25). Assume \(l>2,\ 0\leq\mathfrak{a}<\frac{1}{2}\),_
\[\int_{\mathbb{R}^{3}}(1,v_{1}-\mathfrak{u}_{1}^{0},v_{2}-\mathfrak{ u}_{2}^{0},|v-\mathfrak{u}^{0}|^{2})\sqrt{\mu_{0}}Sdv =0, \tag{3.2}\] \[\int_{\mathbb{R}^{3}}(1,v_{1}-\mathfrak{u}_{1}^{0},v_{2}- \mathfrak{u}_{2}^{0},|v-\mathfrak{u}^{0}|^{2})v_{3}\sqrt{\mu_{0}}f_{b}dv =0,\]
_and_
\[\|(1+\eta)^{q_{k}}w_{l+4k+4}S\|_{L^{\infty}_{\eta,v}}<\infty,\qquad \text{for}\quad k\in\mathbb{N}_{+},\ q_{k}>\max\{3,k+\frac{3}{2}\}, \tag{3.3}\] \[\|w_{l+4k+5}f_{b}\|_{L^{\infty}_{v}}<\infty,\qquad\text{for} \quad k\in\mathbb{N}_{+},\]
_then there exists a unique solution \(\mathfrak{f}\) of (3.1) such that_
\[\|(1+\eta)^{k}w_{l}\mathfrak{f}\|_{L^{\infty}_{\eta,v}}+|(1+\eta)^{k}w_{l}\mathfrak{f}(0,\cdot)|_{L^{\infty}_{v}}\]
_where \(C>0\) is a positive constant._
**Remark 3.2**.: _As indicated in [19], in general, it is hard to obtain the normal derivative estimates for the boundary value problem (3.1). Fortunately, it is easy to obtain the tangential and time derivative estimates for the solution of (3.1), i.e.,_
\[\sum_{i+j\leq r}\|(1+\eta)^{k}w_{l}\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}\mathfrak{f}(t,x_{{}_{\shortparallel}},\cdot,\cdot)\|_{L^{\infty}_{\eta,v}}+\|w_{l}\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}\mathfrak{f}(t,x_{{}_{\shortparallel}},0,\cdot)\|_{L^{\infty}_{v}}\] \[\leq C\sum_{i+j\leq r}\Big{\{}\|w_{l+4k+5}\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}f_{b}(t,x_{{}_{\shortparallel}},\cdot)\|_{L^{\infty}_{v}}+\|(1+\eta)^{q_{k}}w_{l+4k+4}\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}S\|_{L^{\infty}_{\eta,v}}\Big{\}}, \tag{3.5}\]
_provided the right hand side of (3.5) is bounded and \(q_{k}>\max\{3,k+\frac{3}{2}\}\). We point out that such an estimate (3.5) is enough for us to establish the Hilbert expansion. To prove the estimate (3.5), we study the equation of \(\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}(\sqrt{\mu_{0}}\mathfrak{f})\). It is direct to check that the new source term and boundary perturbation term satisfy the solvability conditions in Theorem 3.1, hence one can obtain the estimate for \(\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}(\sqrt{\mu_{0}}\mathfrak{f})\) by applying Theorem 3.1, therefore (3.5) follows immediately._
_Moreover, taking \(L^{\infty}_{x_{{}_{\shortparallel}}}\cap L^{2}_{x_{{}_{\shortparallel}}}\) over (3.5), one obtains_
\[\sum_{i+j\leq r}\sup_{t\in[0,\tau^{\delta}]}\Big{\{}\|(1+\eta)^{k}w_{l}\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}f(t)\|_{L^{\infty}_{x_{{}_{\shortparallel}},\eta,v}\cap L^{2}_{x_{{}_{\shortparallel}}}L^{\infty}_{\eta,v}}+\|w_{l}\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}f(t,\cdot,0,\cdot)\|_{L^{\infty}_{x_{{}_{\shortparallel}},v}\cap L^{2}_{x_{{}_{\shortparallel}}}L^{\infty}_{v}}\Big{\}}\] \[\leq C\sup_{t\in[0,\tau^{\delta}]}\Big{\{}\sum_{i+j\leq r}\|w_{l+4k+5}\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}f_{b}(t)\|_{L^{\infty}_{x_{{}_{\shortparallel}},v}\cap L^{2}_{x_{{}_{\shortparallel}}}L^{\infty}_{v}}\] \[\qquad\qquad+\sum_{i+j\leq r}\|(1+\eta)^{q_{k}}w_{l+4k+4}\partial_{t}^{i}\nabla_{{}_{\shortparallel}}^{j}S(t)\|_{L^{\infty}_{x_{{}_{\shortparallel}},\eta,v}\cap L^{2}_{x_{{}_{\shortparallel}}}L^{\infty}_{\eta,v}}\Big{\}}\Big{\}},\quad q_{k}>\max\{3,k+\frac{3}{2}\}. \tag{3.6}\]
Let \(\Upsilon(\eta)\) be a monotonic smooth cut-off function
\[\Upsilon(\eta)\equiv 1,\ \text{for}\ \eta\in[0,1],\quad\text{and}\quad \Upsilon(\eta)\equiv 0,\ \text{for}\ \eta\in[2,+\infty).\]
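One admissible choice, recorded here merely for definiteness (any cut-off with these properties works), is
\[\Upsilon(\eta)=\frac{\psi(2-\eta)}{\psi(2-\eta)+\psi(\eta-1)}\quad\text{with}\quad\psi(s):=e^{-1/s}\mathbf{1}_{\{s>0\}},\]
which is smooth, monotone decreasing, and satisfies \(\Upsilon\equiv 1\) on \([0,1]\) and \(\Upsilon\equiv 0\) on \([2,\infty)\).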
Define
\[f(x,v):=\mathfrak{f}(x,v)-\Upsilon(\eta)f_{b}(v),\quad x=(x_{{}_{\shortparallel}},\eta),\]
then (3.1) is rewritten as
\[\begin{cases}v_{3}\partial_{\eta}f+\mathbf{L}_{0}f=g:=S-v_{3} \partial_{\eta}\Upsilon(\eta)f_{b}(v)-\Upsilon(\eta)\mathbf{L}_{0}f_{b},\quad( \eta,v)\in[0,\infty)\times\mathbb{R}^{3},\\ f(0,v)|_{v_{3}>0}=f(0,R_{\eta}v),\\ \lim_{\eta\to\infty}f(\eta,v)=0,\end{cases} \tag{3.7}\]
where \(x_{{}_{\shortparallel}}\) is regarded as a parameter. The conditions (3.2) imply that
\[\int_{\mathbb{R}^{3}}(1,v_{1}-\mathfrak{u}_{1}^{0},v_{2}-\mathfrak{u}_{2}^{0},|v-\mathfrak{u}^{0}|^{2})\sqrt{\mu_{0}}g\ dv=0. \tag{3.8}\]
Define the viscosity and thermal conductivity coefficients by
\[\begin{split}\mu(T^{0})&:=T^{0}\langle\mathcal{A}^{0}_{3 1},\ \mathbf{L}_{0}^{-1}\mathcal{A}^{0}_{31}\rangle\equiv T^{0}\langle\mathcal{A}^{0}_{ ij},\ \mathbf{L}_{0}^{-1}\mathcal{A}^{0}_{ij}\rangle,\quad\forall i\neq j,\\ \kappa(T^{0})&:=\frac{2}{3}T^{0}\langle\mathcal{B}^ {0}_{3},\ \mathbf{L}_{0}^{-1}\mathcal{B}^{0}_{3}\rangle\equiv\frac{2}{3}T^{0}\langle \mathcal{B}^{0}_{i},\ \mathbf{L}_{0}^{-1}\mathcal{B}^{0}_{i}\rangle,\end{split} \tag{3.9}\]
where \(i,j=1,2,3\) and \(\mathcal{A}^{0}_{ij}\), \(\mathcal{B}^{0}_{i}\) are
\[\begin{split}\mathcal{A}^{0}_{ij}&:=\left\{\frac{(v _{i}-\mathfrak{u}^{0}_{i})(v_{j}-\mathfrak{u}^{0}_{j})}{T^{0}}-\delta_{ij}\frac {|v-\mathfrak{u}^{0}|^{2}}{3T^{0}}\right\}\sqrt{\mu_{0}},\\ \mathcal{B}^{0}_{i}&:=\frac{v_{i}-\mathfrak{u}^{0}_ {i}}{2\sqrt{T^{0}}}\left(\frac{|v-\mathfrak{u}^{0}|^{2}}{T^{0}}-5\right)\sqrt {\mu_{0}}.\end{split} \tag{3.10}\]
Using Lemma 4.4 in [3], one has \(\langle T^{0}\mathcal{A}^{0}_{33},\mathbf{L}_{0}^{-1}\mathcal{A}^{0}_{33} \rangle=\frac{4}{3}\mu(T^{0})\).
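One way to see this identity, recorded here only as a side remark, is through rotation invariance: since \(\mathbf{L}_{0}\) commutes with rotations, there is a scalar \(\alpha(T^{0})\) such that
\[\langle\mathcal{A}^{0}_{ij},\ \mathbf{L}_{0}^{-1}\mathcal{A}^{0}_{kl}\rangle=\alpha(T^{0})\Big{(}\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}-\frac{2}{3}\delta_{ij}\delta_{kl}\Big{)},\]
so that \(T^{0}\alpha(T^{0})=\mu(T^{0})\) by (3.9), and hence \(\langle T^{0}\mathcal{A}^{0}_{33},\mathbf{L}_{0}^{-1}\mathcal{A}^{0}_{33}\rangle=\big{(}1+1-\frac{2}{3}\big{)}T^{0}\alpha(T^{0})=\frac{4}{3}\mu(T^{0})\).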
### Approximate solutions and uniform estimate
This section is devoted to the existence result for the linearized problem (3.7). To prove the existence of solutions, we first consider a truncated approximate problem with a penalization term:
\[\begin{cases}\delta f^{\delta}+v_{3}\partial_{\eta}f^{\delta}+\mathbf{L}_{0} f^{\delta}=g,\\ f^{\delta}(\eta,v)|_{\gamma_{-}}=f^{\delta}(\eta,R_{\eta}v),\end{cases}\quad( \eta,v)\in\Omega_{d}\times\mathbb{R}^{3}, \tag{3.11}\]
where \(\Omega_{d}=(0,d)\), \(d\geq 1\) and \(\delta\in(0,1]\). We define
\[h^{\delta}(\eta,v):=w_{l}(v)f^{\delta}(\eta,v),\]
then (3.11) can be rewritten as
\[\begin{cases}\delta h^{\delta}+v_{3}\partial_{\eta}h^{\delta}+\nu^{0}(v)h^{ \delta}=K^{0}_{w_{l}}h^{\delta}+w_{l}g,\\ h^{\delta}(\eta,v)|_{\gamma_{-}}=h^{\delta}(\eta,R_{\eta}v),\end{cases} \tag{3.12}\]
where \(K^{0}_{w_{l}}h=w_{l}K^{0}(\frac{h}{w_{l}}).\) Then it is clear that
\[K^{0}_{w_{l}}h(v)=\int_{\mathbb{R}^{3}}k^{0}_{w_{l}}(v,u)h(u)du\quad\text{with }\quad k^{0}_{w_{l}}(v,u)=w_{l}(v)k^{0}(v,u)w_{l}(u)^{-1}. \tag{3.13}\]
For the approximate problem (3.12), the most difficult part is to obtain the \(L^{\infty}_{\eta,v}\)-bound. Motivated by [10], multiplying (3.12) by \((1+|v|^{2})^{\frac{|\kappa|}{2}}\), one gets
\[\begin{cases}(\nu^{0}(v)+\delta)(1+|v|^{2})^{\frac{|\kappa|}{2}}h^{\delta}+v_ {3}(1+|v|^{2})^{\frac{|\kappa|}{2}}\partial_{\eta}h^{\delta}=(1+|v|^{2})^{ \frac{|\kappa|}{2}}K^{0}_{w_{l}}h^{\delta}+(1+|v|^{2})^{\frac{|\kappa|}{2}}w_{ l}g,\\ h^{\delta}(\eta,v)|_{\gamma_{-}}=h^{\delta}(\eta,R_{\eta}v).\end{cases} \tag{3.14}\]
Denote \(\hat{\nu}_{\delta}=(\nu^{0}(v)+\delta)(1+|v|^{2})^{\frac{|\kappa|}{2}},\hat{\nu}=\nu^{0}(v)(1+|v|^{2})^{\frac{|\kappa|}{2}},\hat{v}_{3}=v_{3}(1+|v|^{2})^{\frac{|\kappa|}{2}},\) then \((3.14)_{1}\) becomes
\[\hat{\nu}_{\delta}h^{\delta}+\hat{v}_{3}\partial_{\eta}h^{\delta}=(1+|v|^{2})^ {\frac{|\kappa|}{2}}K^{0}_{w_{l}}h^{\delta}+(1+|v|^{2})^{\frac{|\kappa|}{2}}w_ {l}g.\]
For given \((t,\eta,v)\), let \([X(s),V(s)]\) be the speeded backward characteristics for (3.14). Then \([X(s),V(s)]\) is determined by
\[\begin{cases}\frac{dX(s)}{ds}=\hat{V}_{3}(s):=V_{3}(s)(1+|V|^{2})^{\frac{| \kappa|}{2}},\quad\frac{dV(s)}{ds}=0,\\ [X(t),V(t)]=[\eta,v],\end{cases}\]
which yields that
\[[X(s),V(s)]=[X(s;t,\eta,v),V(s;t,\eta,v)]=[\eta-(t-s)\hat{v}_{3},v].\]
Now for each \((\eta,v)\) with \(\eta\in\bar{\Omega}_{d}\) and \(v_{3}\neq 0\), we define its backward exit time \(t_{\mathbf{b}}(\eta,v)\geq 0\) to be the last moment at which the back-time straight line \([X(-\tau;0,\eta,v),V(-\tau;0,\eta,v)]\) remains in \(\bar{\Omega}_{d}\):
\[t_{\mathbf{b}}(\eta,v)=\sup\{s\geq 0:\eta-\tau\hat{v}_{3}\in\bar{\Omega}_{d}\text{ for }0 \leq\tau\leq s\}.\]
We also define the last position
\[\eta_{\mathbf{b}}(\eta,v)=\eta(t_{\mathbf{b}})=\eta-t_{\mathbf{b}}(\eta,v)\hat{v}_ {3}\in\partial\Omega_{d}.\]
It is obvious that \(X(s)\), \(t_{\mathbf{b}}(\eta,v)\) and \(\eta_{\mathbf{b}}(\eta,v)\) are independent of the horizontal velocity \(v_{\shortparallel}:=(v_{1},v_{2})\).
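Concretely, for \(v_{3}\neq 0\) one has the explicit expressions (an elementary computation, recorded for later reference)
\[t_{\mathbf{b}}(\eta,v)=\begin{cases}\eta/\hat{v}_{3},&\hat{v}_{3}>0,\\ (d-\eta)/|\hat{v}_{3}|,&\hat{v}_{3}<0,\end{cases}\qquad\eta_{\mathbf{b}}(\eta,v)=\begin{cases}0,&\hat{v}_{3}>0,\\ d,&\hat{v}_{3}<0.\end{cases}\]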
Let \(\eta\in\bar{\Omega}_{d}\), \((\eta,v)\notin\gamma_{0}\cup\gamma_{-}\), and set \((t_{0},\eta_{0},v_{0})=(t,\eta,v)\); we inductively define
\[(t_{k+1},\eta_{k+1},v_{k+1})=(t_{k}-t_{\mathbf{b}}(\eta_{k},v_{k}),\eta_{\mathbf{b}}(\eta_{k},v_{k}),R_{\eta_{k+1}}v_{k}),\quad k\geq 0,\]
and the back-time cycle as
\[\begin{cases}X_{cl}(s;t,\eta,v)=\sum_{k}\mathbf{1}_{(t_{k+1},t_{k})}(s)\{\eta _{k}-\hat{v}_{k,3}(t_{k}-s)\},\\ \\ V_{cl}(s;t,\eta,v)=\sum_{k}\mathbf{1}_{(t_{k+1},t_{k})}(s)v_{k}.\end{cases} \tag{3.15}\]
Clearly, for \(k\geq 1\) and \((\eta,v)\notin\gamma_{0}\cup\gamma_{-}\), it holds that
\[\begin{split}&\eta_{k}=\frac{1-(-1)^{k}}{2}\eta_{1}+\frac{1+(-1 )^{k}}{2}\eta_{2},\quad v_{k,\shortparallel}=v_{0,\shortparallel},\quad v_{k,3}=(-1)^{k}v_{0,3},\\ & t_{k}-t_{k+1}=t_{1}-t_{2}=\frac{d}{|\hat{v}_{0,3}|}>0,\quad\nu^{0}(v) \equiv\nu^{0}(v_{k}).\end{split} \tag{3.16}\]
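To make the bookkeeping in (3.15)-(3.16) concrete, the following small numerical sketch (purely illustrative; the function name and parameters are ours and are not part of the analysis) traces the speeded specular back-time cycle on the slab \([0,d]\) and confirms that, after the first reflection, consecutive bounce times differ by the constant period \(d/|\hat{v}_{0,3}|\).

```python
import numpy as np

def back_time_cycle(eta, v, d, kappa, n_bounces=6):
    """Trace the speeded back-time specular cycle (3.15)-(3.16) on [0, d].

    Illustrative helper only (not from the paper).  Assumes v[2] != 0,
    i.e. (eta, v) avoids the grazing set gamma_0.
    """
    v = np.asarray(v, dtype=float)
    speed = (1.0 + v @ v) ** (abs(kappa) / 2.0)      # (1+|v|^2)^{|kappa|/2}
    nodes = [(0.0, float(eta), v.copy())]            # (back-time elapsed, eta_k, v_k)
    t_el, eta_k, v_k = 0.0, float(eta), v.copy()
    for _ in range(n_bounces):
        v3_hat = v_k[2] * speed                      # speeded vertical velocity
        # backward straight line X(tau) = eta_k - tau * v3_hat exits [0, d] at:
        t_b = eta_k / v3_hat if v3_hat > 0 else (eta_k - d) / v3_hat
        t_el += t_b
        eta_k = 0.0 if v3_hat > 0 else d             # wall that is reached
        v_k = v_k * np.array([1.0, 1.0, -1.0])       # specular reflection R_eta
        nodes.append((t_el, eta_k, v_k.copy()))
    return nodes

# After the first bounce, eta_k alternates between the two walls and the
# increments t_k - t_{k+1} are all equal to d/|v3_hat|, as stated in (3.16).
for t_k, eta_k, v_k in back_time_cycle(eta=0.3, v=(0.5, -0.2, 0.8), d=1.0, kappa=-1.5):
    print(f"elapsed = {t_k:8.4f}, eta_k = {eta_k:4.1f}, v_k3 = {v_k[2]:+.2f}")
```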
Now we are in a position to construct solutions to (3.11) or equivalently (3.12). We first present a useful \(L^{\infty}\)_a priori_ uniform estimate which will be used frequently.
**Lemma 3.3**.: _For any given \(\lambda\in[0,1]\), let \(f^{\lambda}\) be the solution of the following system:_
\[\begin{cases}\delta f^{\lambda}+v_{3}\partial_{\eta}f^{\lambda}+\nu^{0}(v)f^{ \lambda}-\lambda K^{0}f^{\lambda}=g,\\ \\ f^{\lambda}(\eta,v)|_{\gamma_{-}}=(1-\frac{1}{n})f^{\lambda}(\eta,R_{\eta}v)+ r(\eta,R_{\eta}v),\end{cases} \tag{3.17}\]
_where \(n>1\) is an integer and \(g,r\) are given. Assume \(\|w_{l}f^{\lambda}\|_{L^{\infty}_{\eta,v}}+|w_{l}f^{\lambda}|_{L^{\infty}( \gamma_{+})}<\infty\), \(l>2\), then it holds that_
\[\|w_{l}f^{\lambda}\|_{L^{\infty}_{\eta,v}}+|w_{l}f^{\lambda}|_{L^{\infty}( \gamma_{+})}\leq C\|\sqrt{\nu^{0}}f^{\lambda}\|_{L^{2}_{\eta,v}}+C\{\|(\nu^{0 })^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}+|w_{l+4}r|_{L^{\infty}(\gamma_{+})}\}. \tag{3.18}\]
_We point out that the constant \(C>0\) is independent of \(\lambda\), \(d\) and \(n\)._
**Remark 3.4**.: _For hard potentials, a similar uniform estimate has been obtained in [22]. For soft potentials, since the effect of the collision frequency is weak, i.e., \(\nu^{0}(v)\sim(1+|v|)^{\kappa}\to 0\) as \(|v|\to\infty\), we have to be more careful. In fact, one has to lose some velocity weight to control the boundary perturbation \(r\); see (3.18)._
**Proof.** Denote \(h^{\lambda}:=w_{l}f^{\lambda}\), then it holds that
\[\begin{cases}\hat{\nu}_{\delta}h^{\lambda}+\frac{dh^{\lambda}}{ds}=(1+|v|^{2} )^{\frac{|\kappa|}{2}}\lambda K^{0}_{w_{l}}h^{\lambda}+(1+|v|^{2})^{\frac{| \kappa|}{2}}w_{l}g,\\ \\ h^{\lambda}(\eta,v)|_{\gamma_{-}}=(1-\frac{1}{n})h^{\lambda}(\eta,R_{\eta}v)+ w_{l}r(\eta,R_{\eta}v).\end{cases}\]
Integrating along the characteristic line, one gets
\[h^{\lambda}(\eta,v) =(1-\frac{1}{n})^{k}h^{\lambda}(\eta_{k},v_{k})e^{-\hat{\nu}_{ \delta}(v)(t-t_{k})}+\lambda\sum_{i=0}^{k-1}(1-\frac{1}{n})^{i}\int_{t_{i+1}} ^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}K^{0}_ {w_{l}}h^{\lambda}ds\] \[+\sum_{i=0}^{k-1}(1-\frac{1}{n})^{i}\int_{t_{i+1}}^{t_{i}}e^{- \hat{\nu}_{\delta}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}w_{l}gds+\sum_{i=0} ^{k-1}(1-\frac{1}{n})^{i}(w_{l}r)(\eta_{i},v_{i+1})e^{-\hat{\nu}_{\delta}(v)(t- t_{i})}\] \[=:I_{1}+I_{2}+I_{3}+I_{4}. \tag{3.19}\]
Taking \(k=\tilde{k}_{0}|v_{3}|(1+|v|^{2})^{\frac{|\kappa|}{2}}\) with \(\tilde{k}_{0}\gg 1\) to be chosen later, it holds that
\[I_{1}\leq e^{-\nu_{0}(k-1)t_{\mathbf{b}}}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}} \leq e^{-\frac{1}{2}\nu_{0}\tilde{k}_{0}d}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}, \tag{3.20}\]
where \(\nu_{0}>0\) is a constant depending on \(\rho^{0},\mathfrak{u}^{0},T^{0}\). It is obvious that
\[I_{3}\leq\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}. \tag{3.21}\]
For \(I_{4}\), noting \(|v_{i}|=|v|\), one has
\[I_{4}\leq k(1+|v|)^{-4}|w_{l+4}r|_{L^{\infty}(\gamma_{+})}\leq C|w_{l+4}r|_{L^{ \infty}(\gamma_{+})}. \tag{3.22}\]
To estimate \(I_{2}\), we divide it into two parts:
\[\sum_{i=0}^{k-1}(1-\frac{1}{n})^{i}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}\lambda(1+|v|^{2})^{\frac{|\kappa|}{2}}K_{w_{l}}^{0}h^{\lambda}(X_{cl}(s),v_{i})ds\] \[\leq\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}|K_{w_{l}}^{0,c}h^{\lambda}(X_{cl}(s),v_{i})|ds\] \[+\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}|K_{w_{l}}^{0,m}h^{\lambda}(X_{cl}(s),v_{i})|ds. \tag{3.23}\]
For the second term on the RHS of (3.23), one has from (2.6) that
\[\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}|K_{w_{l}}^{0,m}h^{\lambda}(X_{cl}(s),v_{i})|ds\leq Cm^{3+\kappa}e^{-\frac{|v|^{2}}{20}}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}. \tag{3.24}\]
For the first term on the RHS of (3.23), we use (3.19) again to obtain
\[\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}|K_{w_{l}}^{0,c}h^{\lambda}(X_{cl}(s),v_{i})|ds\] \[=\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}\Big{|}\int_{\mathbb{R}^{3}}k_{w_{l}}^{0,c}(v_{i},v^{\prime})h^{\lambda}(X_{cl}(s),v^{\prime})dv^{\prime}\Big{|}ds\] \[\leq\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}\int_{\mathbb{R}^{3}}|k_{w_{l}}^{0,c}(v_{i},v^{\prime})|\times(1+|v^{\prime}|^{2})^{\frac{|\kappa|}{2}}dv^{\prime}ds\] \[\qquad\times\sum_{j=0}^{k^{\prime}-1}\int_{t^{\prime}_{j+1}}^{t^{\prime}_{j}}e^{-\hat{\nu}_{\delta}(v^{\prime})(s-s_{1})}\int_{\mathbb{R}^{3}}|k_{w_{l}}^{0,c}(v^{\prime}_{j},v^{\prime\prime})h^{\lambda}(X^{\prime}_{cl}(s_{1}),v^{\prime\prime})|dv^{\prime\prime}ds_{1}\] \[\quad+C\big{(}m^{3+\kappa}+m^{\kappa-1}e^{-\frac{1}{2}\nu_{0}\tilde{k}_{0}d}\big{)}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}+Cm^{\kappa-1}\big{\{}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}+|w_{l+4}r|_{L^{\infty}(\gamma_{+})}\big{\}}, \tag{3.25}\]
where we have used (3.20)-(3.22), (3.24) and (2.9)-(2.10), and denoted \(X^{\prime}_{cl}(s_{1})=X_{cl}(s_{1};s,X_{cl}(s),v^{\prime})\), while \(t^{\prime}_{j},v^{\prime}_{j}\) are the corresponding times and velocities for the specular cycles. Here \(k^{\prime}=\tilde{k}_{0}|v^{\prime}_{3}|(1+|v^{\prime}|^{2})^{\frac{|\kappa|}{2}}\).
For the first term on RHS of (3.25), we divide the proof into several cases.
_Case 1._\(|v|\geq N\). Using (2.9), the first term on the RHS of (3.25) is bounded by
\[Cm^{\kappa-1}\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}\int_{\mathbb{R}^{3}}|k_{w_{l}}^{0,c}(v_{i},v^{\prime})|(1+|v^{\prime}|)^{-2}dv^{\prime}ds\cdot\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}\] \[\leq Cm^{2(\kappa-1)}(1+|v|)^{-2}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}\leq C\frac{m^{2(\kappa-1)}}{N^{2}}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}, \tag{3.26}\]
where we have used the fact \(|v|\equiv|v_{i}|\) for \(i=0,1,\cdots\). It is important that the constant in (3.26) is independent of \(k\).
_Case 2._ \(|v|\leq N,|v^{\prime}|\geq 2N\) _or_ \(|v^{\prime}|\leq 2N,|v^{\prime\prime}|\geq 3N\). Noting \(|v_{i}|=|v|\) and \(|v^{\prime}_{j}|=|v^{\prime}|\), we get either \(|v_{i}-v^{\prime}|\geq N\) or \(|v^{\prime}_{j}-v^{\prime\prime}|\geq N\); then at least one of the following is valid for some small positive constant \(0<c_{2}\leq\frac{1}{32}\):
\[\begin{split}|k^{0,c}_{w_{l}}(v_{i},v^{\prime})|&\leq e^{-c_{2}N^{2}}|k^{0,c}_{w_{l}}(v_{i},v^{\prime})\exp\big{(}c_{2}|v_{i}-v^{\prime}|^{2}\big{)}|,\\ |k^{0,c}_{w_{l}}(v^{\prime}_{j},v^{\prime\prime})|&\leq e^{-c_{2}N^{2}}|k^{0,c}_{w_{l}}(v^{\prime}_{j},v^{\prime\prime})\exp\big{(}c_{2}|v^{\prime}_{j}-v^{\prime\prime}|^{2}\big{)}|,\end{split} \tag{3.27}\]
which, together with (2.9), yields that
\[\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}_{\delta}(v)(t-s)}\left\{ \iint_{|v|\leq N,|v^{\prime}|\geq 2N}+\iint_{|v^{\prime}|\leq 2N,|v^{\prime \prime}|\geq 3N}\right\}(\cdots)dv^{\prime\prime}ds_{1}dv^{\prime}ds\]
\[\leq Cm^{\kappa-1}e^{-c_{2}N^{2}}\|h^{\lambda}\|_{L^{\infty}_{q,v}}\leq\frac{ Cm^{\kappa-1}}{N}\|h^{\lambda}\|_{L^{\infty}_{q,v}}. \tag{3.28}\]
We also point out that the constant in (3.28) is independent of \(k\).
_Case 3. \(|v|\leq N,|v^{\prime}|\leq 2N\), \(|v^{\prime\prime}|\leq 3N\)._ We denote \(\mathcal{D}=\{|v|\leq N,\,|v^{\prime}|\leq 2N,\,|v^{\prime\prime}|\leq 3N\}\). Noting \(\hat{\nu}_{\delta}(v)\geq\nu_{0}\), the corresponding part is bounded by
\[\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\nu_{0}(t-s)}\iint_{\mathcal{D}}|k^{0,c}_{w_{l}}(v_{i},v^{\prime})k^{0,c}_{w_{l}}(v^{\prime}_{j},v^{\prime\prime})|(1+|v|^{2})^{\frac{|\kappa|}{2}}(1+|v^{\prime}|^{2})^{\frac{|\kappa|}{2}}dv^{\prime\prime}dv^{\prime}ds\] \[\qquad\qquad\times\sum_{j=0}^{k^{\prime}-1}\left(\int_{t^{\prime}_{j}-\frac{1}{N^{6}}}^{t^{\prime}_{j}}+\int_{t^{\prime}_{j+1}}^{t^{\prime}_{j}-\frac{1}{N^{6}}}\right)e^{-\nu_{0}(s-s_{1})}|h^{\lambda}(X^{\prime}_{cl}(s_{1}),v^{\prime\prime})|ds_{1}=:P_{1}+P_{2}.\]
For \(P_{1}\), noting \(|v^{\prime}|\leq 2N\), one has
\[P_{1}\leq C\frac{k^{\prime}m^{2\kappa-2}}{N^{6}}\|h^{\lambda}\|_{L^{\infty}_{q,v}}\leq C\frac{\tilde{k}_{0}m^{2(\kappa-1)}N^{4}}{N^{6}}\|h^{\lambda}\|_{L^{ \infty}_{q,v}}\leq\frac{Cm^{2(\kappa-1)}}{N^{2}}\|h^{\lambda}\|_{L^{\infty}_{ q,v}}.\]
For \(P_{2}\), a direct calculation shows
\[P_{2} \leq\sum_{i=0}^{k-1}\sum_{j=0}^{k^{\prime}-1}\int_{t_{i+1}}^{t_{i}}\int_{t^{\prime}_{j+1}}^{t^{\prime}_{j}-\frac{1}{N^{6}}}\iint_{\mathcal{D}}|k^{0,c}_{w_{l}}(v_{i},v^{\prime})k^{0,c}_{w_{l}}(v^{\prime}_{j},v^{\prime\prime})|(1+|v|^{2})^{\frac{|\kappa|}{2}}(1+|v^{\prime}|^{2})^{\frac{|\kappa|}{2}}\] \[\qquad\qquad\times|e^{-\nu_{0}(t-s_{1})}h^{\lambda}(X^{\prime}_{cl}(s_{1}),v^{\prime\prime})|dv^{\prime\prime}dv^{\prime}ds_{1}ds\] \[\leq C_{N}\sum_{i=0}^{k-1}\sum_{j=0}^{k^{\prime}-1}\int_{t_{i+1}}^{t_{i}}\int_{t^{\prime}_{j+1}}^{t^{\prime}_{j}-\frac{1}{N^{6}}}e^{-\nu_{0}(t-s_{1})}\Big{[}\iint_{\mathcal{D}}\nu^{0}(v^{\prime\prime})|f^{\lambda}(X^{\prime}_{cl}(s_{1}),v^{\prime\prime})|^{2}dv^{\prime\prime}dv^{\prime}\Big{]}^{\frac{1}{2}}ds_{1}ds, \tag{3.29}\]
where we used the fact
\[\iint_{\mathcal{D}}|k^{0,c}_{w_{l}}(v_{i},v^{\prime})k^{0,c}_{w_{l}}(v^{\prime} _{j},v^{\prime\prime})|^{2}(1+|v|^{2})^{|\kappa|}(1+|v^{\prime}|^{2})^{|\kappa| }{w_{l}}^{2}(v^{\prime\prime})(\nu^{0})^{-1}(v^{\prime\prime})dv^{\prime}dv^{ \prime\prime}\leq C_{N}.\]
Define \(y:=\eta^{\prime}_{j}-\hat{v}^{\prime}_{j,3}(t^{\prime}_{j}-s_{1})=X^{\prime}_{cl}(s_{1})\). We have \(\eta^{\prime}_{j}=0\) or \(d\), and \(\hat{v}^{\prime}_{j,3}=(-1)^{j}\hat{v}^{\prime}_{0,3}\). For \(t^{\prime}_{j}=t^{\prime}_{j}(s_{1};s,X_{cl}(s),v^{\prime})\), it holds that
\[s-t^{\prime}_{j}=\begin{cases}\frac{X_{cl}(s)}{|\hat{v}^{\prime}_{0,3}|}+(j-1) \frac{d}{|\hat{v}^{\prime}_{0,3}|},\quad\text{for }v^{\prime}_{0,3}>0,\\ \frac{d-X_{cl}(s)}{|\hat{v}^{\prime}_{0,3}|}+(j-1)\frac{d}{|\hat{v}^{\prime}_{0,3 }|},\quad\text{for }v^{\prime}_{0,3}<0,\end{cases}\]
which yields that
\[y=\begin{cases}\eta^{\prime}_{j}-(-1)^{j}\Big{\{}\hat{v}^{\prime}_{0,3}(s-s_{1} )-[X_{cl}(s)+(j-1)d]\Big{\}},\quad\text{for }v^{\prime}_{0,3}>0,\\ \eta^{\prime}_{j}-(-1)^{j}\Big{\{}\hat{v}^{\prime}_{0,3}(s-s_{1})+[jd-X_{cl}(s) ]\Big{\}},\quad\text{for }v^{\prime}_{0,3}<0.\end{cases}\]
Since \(\eta^{\prime}_{j}=0\) or \(d\) is independent of \(v^{\prime}_{0,3}\), we have
\[\left|\frac{dy}{dv^{\prime}_{0,3}}\right|=(s-s_{1})\Big{\{}(1+|v^{\prime}|^{2})^{\frac{|\kappa|}{2}}+|\kappa|(1+|v^{\prime}|^{2})^{\frac{|\kappa|}{2}-1}(v^{\prime}_{0,3})^{2}\Big{\}}\geq\frac{1}{N^{6}},\quad\text{for }s_{1}\in[t^{\prime}_{j+1},t^{\prime}_{j}-\frac{1}{N^{6}}],\]
which yields that
\[\left(\iint_{\mathcal{D}}\nu^{0}(v^{\prime\prime})|f^{\lambda}(\eta^{\prime}_{j}-\hat{v}^{\prime}_{j,3}(t^{\prime}_{j}-s_{1}),v^{\prime\prime})|^{2}dv^{\prime}dv^{\prime\prime}\right)^{\frac{1}{2}}\leq C_{m,N}\|\sqrt{\nu^{0}}f^{\lambda}\|_{L^{2}_{\eta,v}}.\]
Combining the above estimates, the RHS of (3.29) is bounded by
\[\frac{Cm^{2(\kappa-1)}}{N^{2}}\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}+C_{m,N}\| \sqrt{\nu^{0}}f^{\lambda}\|_{L^{2}_{\eta,v}}.\]
Combining the above estimates, we obtain
\[\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}+|h^{\lambda}|_{L^{\infty}( \gamma_{+})} \leq C\big{(}m^{\kappa+3}+m^{\kappa-1}e^{-\frac{1}{2}\nu_{0}\bar{k} _{0}d}+\frac{m^{2(\kappa-1)}}{N}\big{)}\big{\{}\|h^{\lambda}\|_{L^{\infty}_{ \eta,v}}+|h^{\lambda}|_{L^{\infty}(\gamma_{+})}\big{\}}\] \[+C_{m,N}\|\sqrt{\nu^{0}}f^{\lambda}\|_{L^{2}_{\eta,v}}+C_{m}\{\|( \nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}+|w_{l+4}r|_{L^{\infty}(\gamma_{+} )}\}.\]
First taking \(m\) sufficiently small, and then \(\tilde{k}_{0}\) and \(N\) suitably large so that
\[C\big{(}m^{\kappa+3}+m^{\kappa-1}e^{-\frac{1}{2}\nu_{0}\tilde{k}_{0}d}+\frac{m^{2(\kappa-1)}}{N}\big{)}\leq\frac{1}{2},\]
then one has
\[\|h^{\lambda}\|_{L^{\infty}_{\eta,v}}+|h^{\lambda}|_{L^{\infty}(\gamma_{+})} \leq C\|\sqrt{\nu^{0}}f^{\lambda}\|_{L^{2}_{\eta,v}}+C\big{\{}\|(\nu^{0})^{-1} w_{l}g\|_{L^{\infty}_{\eta,v}}+|w_{l+4}r|_{L^{\infty}(\gamma_{+})}\big{\}}.\]
Therefore the proof of Lemma 3.3 is finished.
**Lemma 3.5**.: _Let \(\delta>0,d\geq 1\), \(n\geq n_{0}\), and \(l>2\). Assume \(\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}<\infty\). Then there exists a unique solution \(f^{n}\) to the following boundary value problem_
\[\begin{cases}\delta f^{n}+v_{3}\partial_{\eta}f^{n}+\nu^{0}(v)f^{n}-K^{0}f^{n} =g,\\ f^{n}(\eta,v)|_{\gamma_{-}}=(1-\frac{1}{n})f^{n}(\eta,R_{\eta}v),\end{cases} \quad(\eta,v)\in\Omega_{d}\times\mathbb{R}^{3}, \tag{3.30}\]
_satisfying_
\[\|w_{l}f^{n}\|_{L^{\infty}_{\eta,v}}+|w_{l}f^{n}|_{L^{\infty}(\gamma_{+})} \leq C_{\delta,d}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}},\]
_where the positive constant \(C_{\delta,d}>0\) depends only on \(\delta,d\). Moreover, if \(g\) is continuous in \(\Omega_{d}\times\mathbb{R}^{3}\), then \(f^{n}\) is continuous away from the grazing set \(\gamma_{0}\)._
**Proof.** We consider the solvability of the following boundary value problem
\[\begin{cases}\mathcal{L}_{\lambda}f:=\delta f+v_{3}\partial_{\eta}f+\nu^{0}(v )f-\lambda K^{0}f=g,\\ f(\eta,v)|_{\gamma_{-}}=(1-\frac{1}{n})f(\eta,R_{\eta}v),\end{cases} \tag{3.31}\]
for \(\lambda\in[0,1]\). For brevity, we denote by \(\mathcal{L}_{\lambda}^{-1}\) the solution operator associated with this problem, so that \(f:=\mathcal{L}_{\lambda}^{-1}g\) is a solution to the BVP (3.31). Our idea is to prove the existence of \(\mathcal{L}_{0}^{-1}\), and then to extend it to \(\mathcal{L}_{1}^{-1}\) by a continuity argument in \(\lambda\). We split the proof into several steps.
_Step 1._ In this step, we prove the existence of \(\mathcal{L}_{0}^{-1}\). We consider the following approximate sequence
\[\begin{cases}\mathcal{L}_{0}f^{i+1}=\delta f^{i+1}+v_{3}\partial_{\eta}f^{i+1} +\nu^{0}(v)f^{i+1}=g,\\ f^{i+1}(\eta,v)|_{\gamma_{-}}=(1-\frac{1}{n})f^{i}(\eta,R_{\eta}v),\end{cases} \tag{3.32}\]
for \(i=0,1,2,\cdots\), where we have set \(f^{0}\equiv 0\). We will construct \(L^{\infty}\) solutions to (3.32) for \(i=0,1,2,\cdots\), and establish uniform \(L^{\infty}\)-estimates.
Firstly, multiplying (3.32) by \(f^{i+1}\) and integrating the resultant equality over \(\Omega_{d}\times\mathbb{R}^{3}\), one obtains that
\[\delta\|f^{i+1}\|_{L^{2}_{\eta,v}}^{2}+\frac{1}{2}|f^{i+1}|_{L^{2}(\gamma_{+})}^{2}+\|\sqrt{\nu^{0}}f^{i+1}\|_{L^{2}_{\eta,v}}^{2}\] \[\leq\frac{1}{2}(1-\frac{1}{n})^{2}|f^{i}|_{L^{2}(\gamma_{+})}^{2}+C_{\delta}\|g\|_{L^{2}_{\eta,v}}^{2}+\frac{\delta}{2}\|f^{i+1}\|_{L^{2}_{\eta,v}}^{2}, \tag{3.33}\]
which yields that
\[\delta\|f^{i+1}\|^{2}_{L^{2}_{\eta,v}}+|f^{i+1}|^{2}_{L^{2}(\gamma_{+})}\leq(1- \frac{1}{n})^{2}|f^{i}|^{2}_{L^{2}(\gamma_{+})}+C_{\delta}\|g\|^{2}_{L^{2}_{\eta, v}}.\]
Considering the equation of \(f^{i+1}-f^{i}\), by similar energy estimate as above, one obtains
\[\delta\|f^{i+1}-f^{i}\|^{2}_{L^{2}_{\eta,v}}+|f^{i+1}-f^{i}|^{2}_ {L^{2}(\gamma_{+})}\] \[\leq(1-\frac{1}{n})^{2}|f^{i}-f^{i-1}|^{2}_{L^{2}(\gamma_{+})} \leq\cdots\leq(1-\frac{1}{n})^{2i}|f^{1}|^{2}_{L^{2}(\gamma_{+})}\] \[\leq C_{\delta}(1-\frac{1}{n})^{2i}\|g\|^{2}_{L^{2}_{\eta,v}}<\infty. \tag{3.34}\]
Noting \(1-\frac{1}{n}<1\), thus \(\{f^{i}\}_{i=0}^{\infty}\) is a Cauchy sequence in \(L^{2}\), i.e.,
\[|f^{i}-f^{j}|^{2}_{L^{2}(\gamma_{+})}+\|f^{i}-f^{j}\|^{2}_{L^{2}_{\eta,v}}\to 0,\quad\text{as }i,j\to\infty,\]
and we have, for \(i=0,1,2,\cdots\),
\[|f^{i}|^{2}_{L^{2}(\gamma_{+})}+\|f^{i}\|^{2}_{L^{2}_{\eta,v}}\leq C_{\delta} \|g\|^{2}_{L^{2}_{\eta,v}}. \tag{3.35}\]
Next we consider the uniform \(L^{\infty}_{\eta,v}\) estimate. Setting \(h^{i}=w_{l}f^{i}\), one has that
\[h^{i+1}(\eta,v)e^{\hat{\nu}_{\delta}(v)t}=(1-\frac{1}{n})h^{i}(\eta_{1},v_{1})e^{\hat{\nu}_{\delta}(v)t_{1}}+\int_{t_{1}}^{t}e^{\hat{\nu}_{\delta}(v)s}(1+|v|^{2})^{\frac{|\kappa|}{2}}w_{l}gds.\]
Then it is easy to obtain
\[\|h^{1}\|_{L^{\infty}_{\eta,v}}+|h^{1}|_{L^{\infty}(\gamma_{+})}\leq\|(\nu^{ 0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}.\]
Also, by iteration, it holds that
\[\|h^{i}\|_{L^{\infty}_{\eta,v}}+|h^{i}|_{L^{\infty}(\gamma_{+})}\leq C_{i}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}},\quad i=0,1,2,\cdots,\]
where the constants \(C_{i}\) depend on \(i\). Taking \(h^{i+1}-h^{i}\), similarly, one obtains
\[(h^{i+1}-h^{i})(\eta,v)=(1-\frac{1}{n})e^{-\hat{\nu}_{\delta}(t-t_{1})}(h^{i}- h^{i-1})(\eta_{1},v_{1}),\]
which yields that
\[\|h^{i+1}-h^{i}\|_{L^{\infty}_{\eta,v}}+|h^{i+1}-h^{i}|_{L^{\infty}(\gamma_{+})}\leq(1-\frac{1}{n})|(h^{i}-h^{i-1})|_{L^{\infty}(\gamma_{+})}\] \[\leq\cdots\leq(1-\frac{1}{n})^{i}|h^{1}|_{L^{\infty}(\gamma_{+})}\leq(1-\frac{1}{n})^{i}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}.\]
Since \(1-\frac{1}{n}<1\), \(\{h^{i}\}_{i=0}^{\infty}\) is a Cauchy sequence in \(L^{\infty}\). Passing to the limit \(i\to\infty\) in (3.32) therefore yields a solution, which establishes the existence of \(\mathcal{L}_{0}^{-1}\) with weighted \(L^{2}\) and \(L^{\infty}\) bounds.
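The mechanism behind (3.34) is nothing more than a geometric contraction driven by the boundary relaxation factor \(1-\frac{1}{n}\). The toy scalar model below (illustrative only; the constants \(A\) and \(b\) are made up and merely mimic a non-expansive trace map and a source) reproduces the fixed contraction rate used above.

```python
# Toy model of the sweep (3.32): each iterate solves the damped transport
# problem exactly and feeds (1 - 1/n) times the outgoing trace back in as
# incoming data, so successive differences contract geometrically.
n = 5             # boundary relaxation parameter from the scheme
A, b = 0.9, 1.0   # |A| <= 1 mimics the trace map, b mimics the source g
h, diffs = 0.0, []
for i in range(12):
    h_new = (1.0 - 1.0 / n) * A * h + b
    diffs.append(abs(h_new - h))
    h = h_new
# consecutive ratios equal (1 - 1/n) * A = 0.72 < 1, the geometric rate
print([round(diffs[i + 1] / diffs[i], 4) for i in range(5)])
```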
_Step 2._ Assume \(f\) is a solution to (3.31) and \(\|w_{l}f\|_{L^{\infty}_{\eta,v}}+|w_{l}f|_{L^{\infty}(\gamma_{+})}<\infty\). Multiplying (3.31) by \(f\) and integrating over \(\mathbb{R}^{3}\), one obtains
\[\delta\|f\|^{2}_{L^{2}_{v}}+\frac{d}{d\eta}\int_{\mathbb{R}^{3}}v_{3}f^{2}\ dv+\lambda c_{1}\|(\mathbf{I}-\mathbf{P}_{0})f\|^{2}_{ \nu}\leq\frac{\delta}{4}\|f\|^{2}_{L^{2}_{v}}+\frac{C}{\delta}\|g\|^{2}_{L^{2} _{v}}, \tag{3.36}\]
where we used
\[\langle f,\nu^{0}(v)f\rangle-\lambda\langle f,K^{0}f\rangle\geq\lambda c_{1}\| (\mathbf{I}-\mathbf{P}_{0})f\|^{2}_{\nu}+C(1-\lambda)\|f\|^{2}_{\nu}.\]
A direct calculation shows that
\[\int_{0}^{d}\int_{\mathbb{R}^{3}}\frac{d}{d\eta}(v_{3}f^{2})\ dvd\eta =\int_{\mathbb{R}^{3}}v_{3}|f|^{2}(d)\ dv-\int_{\mathbb{R}^{3}}v_{3}|f|^{2}(0)\ dv\] \[=\int_{v_{3}>0}v_{3}f^{2}(d,v)\ dv+\int_{v_{3}<0}(1-\frac{1}{n})^{2}v_{3}|f|^{2}(d,Rv)\ dv\] \[\quad-\int_{v_{3}>0}(1-\frac{1}{n})^{2}v_{3}|f|^{2}(0,Rv)\ dv-\int_{v_{3}<0}v_{3}|f|^{2}(0,v)\ dv\] \[=\Big{[}1-(1-\frac{1}{n})^{2}\Big{]}\Big{\{}\int_{v_{3}>0}v_{3}f^{2}(d,v)\ dv+\int_{v_{3}<0}|v_{3}|f^{2}(0,v)\ dv\Big{\}}\geq 0.\]
\[\|w_{l}f^{n}\|_{L^{\infty}_{\eta,v}}+|w_{l}f^{n}|_{L^{\infty}(\gamma_{+})}\leq C \Big{\{}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}+\|f^{n}\|_{L^{2}_{\eta,v}} \Big{\}}\leq C_{\delta,d}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}. \tag{3.43}\]
Taking the difference \(f^{n_{1}}-f^{n_{2}}\) with \(n_{1},n_{2}\geq n_{0}\), we know that
\[\begin{cases}\delta(f^{n_{1}}-f^{n_{2}})+v_{3}\partial_{\eta}(f^{n_{1}}-f^{n_{ 2}})+\mathbf{L}_{0}(f^{n_{1}}-f^{n_{2}})=0,\\ (f^{n_{1}}-f^{n_{2}})(\eta,v)|_{\gamma_{-}}=(1-\frac{1}{n_{1}})(f^{n_{1}}-f^{n_ {2}})(\eta,R_{\eta}v)+(\frac{1}{n_{2}}-\frac{1}{n_{1}})f^{n_{2}}(\eta,R_{\eta} v).\end{cases} \tag{3.44}\]
Multiplying (3.44) by \(f^{n_{1}}-f^{n_{2}}\) and integrating it over \(\Omega_{d}\times\mathbb{R}^{3}\), we can obtain
\[\delta\|(f^{n_{1}}-f^{n_{2}})\|^{2}_{L^{2}_{\eta,v}}+c_{1}\int_{0}^{d}\|( \mathbf{I}-\mathbf{P}_{0})(f^{n_{1}}-f^{n_{2}})\|^{2}_{\nu}\ d\eta\]
\[\|w_{l}f\|_{L^{\infty}_{\eta,v}}+|w_{l}f|_{L^{\infty}(\gamma_{+})}\leq C_{d}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}. \tag{3.50}\]
_Moreover, if \(g\) is continuous in \(\Omega_{d}\times\mathbb{R}^{3}\), then \(f\) is continuous away from the grazing set \(\gamma_{0}\)._
**Proof.** Let \(f^{\delta}\) be the solution of (3.11) constructed in Lemma 3.6. We shall consider the limit \(\delta\to 0\) to obtain a solution of (3.49).
By similar arguments as in [22, Lemma 3.7], we can obtain
\[\|\mathbf{P}_{0}f^{\delta}\|^{2}_{L^{2}_{\eta,v}}\leq Cd^{6}\Big{(}\|(\mathbf{I}-\mathbf{P}_{0})f^{\delta}\|^{2}_{\nu}+\|g\|^{2}_{L^{2}_{\eta,v}}\Big{)}. \tag{3.51}\]
It is noted that [22, Lemma 3.7] was proved for the hard sphere case, but the proof can be generalized to both hard and soft potentials without any difficulty.
Multiplying (3.11) by \(f^{\delta}\) and integrating over \(\Omega_{d}\times\mathbb{R}^{3}\), we have
\[\delta\|f^{\delta}\|^{2}_{L^{2}_{\eta,v}}+c_{0}\|(\mathbf{I}-\mathbf{P}_{0})f^{\delta}\|^{2}_{\nu}\leq\vartheta\|f^{\delta}\|^{2}_{L^{2}_{\eta,v}}+C_{\vartheta}\|g\|^{2}_{L^{2}_{\eta,v}}, \tag{3.52}\]
which, together with (3.51) and taking \(\vartheta\) small enough (depending on \(d\)), yields that
\[\|\sqrt{\nu^{0}}f^{\delta}\|^{2}_{L^{2}_{\eta,v}}\leq C_{d}\|g\|^{2}_{L^{2}_{\eta,v}}. \tag{3.53}\]
Applying (3.18) to \(f^{\delta}\) and using (3.53), one obtains
\[\|w_{l}f^{\delta}\|_{L^{\infty}_{\eta,v}}+|w_{l}f^{\delta}|_{L^{\infty}(\gamma_{+})}\leq C_{d}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}. \tag{3.54}\]
Next we consider the convergence of \(f^{\delta}\) as \(\delta\to 0+\). For any \(\delta_{1},\delta_{2}>0\), we consider the difference \(f^{\delta_{2}}-f^{\delta_{1}}\) satisfying
\[\begin{cases}v_{3}\partial_{\eta}(f^{\delta_{2}}-f^{\delta_{1}})+ \mathbf{L}_{0}(f^{\delta_{2}}-f^{\delta_{1}})=-\delta_{2}f^{\delta_{2}}+\delta _{1}f^{\delta_{1}},\\ (f^{\delta_{2}}-f^{\delta_{1}})|_{\gamma_{-}}=(f^{\delta_{2}}-f^{\delta_{1}}) (\eta,R_{\eta}v).\end{cases} \tag{3.55}\]
Multiplying (3.55) by \(f^{\delta_{2}}-f^{\delta_{1}}\), integrating the resultant equation and by similar arguments as in (3.52)-(3.54), one gets
\[\|\sqrt{\nu^{0}}(f^{\delta_{2}}-f^{\delta_{1}})\|^{2}_{L^{2}_{\eta,v}}\leq C_{d}\|\delta_{2}f^{\delta_{2}}-\delta_{1}f^{\delta_{1}}\|^{2}_{L^{2}_{\eta,v}}\leq C_{d}(\delta_{1}^{2}+\delta_{2}^{2})\cdot\|(\nu^{0})^{-1}w_{l}g\|^{2}_{L^{\infty}_{\eta,v}}\to 0, \tag{3.56}\]
as \(\delta_{1}\), \(\delta_{2}\to 0+\). Finally, applying (3.18) to \(f^{\delta_{2}}-f^{\delta_{1}}\) and using (3.56), then we obtain
\[\|w_{l}(f^{\delta_{2}}-f^{\delta_{1}})\|_{L^{\infty}_{\eta,v}}+|w_ {l}(f^{\delta_{2}}-f^{\delta_{1}})|_{L^{\infty}(\gamma_{+})}\] \[\leq C\Big{\{}\|(\nu^{0})^{-1}w_{l}(\delta_{2}f^{\delta_{2}}- \delta_{1}f^{\delta_{1}})\|_{L^{\infty}_{\eta,v}}+\|\sqrt{\nu^{0}}(f^{\delta_{ 2}}-f^{\delta_{1}})\|_{L^{2}_{\eta,v}}\Big{\}}\] \[\leq C_{d}(\delta_{1}+\delta_{2})\|(\nu^{0})^{-1}w_{l}g\|_{L^{ \infty}_{\eta,v}}\to 0, \tag{3.57}\]
as \(\delta_{1}\), \(\delta_{2}\to 0+\). With (3.57), we know that there exists a function \(f\) so that \(\|w_{l}(f^{\delta}-f)\|_{L^{\infty}_{\eta,v}}\to 0\) as \(\delta\to 0+\), and it is direct to see that \(f\) solves (3.49). Also, (3.50) follows immediately from (3.54). The continuity of \(f\) directly follows from the \(L^{\infty}_{\eta,v}\)-convergence and the continuity of \(f^{\delta}\). Therefore the proof of Lemma 3.7 is complete.
To obtain the solution of the half-space problem, we need uniform estimates independent of \(d\), so that we can take the limit \(d\to\infty\). Let \(f\) be the solution of (3.49); we denote
\[\mathbf{P}_{0}f(\eta,v)=\big{[}a(\eta)+b(\eta)\cdot(v-\mathfrak{u}^{0})+c( \eta)(|v-\mathfrak{u}^{0}|^{2}-3T^{0})\big{]}\sqrt{\mu_{0}}.\]
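For later use we also record how the coefficients are recovered from moments of \(f\); this is a standard computation under the normalization \(\int_{\mathbb{R}^{3}}\mu_{0}dv=\rho^{0}\) (consistent with (3.67) below):
\[a(\eta)=\frac{1}{\rho^{0}}\int_{\mathbb{R}^{3}}\sqrt{\mu_{0}}f\,dv,\qquad b_{i}(\eta)=\frac{1}{\rho^{0}T^{0}}\int_{\mathbb{R}^{3}}(v_{i}-\mathfrak{u}^{0}_{i})\sqrt{\mu_{0}}f\,dv,\qquad c(\eta)=\frac{1}{6\rho^{0}(T^{0})^{2}}\int_{\mathbb{R}^{3}}(|v-\mathfrak{u}^{0}|^{2}-3T^{0})\sqrt{\mu_{0}}f\,dv.\]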
Multiplying (3.49) by \(\sqrt{\mu_{0}}\) and using (3.8), we have
\[\frac{d}{d\eta}\int_{\mathbb{R}^{3}}v_{3}\sqrt{\mu_{0}}f(\eta,v)dv=0,\quad\text{i.e.,}\quad\frac{d}{d\eta}b_{3}(\eta)\equiv 0. \tag{3.58}\]
Since \(f\) satisfies the specular boundary, it holds that \(b_{3}(\eta)|_{\eta=0}=b_{3}(\eta)|_{\eta=d}=0\), which, together with (3.58), yields
\[b_{3}(\eta)=0,\quad\text{for }\eta\in[0,d]. \tag{3.59}\]
Let \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\) be some constants chosen later, we define
\[\bar{f}(\eta,v) :=f(\eta,v)+[\phi_{0}+\phi_{1}(v_{1}-\mathfrak{u}^{0}_{1})+\phi_{ 2}(v_{2}-\mathfrak{u}^{0}_{2})+\phi_{3}(|v-\mathfrak{u}^{0}|^{2}-3T^{0})]\sqrt {\mu_{0}}\] \[=[\bar{a}(\eta)+\bar{b}_{1}(\eta)(v_{1}-\mathfrak{u}^{0}_{1})+ \bar{b}_{2}(\eta)(v_{2}-\mathfrak{u}^{0}_{2})+\bar{c}(\eta)(|v-\mathfrak{u}^{0}| ^{2}-3T^{0})]\sqrt{\mu_{0}}\] \[\qquad+(\mathbf{I}-\mathbf{P}_{0})\bar{f},\]
where
\[\begin{cases}\bar{a}(\eta)=a(\eta)+\phi_{0},\\ \bar{b}_{i}(\eta)=b_{i}(\eta)+\phi_{i},\quad i=1,2,\\ \bar{c}(\eta)=c(\eta)+\phi_{3}.\end{cases}\]
It follows from (3.59) that
\[\bar{b}_{3}(\eta)\equiv 0\quad\text{and}\quad(\mathbf{I}-\mathbf{P}_{0})\bar{f} (\eta,v)\equiv(\mathbf{I}-\mathbf{P}_{0})f(\eta,v)\quad\forall\eta\in[0,d]. \tag{3.60}\]
The equation for \(\bar{f}\) is
\[\begin{cases}v_{3}\partial_{\eta}\bar{f}+\mathbf{L}_{0}\bar{f}=g,\quad(\eta,v) \in\Omega_{d}\times\mathbb{R}^{3},\\ \bar{f}(\eta,v)|_{\gamma_{-}}=\bar{f}(\eta,R_{\eta}v).\end{cases} \tag{3.61}\]
Hence it follows from (3.50) that
\[\|w_{l}\bar{f}\|_{L^{\infty}_{\eta,v}}+|w_{l}\bar{f}|_{L^{\infty}(\gamma_{+})}\leq C_{d}\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}+C_{d}|(\phi_{0},\phi_{1},\phi_{2},\phi_{3})|.\]
Multiplying \(\eqref{eq:3.61}_{1}\) by \((v_{1}-\mathfrak{u}_{1}^{0},v_{2}-\mathfrak{u}_{2}^{0},|v-\mathfrak{u}^{0}|^{2}-5T ^{0})\sqrt{\mu_{0}}\) and using (3.8), we get
\[\begin{split}\int_{\mathbb{R}^{3}}v_{3}(v_{i}-\mathfrak{u}_{i}^{0} )\sqrt{\mu_{0}}\bar{f}(\eta,v)dv&=0,\quad\forall\,\eta\in[0,d], \quad i=1,2,\\ \int_{\mathbb{R}^{3}}v_{3}(|v-\mathfrak{u}^{0}|^{2}-5T^{0})\sqrt{ \mu_{0}}\bar{f}(\eta,v)dv&=0,\quad\forall\,\eta\in[0,d].\end{split} \tag{3.62}\]
It follows from (3.60) and (3.62) that
\[\int_{\mathbb{R}^{3}}v_{3}|\mathbf{P}_{0}\bar{f}(\eta,v)|^{2}dv\equiv\int_{ \mathbb{R}^{3}}v_{3}\mathbf{P}_{0}\bar{f}(\eta,v)\cdot(\mathbf{I}-\mathbf{P}_ {0})\bar{f}(\eta,v)dv\equiv 0, \tag{3.63}\]
which yields that
\[\int_{\mathbb{R}^{3}}v_{3}|\bar{f}(\eta,v)|^{2}dv=\int_{\mathbb{R}^{3}}v_{3}|( \mathbf{I}-\mathbf{P}_{0})\bar{f}(\eta,v)|^{2}dv,\quad\forall\eta\in[0,d]. \tag{3.64}\]
Multiplying (3.61) by \(\bar{f}\) and using (3.64), (3.8), we obtain
\[\frac{d}{d\eta}\int_{\mathbb{R}^{3}}v_{3}|(\mathbf{I}-\mathbf{P}_{0})\bar{f}| ^{2}dv+\frac{1}{2}c_{1}\|(\mathbf{I}-\mathbf{P}_{0})\bar{f}\|_{\nu}^{2}\leq C \|(\nu^{0})^{-\frac{1}{2}}g\|_{L_{v}^{2}}^{2},\]
which yields that
\[\int_{0}^{d}\|(\mathbf{I}-\mathbf{P}_{0})\bar{f}\|_{\nu}^{2}\ d\eta\leq C\int _{0}^{d}\|(\nu^{0})^{-\frac{1}{2}}g\|_{L_{v}^{2}}^{2}d\eta, \tag{3.65}\]
where we have used (3.8) to derive
\[\int_{\mathbb{R}^{3}}g\bar{f}dv=\int_{\mathbb{R}^{3}}g(\mathbf{I}-\mathbf{P}_ {0})\bar{f}dv\leq\frac{1}{2}c_{1}\|(\mathbf{I}-\mathbf{P}_{0})\bar{f}\|_{\nu }^{2}+C\|(\nu^{0})^{-\frac{1}{2}}g\|_{L_{v}^{2}}^{2}.\]
**Lemma 3.8**.: _There exist constants \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\) such that_
\[\begin{split}&\int_{\mathbb{R}^{3}}v_{3}\bar{f}(d,v)\cdot v_{3} \sqrt{\mu_{0}}dv=0,\\ &\int_{\mathbb{R}^{3}}v_{3}\bar{f}(d,v)\cdot\mathbf{L}_{0}^{-1}( \mathcal{A}_{3i}^{0})dv=0,\ i=1,2,\\ &\int_{\mathbb{R}^{3}}v_{3}\bar{f}(d,v)\cdot\mathbf{L}_{0}^{-1}( \mathcal{B}_{3}^{0})dv=0.\end{split} \tag{3.66}\]
**Proof.** A direct calculation shows that
\[\begin{split}\int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot v_{3 }\sqrt{\mu_{0}}dv&=\rho^{0}T^{0}\bar{a}(\eta)+2\rho^{0}(T^{0})^{2 }\bar{c}(\eta)+T^{0}\int_{\mathbb{R}^{3}}\mathcal{A}_{33}^{0}(v)\cdot(\mathbf{ I}-\mathbf{P}_{0})\bar{f}(\eta,v)dv\\ &=\rho^{0}T^{0}\phi_{0}+2\rho^{0}(T^{0})^{2}\phi_{3}+\rho^{0}T^{ 0}a(\eta)+2\rho^{0}(T^{0})^{2}c(\eta)\\ &\quad+T^{0}\int_{\mathbb{R}^{3}}\mathcal{A}_{33}^{0}(v)\cdot( \mathbf{I}-\mathbf{P}_{0})f(\eta,v)dv,\end{split} \tag{3.67}\]
\[\begin{split}\int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv&=\mu(T^{0})\bar{b}_{1}( \eta)+\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})\bar{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ &=\mu(T^{0})\phi_{1}+\mu(T^{0})b_{1}(\eta)+\int_{\mathbb{R}^{3}}v _{3}(\mathbf{I}-\mathbf{P}_{0})f(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A} _{31}^{0})dv,\end{split} \tag{3.68}\]
\[\begin{split}\int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{A}_{32}^{0})dv&=\mu(T^{0})\bar{b}_{2}( \eta)+\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})\bar{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{A}_{32}^{0})dv\\ &=\mu(T^{0})\phi_{2}+\mu(T^{0})b_{2}(\eta)+\int_{\mathbb{R}^{3}}v _{3}(\mathbf{I}-\mathbf{P}_{0})f(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A} _{32}^{0})dv,\end{split} \tag{3.69}\]
\[\begin{split}\int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})dv&=\kappa(T^{0})\bar{c}( \eta)+\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})\bar{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})dv\\ &=\kappa(T^{0})\phi_{3}+\kappa(T^{0})c(\eta)+\int_{\mathbb{R}^{3}} v_{3}(\mathbf{I}-\mathbf{P}_{0})f(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{B} _{3}^{0})dv,\end{split} \tag{3.70}\]
where we have used the notations in (3.9).
Using (3.67)-(3.70), (3.66) is equivalent to
\[\left(\begin{array}{cccc}1&0&0&2T^{0}\\ 0&\mu(T^{0})&0&0\\ 0&0&\mu(T^{0})&0\\ 0&0&0&\kappa(T^{0})\end{array}\right)\left(\begin{array}{c}\phi_{0}\\ \phi_{1}\\ \phi_{2}\\ \phi_{3}\end{array}\right)=-\left(\begin{array}{c}a(d)+2T^{0}c(d)+\frac{1}{ \rho^{0}}\int_{\mathbb{R}^{3}}(\mathbf{I}-\mathbf{P}_{0})f(d,v)\cdot\mathcal{A }_{33}^{0}(v)dv\\ \mu(T^{0})b_{1}(d)+\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})f(d,v) \cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ \mu(T^{0})b_{2}(d)+\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})f(d,v) \cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{32}^{0})dv\\ \kappa(T^{0})c(d)+\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})f(d,v) \cdot\mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})dv\end{array}\right).\]
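Although not needed for the argument, back-substitution makes the constants explicit, since the coefficient matrix is upper triangular apart from the \((1,4)\) entry:
\[\phi_{i}=-b_{i}(d)-\frac{1}{\mu(T^{0})}\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})f(d,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}^{0}_{3i})dv,\ i=1,2,\qquad\phi_{3}=-c(d)-\frac{1}{\kappa(T^{0})}\int_{\mathbb{R}^{3}}v_{3}(\mathbf{I}-\mathbf{P}_{0})f(d,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{B}^{0}_{3})dv,\]
\[\phi_{0}=-2T^{0}\phi_{3}-a(d)-2T^{0}c(d)-\frac{1}{\rho^{0}}\int_{\mathbb{R}^{3}}(\mathbf{I}-\mathbf{P}_{0})f(d,v)\cdot\mathcal{A}^{0}_{33}(v)dv.\]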
Since the coefficient matrix is non-singular (its determinant equals \(\mu(T^{0})^{2}\kappa(T^{0})>0\)), the constants \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\) are uniquely determined. Therefore the proof of Lemma 3.8 is completed.
From now on, the proof is quite different from the hard sphere case, since we do not have \(\nu^{0}\geq\sigma|v_{3}|\) for soft potentials. Hence it is hard to obtain the exponential decay in space available in the hard sphere case. Our strategy is to gain spatial decay by losing particle velocity weight.
**Lemma 3.9**.: _Let \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\) be the ones determined in Lemma 3.8, then it holds that_
\[\int_{0}^{d}\|\bar{f}\|_{\nu}^{2}d\eta\leq C\int_{0}^{d}\int_{\mathbb{R}^{3}}(1+\eta)^{2p_{0}}(\nu^{0})^{-1}g^{2}dvd\eta,\quad p_{0}>1, \tag{3.71}\]
_where the constant \(C>0\) is independent of \(d\)._
**Proof.** Multiplying (3.61) by \(\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0}),\mathbf{L}_{0}^{-1}(\mathcal{A}_{32 }^{0})\) and \(\mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})\), respectively, one has from (3.62) that
\[\frac{d}{d\eta}\left(\begin{array}{c}\int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta, v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ \int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_ {32}^{0})dv\\ \int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{B} _{3}^{0})dv\end{array}\right)=\left(\begin{array}{c}\int_{\mathbb{R}^{3}}g \cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ \int_{\mathbb{R}^{3}}g\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{32}^{0})dv\\ \int_{\mathbb{R}^{3}}g\cdot\mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})dv\end{array} \right).\]
Integrating the above system over \([\eta,d]\) and using (3.66), one obtains
\[\left(\begin{array}{c}\int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta, v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ \int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_ {32}^{0})dv\\ \int_{\mathbb{R}^{3}}v_{3}\bar{f}(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{B} _{3}^{0})dv\end{array}\right)=-\int_{\eta}^{d}\left(\begin{array}{c}\int_{ \mathbb{R}^{3}}g\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ \int_{\mathbb{R}^{3}}g\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{32}^{0})dv\\ \int_{\mathbb{R}^{3}}g\cdot\mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})dv\end{array} \right)(z)dz,\]
which, together with (3.68)-(3.70) and Proposition 2.5, yields that
\[|(\mu(T^{0})\bar{b}_{1},\mu(T^{0})\bar{b}_{2},\kappa(T^{0})\bar{c })(\eta)|\leq C\|(\mathbf{I}-\mathbf{P}_{0})f(\eta)\|_{\nu}+C\int_{\eta}^{d}\|g(z )\|_{L_{v}^{2}}dz, \tag{3.72}\]
where we used Proposition 2.5 to derive the decay estimates for \(v_{3}\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0}),v_{3}\mathbf{L}_{0}^{-1}( \mathcal{A}_{32}^{0}),v_{3}\mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0})\).
It follows from (3.67) that
\[\bar{a}(\eta)=-2T^{0}\bar{c}(\eta)-\frac{1}{\rho^{0}T^{0}}\int_{\mathbb{R}^{3}} (\mathbf{I}-\mathbf{P}_{0})\bar{f}(\eta,v)\cdot v_{3}^{2}\sqrt{\mu_{0}}dv-\frac {1}{\rho^{0}T^{0}}\int_{\eta}^{d}\int_{\mathbb{R}^{3}}g\cdot v_{3}\sqrt{\mu_{0} }dvdz,\]
which yields that
\[|\bar{a}(\eta)|\leq C\|(\mathbf{I}-\mathbf{P}_{0})f(\eta)\|_{\nu}+C\int_{\eta}^{d} \|g(z)\|_{L_{v}^{2}}dz. \tag{3.73}\]
Using (3.65), (3.72)-(3.73), one gets
\[\int_{0}^{d}\|\mathbf{P}_{0}\bar{f}\|_{\nu}^{2}d\eta\leq C\int_{0}^{d}\|(\mathbf{I}-\mathbf{P}_{0})f(\eta)\|_{\nu}^{2}d\eta+C \int_{0}^{d}\Big{\{}\int_{\eta}^{d}\|g(z)\|_{L_{v}^{2}}dz\Big{\}}^{2}d\eta\] \[\leq C\int_{0}^{d}\int_{\mathbb{R}^{3}}(\nu^{0})^{-1}g^{2}dvd\eta+C \int_{0}^{d}\int_{\eta}^{d}(1+z)^{-2p_{0}}dzd\eta\cdot\int_{0}^{d}\int_{\mathbb{R} ^{3}}(1+\eta)^{2p_{0}}g^{2}dvd\eta\] \[\leq C\int_{0}^{d}\int_{\mathbb{R}^{3}}(1+\eta)^{2p_{0}}(\nu^{0})^{-1 }g^{2}dvd\eta,\quad p_{0}>1. \tag{3.74}\]
We conclude (3.71) from (3.74) and (3.65). The proof of Lemma 3.9 is completed.
Since we will encounter the spatial weight \(\eta^{l}\) in the formulation of the Hilbert expansion (see (1.22) for details), we have to derive at least polynomial decay for the Knudsen boundary layer so that the analysis can be closed.
**Lemma 3.10**.: _Let \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\) be the ones determined in Lemma 3.8, then it holds that_
\[\int_{0}^{d}(1+\eta)^{k}\|w_{l}\bar{f}\|_{\nu}^{2}d\eta\leq C_{k}\int_{0}^{d}(1 +\eta)^{2p_{k}}\|w_{l+2k+2}g\|_{L_{v}^{2}}^{2}d\eta,\quad p_{k}>\frac{k}{2}+1, \tag{3.75}\]
_where \(k\) is a non-negative integer and the constant \(C_{k}\) depends only on \(k\)._
**Proof.** We divide the proof into three steps.
_Step 1._ Let \(l\) be any positive constant. From [37, Corollary 1], it holds that
\[\langle w_{l}^{2}\mathbf{L}_{0}\mathfrak{h},\mathfrak{h}\rangle\geq\frac{1}{2 }\|w_{l}\mathfrak{h}\|_{\nu}^{2}-C\|\mathfrak{h}\|_{\nu}^{2}. \tag{3.76}\]
Multiplying (3.61) by \(w_{l}^{2}\bar{f}\) and integrating on \(\mathbb{R}^{3}\), one has
\[\frac{1}{2}\frac{d}{d\eta}\int_{\mathbb{R}^{3}}v_{3}w_{l}^{2}\bar{f}^{2}dv+ \int_{\mathbb{R}^{3}}w_{l}^{2}\bar{f}\cdot\mathbf{L}_{0}\bar{f}dv=\int_{ \mathbb{R}^{3}}w_{l}^{2}\bar{f}gdv. \tag{3.77}\]
Integrating (3.77) on \([0,d]\) and using (3.76), one gets
\[\int_{0}^{d}\|w_{l}\bar{f}\|_{\nu}^{2}\ d\eta \lesssim\int_{0}^{d}\|\bar{f}\|_{\nu}^{2}d\eta+\int_{0}^{d}\int_{ \mathbb{R}^{3}}w_{l}^{2}\bar{f}g\ dvd\eta\] \[\lesssim\int_{0}^{d}\|\bar{f}\|_{\nu}^{2}d\eta+\int_{0}^{d}\|( \nu^{0})^{-\frac{1}{2}}w_{l}g\|_{L_{v}^{2}}^{2}d\eta\] \[\lesssim\int_{0}^{d}\|\bar{f}\|_{\nu}^{2}d\eta+\int_{0}^{d}\|w_{ l+2}g\|_{L_{v}^{2}}^{2}d\eta\] \[\lesssim\int_{0}^{d}(1+\eta)^{2p_{0}}\|w_{l+2}g\|_{L_{v}^{2}}^{2} d\eta,\quad p_{0}>1, \tag{3.78}\]
where we have used Lemma 3.9.
_Step 2._ Multiplying (3.61) by \(\bar{f}\) and integrating over \(\mathbb{R}^{3}\), one has
\[\frac{1}{2}\frac{d}{d\eta}\int_{\mathbb{R}^{3}}v_{3}\bar{f}^{2}dv+\int_{ \mathbb{R}^{3}}\bar{f}\mathbf{L}_{0}\bar{f}dv=\int_{\mathbb{R}^{3}}\bar{f}gdv,\]
which implies that
\[\frac{d}{d\eta}\int_{\mathbb{R}^{3}}v_{3}\bar{f}^{2}dv+c_{1}\|(\mathbf{I}- \mathbf{P}_{0})\bar{f}\|_{\nu}^{2}\lesssim\|(\nu^{0})^{-\frac{1}{2}}g\|_{L_{v }^{2}}^{2}. \tag{3.79}\]
Multiplying (3.79) by \((1+\eta)^{k}\) with \(k\) being some positive integer, we get
\[\partial_{\eta}\big{\{}(1+\eta)^{k}\int_{\mathbb{R}^{3}}v_{3}\bar {f}^{2}dv\big{\}}+c_{1}(1+\eta)^{k}\|(\mathbf{I}-\mathbf{P}_{0})\bar{f}\|_{\nu }^{2}\] \[\lesssim k(1+\eta)^{k-1}\int_{\mathbb{R}^{3}}v_{3}\bar{f}^{2}dv+( 1+\eta)^{k}\|(\nu^{0})^{-\frac{1}{2}}g\|_{L_{v}^{2}}^{2}. \tag{3.80}\]
Then, integrating (3.80) on \([0,d]\), one obtains
\[\int_{0}^{d}(1+\eta)^{k}\|(\mathbf{I}-\mathbf{P}_{0})\bar{f}\|_{ \nu}^{2}d\eta \lesssim k\int_{0}^{d}(1+\eta)^{k-1}\int_{\mathbb{R}^{3}}v_{3}\bar{f}^{2} dvd\eta+\int_{0}^{d}(1+\eta)^{k}\|(\nu^{0})^{-\frac{1}{2}}g\|_{L_{v}^{2}}^{2}d\eta\] \[\lesssim k\int_{0}^{d}(1+\eta)^{k-1}\|w_{2}\bar{f}\|_{\nu}^{2}d \eta+\int_{0}^{d}(1+\eta)^{k}\|w_{2}g\|_{L_{v}^{2}}^{2}d\eta. \tag{3.81}\]
On the other hand, from (3.72)-(3.73), one has that
\[\int_{0}^{d}(1+\eta)^{k}\|\mathbf{P}_{0}\bar{f}\|_{L_{v}^{2}}^{2}d\eta \lesssim_{k}\int_{0}^{d}(1+\eta)^{k}\|(\mathbf{I}-\mathbf{P}_{0})\bar{f}\|_{ \nu}^{2}d\eta+\int_{0}^{d}(1+\eta)^{k}\Big{\{}\int_{\eta}^{d}\|g(z)\|_{L_{v}^{ 2}}dz\Big{\}}^{2}d\eta\]
\[\lesssim_{k}\int_{0}^{d}(1+\eta)^{k-1}\|w_{2}\bar{f}\|_{\nu}^{2}d \eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{2}g\|_{L^{2}_{v}}^{2}d\eta, \tag{3.82}\]
where \(p_{k}>\frac{k}{2}+1\). It follows from (3.81)-(3.82) that
\[\int_{0}^{d}(1+\eta)^{k}\|\bar{f}\|_{\nu}^{2}d\eta\lesssim k\int_{0}^{d}(1+ \eta)^{k-1}\|w_{2}\bar{f}\|_{\nu}^{2}d\eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{2 }g\|_{L^{2}_{v}}^{2}d\eta,\quad p_{k}>\frac{k}{2}+1. \tag{3.83}\]
_Step 3._ Multiplying (3.77) by \((1+\eta)^{k}\), one has
\[\frac{1}{2}\frac{d}{d\eta}\Big{\{}(1+\eta)^{k}\int_{\mathbb{R}^{ 3}}v_{3}w_{l}^{2}\bar{f}^{2}dv\Big{\}}-\frac{k}{2}(1+\eta)^{k-1}\int_{\mathbb{ R}^{3}}v_{3}w_{l}^{2}\bar{f}^{2}dv\] \[\quad+(1+\eta)^{k}\int_{\mathbb{R}^{3}}w_{l}^{2}\bar{f}\mathbf{ L}_{0}\bar{f}dv=(1+\eta)^{k}\int_{\mathbb{R}^{3}}w_{l}^{2}\bar{f}gdv. \tag{3.84}\]
We have from (3.76) that
\[(1+\eta)^{k}\int_{\mathbb{R}^{3}}w_{l}^{2}\bar{f}\mathbf{L}_{0}\bar{f}dv\geq \frac{1}{2}(1+\eta)^{k}\|w_{l}\bar{f}\|_{\nu}^{2}-C(1+\eta)^{k}\|\bar{f}\|_{ \nu}^{2},\]
which, together with (3.83)-(3.84), yields that
\[\int_{0}^{d}(1+\eta)^{k}\|w_{l}\bar{f}\|_{\nu}^{2}d\eta \lesssim\int_{0}^{d}(1+\eta)^{k}\|\bar{f}\|_{\nu}^{2}d\eta+\int_{0}^{d}\int_{\mathbb{R}^{3}}(1+\eta)^{k}(\nu^{0})^{-1}w_{l}^{2}g^{2}dvd\eta\] \[\quad+k\int_{0}^{d}\int_{\mathbb{R}^{3}}(1+\eta)^{k-1}|v_{3}|w_{l}^{2}\bar{f}^{2}dvd\eta\] \[\lesssim k\int_{0}^{d}(1+\eta)^{k-1}\|w_{2}\bar{f}\|_{\nu}^{2}d\eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{2}g\|_{L^{2}_{v}}^{2}d\eta\] \[\quad+\int_{0}^{d}(1+\eta)^{k}\|w_{l+2}g\|_{L^{2}_{v}}^{2}d\eta+k\int_{0}^{d}(1+\eta)^{k-1}\|w_{l+2}\bar{f}\|_{\nu}^{2}d\eta\] \[\lesssim k\int_{0}^{d}(1+\eta)^{k-1}\|w_{l+2}\bar{f}\|_{\nu}^{2}d\eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{l+2}g\|_{L^{2}_{v}}^{2}d\eta, \tag{3.85}\]
where \(p_{k}>\frac{k}{2}+1\). Using (3.78), (3.85), and induction arguments, one can deduce that
\[\int_{0}^{d}(1+\eta)^{k}\|w_{l}\bar{f}\|_{\nu}^{2}d\eta \lesssim_{k}\int_{0}^{d}(1+\eta)^{k-1}\|w_{l+2}\bar{f}\|_{\nu}^{2} d\eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{l+2}g\|_{L^{2}_{v}}^{2}d\eta\] \[\lesssim_{k}\int_{0}^{d}(1+\eta)^{k-2}\|w_{l+4}\bar{f}\|_{\nu}^{2} d\eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{l+4}g\|_{L^{2}_{v}}^{2}d\eta\] \[\lesssim_{k}\cdots\lesssim_{k}\int_{0}^{d}\|w_{l+2k}\bar{f}\|_{ \nu}^{2}d\eta+\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{l+2k}g\|_{L^{2}_{v}}^{2}d\eta\] \[\lesssim_{k}\int_{0}^{d}(1+\eta)^{2p_{k}}\|w_{l+2k+2}g\|_{L^{2}_{ v}}^{2}d\eta,\quad p_{k}>\frac{k}{2}+1. \tag{3.86}\]
Therefore the proof of Lemma 3.10 is completed.
**Lemma 3.11**.: _Let \((\phi_{0},\phi_{1},\phi_{2},\phi_{3})\) be the ones determined in Lemma 3.8, then it holds that_
\[\|(1+\eta)^{k}w_{l}\bar{f}\|_{L^{\infty}_{\eta,v}}+|w_{l}\bar{f}|_{L^{\infty}( \gamma_{+})}\leq C_{k}\|(1+\eta)^{q_{k}}w_{l+4k+4}g\|_{L^{\infty}_{\eta,v}}, \quad q_{k}>k+\frac{3}{2}, \tag{3.87}\]
_where \(k\) is a non-negative integer, and the constant \(C_{k}\) is independent of \(d\)._
**Proof.** Let \(h_{0}=w_{l}\bar{f}\), then it holds that
\[\begin{cases}v_{3}\partial_{\eta}h_{0}+\nu^{0}h_{0}=K^{0}_{w_{l}}h_{0}+w_{l}g,\\ h_{0}(\eta,v)|_{\gamma_{-}}=h_{0}(\eta,R_{\eta}v).\end{cases} \tag{3.88}\]
Applying Lemma 3.3, one has that
\[\|w_{l}\bar{f}\|_{L^{\infty}_{\eta,v}}+|w_{l}\bar{f}|_{L^{\infty}(\gamma_{+})} \leq C\Big{\{}\int_{0}^{d}\int_{\mathbb{R}^{3}}\nu^{0}\bar{f}^{2}dvd\eta\Big{\}}^{\frac{1}{2}}+C\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}\] \[\leq C\Big{\{}\int_{0}^{d}(1+\eta)^{2p_{0}}\|w_{2}g\|_{L^{2}_{v}}^{2}d\eta\Big{\}}^{\frac{1}{2}}+C\|(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}\] \[\leq C\|(1+\eta)^{q_{0}}w_{4}g\|_{L^{\infty}_{\eta,v}}+C\|w_{l+3}g\|_{L^{\infty}_{\eta,v}}\] \[\leq C\|(1+\eta)^{q_{0}}w_{l+4}g\|_{L^{\infty}_{\eta,v}},\quad\text{for }q_{0}>p_{0}+\frac{1}{2}, \tag{3.89}\]
where we have used (3.75) to derive
\[\int_{0}^{d}(1+\eta)^{2k}\|w_{l}\bar{f}\|_{\nu}^{2}d\eta \lesssim_{k}\int_{0}^{d}(1+\eta)^{2p_{2k}}\|w_{l+4k+2}g\|_{L^{2}_{v}}^{2}d\eta\] \[\lesssim_{k}\|(1+\eta)^{q_{k}}w_{l+4k+4}g\|_{L^{\infty}_{\eta,v}}^{2}\cdot\int_{0}^{d}(1+\eta)^{2p_{2k}-2q_{k}}d\eta\int_{\mathbb{R}^{3}}w_{2}^{-2}dv\] \[\lesssim_{k}\|(1+\eta)^{q_{k}}w_{l+4k+4}g\|_{L^{\infty}_{\eta,v}}^{2},\quad\text{for }q_{k}>p_{2k}+\frac{1}{2}. \tag{3.90}\]
Let \(h_{k}=(1+\eta)^{k}w_{l}\bar{f}\), then it holds that
\[v_{3}\partial_{\eta}h_{k}+\nu^{0}h_{k}=K^{0}_{w_{l}}h_{k}+k(1+\eta)^{k-1}v_{3 }w_{l}\bar{f}+(1+\eta)^{k}w_{l}g. \tag{3.91}\]
Applying Lemma 3.3 and (3.90), one gets that
\[\|h_{k}\|_{L^{\infty}_{\eta,v}}+|h_{k}|_{L^{\infty}(\gamma_{+})} \leq Ck\|(1+\eta)^{k-1}(\nu^{0})^{-1}v_{3}w_{l}\bar{f}\|_{L^{\infty }_{\eta,v}}+C\|(1+\eta)^{k}(\nu^{0})^{-1}w_{l}g\|_{L^{\infty}_{\eta,v}}\] \[\quad+C\|(1+\eta)^{k}(\nu^{0})^{\frac{1}{2}}\bar{f}\|_{L^{2}_{ \eta,v}}\] \[\leq C_{k}\|(1+\eta)^{k-1}w_{l+4}\bar{f}\|_{L^{\infty}_{\eta,v}}+ C_{k}\|(1+\eta)^{q_{k}}w_{\max\{4k+4,l+3\}}g\|_{L^{\infty}_{\eta,v}}, \tag{3.92}\]
where \(q_{k}>p_{2k}+\frac{1}{2}\). Using (3.89), (3.92) and induction arguments, one obtains that
\[\|(1+\eta)^{k}w_{l}\bar{f}\|_{L^{\infty}_{\eta,v}}+|(1+\eta)^{k}w_{l}\bar{f}|_{L^{\infty}(\gamma_{+})}\] \[\leq C_{k}\|(1+\eta)^{k-1}w_{l+4}\bar{f}\|_{L^{\infty}_{\eta,v}}+C_{k}\|(1+\eta)^{q_{k}}w_{\max\{4k+4,l+3\}}g\|_{L^{\infty}_{\eta,v}}\] \[\leq C_{k}\|(1+\eta)^{k-2}w_{l+8}\bar{f}\|_{L^{\infty}_{\eta,v}}+C_{k}\|(1+\eta)^{q_{k}}w_{\max\{4k+4,l+7\}}g\|_{L^{\infty}_{\eta,v}}\] \[\leq\cdots\] \[\leq C_{k}\|w_{l+4k}\bar{f}\|_{L^{\infty}_{\eta,v}}+C_{k}\|(1+\eta)^{q_{k}}w_{\max\{4k+4,l+4k\}}g\|_{L^{\infty}_{\eta,v}}\] \[\leq C_{k}\|(1+\eta)^{q_{k}}w_{l+4k+4}g\|_{L^{\infty}_{\eta,v}},\quad q_{k}>p_{2k}+\frac{1}{2}.\]
Recall the range of \(p_{k}\) in (3.75), then the proof of Lemma 3.11 is finished.
With the help of the decay estimate in Lemma 3.11, we shall prove Theorem 3.1 by taking the limit \(d\to\infty\). From now on, we denote the solution \(\bar{f}(\eta,v)\) of (3.61) by \(\bar{f}_{d}(\eta,v)\) to emphasize the dependence on \(d\). We denote
\[\tilde{f}(\eta,v)=\bar{f}_{d_{2}}(\eta,v)-\bar{f}_{d_{1}}(\eta,v),\quad 1\leq d_{1 }\leq d_{2}.\]
Then \(\tilde{f}\) satisfies the following equation
\[\begin{cases}v_{3}\partial_{\eta}\tilde{f}+\mathbf{L}_{0}\tilde{f}=0,\quad \eta\in[0,d_{1}],\ v\in\mathbb{R}^{3},\\ \tilde{f}(0,v)|_{v_{3}>0}=\tilde{f}(0,Rv).\end{cases} \tag{3.93}\]
### Proof of Theorem 3.1
We divide the proof into two steps.
_Step 1. Convergence in \(L^{2}\)-norm._ Multiplying (3.93) by \(\tilde{f}\) and integrating on \([0,d_{1}]\times\mathbb{R}^{3}\), one obtains that
\[\int_{0}^{d_{1}}\int_{\mathbb{R}^{3}}\nu^{0}|(\mathbf{I}-\mathbf{P }_{0})\tilde{f}(\eta,v)|^{2}dvd\eta\] \[\leq C\int_{\mathbb{R}^{3}}|v_{3}|\cdot|\tilde{f}(d_{1},v)|^{2}dv \leq C\big{\{}\|w_{l}\bar{f}_{d_{2}}(d_{1})\|_{L^{\infty}_{v}}^{2}+|w_{l}\bar{ f}_{d_{1}}(d_{1})|_{L^{\infty}(\gamma_{+})}^{2}\big{\}}\] \[\leq C\|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L^{\infty}_{\eta,v}}^{ 2}\cdot d_{1}^{-2},\quad\mathfrak{q}\geq 3. \tag{3.94}\]
We still need to control the macroscopic part. Denote
\[\mathbf{P}_{0}\tilde{f}=[\tilde{a}(\eta)+\tilde{b}_{1}(\eta)(v_{1}-\mathfrak{ u}_{1}^{0})+\tilde{b}_{2}(\eta)(v_{2}-\mathfrak{u}_{2}^{0})+\tilde{c}(\eta)(|v- \mathfrak{u}^{0}|^{2}-3T^{0})]\sqrt{\mu_{0}}.\]
As in Lemma 3.9, we can obtain
\[\left(\begin{array}{c}\int_{\mathbb{R}^{3}}v_{3}\tilde{f}(\eta,v)\cdot \mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})dv\\ \int_{\mathbb{R}^{3}}v_{3}\tilde{f}(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{ A}_{32}^{0})dv\\ \int_{\mathbb{R}^{3}}v_{3}\tilde{f}(\eta,v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{B}_{3}^{0}) dv\end{array}\right)=\left(\begin{array}{c}\int_{\mathbb{R}^{3}}v_{3} \tilde{f}(d_{1},v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{A}_{31}^{0})(d_{1})dv\\ \int_{\mathbb{R}^{3}}v_{3}\tilde{f}(d_{1},v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{ A}_{32}^{0})(d_{1})dv\\ \int_{\mathbb{R}^{3}}v_{3}\tilde{f}(d_{1},v)\cdot\mathbf{L}_{0}^{-1}(\mathcal{ B}_{3}^{0})(d_{1})dv\end{array}\right),\]
which, together with (3.68)-(3.70), yields that
\[|(\mu(T^{0})\tilde{b}_{1},\mu(T^{0})\tilde{b}_{2},\kappa(T^{0})\tilde{c})( \eta)|\leq C\Big{\{}\|w_{l}\bar{f}_{d_{2}}(d_{1})\|_{L^{\infty}_{v}}+|w_{l} \bar{f}_{d_{1}}(d_{1})|_{L^{\infty}(\gamma_{+})}\Big{\}}+C\|(\mathbf{I}- \mathbf{P}_{0})\tilde{f}(\eta)\|_{\nu}. \tag{3.95}\]
Integrating (3.95) over \([0,d_{1}]\), using (3.87), (3.94), one has
\[\int_{0}^{d_{1}}|(\tilde{b}_{1},\tilde{b}_{2},\tilde{c})(\eta)|^{2}d\eta\leq C \|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L^{\infty}_{\eta,v}}^{2}\cdot d_{1}^{-1}, \quad\mathfrak{q}\geq 3. \tag{3.96}\]
Multiplying (3.93) by \(v_{3}\sqrt{\mu_{0}}\), we have that
\[\frac{d}{d\eta}\int_{\mathbb{R}^{3}}\tilde{f}(\eta,v)\cdot v_{3}^{2}\sqrt{\mu _{0}}dv=0.\]
Integrating the above equation over \([\eta,d_{1}]\) and using (3.67), one obtains
\[\tilde{a}(\eta)=-2T^{0}\tilde{c}(\eta)-\frac{1}{\rho^{0}T^{0}}\int_{\mathbb{R }^{3}}(\mathbf{I}-\mathbf{P}_{0})\tilde{f}(\eta,v)\cdot v_{3}^{2}\sqrt{\mu_{0 }}dv+\frac{1}{\rho^{0}T^{0}}\int_{\mathbb{R}^{3}}\tilde{f}(d_{1},v)\cdot v_{3} ^{2}\sqrt{\mu_{0}}dv. \tag{3.97}\]
Using (3.87), (3.94) and (3.96), one can get that
\[\int_{0}^{d_{1}}|\tilde{a}(\eta)|^{2}d\eta\leq C\|(1+\eta)^{\mathfrak{q}}w_{l+ 4}g\|_{L^{\infty}_{\eta,v}}^{2}\cdot d_{1}^{-1},\quad\text{for $\mathfrak{q}\geq 3$},\]
which, together with (3.94) and (3.96), yields that
\[\int_{0}^{d_{1}}\int_{\mathbb{R}^{3}}\nu^{0}|\tilde{f}(\eta,v)|^{2}dvd\eta \leq C\|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L^{\infty}_{\eta,v}}^{2}\cdot d_{1} ^{-1},\quad\mathfrak{q}\geq 3. \tag{3.98}\]
_Step 2. Convergence in \(L^{\infty}\)-norm._ We shall use \(t_{k}=t_{k}(t,\eta,v),X_{cl}(s;t,\eta,v),\eta_{k}=\eta_{k}(\eta,v)\) to be the back-time cycles defined for domain \([0,d_{1}]\times\mathbb{R}^{3}\). For later use, we denote \(\tilde{h}:=w_{l}\tilde{f}\). Let \((\eta,v)\in[0,d_{1}]\times\mathbb{R}^{3}\backslash(\gamma_{0}\cup\gamma_{-})\), it follows from (3.93) that
\[\tilde{h}(\eta,v)=e^{-\hat{\nu}(v)(t-t_{k})}\tilde{h}(d_{1},v_{k-1})+\sum_{i=0 }^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}(v)(t-s)}(1+|v|^{2})^{\frac{|s|}{2}} K_{w_{l}}^{0}\tilde{h}(X_{cl}(s),v_{i})ds, \tag{3.99}\]
with \(k=1\) for \(v_{0,3}<0\), and \(k=2\) for \(v_{0,3}>0\). We will use this summation convention in the remainder of this proof. We always have
\[|e^{-\hat{\nu}(v)(t-t_{k})}\tilde{h}(d_{1},v_{k-1})| \leq C\Big{(}\|w_{l}\bar{f}_{d_{2}}(d_{1})\|_{L^{\infty}_{v}}+|w_{l} \bar{f}_{d_{1}}(d_{1})|_{L^{\infty}(\gamma_{+})}\Big{)}\] \[\leq C\|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L^{\infty}_{\eta,v}} \cdot d_{1}^{-1},\quad\mathfrak{q}\geq 3. \tag{3.100}\]
For the second term on RHS of (3.99), we use (3.99) again to obtain
\[\left|\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}K_{w_{l}}^{0}\tilde{h}(X_{cl}(s),v_{i})ds\right|\] \[\leq\left|\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}K_{w_{l}}^{0,c}\tilde{h}(X_{cl}(s),v_{i})ds\right|\] \[+\left|\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}K_{w_{l}}^{0,m}\tilde{h}(X_{cl}(s),v_{i})ds\right|\] \[\leq\frac{1}{4}\|\tilde{h}\|_{L_{\eta,v}^{\infty}}+C\|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L_{\eta,v}^{\infty}}\cdot d_{1}^{-1}\] \[+\Big{|}\sum_{i=0}^{k-1}\int_{t_{i+1}}^{t_{i}}e^{-\hat{\nu}(v)(t-s)}(1+|v|^{2})^{\frac{|\kappa|}{2}}\int_{\mathbb{R}^{3}}k_{w_{l}}^{0,c}(v_{i},v^{\prime})(1+|v^{\prime}|^{2})^{\frac{|\kappa|}{2}}\] \[\quad\times\sum_{j=0}^{k^{\prime}-1}\int_{t_{j+1}^{\prime}}^{t_{j}^{\prime}}e^{-\hat{\nu}(v^{\prime})(s-s_{1})}\int_{\mathbb{R}^{3}}k_{w_{l}}^{0,c}(v_{j}^{\prime},v^{\prime\prime})\tilde{h}(X_{cl}^{\prime}(s_{1}),v^{\prime\prime})dv^{\prime\prime}ds_{1}dv^{\prime}ds\Big{|}, \tag{3.101}\]
where we have used (3.100) and denote \(X_{cl}^{\prime}(s_{1})=X_{cl}(s_{1};s,X_{cl}(s),v^{\prime})\), \(t_{j}^{\prime}=t_{j}^{\prime}(s_{1};s,X_{cl}(s),v^{\prime})\) and \(v_{j}^{\prime}\) to be the back-time cycle of \((s,X_{cl}(s),v^{\prime})\). Then, by the same arguments as in Lemma 3.3, we get
\[\|\tilde{h}\|_{L^{\infty}([0,d_{1}]\times\mathbb{R}^{3})}+|\tilde {h}(0)|_{L^{\infty}(\gamma_{+})}\] \[\leq\frac{1}{2}(\|\tilde{h}\|_{L^{\infty}([0,d_{1}]\times\mathbb{ R}^{3})}+|\tilde{h}(0)|_{L^{\infty}(\gamma_{+})})\] \[\quad+Cd_{1}^{-1}\|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L_{\eta,v}^ {\infty}}+C\|(\nu^{0})^{\frac{1}{2}}\tilde{f}\|_{L^{2}([0,d_{1}]\times\mathbb{ R}^{3})}\] \[\leq Cd_{1}^{-\frac{1}{2}}\|(1+\eta)^{\mathfrak{q}}w_{l+4}g\|_{L_{ \eta,v}^{\infty}},\quad\mathfrak{q}\geq 3. \tag{3.102}\]
With the help of (3.102), there exists a function \(f(\eta,v)\) with \((\eta,v)\in\mathbb{R}_{+}\times\mathbb{R}^{3}\) so that \(\|w_{l}(\bar{f}_{d}-f)\|_{L^{\infty}([0,d]\times\mathbb{R}^{3})}\to 0\) as \(d\to\infty\). The uniform bound (3.4) follows from (3.87) and the strong convergence in \(L_{\eta,v}^{\infty}\). It is direct to see that \(f(\eta,v)\) solves (3.7). The continuity of \(f\) follows directly from the \(L_{\eta,v}^{\infty}\)-convergence and the continuity of \(\bar{f}_{d}\).
For the uniqueness, let \(\mathbf{f}_{1},\mathbf{f}_{2}\) be two solutions of (3.7) with the bound (3.4), then it holds that
\[\begin{cases}v_{3}\partial_{\eta}(\mathbf{f}_{1}-\mathbf{f}_{2})+\mathbf{L}_{0 }(\mathbf{f}_{1}-\mathbf{f}_{2})=0,\\ \mathbf{f}_{i}(0,v)|_{v_{3}>0}=\mathbf{f}_{i}(0,Rv),\ i=1,2,\\ \lim_{\eta\to\infty}\mathbf{f}_{i}(\eta,v)=0,\ i=1,2.\end{cases} \tag{3.103}\]
Multiplying (3.103) by \((\mathbf{f}_{1}-\mathbf{f}_{2})\), it is direct to prove that
\[\int_{0}^{\infty}\|(\mathbf{I}-\mathbf{P}_{0})(\mathbf{f}_{1}-\mathbf{f}_{2}) \|_{\nu}^{2}d\eta=0.\]
That is, \((\mathbf{f}_{1}-\mathbf{f}_{2})=\mathbf{P}_{0}(\mathbf{f}_{1}-\mathbf{f}_{2})\). Then by the same arguments as (3.72)-(3.73), one has that
\[\int_{0}^{\infty}\|\mathbf{P}_{0}(\mathbf{f}_{1}-\mathbf{f}_{2})\|_{L_{v}^{2}}^{2}d\eta=0.\]
Thus, we prove \(\mathbf{f}_{1}\equiv\mathbf{f}_{2}\).
Finally, let \(\mathfrak{f}:=f+\Upsilon(\eta)\,f_{b}(v)\); then it is direct to check that \(\mathfrak{f}\) solves (3.1). The proof of Theorem 3.1 is completed.
## 4. Hilbert Expansions for Boltzmann Equation of Soft Potentials
In this section, we aim to construct solutions of the Boltzmann equation for soft potentials through a Hilbert expansion with multiple scales.
### Linear parts of Hilbert expansion
We shall construct the soft Boltzmann solutions in the form of a Hilbert expansion with multiple scales. Recall \(\varpi_{\mathfrak{t}}\) in (1.29); we define the velocity weight functions
\[\tilde{w}_{\kappa_{i}}(v)=\varpi_{\kappa_{i}}(v)\mu^{-\mathfrak{a}_{i}},\quad \mathfrak{w}_{\bar{\kappa}_{i}}(v)=\varpi_{\bar{\kappa}_{i}}(v)\mu_{0}^{- \mathfrak{a}_{i}}\,\text{and}\,\,\mathfrak{w}_{\hat{\kappa}_{i}}(v)=\varpi_{ \hat{\kappa}_{i}}(v)\mu_{0}^{-\mathfrak{a}_{i}}, \tag{4.1}\]
for constants \(\kappa_{i},\bar{\kappa}_{i},\hat{\kappa}_{i}\geq 0\), \(1\leq i\leq N\) and \(0\leq\mathfrak{a}_{i}<\frac{1}{2}\). Note that the weight function \(\tilde{w}_{\kappa_{i}}\) depends on \((t,x)\), while \(\mathfrak{w}_{\bar{\kappa}_{i}}\) and \(\mathfrak{w}_{\hat{\kappa}_{i}}\) depend on \((t,x_{{}_{\shortparallel}})\). For later use, we define
\[\hat{x}=(x_{{}_{\shortparallel}},\eta)\in\mathbb{R}_{+}^{3},\quad\nabla_{\hat{x}}:=(\nabla_{{}_{\shortparallel}},\partial_{\eta}),\]
and recall \(\bar{x}=(x_{{}_{\shortparallel}},y)\in\mathbb{R}_{+}^{3},\ \nabla_{\bar{x}}=(\nabla_{{}_{\shortparallel}},\partial_{y})\), and the weighted \(L_{l}^{2}\)-norm with \((1+y)^{l}\) weight.
**Proposition 4.1**.: _Let \(\tau^{\delta}>0\) be the life-span of the compressible Euler equations. Let \(0\leq\mathfrak{a}_{i}<\frac{1}{2}\) in (4.1) and \(\mathfrak{a}_{i}>\mathfrak{a}_{i+1}\). Let \(s_{0},s_{i},\bar{s}_{i},\hat{s}_{i},\zeta_{i}\in\mathbb{N}_{+}\), \(\kappa_{i},\bar{\kappa}_{i},\hat{\kappa}_{i}\in\mathbb{R}_{+}\) for \(1\leq i\leq N\); and define \(l_{j}^{i}:=\bar{l}_{i}+2(\bar{s}_{i}-j)\) for \(1\leq i\leq N\), \(0\leq j\leq\bar{s}_{i}\). The parameters \(s_{i},\bar{s}_{i},\hat{s}_{i}\) are chosen such that_
\[s_{0}\geq s_{1}+\mathfrak{b}+6,\quad s_{1}=\bar{s}_{1}=\hat{s}_{ 1}\gg 1;\] \[s_{1}>s_{i}>\bar{s}_{i}>\hat{s}_{i}\geq s_{i+1}>\bar{s}_{i+1}> \hat{s}_{i+1}\geq...\gg 1,\,\,i=2,...,N-1;\] \[s_{i+1}\leq\min\{\hat{s}_{i},\frac{1}{2}\bar{s}_{i}-3\},\,\,\bar{ s}_{i+1}\leq s_{i+1}-8-\mathfrak{b},\,\,\hat{s}_{i+1}\leq\frac{1}{2}\bar{s}_{i+1} -2-\mathfrak{b},\,\,i=1,...,N-1, \tag{4.2}\]
_and taken \(l_{j}^{i}=\bar{l}_{j}+2(\bar{s}_{i}-j)\) with \(0\leq j\leq\bar{s}_{i}\) so that_
\[l_{j}^{N}\gg 2\mathfrak{b}\quad\text{and}\quad l_{j}^{i}\geq 2l_{j}^{i+1}+18+2 \mathfrak{b},\,\,\text{for}\,\,1\leq i\leq N-1, \tag{4.3}\]
_and_
\[\kappa_{i}\gg\bar{\kappa}_{i}\gg\hat{\kappa}_{i}\gg\kappa_{i+1} \gg\bar{\kappa}_{i+1}\gg\hat{\kappa}_{i+1}\gg 1,\] \[\zeta_{i+1}-\zeta_{i}\geq\mathfrak{b}+3\quad\text{and}\quad\zeta _{1}\gg\zeta_{2}...\gg\zeta_{i}...\gg\mathfrak{b}. \tag{4.4}\]
_Let \((\rho_{i},u_{i},\theta_{i})(0)\) be the initial data for interior expansions, and \((\bar{u}_{i,\bar{\nu}},\bar{\theta}_{i})(0)\) be the initial data of viscous boundary layer. Assume_
\[\sum_{i=0}^{N}\Big{\{}\sum_{\gamma+\beta\leq s_{i}}\|\partial_{t}^{\gamma} \nabla_{x}^{\beta}(\rho_{i},u_{i},\theta_{i})(0)\|_{L_{x}^{2}}+\sum_{j=0}^{ \bar{s}_{i}}\sum_{j=2\gamma+\beta}\|\partial_{t}^{\gamma}\nabla_{\bar{x}}^{ \beta}(\bar{u}_{i,\bar{\nu}},\bar{\theta}_{i})(0)\|_{L_{\bar{l}_{j}^{i}}^{2}}^{ 2}\Big{\}}<\infty. \tag{4.5}\]
_We also assume that the compatibility conditions for the initial data \((\rho_{i},u_{i},\theta_{i})(0)\) and \((\bar{u}_{i,\bar{\nu}},\bar{\theta}_{i})(0)\) are satisfied. Then there exist solutions \(F_{i}=\sqrt{\mu}f_{i}\), \(\bar{F}_{i}=\sqrt{\mu_{0}}\bar{f}_{i}\), \(\hat{F}_{i}=\sqrt{\mu_{0}}\hat{f}_{i}\) to the interior expansions (1.7), the viscous boundary layer (1.16) and the Knudsen layer solutions (1.22) over the time interval \(t\in[0,\tau^{\delta}]\), so that the specular boundary condition is satisfied in the following form:_
\[(F_{i}+\bar{F}_{i}+\hat{F}_{i})(t,x_{\bar{\nu}},0,v_{\bar{\nu}},v_{3})|_{v_{3}>0}=(F_{i}+\bar{F}_{i}+\hat{F}_{i})(t,x_{\bar{\nu}},0,v_{\bar{\nu}},-v_{3}).\]
_Moreover, it holds that_
\[\sup_{t\in[0,\tau^{\delta}]}\sum_{i=1}^{N}\Bigg\{\sum_{\gamma+\beta\leq s_{i}}\|\tilde{w}_{\kappa_{i}}\partial_{t}^{\gamma}\nabla_{x}^{\beta}f_{i}(t)\|_{L_{x}^{2}L_{v}^{\infty}}+\sum_{j=0}^{\bar{s}_{i}}\sum_{j=2\gamma+\beta}\|\mathfrak{w}_{\bar{\kappa}_{i}}\partial_{t}^{\gamma}\nabla_{\bar{x}}^{\beta}\bar{f}_{i}(t)\|_{L_{\bar{l}_{j}^{i}}^{2}L_{v}^{\infty}}\] \[+\sum_{\gamma+\beta\leq\hat{s}_{i}}\|(1+\eta)^{\zeta_{i}}\mathfrak{w}_{\hat{\kappa}_{i}}\partial_{t}^{\gamma}\nabla_{\hat{x}}^{\beta}\hat{f}_{i}(t)\|_{L_{x,v}^{\infty}\cap L_{x_{\bar{\nu}}}^{2}L_{v}^{\infty}}\Bigg\}\] \[\leq C\Bigg(\tau^{\delta},\|(\varphi_{0},\Phi_{0},\vartheta_{0})\|_{H^{s_{0}}}+\sum_{i=0}^{N}\sum_{\gamma+\beta\leq s_{i}}\|\partial_{t}^{\gamma}\nabla_{x}^{\beta}(\rho_{i},u_{i},\theta_{i})(0)\|_{L_{x}^{2}}\] \[+\sum_{i=0}^{N}\sum_{j=0}^{\bar{s}_{i}}\sum_{j=2\gamma+\beta}\|\partial_{t}^{\gamma}\nabla_{\bar{x}}^{\beta}(\bar{u}_{i,\bar{\nu}},\bar{\theta}_{i})(0)\|_{L_{\bar{l}_{j}^{i}}^{2}}^{2}\Bigg). \tag{4.6}\]
**Proof.** With the help of Proposition 2.5 and Theorem 3.1, by arguments similar to those in [20, Proposition 5.1], one can construct \(f_{i},\bar{f}_{i}\) and \(\hat{f}_{i}\).
Here we briefly explain how Proposition 2.5 and Theorem 3.1 are used in the soft potential case. Noting that \(f_{i},\bar{f}_{i}\) are smooth in \((t,x,v)\) and \((t,\bar{x},v)\), respectively, by Proposition 2.5 one always obtains exponential decay in \(v\), i.e.
\[|\partial_{t}\nabla_{x}f_{i}|\lesssim\mu^{\frac{q}{2}},\quad|\partial_{t} \nabla_{\bar{x}}\bar{f}_{i}|\lesssim\mu_{0}^{\frac{q}{2}}\quad\text{for $q\in(0,1)$}.\]
With the help of Theorem 3.1, we can construct the solutions for the Knudsen boundary layers \(\hat{f}_{i}\) with sufficiently fast polynomial decay in space.
### Estimates on the remainder
We first consider the \(L^{2}\)-energy estimate. Recalling the definition of \(f_{R}^{\varepsilon}\) in (1.26), we rewrite the equation for \(f_{R}^{\varepsilon}\) as
\[\partial_{t}f_{R}^{\varepsilon}+v\cdot\nabla_{x}f_{R}^{ \varepsilon}+\frac{1}{\varepsilon^{2}}\mathbf{L}f_{R}^{\varepsilon}\] \[=-\frac{\{\partial_{t}+v\cdot\nabla_{x}\}\sqrt{\mu}}{\sqrt{\mu}} f_{R}^{\varepsilon}+\varepsilon^{3}\frac{1}{\sqrt{\mu}}Q(\sqrt{\mu}f_{R}^{ \varepsilon},\sqrt{\mu}f_{R}^{\varepsilon})\] \[\quad+\sum_{i=1}^{N}\varepsilon^{i-2}\frac{1}{\sqrt{\mu}}\Big{\{} Q(F_{i}+\bar{F}_{i}+\hat{F}_{i},\sqrt{\mu}f_{R}^{\varepsilon})+Q(\sqrt{\mu}f_{R}^{ \varepsilon},F_{i}+\bar{F}_{i}+\hat{F}_{i})\Big{\}}\] \[\quad+\frac{1}{\sqrt{\mu}}R^{\varepsilon}+\frac{1}{\sqrt{\mu}} \bar{R}^{\varepsilon}+\frac{1}{\sqrt{\mu}}\hat{R}^{\varepsilon}, \tag{4.7}\]
where
\[R^{\varepsilon}=-\varepsilon^{N-6}\{\partial_{t}+v\cdot\nabla_{x}\}(F_{N-1}+ \varepsilon F_{N})+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j\geq N+1\\ 1\leq i,j\leq N\end{subarray}}\varepsilon^{i+j-N-1}Q(F_{i},F_{j}), \tag{4.8}\]
\[\bar{R}^{\varepsilon}=-\varepsilon^{N-6}\{\partial_{t}+v_{{}_{ \shortmid}}\cdot\nabla_{{}_{\shortmid}}\}(\bar{F}_{N-1}+\varepsilon\bar{F}_{N })-\varepsilon^{N-6}v_{3}\partial_{y}\bar{F}_{N}\] \[\quad+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j\geq N+1\\ 1\leq i,j\leq N,\ 1\leq l\leq b\end{subarray}}\varepsilon^{i+j-N-1}\cdot\frac{y^{l}}{l!} \big{[}Q(\partial_{3}^{l}\mu_{0},\bar{F}_{j})+Q(\bar{F}_{j},\partial_{3}^{l} \mu_{0})\big{]}\] \[\quad+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j\geq N+1\\ 1\leq i,j\leq N\end{subarray}}\varepsilon^{i+j-N-1}\big{[}Q(F_{i}^{0},\bar{F}_ {j})+Q(\bar{F}_{j},F_{i}^{0})+Q(\bar{F}_{i},\bar{F}_{j})\big{]}\] \[\quad+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j+l\geq N+1\\ 1\leq i,j\leq N,\ 1\leq l\leq b\end{subarray}}\varepsilon^{i+j-N-1}\cdot\frac{y^{l}}{l!} \big{[}Q(\partial_{3}^{l}F_{i}^{0},\bar{F}_{j})+Q(\bar{F}_{j},\partial_{3}^{l }F_{i}^{0})\big{]}\] \[\quad+\varepsilon^{b-5}\frac{y^{b+1}}{(b+1)!}\sum_{j=1}^{N} \varepsilon^{j-1}[Q(\partial_{3}^{b+1}\bar{\mu},\bar{F}_{j})+Q(\bar{F}_{j}, \partial_{3}^{b+1}\bar{\mu})]\] \[\quad+\varepsilon^{b-4}\frac{y^{b+1}}{(b+1)!}\sum_{i,j=1}^{N} \varepsilon^{i+j-2}\big{[}Q(\partial_{3}^{b+1}\mathfrak{F}_{i},\bar{F}_{j})+Q (\bar{F}_{j},\partial_{3}^{b+1}\mathfrak{F}_{i})\big{]}, \tag{4.9}\]
and
\[\hat{R}^{\varepsilon}=-\varepsilon^{N-6}\{\partial_{t}+v_{{}_{ \shortmid}}\cdot\nabla_{{}_{\shortmid}}\}(\hat{F}_{N-1}+\varepsilon\hat{F}_{N})\] \[\quad+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+2l\geq N+1\\ 1\leq j\leq N,1\leq l\leq b\end{subarray}}\varepsilon^{i+2l-N-1}\cdot\frac{\eta }{l!}\big{[}Q(\partial_{3}^{l}\mu_{0},\hat{F}_{j})+Q(\hat{F}_{j},\partial_{3}^ {l}\mu_{0})\big{]}\] \[\quad+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j\geq N+1\\ 1\leq i,j\leq N\end{subarray}}\varepsilon^{i+j-N-1}\big{[}Q(F_{i}^{0}+\bar{F}_{i}^ {0},\hat{F}_{j})+Q(\hat{F}_{j},F_{i}^{0}+\bar{F}_{i}^{0})+Q(\hat{F}_{i},\hat{F}_ {j})\big{]}\]
\[+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j+2l>N+1\\ 1\leq i,j\leq N,1\leq l\leq b\end{subarray}}\varepsilon^{i+j+l-N-1}\cdot\frac{ \eta^{l}}{l!}\big{[}Q(\partial_{3}^{l}F_{i}^{0},\hat{F}_{j})+Q(\hat{F}_{j}, \partial_{3}^{l}F_{i}^{0})\big{]}\] \[+\varepsilon^{N-6}\sum_{\begin{subarray}{c}i+j+l>N+1\\ 1\leq i,j\leq N,1\leq l\leq b\end{subarray}}\varepsilon^{i+j+l-N-1}\cdot\frac{ \eta^{l}}{l!}\big{[}Q(\partial_{y}^{l}\bar{F}_{i}^{0},\hat{F}_{j})+Q(\hat{F}_{ j},\partial_{y}^{l}\bar{F}_{i}^{0})\big{]}\] \[+\varepsilon^{2b-4}\frac{\eta^{b+1}}{(b+1)!}\sum_{j=1}^{N} \varepsilon^{j-1}\big{[}Q(\partial_{3}^{b+1}\tilde{\mu},\hat{F}_{j})+Q(\hat{F }_{j},\partial_{3}^{b+1}\tilde{\mu})\big{]}\] \[+\varepsilon^{2b-3}\frac{\eta^{b+1}}{(b+1)!}\sum_{i,j=1}^{N} \varepsilon^{i+j-2}\big{[}Q(\partial_{3}^{b+1}\mathfrak{F}_{i},\hat{F}_{j})+Q (\hat{F}_{j},\partial_{3}^{b+1}\mathfrak{F}_{i})\big{]}\] \[+\varepsilon^{b-4}\frac{\eta^{b+1}}{(b+1)!}\sum_{i,j=1}^{N} \varepsilon^{i+j-2}\big{[}Q(\partial_{3}^{b+1}\mathfrak{F}_{i},\hat{F}_{j})+Q (\hat{F}_{j},\partial_{3}^{b+1}\mathfrak{F}_{i})\big{]}, \tag{4.10}\]
where \(\partial_{3}^{l}\mu_{0},\partial_{3}^{b+1}\tilde{\mu}\), \(\partial_{3}^{l}F_{i}^{0},\partial_{3}^{b+1}\mathfrak{F}_{i}\) and \(\partial_{y}^{l}\bar{F}_{i}^{0},\partial_{y}^{b+1}\bar{\mathfrak{F}}_{i}\) are defined in (1.19) and (1.23). From Proposition 4.1, we know that \(f_{R}^{\varepsilon}\) satisfies the specular reflection boundary condition
\[f_{R}^{\varepsilon}(t,x_{1},x_{2},0,v_{1},v_{2},v_{3})|_{v_{3}>0}=f_{R}^{ \varepsilon}(t,x_{1},x_{2},0,v_{1},v_{2},-v_{3}). \tag{4.11}\]
**Lemma 4.2**.: _Recall \(\alpha\) in (1.28). Let \(0<\frac{1}{2\alpha}(1-\alpha)<\mathfrak{a}_{i}<\frac{1}{2}\), \(\mathfrak{k}\geq 18\), \(N\geq 6\) and \(\mathfrak{b}\geq 5\). Let \(\tau^{\delta}>0\) be the life-span of the compressible Euler solution; then there exists a suitably small constant \(\varepsilon_{0}>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0})\), it holds that_
\[\frac{d}{dt}\|f_{R}^{\varepsilon}(t)\|_{L^{2}}^{2}+\frac{c_{0}}{2 \varepsilon^{2}}\|\{\mathbf{I}-\mathbf{P}\}f_{R}^{\varepsilon}(t)\|_{\nu}^{2}\] \[\leq C\big{\{}1+\varepsilon^{8}\|h_{R}^{\varepsilon}(t)\|_{L^{ \infty}}^{2}\big{\}}\cdot(\|f_{R}^{\varepsilon}(t)\|_{L^{2}}^{2}+1),\text{ for }t\in[0,\tau^{\delta}]. \tag{4.12}\]
**Proof.** Multiplying (4.7) by \(f_{R}^{\varepsilon}\) and integrating over \(\mathbb{R}_{+}^{3}\times\mathbb{R}^{3}\), one obtains that
\[\frac{1}{2}\frac{d}{dt}\|f_{R}^{\varepsilon}\|_{L^{2}}^{2}+\frac{c_{0}}{2\varepsilon^{2}}\|\{\mathbf{I}-\mathbf{P}\}f_{R}^{\varepsilon}\|_{\nu}^{2}\] \[=-\int_{\mathbb{R}_{+}^{3}}\int_{\mathbb{R}^{3}}\frac{\{\partial_{t}+v\cdot\nabla_{x}\}\sqrt{\mu}}{\sqrt{\mu}}|f_{R}^{\varepsilon}|^{2}+\varepsilon^{3}\int_{\mathbb{R}_{+}^{3}}\int_{\mathbb{R}^{3}}\frac{1}{\sqrt{\mu}}Q(\sqrt{\mu}f_{R}^{\varepsilon},\sqrt{\mu}f_{R}^{\varepsilon})f_{R}^{\varepsilon}\] \[+\int_{\mathbb{R}_{+}^{3}}\int_{\mathbb{R}^{3}}\sum_{i=1}^{N}\varepsilon^{i-2}\frac{1}{\sqrt{\mu}}\Big\{Q(F_{i}+\bar{F}_{i}+\hat{F}_{i},\sqrt{\mu}f_{R}^{\varepsilon})+Q(\sqrt{\mu}f_{R}^{\varepsilon},F_{i}+\bar{F}_{i}+\hat{F}_{i})\Big\}f_{R}^{\varepsilon}\] \[+\int_{\mathbb{R}_{+}^{3}}\int_{\mathbb{R}^{3}}\bigg\{\frac{1}{\sqrt{\mu}}R^{\varepsilon}+\frac{1}{\sqrt{\mu}}\bar{R}^{\varepsilon}+\frac{1}{\sqrt{\mu}}\hat{R}^{\varepsilon}\bigg\}\,f_{R}^{\varepsilon}, \tag{4.13}\]
where we have used (4.11) so that the boundary term vanishes.
Recall the definition of \(h_{R}^{\varepsilon}\) in (1.29). For any \(\lambda>0\), motivated by [18], we take \(\mathfrak{k}\geq 18\) to get
\[\int_{\mathbb{R}_{+}^{3}}\int_{\mathbb{R}^{3}}\frac{\{\partial_{t}+v\cdot\nabla_{x}\}\sqrt{\mu}}{\sqrt{\mu}}|f_{R}^{\varepsilon}|^{2}dvdx\] \[\leq C\int_{\mathbb{R}_{+}^{3}}\int_{\mathbb{R}^{3}}|(\nabla_{x}\rho,\nabla_{x}\mathfrak{u},\nabla_{x}T)|(1+|v|)^{3}|f_{R}^{\varepsilon}|^{2}dvdx\] \[\leq C\left\{\int_{\mathbb{R}_{+}^{3}}\int_{|v|\geq\frac{\lambda}{\varepsilon^{1/3}}}+\int_{\mathbb{R}_{+}^{3}}\int_{|v|\leq\frac{\lambda}{\varepsilon^{1/3}}}\right\}(\cdots)dvdx\] \[\leq C\frac{\lambda}{\varepsilon^{2}}\|\{\mathbf{I}-\mathbf{P}\}f_{R}^{\varepsilon}\|_{\nu}^{2}+C_{\lambda}(1+\varepsilon^{4}\|h_{R}^{\varepsilon}\|_{L^{\infty}})\|f_{R}^{\varepsilon}\|_{L^{2}},\]
where we have used
\[\int_{\mathbb{R}_{+}^{3}}\int_{|v|\leq\frac{\lambda}{\varepsilon^{1/3}}}|\nabla_{x}(\rho,\mathfrak{u},T)|(1+|v|)^{3}|f_{R}^{\varepsilon}|^{2}dvdx\] \[\leq\int_{\mathbb{R}_{+}^{3}}\int_{|v|\leq\frac{\lambda}{\varepsilon^{1/3}}}|\nabla_{x}(\rho,\mathfrak{u},T)|(1+|v|)^{3}\Big\{|\mathbf{P}f_{R}^{\varepsilon}|^{2}+|(\mathbf{I}-\mathbf{P})f_{R}^{\varepsilon}|^{2}\Big\}dvdx\] \[\leq C_{\lambda}\|f_{R}^{\varepsilon}\|_{L^{2}}^{2}+C\|(\mathbf{I}-\mathbf{P})f_{R}^{\varepsilon}\|_{\nu}^{2}\cdot\max_{|v|\leq\frac{\lambda}{\varepsilon^{1/3}}}(1+|v|)^{3-\kappa}\] \[\leq C_{\lambda}\|f_{R}^{\varepsilon}\|_{L^{2}}^{2}+C\frac{\lambda}{\varepsilon^{2}}\|(\mathbf{I}-\mathbf{P})f_{R}^{\varepsilon}\|_{\nu}^{2},\]
and
\[\int_{\mathbb{R}_{+}^{3}}\int_{|v|\geq\frac{\lambda}{\varepsilon^{1/3}}}|\nabla_{x}(\rho,\mathfrak{u},T)|(1+|v|)^{3}|f_{R}^{\varepsilon}|^{2}dvdx\] \[\leq C\|f_{R}^{\varepsilon}\|_{L^{2}}\|h_{R}^{\varepsilon}\|_{L^{\infty}}\cdot\Big\{\int_{|v|\geq\frac{\lambda}{\varepsilon^{1/3}}}(1+|v|)^{6-2\mathfrak{k}}dv\Big\}^{\frac{1}{2}}\] \[\leq C_{\lambda}\varepsilon^{\frac{1}{3}\mathfrak{k}-2}\|f_{R}^{\varepsilon}\|_{L^{2}}\|h_{R}^{\varepsilon}\|_{L^{\infty}}\leq C_{\lambda}\varepsilon^{4}\|f_{R}^{\varepsilon}\|_{L^{2}}\|h_{R}^{\varepsilon}\|_{L^{\infty}}.\]
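The last inequality above is pure exponent bookkeeping and is exactly where the hypothesis of Lemma 4.2 enters:
\[\tfrac{1}{3}\mathfrak{k}-2\geq 4\iff\mathfrak{k}\geq 18\,,\]
so the large-velocity tail contributes at order \(\varepsilon^{4}\).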
Using Lemma 2.3, one has
\[\varepsilon^{3}\int_{\mathbb{R}^{3}_{+}}\int_{\mathbb{R}^{3}} \frac{1}{\sqrt{\mu}}Q(\sqrt{\mu}f_{R}^{\varepsilon},\sqrt{\mu}f_{R}^{ \varepsilon})f_{R}^{\varepsilon}dvdx\] \[=\varepsilon^{3}\int_{\mathbb{R}^{3}_{+}}\int_{\mathbb{R}^{3}} \frac{1}{\sqrt{\mu}}Q(\sqrt{\mu}f_{R}^{\varepsilon},\sqrt{\mu}f_{R}^{ \varepsilon})\{\mathbf{I}-\mathbf{P}\}f_{R}^{\varepsilon}dvdx\] \[\leq\varepsilon^{3}\|\{\mathbf{I}-\mathbf{P}\}f_{R}^{\varepsilon }\|_{\nu}\|h_{R}^{\varepsilon}\|_{L^{\infty}}\|f_{R}^{\varepsilon}\|_{L^{2}}\] \[\leq\frac{\lambda}{\varepsilon^{2}}\|\{\mathbf{I}-\mathbf{P}\}f_ {R}^{\varepsilon}\|_{\nu}^{2}+C_{\lambda}\varepsilon^{8}\|h_{R}^{\varepsilon }\|_{L^{\infty}}^{2}\|f_{R}^{\varepsilon}\|_{L^{2}}^{2}.\]
From (4.2), we have
\[s_{N}>\bar{s}_{N}\geq 2\mathfrak{b}+4+\hat{s}_{N},\quad\hat{s}_{N}\geq 1,\]
which, together with Proposition 4.1 and the Sobolev embedding theorem, yields that, for \(1\leq i\leq N\) and \(t\in[0,\tau^{\delta}]\),
\[\sum_{k=0}^{2\mathfrak{b}+2}\Big\{\left\|\tilde{w}_{\kappa_{i}}(v)\nabla_{t,x}^{k}f_{i}(t)\right\|_{L^{2}_{x,v}}+\left\|\tilde{w}_{\kappa_{i}}\nabla_{t,x}^{k}f_{i}(t)\right\|_{L^{\infty}_{x,v}}\Big\}\leq C_{R}(\tau^{\delta}), \tag{4.14}\] \[\sum_{k=0}^{\mathfrak{b}+2}\Bigg\{\left\|\mathfrak{w}_{\hat{\kappa}_{i}}(1+\eta)^{\mathfrak{b}+9}\nabla_{t,\hat{x}}^{k}\hat{f}_{i}(t)\right\|_{L^{2}_{x,v}}+\left\|\mathfrak{w}_{\hat{\kappa}_{i}}(1+\eta)^{\mathfrak{b}+9}\nabla_{t,\hat{x}}^{k}\hat{f}_{i}(t)\right\|_{L^{\infty}_{x,v}}\Bigg\}\leq C_{R}(\tau^{\delta}),\]
where we have denoted
\[C_{R}(\tau^{\delta}):=C\Bigg(\tau^{\delta},\|(\varphi_{0},\Phi_{0},\vartheta_{0})\|_{H^{s_{0}}}+\sum_{i=0}^{N}\sum_{\gamma+\beta\leq s_{i}}\|\partial_{t}^{\gamma}\nabla_{x}^{\beta}(\rho_{i},u_{i},\theta_{i})(0)\|_{L^{2}_{x}}\] \[\qquad\qquad\qquad\qquad+\sum_{i=0}^{N}\sum_{j=0}^{\bar{s}_{i}}\sum_{j=2\gamma+\beta}\|\partial_{t}^{\gamma}\nabla_{\bar{x}}^{\beta}(\bar{u}_{i,\bar{\nu}},\bar{\theta}_{i})(0)\|_{L^{2}_{\bar{l}_{j}^{i}}}^{2}\Bigg).\]
Recall \(\varpi_{\mathbf{t}}\) in (1.28). For \(1\leq i\leq N\), it is clear that
\[\left|\varpi_{\mathbf{t}}(v)\frac{\sqrt{\mu_{0}}}{\sqrt{\mu}}\check{f}_{i}(t,x_{\shortmid},\cdot,v)\right|\leq C_{R}(\tau^{\delta})\,.\]
**Lemma 4.3** ([18]).: _It holds that_
\[|\hat{K}^{m}g(v)|\leq Cm^{3+\kappa}\nu(\mu)\|g\|_{L^{\infty}},\]
_and \(\hat{K}^{c}g(v)=\int_{\mathbb{R}^{3}}l(v,v^{\prime})g(v^{\prime})dv^{\prime}\) where the kernel \(l(v,v^{\prime})\) satisfies_
\[|l(v,v^{\prime})|\leq C_{m}\frac{\exp(-c|v-v^{\prime}|^{2})}{|v-v^{\prime}|(1+|v|+|v^{\prime}|)^{1-\kappa}}.\]
Denoting \(K_{\varpi}g\equiv\varpi_{\mathbf{t}}\hat{K}(\frac{g}{\varpi_{\mathbf{t}}})\), we deduce from (4.7) and (1.29) that
\[\partial_{t}h_{R}^{\varepsilon}+v\cdot\nabla_{x}h_{R}^{ \varepsilon}+\frac{\nu(\mu)}{\varepsilon^{2}}h_{R}^{\varepsilon}-\frac{1}{ \varepsilon^{2}}K_{\varpi}h_{R}^{\varepsilon}\] \[=\sum_{i=1}^{N}\varepsilon^{i-2}\frac{\varpi_{\mathbf{t}}(v)}{ \sqrt{\mu_{M}(v)}}\Big{\{}Q(F_{i}+\bar{F}_{i}+\hat{F}_{i},\frac{\sqrt{\mu_{M} }h_{R}^{\varepsilon}}{\varpi_{\mathbf{t}}})+Q(\frac{\sqrt{\mu_{M}}h_{R}^{ \varepsilon}}{\varpi_{\mathbf{t}}},F_{i}+\bar{F}_{i}+\hat{F}_{i})\Big{\}}\] \[\quad+\varepsilon^{3}\frac{\varpi_{\mathbf{t}}}{\sqrt{\mu_{M}}}Q \Big{(}\frac{\sqrt{\mu_{M}}h_{R}^{\varepsilon}}{\varpi_{\mathbf{t}}},\frac{ \sqrt{\mu_{M}}h_{R}^{\varepsilon}}{\varpi_{\mathbf{t}}}\Big{)}+\frac{\varpi_{ \mathbf{t}}}{\sqrt{\mu_{M}}}\big{[}R^{\varepsilon}+\bar{R}^{\varepsilon}+\hat{ R}^{\varepsilon}\big{]}. \tag{4.18}\]
Using Lemma 4.3, by arguments similar to those in [20, Lemma 6.3] (see also [18, Lemma 2.2]), we can obtain the following \(L^{\infty}\) estimate. We omit the details for simplicity of presentation.
**Lemma 4.4**.: _For \(t\in[0,\tau^{\delta}]\), it holds that_
\[\sup_{0\leq s\leq t}\|\varepsilon^{3}h_{R}^{\varepsilon}(s)\|_{L^{\infty}} \leq C(t)\{\|\varepsilon^{3}h_{R}^{\varepsilon}(0)\|_{L^{\infty}}+ \varepsilon^{N-1}+\varepsilon^{\mathfrak{b}}\}+\sup_{0\leq s\leq t}\|f_{R}^{ \varepsilon}(s)\|_{L^{2}}.\]
### Proof of Theorem 1.1
With Lemma 4.2 and Lemma 4.4, one can close the proof by the same arguments as in [18]. We omit the details for simplicity of presentation. Therefore the proof of Theorem 1.1 is complete.
**Acknowledgments.** Yong Wang's research is partially supported by National Key R&D Program of China No. 2021YFA1000800, National Natural Science Foundation of China Nos. 12022114 and 12288201, CAS Project for Young Scientists in Basic Research, Grant No. YSBR-031, and the Youth Innovation Promotion Association of the Chinese Academy of Sciences, No. 2019002. We thank Weiqiang Wang for his valuable discussions.
**Conflict of interest.** The authors declare that they have no conflict of interest.
|
2309.04625 | Democracy from topology | Chiral form fields in $d$ dimensions can be effectively described as edge
modes of topological Chern-Simons theories in $d+1$ dimensions. At the same
time, manifestly Lorentz-invariant Lagrangian description of such fields
directly in terms of a $d$-dimensional field theory is challenging and requires
introducing nontrivial auxiliary gauge fields eliminated on-shell with extra
gauge symmetries. A recent work by Arvanitakis et al.\ demonstrates
(emphasizing the case of 2d chiral bosons) that the two approaches are related,
and a peculiar reduction on the $(d+1)$-dimensional topological Lagrangian
automatically leads to $d$-dimensional Lagrangians with appropriate sets of
auxiliary fields. We develop this setup in three distinct directions. First, we
demonstrate how arbitrary Abelian self-interactions for chiral forms can be
included using nonlinear boundary terms in the Chern-Simons theory. Second, by
generalizing the Chern-Simons theory to the BF theory, we obtain an analogous
democratic description of non-chiral form fields, where electric and magnetic
potentials appear as explicit dynamical variables. Third, we discuss the
effects of introducing topological interactions in the higher-dimensional bulk,
which produce extra interaction terms in the boundary theory. When applied to a
topological 4-form field in 12 dimensions, this construction results in a
democratic description of the 3-form gauge field of the 11-dimensional
supergravity. | Oleg Evnin, Euihun Joung, Karapet Mkrtchyan | 2023-09-08T22:31:56Z | http://arxiv.org/abs/2309.04625v1 | # Democracy from topology
###### Abstract
Chiral form fields in \(d\) dimensions can be effectively described as edge modes of topological Chern-Simons theories in \(d+1\) dimensions. At the same time, manifestly Lorentz-invariant Lagrangian description of such fields directly in terms of a \(d\)-dimensional field theory is challenging and requires introducing nontrivial auxiliary gauge fields eliminated on-shell with extra gauge symmetries. A recent work by Arvanitakis et al. demonstrates (emphasizing the case of 2d chiral bosons) that the two approaches are related, and a peculiar reduction on the \((d+1)\)-dimensional topological Lagrangian automatically leads to \(d\)-dimensional Lagrangians with appropriate sets of auxiliary fields. We develop this setup in three distinct directions. First, we demonstrate how arbitrary Abelian self-interactions for chiral forms can be included using nonlinear boundary terms in the Chern-Simons theory. Second, by generalizing the Chern-Simons theory to the BF theory, we obtain an analogous democratic description of non-chiral form fields, where electric and magnetic potentials appear as explicit dynamical variables. Third, we discuss the effects of introducing topological interactions in the higher-dimensional bulk, which produce extra interaction terms in the boundary theory. When applied to a topological 4-form field in 12 dimensions, this construction results in a democratic description of the 3-form gauge field of the 11-dimensional supergravity.
## I Introduction
It has been known for a long time [1; 2; 3; 4; 5] that the topological Chern-Simons theory and its BF generalizations can describe (chiral) \(p\)-form degrees of freedom on the boundary. However, the generality and systematics of this approach are not fully understood yet.
While the description of chiral fields as edge modes of topological theory is graceful and simple, the fact that one inevitably starts in a fictitious spacetime of one dimension higher may be seen as a drawback. Attempts to describe chiral fields as Lagrangian theories without introducing extra dimensions, on the other hand, have met difficulties of their own. Early ventures in this direction sacrificed manifest Lorentz invariance [6; 7; 8]. The elegant Pasti-Sorokin-Tonin (PST) approach [9; 10; 11] offers an economical Lorentz-invariant formulation, but suffers from non-polynomial dependence of the action on an auxiliary scalar field, and furthermore encounters difficulties when including self-interactions [11]. (We mention additionally the approach of [12], where chiral fields are necessarily accompanied by decoupled but propagating additional degrees of freedom. See also [13; 14].)
Recently [15], Lorentz-covariant Lagrangians for arbitrary self-interacting chiral \(p\)-forms were found. The description includes a doubled set of gauge fields and an auxiliary scalar, which are gauged on-shell to a single propagating self-interacting chiral \(p\)-form. A comparison of this formalism with other approaches in the literature can be found in [16].
The topological field theory approaches to chiral forms have been pursued historically rather independently of the line of research that builds Lagrangian descriptions of chiral forms using auxiliary fields without introducing extra spacetime dimensions. A bridge connecting the two approaches was set up in a recent work by Arvanitakis et al. [17] who found a reduction procedure1 that allows deriving the boundary theory from the Chern-Simons theory in the bulk. The procedure naturally leads to a boundary theory in the form of [15] (which, for the case of free forms, can be related to PST formulation [19] by integrating out auxiliary gauge fields).
Footnote 1: The reduction procedure of [17] assumes a topologically trivial bulk with a single boundary. The nontrivial features of the bulk theory on manifolds of more complicated topology (see, e.g., [18]) thus do not enter the game in this setting. We thank Massimo Porrati for emphasizing the importance of this point.
Our present purpose is to extend and generalize the formulation of [17] in a few different directions. First, arbitrary Abelian self-interactions can be introduced to
the setup of [17] by adding nonlinear boundary terms to the Chern-Simons action. One thus recovers the full scope of self-interacting theories in [15]. Second, the problem of Lagrangian description of chiral forms is often discussed side-by-side with the problem of 'democratic' description of ordinary (non-chiral) forms, where the dual electric and magnetic potentials appear as explicit dynamical variables. As we shall see, such democratic theories emerge from boundary reductions of the topological BF theory, a cousin of the Chern-Simons theory evoked in [17]. Finally, in the BF setup, it is possible to introduce topological interactions in the bulk. This, correspondingly, affects the boundary theory inducing self-interactions that essentially involve the gauge potential (as opposed to being expressible through the field strength alone). In this way, in particular, one obtains a democratic description of the self-interacting 3-form appearing in the 11-dimensional supergravity.
## II Chiral fields
Here, we give a short derivation similar to that undertaken in [17] for free chiral forms, adding Abelian interactions.
The starting point is the Chern-Simons theory given by the action
\[S=\int_{M}H\wedge\mathrm{d}H \tag{1}\]
(for our purposes, the overall factor, also known as the Chern-Simons level, does not have to be explicit), where \(M\) is a \((d+1=2p+3)\)-dimensional manifold (\(p\) even) with a boundary \(\partial M\) and \(H\) is a \((p+1)\)-form field.
The variation of this Lagrangian contains a boundary term \(\int_{\partial M}\delta H\wedge H\), which would be incompatible with the least action principle. To remedy this inconsistency, we add a boundary term \(-\frac{1}{2}H\wedge\star H\) to the action to obtain
\[S_{\mbox{\tiny free}}=\int_{M}H\wedge\mathrm{d}H-\frac{1}{2}\int_{\partial M} H\wedge\star H\,. \tag{2}\]
The variation is then
\[\delta S_{\mbox{\tiny free}}=2\int_{M}\delta H\wedge\mathrm{d}H-\frac{1}{2} \int_{\partial M}\delta H^{+}\wedge H^{-}\,. \tag{3}\]
Here and in what follows, we use the shorthand notation
\[H^{\pm}=H\pm\star H, \tag{4}\]
and the pullback of \(H\) onto the boundary is denoted by the same symbol \(H\). Note that \(\star\) shall denote throughout the Hodge dual associated with an arbitrary metric on the boundary with Lorentzian signature (the bulk Hodge dual will not appear in the formalism we consider, hence no danger of confusion).
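As a quick consistency check (a standard fact, recorded here for convenience): in \(d=4k+2\) boundary dimensions with Lorentzian signature, the Hodge star squares to \(+1\) on middle-degree forms, \(\star^{2}=(-1)^{(p+1)(d-p-1)+1}=+1\) for \(p=2k\), and therefore
\[\star H^{\pm}=\star H\pm\star^{2}H=\star H\pm H=\pm H^{\pm}\,,\]
i.e., \(H^{+}\) and \(H^{-}\) are the selfdual and antiselfdual projections of \(H\).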
We may impose the Dirichlet boundary condition, \(\delta H^{+}=0\), or the Neumann one, \(H^{-}=0\): \(H^{+}\) and \(H^{-}\) play the roles of 'position' and 'momentum', respectively. The Neumann condition can also be viewed as the dynamical equation with respect to the boundary variation. We shall take the latter point of view, as it is more convenient for introducing interactions.
As discussed in [15; 16], general equations describing self-interactions of a chiral field are given as
\[H^{-}=f(H^{+})\,,\qquad\mathrm{d}H=0\,, \tag{5}\]
where \(f:\Lambda^{+}\to\Lambda^{-}\) is an antiselfdual-form-valued function of a selfdual variable (here \(\Lambda^{+}\) and \(\Lambda^{-}\) denote the spaces of selfdual and antiselfdual forms, respectively).
In order to reproduce these equations, one can introduce a boundary term to the Chern-Simons theory, given by an arbitrary function of \(H^{+}\) as
\[S=\int_{M}H\wedge\mathrm{d}H-\int_{\partial M}\frac{1}{2}\,H\wedge\star H+g(H^ {+})\,. \tag{6}\]
The function \(g(H^{+})\) is a top-form function of the selfdual argument \(H^{+}\). The addition of \(g(H^{+})\) is analogous to the addition of an arbitrary potential term to a free Hamiltonian. The bulk equations of motion stemming from the action (6) are simply \(\mathrm{d}H=0\), describing pure gauge configurations, while the boundary equations reproduce (5), where \(f(Y)=\partial g(Y)/\partial Y\) is an anti-selfdual \((p+1)\)-form function of a selfdual variable \(Y=H^{+}\).
The action (6) describes arbitrary Abelian interacting theories of a single chiral \(2k-\)form field in \(d=4k+2\) dimensional spacetime (the boundary \(\partial M\)) endowed with a metric of Lorentzian signature.
In six dimensions, there is a unique functionally independent scalar made of a selfdual 3-form; therefore, (6) describes an infinite number of consistent theories parameterized by a function of one variable [15]. In ten and higher dimensions, such theories are parametrized by a function of more than one variable, as many as the number of independent Lorentz scalars constructed from a selfdual form. In two dimensions, there is no polynomial scalar constructed from a selfdual vector; therefore, the only option of the form (6) is the free Abelian theory. For multiple fields, however, interactions via bulk non-Abelian deformations are possible [17].
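The two-dimensional statement above can be verified directly: for a selfdual one-form,
\[H=\star H\quad\Longrightarrow\quad H_{\mu}H^{\mu}\,\mathrm{vol}\propto H\wedge\star H=H\wedge H=0\,,\]
so every Lorentz scalar built polynomially from a single selfdual vector vanishes identically.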
## III Democratic description for \(p\)-forms
We now use the same logic to derive democratic Lagrangians for arbitrary \(p\)-forms (including arbitrary Abelian interactions from [15]). The starting point is the topological theory given by the action (occasionally referred to as the BF theory)
\[S_{\mbox{\tiny bulk}}=\int_{M}(-1)^{d-p}\,G\wedge\mathrm{d}F+\mathrm{d}G \wedge F\,, \tag{7}\]
where \(M\) is a \((d+1)\)-dimensional manifold with \(d\)-dimensional boundary, \(F\) is a \((p+1)-\)form and \(G\) is a
\((d-p-1)-\)form. Here, both \(d\) and \(p\) are arbitrary, as opposed to the previous section. The gauge symmetry is given by
\[\delta F=\mathrm{d}\alpha\,,\quad\delta G=\mathrm{d}\beta\,. \tag{8}\]
The Lagrangian is gauge invariant up to boundary terms. The bulk equations of motion are \(\mathrm{d}F=0=\mathrm{d}G\), implying that these fields are pure gauge, therefore there are no bulk degrees of freedom. The boundary term in the variation of the bulk Lagrangian is given by \(\int_{\partial M}\delta G\wedge F-G\wedge\delta F\,.\) Adding to the action (7) a boundary term,
\[-\int_{\partial M}\frac{1}{2}(F\wedge\star F+G\wedge\star G)\,, \tag{9}\]
modifies the boundary variation as
\[\int_{\partial M}\delta F\wedge((-1)^{p+d+pd}\,G-\star F)+\delta G \wedge(F-\star G)\] \[\qquad=(-1)^{p+d+pd}\int_{\partial M}\star\delta(F+\star G)\wedge( F-\star G)\,. \tag{10}\]
Here, again, we take the Neumann boundary condition \(F-\star G=0\), which can be viewed as the dynamical equation with respect to the boundary variation, so that the variational principle gives the equations \(\mathrm{d}F=0=\mathrm{d}G\) supplemented with these boundary conditions. The boundary term (9) again uses a metric with Lorentzian signature.
Generalization to the self-interacting case is given as
\[S=\int_{M}(-1)^{d-p}\,G\wedge\mathrm{d}F+\mathrm{d}G\wedge F\] \[\quad-\int_{\partial M}\frac{1}{2}\left(F\wedge\star F+G\wedge \star G\right)+g(F+\star G)\,, \tag{11}\]
which gives the same bulk equations \(\mathrm{d}F=0=\mathrm{d}G\) and the following modified boundary conditions:
\[F-\star G=f(F+\star G)\,. \tag{12}\]
Here again, \(f(Y)=\partial g(Y)/\partial Y\) for a \((p+1)-\)form argument \(Y\). This reproduces the democratic theory of general Abelian self-interactions for \(p\)-forms (the reduction to the democratic Lagrangians of [15] will be demonstrated below).
An interesting observation [20] is that, as opposed to the chiral case, we now also have the option to describe the boundary theory in a non-democratic manner by simply integrating out one of the fields. E.g., we can solve the bulk equation obtained by varying \(G\), that is \(\mathrm{d}F=0\), which implies \(F=\mathrm{d}A\). Substituting this into the action reduces the whole system to a boundary Lagrangian that is algebraic in \(F=\mathrm{d}A\), while the only field variable is now \(A\). In the case of the free theory, we simply get the Maxwell Lagrangian \(F\wedge\star F\). Instead, for nontrivial \(g(Y)\), we get a nonlinear algebraic equation expressing \(G\) in terms of \(F\), similar to those discussed in [21; 15]. Such relations are not always easy to solve explicitly even for nonlinear electrodynamics in \(3+1\) dimensions, where some simplifications occur compared to general \(d\) and \(p\). These equations, however, explicitly capture the essence of the conversion procedure between democratic and ordinary single-field formalisms. Note that we could equally integrate out \(F\) instead of \(G\), arriving at a different but equivalent \(d\)-dimensional description. The two theories, corresponding to the two different reductions (either integrating out \(G\) or \(F\)), are related by duality [20]. This is somewhat similar to the dualization procedure where we integrate out the fields \(A\) and \(F\) from the action \(S=\int_{\partial M}-\frac{1}{2}\,F\wedge\star F+G\wedge(F-\mathrm{d}A)\). In the non-Abelian case, this procedure leads to a non-polynomial action in terms of the variable \(G\), with no smooth free limit [22].
The democratic action (11) for \(p=2k\)-forms in \(d=4k+2\) dimensions can be diagonalized by introducing new variables \(C=(F+G)/\sqrt{2}\) and \(D=(F-G)/\sqrt{2}\) as
\[S=\int_{M}C\wedge\mathrm{d}C-D\wedge\mathrm{d}D\] \[-\int_{\partial M}\frac{1}{2}\left(C\wedge\star C+D\wedge\star D \right)+g(C_{+}+D_{-})\,, \tag{13}\]
thus explicitly describing one chiral and one antichiral \(p\)-form. Note that the Abelian interaction term \(g(C_{+}+D_{-})\) can be viewed as a function of two independent variables \(C_{+}\) and \(D_{-}\), which are simply the selfdual and anti-selfdual projections of \(C_{+}+D_{-}\), which means that (13) actually represents the most general interactions for one chiral and one antichiral field \(C\) and \(D\).
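The diagonalization is a two-line check: with \(C=(F+G)/\sqrt{2}\) and \(D=(F-G)/\sqrt{2}\),
\[C\wedge\mathrm{d}C-D\wedge\mathrm{d}D=\tfrac{1}{2}\big[(F+G)\wedge\mathrm{d}(F+G)-(F-G)\wedge\mathrm{d}(F-G)\big]=F\wedge\mathrm{d}G+G\wedge\mathrm{d}F\,,\]
which is precisely the bulk term of (11) for \(d=4k+2\), \(p=2k\) (there \(\mathrm{d}G\wedge F=F\wedge\mathrm{d}G\), since the degrees \(2k+2\) and \(2k+1\) have an even product). The quadratic boundary terms diagonalize the same way, since the cross terms \(F\wedge\star G\) cancel between the two squares.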
Note that the normalization of the fields in the democratic setup is not unique: one can rescale the fields \(F\) and \(G\) in an opposite manner, arriving at the action,
\[S= \ \int_{M}(-1)^{d-p}\,G\wedge\mathrm{d}F+\mathrm{d}G\wedge F\] \[-\int_{\partial M}\left[\frac{1}{2}\left(\lambda^{-2}\,F\wedge \star F+\lambda^{2}\,G\wedge\star G\right)\right.\] \[\qquad\qquad+\left.g(\lambda^{-1}\,F+\lambda\,\star G)\,\right], \tag{14}\]
with boundary equations of motion,
\[\mathrm{d}F=0=\mathrm{d}G\,,\quad\lambda^{-1}\,F-\lambda\,\star G=f(\lambda^{ -1}\,F+\lambda\,\star G)\,. \tag{15}\]
When coupled to charged matter (see, for example, [23]), this rescaling is related to a change in the coupling constant, which requires opposite rescalings for electric and magnetic couplings. This rescaling freedom is consistent with the Dirac-Schwinger quantization of the charges, since the product of their coupling constants is invariant (the quantization applies only to linear combinations of pairwise products of electric and magnetic charges).
### Nonlinear electrodynamics and \(SO(2)\) duality
When \(d=4k\), and both \(F\) and \(G\) are \(p+1=2k\)-forms, it is convenient to label them as \(F=H^{1}\) and \(G=H^{2}\,\). The Abelian nonlinear \(p\)-form theory in the democratic form, given in [21], can be derived from a \(d+1=4k+1\)-dimensional topological action with a boundary term,
\[S= \int_{M}\epsilon_{bc}\,H^{b}\wedge\mathrm{d}H^{c}\] \[-\int_{\partial M}\frac{1}{2}\,H^{b}\wedge\star H^{b}+g(\star H^{ b}+\epsilon^{bc}H^{c})\,. \tag{16}\]
Under the reduction procedure of [17], this action turns into that of [21].
The function \(g(Y)\) is further restricted [21] if we require the \(SO(2)\) duality symmetry rotating \(H^{1}\) and \(H^{2}\). When \(d=4\), the duality-symmetric theories of nonlinear electrodynamics are given by the five-dimensional action of type (16) where the Abelian interaction term is reduced to a function of a single variable, \(g(W^{ab}\,W_{ab})\). Here, \(W^{ab}\) is the duality covariant Lorentz scalar,
\[W^{ab}=\star[(\star H^{a}+\epsilon^{ac}H^{c})\wedge\star(\star H^{b}+\epsilon^ {bd}H^{d})]\,,\]
whose trace vanishes identically: \(W^{a}{}_{a}=0\,\). The next example is \(d=8\), where the interactions in the general democratic 3-form theory will be parameterized by a function of 14 variables, two for each order in fields -- from second to eighth. The duality-symmetric condition leaves only half of these variables -- seven: one for each order.
## IV Reduction to boundary theories
We now proceed to the dimensional reduction procedure introduced in [17] to show that the action (6) can be reduced to the nonlinear chiral \(p\)-form actions of [15]. For that, one introduces a closed one-form \(v\) (and the corresponding vector, which we denote by the same letter) and decomposes the bulk field as:
\[H=\hat{H}+v\wedge\check{H}\,, \tag{17}\]
with a gauge redundancy
\[\delta\hat{H}=-v\wedge\alpha\,,\qquad\delta\check{H}=\alpha\,, \tag{18}\]
which was fixed by the choice \(i_{v}\hat{H}=0\) in [17]. Plugging this decomposition into the Lagrangian, we notice that the field \(\check{H}\) becomes a Lagrange multiplier enforcing a constraint on the field \(\hat{H}\),
\[v\wedge\mathrm{d}\hat{H}=0\,, \tag{19}\]
which can be solved following the Appendix C of [24], arriving at
\[H=\mathrm{d}A+v\wedge R\,, \tag{20}\]
where \(A\) and \(R\) are \(p\)-forms. Then, one can see that the bulk Chern-Simons term of the action becomes a total derivative taking into account that \(\mathrm{d}v=0\). Therefore, the full action reduces to a bulk terms contribution to the boundary \(\mathrm{d}A\wedge v\wedge R\) plus boundary term, where the field \(H\) is replaced by \(\mathrm{d}A+v\wedge R\). Thus the final boundary action is given as
\[S=\int_{\partial M}-\frac{1}{2}\,H\wedge\star H+\mathrm{d}A\wedge v\wedge R+g( \star H+H)\,, \tag{21}\]
where \(H=\mathrm{d}A+v\wedge R\).
The action (21) reproduces the Lagrangian for an arbitrary interacting theory of a chiral \(p\)-form given in [15], with one small difference: there, \(v\) is parameterized as \(v=\mathrm{d}a\) with a dynamical field \(a\), thus avoiding the need for a prescribed one-form in the theory that naively breaks the Lorentz symmetry. The shift symmetry of the field \(a\), which we henceforth call 'PST symmetry' due to its close relation to the similar symmetry featured in the PST theory [9], is hard to anticipate from the Chern-Simons point of view.2 This symmetry, however, is crucial for the consistency of the theory and furthermore makes it possible to gauge-fix the field \(a\) to a non-dynamical fixed function, at the expense of manifest Lorentz symmetry (thus making contact with the Chern-Simons derivation above). One may add a top-form term \(J\wedge\mathrm{d}v\) to the Lagrangian (where \(J\) is a Lagrange multiplier) and keep the field \(v\) unconstrained. This formulation (for the free theory) was the starting point in [19] (where the one-form \(v\) was denoted as \(c\)). Note that the condition \(v^{2}\neq 0\) is essential for the theory given by the action (21) to describe a chiral form. One way to exclude the locus \(v^{2}=0\) from the theory could be an extra condition \(v^{2}=1\) imposed by a Lagrange multiplier \(\mu\), i.e., adding3 a term \(\mu(v^{2}-1)\) to the Lagrangian (21).
Footnote 2: Naively, in order to get the boundary Lagrangian, one needs to use a specific \(v\). However, any non-null \(v\) gives a consistent theory on the boundary, and all such theories are equivalently encoded in the action (6) which has manifest Lorentz symmetry. This gives an intuitive picture of why there should be extra gauge symmetries in the boundary theory that provide for Lorentz invariance, as in [9; 10; 11; 15; 16; 21], though it is not obvious how to make these symmetries explicit in the bulk theory language.
Footnote 3: We thank Chris Ishid for discussions on this matter.
Within the boundary theory, the expression \(\star H+H\) is gauge-invariant with respect to the enlarged set of gauge symmetries shifting the auxiliary fields [15]. Thus, these gauge symmetries guide us to the action (21) in the language of the boundary theory of [15], while in the Chern-Simons language, the structure of the corresponding boundary terms is guessed so that they give rise to self-interacting chiral edge modes.
Now that we have reviewed the derivation of [17] and generalized it to include Abelian interactions of chiral forms, we proceed to the democratic formulation for arbitrary \(p\)-forms. Using the same reduction procedure as in
the chiral case, one can show that (11) leads to the general Abelian self-interactions for the \(p\)-forms, with the democratic boundary Lagrangian given in [15]. For that, one decomposes the fields \(F\) and \(G\) using a closed one-form \(v\) (and corresponding vector which we will denote with the same letter):
\[F=\hat{F}+v\wedge\check{F}\,,\qquad G=\hat{G}+v\wedge\check{G}\,. \tag{22}\]
Substituting this in the bulk Lagrangian, we can see that the fields \(\check{F}\) and \(\check{G}\) are Lagrange multipliers, imposing the constraints on the fields \(\hat{F}\) and \(\hat{G}\),
\[v\wedge\mathrm{d}\hat{F}=0=v\wedge\mathrm{d}\hat{G}\,, \tag{23}\]
which can be solved as earlier.
Substitution of the latter expressions in the action leads to a purely boundary theory with the Lagrangian,
\[\mathcal{L} = v\wedge S\wedge\mathrm{d}A-\mathrm{d}B\wedge v\wedge R \tag{24}\] \[+\frac{1}{2}\left(F\wedge\star F+G\wedge\star G\right)+g(\star G+ F)\,,\]
where \(F\) and \(G\) are given by
\[F = \mathrm{d}A+v\wedge R\,, \tag{25}\] \[G = \mathrm{d}B+v\wedge S\,. \tag{26}\]
This Lagrangian coincides with [15] after solving the constraint \(\mathrm{d}v=0\) as \(v=\mathrm{d}a\) and a simple field redefinition discussed in [24].
## V Bulk-induced interactions
The interactions introduced above only enter the higher-dimensional topological description through the boundary terms. Consequently, the interactions in the resulting boundary theory are expressed through the field strength alone, but not through the gauge potential. It is possible to construct more general interactions by considering topological interactions in the bulk. The simplest example of such interactions would be the non-Abelian Chern-Simons Lagrangian discussed in [17]. More generally, one can add bulk interaction terms that are top-form wedge products of the fields involved. Such interactions are very limited for a single field; we discuss them here, completing the discussion of Abelian self-interactions, and leave the less constrained cases with multiple fields for future work.
For the chiral case, the only field is the \((p+1)-\)form \(H\), so the interactions may have the form \(H\wedge H\wedge H\). Such a term is only legitimate in three bulk dimensions, where \(H\) is a one-form, and even there, it is trivial for a single field \(H\). For higher dimensions, self-interactions of a single chiral field can only be introduced via the boundary terms discussed earlier.
For democratic fields, the situation is different. In special cases, there is a possibility to add interaction terms for a single field. This happens when \(d=3p+2\) for odd \(p\), and the corresponding bulk term is \(F\wedge F\wedge F\) (we recall that \(F\) is a \((p+1)-\)form and therefore the latter term is nontrivial for odd \(p\) and is a top form in \(d+1=3(p+1)\) dimensions). Therefore, the full action is given as
\[S = \int_{M}G\wedge\mathrm{d}F+\mathrm{d}G\wedge F+\frac{2}{3}\, \lambda_{3}\,F\wedge F\wedge F \tag{27}\] \[-\int_{\partial M}\frac{1}{2}\left(F\wedge\star F+G\wedge\star G \right)+g(F+\star G)\,.\]
In the first non-trivial case, \(p=1\), the \(\lambda_{3}\) term in the action (27) describes Abelian Chern-Simons interactions for five-dimensional nonlinear electrodynamics. This can be quickly verified by integrating out the field \(G\), most easily done in the case \(g(Y)=0\), leading to Maxwell-Chern-Simons theory.
In the next case, \(p=3\), the \(\lambda_{3}\) term describes the Chern-Simons interactions for the three-form in eleven dimensions. This interaction is essential for the 11d supergravity and was the missing element for the democratic formulation of the latter along the same lines as the type II supergravities in ten dimensions [25].
More generally, bulk Abelian interactions are possible in the dimensions \(d=np+n-1\) (assuming that \(p\) is odd) and are given by a wedge product of \(n\) copies of \(F\). For the quartic interactions, the first nontrivial case is the seven-dimensional Abelian Chern-Simons term, given by the bulk interaction \(\lambda_{4}\,F\wedge F\wedge F\wedge F\).
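The dimension count behind this statement is immediate:
\[\underbrace{F\wedge\dots\wedge F}_{n}\ \text{is a top form}\iff d+1=n(p+1)\iff d=np+n-1\,,\]
and the product is nontrivial for a single field only when the degree \(p+1\) is even (i.e., \(p\) odd), since \(F\wedge F=0\) for a form of odd degree.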
The reduction procedure of [17] works smoothly also in the presence of the bulk interaction (27). The same procedure as performed above in the case of \(\lambda_{3}=0\) leads to a neat cancellation of all bulk terms and leaves a boundary theory with the Lagrangian,
\[\mathcal{L} = v\wedge S\wedge\mathrm{d}A-\mathrm{d}B\wedge v\wedge R-\frac{ \lambda_{3}}{3}A\wedge\mathrm{d}A\wedge\mathrm{d}A \tag{28}\] \[+\frac{1}{2}\left(F\wedge\star F+G\wedge\star G\right)+g(\star G +F)\,,\]
where \(F\) takes the same form as in (25) while \(G\) is modified to
\[G=\mathrm{d}B+v\wedge S-\lambda_{3}\,A\wedge\mathrm{d}A\,. \tag{29}\]
This Lagrangian describes democratically the nonlinear Maxwell-Chern-Simons theory in five dimensions for a 1-form \(A\) and a 2-form \(B\). The same Lagrangian describes democratically the 3-form \(A\) in eleven dimensions on equal footing with its dual 6-form \(B\).
## VI Maximal supergravities in \(d=10,11\)
We can now quickly derive the type II supergravities in the democratic form of [25] from a topological theory in eleven dimensions. The starting point is the Chern-Simons action on the 11-dimensional manifold \(M\) with a
Lorentzian \(10d\) boundary \(\partial M\),
\[S_{{}_{\rm RR}}=\int_{M}G\wedge DG+\int_{\partial M}\frac{1}{2}(G,\star G)\,, \tag{30}\]
where \(\star\) is defined by \(\star\alpha=(-1)^{\left\lfloor\frac{\deg\alpha}{2}\right\rfloor+\deg\alpha}\ast\alpha\), with \(\ast\) denoting the Hodge star in this section, we use the Mukai pairing \((\alpha,\beta):=(-1)^{\left\lfloor\frac{\deg\alpha}{2}\right\rfloor}(\alpha\wedge\beta)^{\mathrm{top}}\), and finally \(D=\mathrm{d}+H\wedge\), where \(H\) is the closed 3-form curvature of the Kalb-Ramond field (see details in [25]).
Here, \(G\) encodes all the curvatures of RR fields:
\[G =G_{2}+G_{4}+G_{6}+G_{8}+G_{10},\qquad\text{(IIA case)} \tag{31}\] \[G =G_{1}+G_{3}+G_{5}+G_{7}+G_{9}.\qquad\text{(IIB case)} \tag{32}\]
The action (30) can be reduced to ten dimensions via the procedure of [17] to reproduce the RR sector actions of [25]. It is straightforward to add the NSNS sector and gravity, which are not described democratically.
An analogous description can be proposed for the 11-dimensional supergravity [26]. Here, we introduce a 12-dimensional BF theory with an 11-dimensional boundary term and describe democratically the 3-form field, with 4-form curvature \(F\), together with its dual 6-form potential, with 7-form curvature \(G\). Therefore, the action takes the form of (27), where the coupling constant is fixed by supersymmetry as \(\lambda_{3}=1\), whose value is responsible for the remarkable exceptional symmetries of the dimensional reductions of \(11d\) supergravity [27]. When \(g(Y)=0\), we can integrate out the \(G\) field from (27) to recover the standard 11d action involving a single three-form potential field. Instead, if we reduce the \(12d\) action (27) via the procedure of [17], we find the democratic description of the \(11d\) Lagrangian of the form (28) (with \(\lambda_{3}=1\)).
Integrating out the auxiliary fields \(R\) and \(S\), we recover the PST form of the action from [28]. Note that deformations similar to \(\alpha^{\prime}-\)corrections in String Theory are suggested by a non-trivial interaction term \(g(\star G+F)\).
## VII Discussion
We have provided a simple derivation of arbitrary self-interacting Abelian \(p\)-form theories with first-order equations of motion -- democratic or chiral -- starting from familiar topological theories, making use of the ideas introduced in [17]. We also introduced large classes of Abelian self-interactions for these fields. The last missing piece of the puzzle was the Abelian interactions that cannot be written in terms of curvatures and are given by Abelian Chern-Simons terms that are only gauge invariant up to boundary terms. This setup builds a connection between Lagrangian formulations for the nonlinear (twisted) selfduality equations [15] and other influential considerations in the literature (see, e.g. [6; 7; 29; 30; 31; 32; 33; 34; 35; 36] for a sample of historical references). More general interactions between multiple different fields will be studied systematically elsewhere.
The topological description of the RR fields in ten-dimensional supergravities discussed in this letter also provides supporting explanations for the resolution [37; 25] of the puzzles of supergravity on-shell actions [37], which have to be contrasted with expectations from holography. This resolution, which does not rely on a specific vacuum solution, is made at the level of the democratic \(d\)-dimensional Lagrangians with a unique \((d-1)\)-dimensional boundary term protected by the PST symmetry. From the perspective of the \((d+1)\)-dimensional topological theories, this boundary term lives on the boundary of the boundary, and hence it is not surprising that any ambiguity in such a term is resolved. We expect that the analogous puzzle of \(11d\) supergravity related to the electric solution [38] admits a similar resolution.
The democratic descriptions discussed here require a Lorentzian metric on the boundary because the (twisted) self-duality equations in signature \((t,d-t)\) admit non-trivial solutions only for the value \(+1\) (respectively \(-1\)) of the Hodge star squared, \(\star^{2}=(-1)^{p(d-p)+t}\). Gravitational theories involving such actions may use a path integral over metrics of arbitrary signature (see, for example, [39]). Then, the degrees of freedom described by the democratic (or chiral) formulations of \(p\)-forms will be switched off in even-time signatures, reducing to a lower-dimensional phase space compared to the Lorentzian signature.
## Acknowledgements
We are grateful to Alex Arvanitakis, Chris Hull, Massimo Porrati, Arkady Tseytlin, and Fridrich Valach for helpful discussions, and Zhirayr Avetisyan, Calvin Chen, Lewis Cole, and Alexander Sevrin for feedback on the manuscript. O. E. is supported by Thailand NSRF via PMU-B (grant numbers B01F650006 and B05F650021). E. J. was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2022R1F1A1074977). K. M. was supported by the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant No. 844265, UKRI and STFC Consolidated Grant ST/T000791/1.
|
2309.13359 | Analysis of the Gravitational Wave Background Using Gamma-Ray Pulsar
Timing Arrays with Next-Generation Detectors | In this work, we investigate the potential of gamma-ray pulsar time array
(PTA) on gravitational waves background (GWB) using future gamma-ray detectors
with larger effective areas. We consider both spaceborne detectors and
ground-based imaging air Cherenkov telescope arrays (IACTs). We simulated the
detected photons from pulsars using the response of hypothetical detectors
taking into account the backgrounds and analyzed the sensitivities. Our results
showed that thanks to the higher statistics of IACTs, the PTA using IACTs can
improve significantly the performance compared with the PTA using Fermi-LAT
data. | Zhen Xie, Zhipeng Zhang, Jieshuang Wang, Ruizhi Yang | 2023-09-23T12:44:08Z | http://arxiv.org/abs/2309.13359v1 | Analysis of the Gravitational Wave Background Using Gamma-Ray Pulsar Timing Arrays with Next-Generation Detectors
###### Abstract
In this work, we investigate the potential of the gamma-ray pulsar timing array (PTA) for the gravitational wave background (GWB) using future gamma-ray detectors with larger effective areas. We consider both spaceborne detectors and ground-based imaging air Cherenkov telescope arrays (IACTs). We simulated the photons detected from pulsars using the response of hypothetical detectors, taking into account the backgrounds, and analyzed the sensitivities. Our results show that, thanks to the higher statistics of IACTs, a PTA using IACTs can significantly improve the performance compared with the PTA using Fermi-LAT data.
## I Introduction
Pulsars are ideal cosmic laboratories thanks to their excellent periodicity. The pulsar timing array (PTA) is so far the only method to detect low-frequency gravitational waves (GWs) at nHz frequencies [1]. The GWs can be detected using ensembles of millisecond pulsars (MSPs) known as pulsar timing arrays (PTAs). PTAs monitor the arrival times of steady pulses from each pulsar, which are affected by spacetime perturbations and may arrive earlier or later than expected. For observations taken on Earth, low-frequency GWs are expected to produce a signature quadrupolar pattern in the times of arrival (TOAs) of the photons coming from the pulsars, known as the Hellings-Downs correlation [2].
Low-frequency GWs have many origins, and they can provide a wealth of information about the universe. Supermassive black hole (SMBH) binaries are expected to emit GWs, and the superposition of GWs from many SMBH binaries throughout the universe is predicted to build up a GW background (GWB). GWs from inflation would help describe the universe at its earliest moments [3] and are also an important way to test cosmological theories. Cosmic strings are theorized topological defects produced by phase transitions in the early universe, vibrating and losing energy via gravitational wave emission over the history of the universe [4]. If cosmic strings exist, they will create a stochastic GWB, and the observation of such a GWB would bring confirmation of physics beyond the Standard Model [5]. As mentioned above, since many processes can produce GW signals, the information derived from the stochastic GWB would provide significant insight into astrophysical processes over the history of the universe [6].
Recently, the Fermi-LAT Collaboration performed for the first time a study of the gravitational wave background using a PTA observed in the gamma-ray band [7], demonstrating the great potential of this approach for studying the GWB. A gamma-ray PTA has many advantages compared with traditional radio PTAs. For example, a main noise source for radio PTAs is the effect of radio propagation through plasma, including the solar wind and the ionized interstellar medium (IISM). These effects are time-dependent and introduce noise similar to the GW signals. On the other hand, the effects of the IISM and solar wind can be ignored for gamma-ray photons. In this regard, a gamma-ray PTA has smaller noise and much simpler data analysis.
But a gamma-ray PTA also suffers from poor angular resolution and the limited exposure of current instruments. In this letter, we investigate the potential improvement of gamma-ray PTAs [8] with future detectors. We consider two types of instruments. One is future spaceborne telescopes (FSTs) like Fermi-LAT but with a larger effective area; the other is imaging air Cherenkov telescopes (IACTs): these ground-based telescopes have a much larger effective area with high timing accuracy.
Our work follows this structure: we describe the method used to simulate observations of pulsars with the hypothetical instruments in Section 2, analyze the simulated data and investigate the sensitivities of gamma-ray PTAs with future instruments in Section 3, and conclude in the last section.
## II Simulated Data Based on Future Detectors
In the Fermi-LAT gamma-ray PTA, the pulsar PSR J1231-1411 gave the best constraint with the photon-by-photon method, so we use this object as an example in the following simulations.
In the simulation, two different types of detectors are considered. First, we consider FSTs similar to Fermi-LAT but with a 10 times larger effective area. The other type
is low-threshold IACTs. In this work, we adopt 5@5 as an example of such a detector. 5@5 is a large ground-based Cherenkov telescope array planned for the mountains of the Atacama Desert in northern Chile. Due to its low energy threshold, it shows great potential for pulsar research. In this paper, we used the response of 5@5 to perform the simulation. In the analysis, we did not consider the true geometrical location of the arrays; instead, we simply assumed a 100-hour exposure of the pulsar with the fiducial telescope response. We admit that the true instrument response will depend on the site location as well as the source declination, but for a single pulsar it is easy to find 100 hours of observation time every year at a reasonable declination. Thus, in this work, we used a uniform instrumental response for IACTs for simplicity. The telescope's effective area can be parameterized, following the calculation of Aharonian _et al._[9], as:
\[A_{eff}=8.5E^{5.2}[1+(E/5~{}GeV)^{4.7}]^{-1}~{}\rm{m}^{2}, \tag{1}\]
and the point spread function (PSF) of 5@5 can be described as:
\[\phi=0.8(E/1~{}GeV)^{-0.4}~{}\rm{degree}, \tag{2}\]
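For illustration, Eqs. (1)-(2) are simple enough to evaluate directly. The following minimal Python sketch (NumPy only; the 1-10 GeV grid is our illustrative choice, not part of the 5@5 design) tabulates the assumed response:

```python
import numpy as np

def a_eff_m2(E_GeV):
    """Effective area of 5@5 in m^2, Eq. (1)."""
    return 8.5 * E_GeV**5.2 / (1.0 + (E_GeV / 5.0)**4.7)

def psf_deg(E_GeV):
    """Point spread function of 5@5 in degrees, Eq. (2)."""
    return 0.8 * E_GeV**(-0.4)

E = np.logspace(0.0, 1.0, 5)  # 1-10 GeV, illustrative grid
for e in E:
    print(f"E = {e:5.2f} GeV  A_eff = {a_eff_m2(e):9.1f} m^2  PSF = {psf_deg(e):4.2f} deg")
```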
By convolving the effective area with the spectrum of the pulsar, we can derive the expected number of photons detected by the IACTs. Fig. 1 shows the photon numbers collected by Fermi-LAT and by 5@5 observing 100 h per year over 12.5 years. We found that the ground-based telescope performs very well in collecting photons, due to its large effective area. For J1231-1411, under the conservative assumption of 100 h of observation per year, the number of photons the IACT can collect is 30 times larger than that from Fermi-LAT over the same time span. We note that a significant disadvantage of IACTs is their much smaller field of view (FOV) and lower duty cycle. Fermi-LAT results showed that the combined likelihood of more than 20 pulsars can further improve the sensitivity by a factor of two; in this regard, IACTs cannot compete because of their limited sky coverage. But thanks to advances in photosensors, the next-generation IACTs can also operate on moonlit nights [10], so the observation time every year can be increased to nearly 2000 hours. It would thus be easy to observe more than 10 pulsars every year with an exposure of about 100 hours each, which will also allow us to perform a joint likelihood analysis.
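Explicitly, the expected number of photons is
\[N_{\rm exp}=T_{\rm obs}\int_{E_{\rm min}}^{E_{\rm max}}\frac{dN}{dE}\,A_{\rm eff}(E)\,\mathrm{d}E\,,\]
with \(dN/dE\) the pulsar photon spectrum (Eq. (3) below), \(A_{\rm eff}\) from Eq. (1), \(T_{\rm obs}\) the total exposure, and \(E_{\rm min},E_{\rm max}\) delimiting the analysis band.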
For Fermi-LAT, the gamma-ray data are recorded as an energy \(E_{i}\), a spatial position \(\mathbf{r}_{i}\), and an arrival time \(t_{i}\) for the \(i\)-th photon, so in the simulations for photons detected by the hypothetical detectors we sample the same quantities. The energy of a photon from a pulsar can be described by the parameterized function PLSuperExpCutoff4 used by Fermi-LAT [11]:
\[\frac{dN}{dE}=K\left(\frac{E}{E_{0}}\right)^{\frac{d}{b}-\Gamma_{s}}\exp\left[\frac{d}{b^{2}}\left(1-\left(\frac{E}{E_{0}}\right)^{b}\right)\right]~{}~{}~{}~{}\left(b\,\ln\frac{E}{E_{0}}>10^{-2}\right)\,, \tag{3}\]
where each parameter can be queried in the catalog provided by Fermi-LAT. We first sample the photon energies from this distribution. For the spatial position, we chose a circle of 3\({}^{\circ}\) radius around the pulsar, as in the Fermi-LAT PTA, and then sampled the position of each detected photon by taking into account the point spread function (PSF) of the detector as well as the fluxes from both the pulsar and a flat background. Note that the PSF is always energy-dependent.
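For the energy sampling step, a simple rejection sampler over the PLSuperExpCutoff4 shape of Eq. (3) could look as follows; the parameters `E0`, `gamma_s`, `d`, and `b` are placeholders and should be replaced by the catalog values of the pulsar under study.

```python
import numpy as np

rng = np.random.default_rng(0)

def plsec4(E, E0=1.0, gamma_s=2.0, d=0.5, b=0.5):
    """Unnormalized PLSuperExpCutoff4 shape of Eq. (3), placeholder parameters."""
    x = E / E0
    return x**(d / b - gamma_s) * np.exp(d / b**2 * (1.0 - x**b))

def sample_energies(n, emin=1.0, emax=10.0):
    """Rejection sampling in log-energy; valid here because the density
    E*dN/dE is decreasing over [emin, emax] for these placeholder parameters."""
    pmax = plsec4(emin) * emin
    out = np.empty(0)
    while out.size < n:
        E = np.exp(rng.uniform(np.log(emin), np.log(emax), size=n))
        keep = rng.uniform(0.0, pmax, size=n) < plsec4(E) * E
        out = np.concatenate([out, E[keep]])
    return out[:n]

energies = sample_energies(10000)
```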
Due to the high, sometimes even dominating, backgrounds in gamma-ray astronomy, it is always difficult to tell whether a photon comes from the pulsar itself or from the background. The background for Fermi-LAT (and other space-borne detectors) is mainly the diffuse Galactic gamma-ray emission (DGE), described in Fermi-LAT by the standard background file _gll_iem_v07.fits_ [12]. It is taken into account in the Fermi PTA data analysis when computing the _weight_ of each photon: in a gamma PTA, each photon is assigned a _weight_ that characterizes the probability that it originates from the pulsar rather than from the background.
In IACTs, however, in addition to the DGE there are unavoidable contaminations from cosmic-ray (CR) protons and electrons, which are also detected by IACTs. In the energy range of interest in this work (\(1-10\) GeV), CR electrons cannot be detected by IACTs due to geomagnetic cutoff effects. As calculated in [13], the background from CR protons can also be neglected in this energy range owing to the much lower trigger rate at low energy. In this case, the dominating background for IACTs would also be the DGE, and the analysis for IACTs would be identical to that of Fermi-LAT and the FSTs.
However, we cannot exclude the possibility that an IACT could suffer additional CR backgrounds with a configuration different from that used in [13]. As a conservative check, we estimated the CR proton background based on the results of Aharonian _et al._ [9]: the background for 1-10 GeV gamma-rays mainly comes from protons with energies of 10-100 GeV, and, assuming a gamma/p separation power of about 1/10, the background flux from CR protons can be written as \(F_{\rm bkg}=2\times 10^{-7}\,(E/1\,{\rm GeV})^{-2.7}\,{\rm MeV^{-1}\,sr^{-1}\,cm^{-2}\,s^{-1}}\), which is at least one order of magnitude larger than the DGE in the Galactic plane in the same energy range. As a result, we consider only the background induced by CR protons in the calculations for IACTs, and we assume it is spatially uniform owing to the isotropy of CR proton arrival directions. In addition to the primary electrons and CRs, secondary electrons produced in the interaction of primary CRs with the atmosphere could be another background, but these secondary electrons are part of the hadronic shower induced by the primary CR protons and are therefore already included in the proton background and in the gamma/p separation procedure discussed above.
The arrival time of each photon can be translated into a pulse phase with the _PINT_ software [14], and by accumulating phases we obtain the pulse profile. The profile can be described by a superposition of several Gaussian distributions, called the template function; in our simulation we used the profile folded from 12.5 years of Fermi-LAT observation data. The sampling of the arrival time of a photon consists of two parts: an integer number of pulsar periods, plus a phase (time) drawn from the pulsar's pulse profile, which is described by the template function for PSR J1231-1411 derived in the Fermi PTA [7].
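A sketch of this arrival-time sampling is given below: phases are drawn from a wrapped Gaussian-mixture template and combined with an integer number of rotations. The template parameters are invented for illustration and are not the fitted J1231-1411 template; the spin period is only approximate.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_phases(n, mus=(0.2, 0.6), sigs=(0.02, 0.05), amps=(0.7, 0.3)):
    """Phases from a wrapped Gaussian-mixture template (illustrative shape)."""
    w = np.asarray(amps) / np.sum(amps)
    comp = rng.choice(len(mus), size=n, p=w)
    phi = rng.normal(np.take(mus, comp), np.take(sigs, comp))
    return phi % 1.0                     # wrap onto [0, 1)

P = 3.684e-3                             # spin period in s (~3.68 ms for J1231-1411)
t_span = 12.5 * 3.156e7                  # 12.5 yr in seconds
n_turns = rng.integers(0, int(t_span / P), size=10000)
t_arrival = (n_turns + sample_phases(10000)) * P
```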
The last step of the simulation is to compute the _weight_ of each photon. We calculated the predicted photon flux from the pulsar by convolving the pulsar flux with the PSF at each position, together with the flux from the background, and obtained the _weight_ of each photon by dividing the photon flux from the pulsar by the total photon flux (pulsar plus background) at that position.
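The weight computation can be summarized in a few lines. In this sketch the PSF containment of Eq. (2) is treated as a Gaussian width, and the flux normalizations `f_psr` and `f_bkg_per_sr` are assumed inputs rather than values from the analysis.

```python
import numpy as np

def psf_width_deg(E):
    """Eq. (2) containment, treated here as a Gaussian width (an approximation)."""
    return 0.8 * E**-0.4

def photon_weight(theta_deg, E, f_psr=1.0, f_bkg_per_sr=50.0):
    """w = pulsar flux density / (pulsar + background) at the photon position."""
    sig = np.radians(psf_width_deg(E))
    s = f_psr * np.exp(-0.5 * (np.radians(theta_deg) / sig)**2) / (2 * np.pi * sig**2)
    return s / (s + f_bkg_per_sr)
```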
Through the above steps, we simulated the energy, time (phase), position, and _weight_ of each incident photon, which we then analyzed with the gamma PTA pipelines.
## III Gamma PTA data analysis
The log-likelihood function of a single pulsar is given by the unbinned (photon-by-photon) method [7]:
\[\begin{split}\log\mathcal{L}=\sum_{i}\log\left[w_{i}f(\phi_{i})+(1-w_{i})\right]-\\ \frac{1}{2}\beta^{T}C_{\rm tn}^{-1}\beta-\frac{1}{2}\log(|C_{\rm tn}|), \end{split} \tag{4}\]
here, \(\phi\) is the pulse phase, and \(f(\phi)\) is the pulse profile, defined as a sum over one (or several) Gaussian distributions \(g(\phi;\mu,\sigma)\) with mean \(\mu\) and width \(\sigma\); each photon is assigned a _weight_ which characterizes its probability of originating from the pulsar rather than the background, as described earlier. The second part represents a Gaussian noise process with Fourier amplitudes \(\beta\):
\[\mathcal{L}_{\rm tn}\propto\frac{1}{\sqrt{|C_{\rm tn}|}}\exp\left(-\frac{1}{ 2}\beta^{T}C_{\rm tn}^{-1}\beta\right). \tag{5}\]
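In code, evaluating Eq. (4) for a given set of photons and noise parameters might look like the following sketch; the template function, weights, and covariance \(C_{\rm tn}\) are assumed to be supplied by the rest of the pipeline.

```python
import numpy as np

def log_likelihood(phi, w, beta, C_tn, template):
    """Eq. (4): weighted profile term plus Gaussian timing-noise term."""
    profile = np.sum(np.log(w * template(phi) + (1.0 - w)))
    _, logdet = np.linalg.slogdet(C_tn)
    noise = -0.5 * beta @ np.linalg.solve(C_tn, beta) - 0.5 * logdet
    return profile + noise
```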
To compare these results with radio PTAs, we assumed that both IACTs and FSTs will start observations in 2035, when such large instruments are likely to be put into operation, and then calculated the sensitivity as a function of observation duration. We considered the constraints from the single source PSR J1231-1411, which gave the best constraints for Fermi-LAT. For the IACTs, we calculated the sensitivities both with and without the hypothetical CR background, assuming an effective exposure time of 100 hours every year. We found that the IACTs considered
Figure 1: The number of effective photons of 35 Fermi-LAT pulsars [7] measured in 12.5 years, compared with the expected number of photons from 5@5 observing 100 h/yr over 12.5 years, in the energy range from 1 to 10 GeV.
here have a sensitivity significantly better than the FSTs, even though we assumed the FSTs to have a 10 times larger effective area than Fermi-LAT, an assumption that is itself close to unrealistic. A gamma PTA with IACTs can surpass the Fermi-LAT sensitivity within a decade or so after the start of operations. We also compared these results with the recent NANOGrav 15-year Data Set, which reported evidence for a GWB with an amplitude of \(2.6\times 10^{-15}\) at a reference frequency of 1 yr\({}^{-1}\) [15]. The Square Kilometre Array (SKA) can greatly enhance pulsar timing precision through its unprecedented collecting area and bandwidth, and the levels expected to be reached by the SKA are about \(10^{-16}-10^{-17}\) at a reference frequency of 1 yr\({}^{-1}\) [16]. These results are shown in Fig. 2.
For an ideal PTA, the signal-to-noise ratio grows proportionally to \(A_{gwb}^{2}\times t_{obs}^{\Gamma}\) [17]. Since \(\Gamma=13/3\) for a GWB generated by supermassive black holes (SMBHs) [18], the reachable amplitude scales with the observation time as \(A_{gwb}\propto t_{obs}^{-13/6}\); here the dimensionless strain amplitude \(A_{gwb}\) incorporates the growth, masses, and merger rates of SMBHs, and \(\Gamma\) is the spectral index of the GWB power spectral density. We calculated the Fermi-LAT upper limit on \(A_{gwb}\) for different \(t_{obs}\) using the real Fermi-LAT data; the results are shown in Fig. 3. We also calculated the sensitivity as a function of time for the IACTs, assuming an exposure of 100 hours per year; the results are shown in Fig. 4. The sensitivity gradually approaches the expected scaling for both Fermi-LAT and the IACTs as the observation time increases.
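The scaling used for the dashed curves can be checked in one line; `a0` and `t0` below are an assumed anchor point, not a fitted value.

```python
def a_gwb_limit(t_obs_yr, a0=1e-14, t0=12.5):
    """A_gwb upper limit scaled from an assumed anchor (a0 at t0 years)."""
    return a0 * (t_obs_yr / t0) ** (-13.0 / 6.0)

print(a_gwb_limit(25.0) / a_gwb_limit(12.5))   # 2**(-13/6) ~ 0.22 per doubling
```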
To quantify how the IACT background influences the sensitivity of the GWB analysis, we also simulated data with different background levels, as shown in Fig. 5. We found that the influence is relatively small as long as the background contributes less than 80% of the total photons, since background photons have low _weights_ and barely affect the pulsar profile. At higher background fractions the sensitivity degrades sharply, presumably because the pulse profile is washed out by the background. At very high background ratios (\(>95\%\)), even fitting the profile
Figure 3: Change in the \(A_{gwb}\) limit for J1231-1411 with increasing observation time using Fermi-LAT data. The dashed line represents the relation \(A_{gwb}\propto t_{obs}^{-13/6}\).
Figure 2: Constraints on the GWB from radio and gamma-ray PTAs; the gamma-ray PTA data are from Fermi-LAT [7]. Assuming that both IACTs and FSTs start observations in 2035 **(note that the data points before 2045 lie above the \(A_{gwb}\) range shown in this figure due to the steep rise of the sensitivity curve)**, the points in the right half show the results for roughly 7.5- and 12.5-year observations of J1231-1411. The solid line shows the Fermi-LAT result, in which the sensitivity is proportional to \(t_{obs}^{-13/6}\). The dot-dashed lines show the results for IACTs with and without background, as in Fig. 4. The green line shows the level that the SKA can reach when it goes into operation in 2028. The orange star is the NANOGrav 15-year Data Set result.
of the pulsar fails. It is therefore still necessary to reduce the effect of background photons in ground-based observations. A possible way is to use a smaller extraction window around the pulsar position: in the current work we used a \(3^{\circ}\) region, and a smaller window would improve the sensitivity. Considering the PSF of about \(1^{\circ}\) for IACTs in this energy range, such an improvement is feasible.
## IV Discussion
In this paper, we extended the gamma PTA analysis method for the GWB used by Fermi-LAT to simulate the capability of future gamma-ray detectors. Both IACTs and FSTs can increase the statistics significantly. IACTs could potentially suffer extra CR backgrounds, which could limit the sensitivity to the GWB. Taking this extra background into account with a conservative estimate of the CR contamination, we found that the IACTs still give a much better sensitivity owing to their overwhelming effective area.
Meanwhile, the sensitivity of gamma PTAs is still limited, and a gap remains with respect to radio PTA results. This is due not only to the limitations of existing instruments but also to the short time span of gamma PTA observations, so gamma PTAs can hardly compete with radio PTAs in the short term. But as discussed in this letter, gamma PTAs show great potential to match radio PTAs within a decade, especially with future detectors. Beyond that, given the much simpler data reduction and the negligible impact of ISM plasma on gamma PTAs, we believe that such a cross-check from multi-wavelength observations is both necessary and important for constraining the GWB and other physical processes.
Looking ahead, large gamma-ray instruments have been planned or are already under construction, such as VLAST [19] and the Cherenkov Telescope Array (CTA) [20]. There is also a plan to build IACTs at the site of the Large High Altitude Air Shower Observatory (LHAASO) [21; 22]. For low-threshold IACTs, however, the LHAASO site may not be good enough because of its limited weather conditions; sites better suited to optical astronomy, such as Lenghu [23], are more appropriate for such an instrument. The gamma PTA is a supplement and cross-checking tool for radio PTAs. With the continued development of new detection tools, we expect further progress in understanding these elusive phenomena.
|
2301.00261 | Cluster radioactivity in trans-lead region: A systematic study with
modified empirical formulas | The possibility of cluster emission from trans-lead (86$\leq$Z$\leq$96)
region of periodic chart has been explored comprehensively by employing few
empirical formulas which are modified by adding angular momentum ($l$) or
isospin-dependent ($I=(N-Z)/A$) or both terms for the calculation of cluster
decay half-lives. These modified versions of the formulas are found with lesser
${\chi}^2$ per degree of freedom and root mean-square error, in addition to the
smaller values of some other statistical parameters, while compared to their
corresponding old versions on available 61 experimental data of cluster
radioactivity. By applying the modified version of the formula given by
Balasubramaniam \textit{et al.} [PRC 70 (2004) 017301], the most accurate
formula among these, half-lives of several clusters i.e. isotopes of Be, B, C,
N, O, F, Ne, Na, Mg, and Si are predicted systematically for the several
isotopes in the trans-lead region. The contest of cluster emission with
$\alpha$-decay has been investigated in form of branching ratio which brings
several potential cluster emissions into the probable decay modes of these
nuclei. The accurate prediction of half-lives of such clusters is expected to
be crucial for the future experimental observations where $\alpha$-decay is
observed dominantly. | A. Jain, P. K. Sharma, S. K. Jain, J. K. Deegwal, G. Saxena | 2022-12-31T18:03:03Z | http://arxiv.org/abs/2301.00261v1 | # Cluster radioactivity in trans-lead region: A systematic study with modified empirical formulas
###### Abstract
The possibility of cluster emission from trans-lead (86\(\leq\)Z\(\leq\)96) region of periodic chart has been explored comprehensively by employing few empirical formulas which are modified by adding angular momentum (\(l\)) or isospin-dependent (\(I=(N-Z)/A\)) or both terms for the calculation of cluster decay half-lives. These modified versions of the formulas are found with lesser \(\chi^{2}\) per degree of freedom and root mean-square error, in addition to the smaller values of some other statistical parameters, while compared to their corresponding old versions on available 61 experimental data of cluster radioactivity. By applying the modified version of the formula given by Balasubramaniam _et al._ [PRC 70 (2004) 017301], the most accurate formula among these, half-lives of several clusters i.e. isotopes of Be, B, C, N, O, F, Ne, Na, Mg, and Si are predicted systematically for the several isotopes in the trans-lead region. The contest of cluster emission with \(\alpha\)-decay has been investigated in form of branching ratio which brings several potential cluster emissions into the probable decay modes of these nuclei. The accurate prediction of half-lives of such clusters is expected to be crucial for the future experimental observations where \(\alpha\)-decay is observed dominantly.
keywords: Cluster decay, Trans-lead Nuclei, Empirical formulas, \(\alpha\)-decay. +
Footnote †: journal: Nuclear Physics A
## 1 Introduction
In 1980, Sandulescu _et al._ [1] first predicted a new type of radioactivity, cluster radioactivity, based on fragmentation theory, in which fusion and fission reaction valleys are generated by the shell closure effect [2]. Later, in 1984, Rose and Jones experimentally proved the existence of this new type of exotic decay [3], in which \({}^{14}\)C decays from the actinide parent nucleus \({}^{223}\)Ra and forms the stable doubly magic (Z=82, N=126) nucleus \({}^{208}\)Pb. To date, many cluster decays, from light to heavy clusters (\({}^{14}\)C to \({}^{32}\)Si), have been observed from various trans-lead nuclei (Fr, Ra, Ac, Pa, Th, U, Pu, etc.), with the corresponding daughter nuclei being magic nuclei (Z=82) or neighboring ones (Z=80, 81, and 83), which indicates the importance of shell and pairing effects in cluster radioactivity [4; 5; 6]. These clusters are observed with long half-lives (T\({}_{1/2}\)) in the range \(10^{11}\)-\(10^{30}\) sec. [7].
Theoretically, the half-lives of cluster emissions are predicted using various models such as unified fission model (UFM) [8], generalised liquid drop model (GLDM) [9], super-asymmetric fission
model (SAFM) [10], the preformation cluster model (PCM) [11], etc. Cluster decay half-lives are also calculated using various semi-empirical formulas, such as (i) the empirical relation suggested by Balasubramaniam _et al._ (BKAG formula) for cluster decay half-lives with only three parameters [12], and (ii) the empirical relation suggested by Ren _et al._ (RenA formula) based on a microscopic density-dependent cluster model with the renormalized M3Y nucleon-nucleon interaction [13]. Concomitantly, based on experimental observations of the characteristics of exotic cluster decays, a scaling law was proposed by Horoi [14], in which the logarithmic half-life is proportional to the scaling variable \((Z_{c}Z_{d})^{0.6}/\sqrt{Q}\) and also to \(\sqrt{\mu}\), where \(\mu\) is the reduced mass of the cluster and daughter nuclei; this was followed by another semi-empirical formula (NRDX), proposed by Ni _et al._ [15], based on the WKB barrier penetration probability with some approximations. In 2009, Qi _et al._ introduced the universal decay law (UDL) [16], which originates from the mechanism of charged-particle decay and R-matrix theory and applies to all sorts of cluster decays, including monopole radioactive decays. Poenaru _et al._ [17] plotted a universal curve (UNIV), which is found to be a single straight line for cluster decay and \(\alpha\)-decay.
All the above-mentioned formulas were fitted to the available experimental data without considering the dependence of the half-lives on the angular momentum carried away by the cluster, which is expected to be as crucial as in \(\alpha\)-decay [18] for describing all sets of experimental data. The importance of angular momentum for \(\alpha\)-decay half-lives has already been established in a few of our recent works [19; 20], which motivated us to probe a similar dependence for cluster decay half-lives. In addition, the isospin (\(I=(N-Z)/A\)) of the parent nucleus is found to be pivotal for \(\alpha\)-decay in heavy and superheavy nuclei [20; 21; 22; 23; 24; 25], pointing towards its significance for cluster decay as well. Considering these two effects together, the modified UDL formula (new UDL) by Soylu and Qi [26] and the improved NRDX formula (named the improved unified formula (IUF)) by Ismail _et al._ [27] have recently shown that angular momentum and isospin are indeed crucial quantities in determining cluster decay half-lives. The importance of the isospin effect was also probed with an improved semi-empirical formula (ISEM) for cluster radioactivity in Ref. [28].
In this article, we modify the BKAG [12], RenA [13], Horoi [14], NRDX [15], UDL [16], and UNIV [17] formulas by investigating the effect of centrifugal-barrier and isospin terms. These six modified formulas are fitted using 61 experimental cluster decay data [7; 9; 26; 29]. The comparison of the RMSE (root-mean-square error) between the older and modified versions clearly shows the significance of including the angular-momentum and isospin-dependent terms for cluster emission. Furthermore, one of the modified formulas, the MBKAG formula (which emerged with the least RMSE), is employed to calculate cluster decay half-lives for various cluster emissions, namely isotopes of Be, B, C, N, O, F, Ne, Na, Mg, and Si, in the trans-lead region (86\(\leq\)Z\(\leq\)96). For these theoretical estimates, the required disintegration energies (\(Q\)-values) are tested against 121 available experimental \(Q\)-values [7; 9; 26; 29] from various mass models [30; 31; 32; 33]. Consequently, various potential clusters from the trans-lead region are proposed, along with accurate estimates of their half-lives.
## 2 Formalism
In 2004, Balasubramaniam _et al._ fitted a formula (BKAG) [12] for cluster decay. In the same year, Ren _et al._ established a formula [13] that can be treated as a natural extension of the Geiger-Nuttall law [34] and the Viola-Seaborg formula [35] from simple \(\alpha\)-decay to complex cluster radioactivity, and Horoi suggested an independent model for \(\alpha\)-decay which was generalized to cluster emission [14]. In 2008, Ni _et al._ established the NRDX semi-empirical formula for the calculation of the half-lives of \(\alpha\) and cluster decays [15]. Afterwards, Qi
_et al._ introduced the universal decay law (UDL) [16], which is widely used for the estimation of cluster radioactivity half-lives. In 2011, Poenaru _et al._ fitted the UNIV formula [17], which represents \(\alpha\)-decay and cluster decay by a single universal curve. The original versions of these formulas are given below:
\[log_{10}T_{1/2}^{BKAG}(sec.)=[aA_{c}(A_{d}-A_{c})/A+bZ_{c}(Z_{d}-Z_{c})/Z]Q^{-1/ 2}+c \tag{1}\]
\[log_{10}T_{1/2}^{RenA}(sec.)=aZ_{d}Z_{c}Q^{-1/2}+bZ_{d}Z_{c}+c \tag{2}\]
\[log_{10}T_{1/2}^{Horoi}(sec.)=(a\sqrt{\mu}+b)[(Z_{c}Z_{d})^{0.607}Q^{-1/2}-7]+ (c\sqrt{\mu}+d) \tag{3}\]
\[log_{10}T_{1/2}^{NRDX}(sec.)=aZ_{c}Z_{d}\sqrt{\frac{\mu}{Q}}+b\sqrt{\mu}(Z_{c }Z_{d})^{1/2}+c \tag{4}\]
\[log_{10}T_{1/2}^{UDL}(sec.) = aZ_{c}Z_{d}\sqrt{\frac{\mu}{Q}}+b[\mu Z_{c}Z_{d}({A_{c}}^{1/3}+{ A_{d}}^{1/3})]^{1/2}+c \tag{5}\]
\[log_{10}T_{1/2}^{UNIV}(sec.) = -logP+log_{10}S-[log_{10}(ln2)-log_{10}v] \tag{6}\]
In the above-mentioned formulas, \(A_{d}\), \(A_{c}\) and \(Z_{d}\), \(Z_{c}\) denote the mass numbers and atomic numbers of the daughter nucleus and the cluster, respectively. \(Q\) (in MeV) is the energy released in the cluster decay, and \(\mu=A_{d}A_{c}/(A_{d}+A_{c})\) is the reduced mass. In Eqn. (6), \(-logP\) is given by \(a(\mu Z_{c}Z_{d}R_{b})^{1/2}[arccos\sqrt{r}-\sqrt{r(1-r)}]\), \(r=R_{a}/R_{b}\), with \(R_{a}=1.2249({A_{c}}^{1/3}+{A_{d}}^{1/3})\) fm and \(R_{b}=1.43998Z_{d}Z_{c}/Q\) fm; the logarithmic form of the preformation factor is \(log_{10}S=-b(A_{c}-1)\), and \([log_{10}(ln2)-log_{10}v]=d\) is an additive constant. The values of the fitting coefficients a, b, c, and d of the above formulas can be found in the respective Refs. [12-17].
Given the importance of the angular momentum (\(l\)) mentioned above, in the present work we first modify these formulas by adding an \(l\)-dependent term (\(l(l+1)\)), where \(l\) is the minimum angular momentum of the emitted cluster, obtained from the following selection rules:
\[l=\left\{\begin{array}{ll}\triangle_{j}&\mbox{for even $\triangle_{j}$ and $\pi_{i}=\pi_{f}$}\\ \triangle_{j}+1&\mbox{for even $\triangle_{j}$ and $\pi_{i}\neq\pi_{f}$}\\ \triangle_{j}&\mbox{for odd $\triangle_{j}$ and $\pi_{i}\neq\pi_{f}$}\\ \triangle_{j}+1&\mbox{for odd $\triangle_{j}$ and $\pi_{i}=\pi_{f}$}\end{array}\right. \tag{7}\]
here, \(\triangle_{j}=|j_{p}-j_{d}-j_{c}|\), where \(j_{p}\) and \(\pi_{i}\) are the spin and parity of the parent nucleus, respectively, \(j_{d}\) is the spin of the daughter nucleus, and \(\pi_{f}=(\pi_{d})(\pi_{c})\), with \(\pi_{d}\) and \(\pi_{c}\) the parities of the daughter nucleus and the cluster. For the fitting, the spin and parity data are taken from NUBASE2020 [36]. In the next step, the formulas are also modified by adding an isospin-dependent term (\(I(I+1)\), with \(I=(N-Z)/A\)). The accuracy of, and the need for, the added terms in the modified formulas are checked with the \(\chi^{2}\) per degree of freedom (\(\chi^{2}\)) and the RMSE of the various versions, which are listed in Table 1 and calculated using the following relations:
\[\chi^{2}=\frac{1}{N_{nucl}-N_{p}}\sum_{i=1}^{N_{nucl}}\left(log\frac{T_{Th.}^{ i}}{T_{Exp.}^{i}}\right)^{2} \tag{8}\]
\[\text{RMSE}=\sqrt{\frac{1}{N_{nucl}}\sum_{i=1}^{N_{nucl}}\left(log\frac{T_{Th.}^{i} }{T_{Exp.}^{i}}\right)^{2}} \tag{9}\]
where \(N_{nucl}\) is the total number of data points and \(N_{p}\) is the number of degrees of freedom (i.e., the number of coefficients). \(T_{Exp.}^{i}\) and \(T_{Th.}^{i}\) are the experimental and theoretical half-lives for the \(i^{th}\) data point, respectively.
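For concreteness, Eqs. (7)-(9) translate into a few lines of code; the sketch below is ours (integer \(\triangle_{j}\) and base-10 logarithms are assumed).

```python
import numpy as np

def l_min(j_p, pi_p, j_d, pi_d, j_c, pi_c):
    """Minimum cluster angular momentum from Eq. (7); parities are +1/-1."""
    dj = int(abs(j_p - j_d - j_c))       # integer Delta_j assumed
    pi_f = pi_d * pi_c
    if dj % 2 == 0:
        return dj if pi_p == pi_f else dj + 1
    return dj if pi_p != pi_f else dj + 1

def fit_metrics(log_t_th, log_t_exp, n_par):
    """chi^2 per degree of freedom (Eq. (8)) and RMSE (Eq. (9))."""
    r = np.asarray(log_t_th) - np.asarray(log_t_exp)
    chi2 = np.sum(r**2) / (r.size - n_par)
    rmse = np.sqrt(np.mean(r**2))
    return chi2, rmse
```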
The investigation of the added terms leads to the following conclusions from Table 1: (i) the addition of the \(l\)-dependent term, which reflects the hindrance effect of the centrifugal barrier, significantly reduces \(\chi^{2}\) and RMSE for all six formulas considered; (ii) the addition of the \(I\)-dependent term further reduces \(\chi^{2}\) and RMSE only for the BKAG and RenA formulas. As a result, the final versions of the modified formulas adopted in the present article are:
\[log_{10}T_{1/2}^{MBKAG}(sec.)=[aA_{c}(A_{d}-A_{c})/A+bZ_{c}(Z_{d}-Z_{c})/Z]Q^{ -1/2}+cl(l+1)+dI(I+1)+e \tag{10}\]
\[log_{10}T_{1/2}^{MRenA}(sec.)=aZ_{d}Z_{c}Q^{-1/2}+bZ_{d}Z_{c}+cl(l+1)+dI(I+1)+e \tag{11}\]
\[log_{10}T_{1/2}^{MHoroi}(sec.)=(a\sqrt{\mu}+b)[(Z_{c}Z_{d})^{0.607}Q^{-1/2}-7] +(c\sqrt{\mu}+d)+el(l+1) \tag{12}\]
\[log_{10}T_{1/2}^{MNRDX}(sec.) = aZ_{c}Z_{d}\sqrt{\frac{\mu}{Q}}+b\sqrt{\mu}(Z_{c}Z_{d})^{1/2}+cl( l+1)+d \tag{13}\]
\[log_{10}T_{1/2}^{MUDL}(sec.) = aZ_{c}Z_{d}\sqrt{\frac{\mu}{Q}}+b[\mu Z_{c}Z_{d}({A_{c}}^{1/3}+{A_ {d}}^{1/3})]^{1/2}+cl(l+1)+d \tag{14}\]
\[log_{10}T_{1/2}^{MUNIV}(sec.) = -logP-log_{10}S+cl(l+1)+d \tag{15}\]
The coefficients a, b, c, d, and e of these modified formulas are mentioned in Table 2.
## 3 Results and discussions
To ascertain the impact of the added terms on the accuracy of the estimated cluster decay half-lives, we plot the ratio of decay widths \(W_{Exp.}/W_{Th.}=log_{10}T_{1/2}^{Th.}/log_{10}T_{1/2}^{Exp.}\) as a function of A for our six modified formulas (MBKAG, MRenA, MHoroi,
\begin{table}
\begin{tabular}{c|c c c c c c c c c c c c} \hline Formula & \multicolumn{2}{c}{BKAG} & \multicolumn{2}{c}{RenA} & \multicolumn{2}{c}{Horoi} & \multicolumn{2}{c}{NRDX} & \multicolumn{2}{c}{UDL} & \multicolumn{2}{c}{UNIV} \\ \cline{2-13} & \(\chi^{2}\) & RMSE & \(\chi^{2}\) & RMSE & \(\chi^{2}\) & RMSE & \(\chi^{2}\) & RMSE & \(\chi^{2}\) & RMSE & \(\chi^{2}\) & RMSE \\ \hline Original & 1.01 & 0.98 & 1.10 & 0.95 & 1.45 & 1.16 & 0.85 & 0.90 & 1.88 & 1.34 & 0.87 & 0.91 \\ With \(l\) term only & 0.66 & 0.78 & 0.92 & 0.93 & 0.76 & 0.84 & 0.66 & 0.78 & 0.51 & 0.69 & 0.65 & 0.78 \\ With \(l\) and \(I\) terms & 0.44 & 0.63 & 0.68 & 0.79 & 0.77 & 0.83 & 0.66 & 0.77 & 0.49 & 0.67 & 0.67 & 0.77 \\ \hline \end{tabular}
\end{table}
Table 1: The \(\chi^{2}\) and RMSE of various versions of the BKAG, RenA, Horoi, NRDX, UDL, and UNIV formulas for 61 cluster decay data.
MNRDX, MUDL, and MUNIV) along with their original versions in Fig. 1. Most of the points corresponding to our modified formulas (red diamonds) lie within half an order of magnitude, while the points corresponding to the original formulas (blue triangles) are more widely scattered, indicating the improvement in the estimated cluster decay half-lives after adding the angular-momentum (\(l\)) or isospin-dependent (\(I=(N-Z)/A\)) terms, or both.
To compare our modified formulas with a few of the latest fitted/modified formulas [26; 27; 28] for cluster decay half-lives, we calculated additional statistical parameters, namely the standard deviation (\(\sigma\)), the uncertainty (\(u\)), the average deviation factor (\(\overline{x}\)), and the mean deviation (\(\overline{\delta}\)), for the 61 experimentally known cluster decay half-lives [7; 9; 26; 29]. All these statistical
\begin{table}
\begin{tabular}{l|c c c c c} \hline Formula & \(a\) & \(b\) & \(c\) & \(d\) & \(e\) \\ \hline MBKAG & 6.5279 & 89.2684 & 0.0798 & 70.0439 & -100.4122 \\ MRenA & 1.2947 & -0.0423 & 0.0771 & 89.9255 & -101.5076 \\ MHoroi & 10.1451 & -23.1954 & 4.4835 & -10.9094 & 0.0567 \\ MNRDX & 0.3590 & -1.0063 & 0.0634 & -18.8444 & - \\ MUDL & 0.3564 & -0.3199 & 0.0737 & -24.8301 & - \\ MUNIV & 0.2369 & 0.6104 & 0.0648 & -23.7267 & - \\ \hline \end{tabular}
\end{table}
Table 2: The coefficients of the MBKAG, MRenA, MHoroi, MNRDX, MUDL, and MUNIV formulas proposed in the present work.
Figure 1: (Colour online) Ratio of experimental to theoretical decay widths \(W_{Exp.}/W_{Th.}=log_{10}T_{1/2}^{Th.}/log_{10}T_{1/2}^{Exp.}\) for the comparison of our six modified formulas with their respective original versions by using 61 cluster emission data. The RMSE values are also indicated in front of the name of the respective formula.
parameters for these formulas are listed in Table 3. They are defined as:
\[\sigma=\sqrt{\frac{1}{N_{nucl}-1}\sum_{i=1}^{N_{nucl}}\left(log\frac{T_{Th.}^{i}}{ T_{Exp.}^{i}}\right)^{2}} \tag{16}\]
\[u=\sqrt{\frac{1}{N_{nucl}(N_{nucl}-1)}\sum_{i=1}^{N_{nucl}}\left(log\frac{T_{Th. }^{i}}{T_{Exp.}^{i}}-\mu\right)^{2}} \tag{17}\]
\[\overline{x}=\frac{1}{N_{nucl}}\sum_{i=1}^{N_{nucl}}\left(\frac{|logT_{Exp.}^{ i}-logT_{Th.}^{i}|}{logT_{Exp.}^{i}}\right) \tag{18}\]
\[\overline{\delta}=\frac{1}{N_{nucl}}\sum_{i=1}^{N_{nucl}}\left|log\frac{T_{Th. }^{i}}{T_{Exp.}^{i}}\right| \tag{19}\]
The terms in the above equations are defined as in Eqns. (8) and (9), and \(\mu\) in Eqn. (17) refers to the mean of the residuals over the full data set.
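Eqs. (16)-(19) can likewise be evaluated directly from the residuals; a minimal sketch:

```python
import numpy as np

def deviation_stats(log_t_th, log_t_exp):
    """sigma, u, xbar, dbar of Eqs. (16)-(19) from log10 half-lives."""
    r = np.asarray(log_t_th) - np.asarray(log_t_exp)
    n = r.size
    sigma = np.sqrt(np.sum(r**2) / (n - 1))                  # Eq. (16)
    u = np.sqrt(np.sum((r - r.mean())**2) / (n * (n - 1)))   # Eq. (17)
    xbar = np.mean(np.abs(r / np.asarray(log_t_exp)))        # Eq. (18)
    dbar = np.mean(np.abs(r))                                # Eq. (19)
    return sigma, u, xbar, dbar
```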
It is clear from Table 3 that the isospin (only for BKAG and RenA) and the angular momentum play a crucial role in improving the cluster decay formulas, resulting in smaller statistical parameters \(\sigma\), \(u\), \(\overline{x}\), and \(\overline{\delta}\) for the modified formulas introduced in the present work than for a few of the latest fitted/modified formulas (new UDL, IUF, and ISEM) for cluster decay. Note that, among all the modified formulas, the MBKAG formula renders the most accurate half-lives according to every statistical parameter. Hence, the MBKAG formula can be employed to predict precise cluster decay half-lives and probable emission modes. With this in view, the possibility of cluster emission from the experimentally known trans-lead (86\(\leq\)Z\(\leq\)96) isotopes is probed by considering daughter nuclei near the proton shell closure, i.e., the emitted cluster is chosen such that the proton number of the daughter nucleus \(Z_{d}\) is close to 82 (Pb).
Before predicting new cluster decays in the trans-lead region, we first calculate the half-lives of the experimentally known cluster decays using the MBKAG formula; these are listed in Table 4. We have taken only one entry per parent-cluster combination from the 61 experimental cluster decay data, in order to compare with the \(\alpha\)-decay half-lives. For the \(\alpha\)-decay half-lives, we have used the NMHF (new modified Horoi formula), whose accuracy in determining half-lives has already
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Formula & \(\sigma\) & \(u\) & \(\overline{x}\) & \(\overline{\delta}\) \\ \hline MBKAG (Present Work) & 0.64 & 0.08 & 0.02 & 0.51 \\ MRenA (Present Work) & 0.80 & 0.10 & 0.02 & 0.62 \\ MHoroi (Present Work) & 0.84 & 0.11 & 0.03 & 0.66 \\ MNRDX (Present Work) & 0.79 & 0.10 & 0.02 & 0.60 \\ MUDL (Present Work) & 0.70 & 0.09 & 0.03 & 0.53 \\ MUNIV (Present Work) & 0.79 & 0.10 & 0.03 & 0.59 \\ New UDL [26] & 0.81 & 0.10 & 0.03 & 0.68 \\ IUF [27] & 0.84 & 0.11 & 0.03 & 0.64 \\ ISEM [28] & 0.93 & 0.12 & 0.04 & 0.76 \\ \hline \end{tabular}
\end{table}
Table 3: Comparison of the MBKAG, MRenA, MHoroi, MNRDX, MUDL, and MUNIV formulas with a few other formulas.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline Parent & Daughter & Emitted & \(Q\) & \(Q_{\alpha}\) & \(l\) & \multicolumn{3}{c}{log\({}_{10}\)T\({}_{1/2}\)(sec.)} & BR\({}_{Exp.}\) & BR \\ nucleus & nucleus & cluster & (MeV) & (MeV) & & Exp. & MBKAG & NMHF & \\ & & & & & & & (Cluster) & (\(\alpha\)) & & \\ \hline \({}^{221}\)Fr & \({}^{207}\)Tl & \({}^{14}\)C & 31.28 & 6.46 & 3 & 14.52 & 15.44 & 2.96 & -11.56 & -12.48 \\ \({}^{221}\)Ra & \({}^{207}\)Pb & \({}^{14}\)C & 32.39 & 6.88 & 3 & 13.39 & 13.01 & 1.74 & -11.65 & -11.27 \\ \({}^{222}\)Ra & \({}^{208}\)Pb & \({}^{14}\)C & 33.05 & 6.68 & 0 & 11.22 & 11.46 & 2.32 & -8.90 & -9.14 \\ \({}^{223}\)Ra & \({}^{209}\)Pb & \({}^{14}\)C & 31.85 & 5.98 & 4 & 15.25 & 15.18 & 5.17 & -10.08 & -10.01 \\ \({}^{223}\)Ac & \({}^{209}\)Bi & \({}^{14}\)C & 33.06 & 6.78 & 2 & 12.60 & 11.54 & 2.38 & -10.22 & -9.16 \\ \({}^{223}\)Ac & \({}^{208}\)Pb & \({}^{15}\)N & 39.47 & 6.78 & 2 & 14.76 & 14.36 & 2.38 & -12.38 & -11.98 \\ \({}^{224}\)Ra & \({}^{210}\)Pb & \({}^{14}\)C & 30.54 & 5.79 & 0 & 15.90 & 15.99 & 5.87 & -10.03 & -10.12 \\ \({}^{225}\)Ac & \({}^{211}\)Bi & \({}^{14}\)C & 30.48 & 5.94 & 4 & 17.16 & 17.30 & 5.70 & -11.46 & -11.60 \\ \({}^{226}\)Ra & \({}^{212}\)Pb & \({}^{14}\)C & 28.21 & 4.87 & 0 & 21.19 & 20.68 & 10.52 & -10.67 & -10.16 \\ \({}^{226}\)Th & \({}^{212}\)Po & \({}^{14}\)C & 30.67 & 6.45 & 0 & 15.30 & 15.02 & 3.79 & -11.51 & -11.24 \\ \({}^{228}\)Th & \({}^{208}\)Pb & \({}^{20}\)O & 44.72 & 5.52 & 0 & 20.72 & 21.34 & 7.82 & -12.90 & -13.52 \\ \({}^{230}\)Th & \({}^{206}\)Hg & \({}^{24}\)Ne & 57.78 & 4.77 & 0 & 24.64 & 25.78 & 11.91 & -12.73 & -13.87 \\ \({}^{230}\)U & \({}^{208}\)Pb & \({}^{22}\)Ne & 61.40 & 5.99 & 0 & 19.57 & 20.38 & 6.32 & -13.25 & -14.06 \\ \({}^{231}\)Pa & \({}^{207}\)Tl & \({}^{24}\)Ne & 60.42 & 5.15 & 1 & 23.23 & 23.33 & 10.11 & -13.12 & -13.22 \\ \({}^{232}\)Th & \({}^{208}\)Hg & \({}^{24}\)Ne & 55.62 & 4.08 & 0 & 29.20 & 28.56 & 16.63 & -12.57 & -11.94 \\ \({}^{232}\)Th & \({}^{206}\)Hg & \({}^{26}\)Ne & 55.97 & 4.08 & 0 & 29.20 & 29.21 & 16.63 & -12.57 & -12.59 \\ \({}^{232}\)Th & \({}^{208}\)Pb & \({}^{24}\)Ne & 62.31 & 5.41 & 0 & 21.06 & 21.32 & 9.08 & -11.98 & -12.24 \\ \({}^{232}\)U & \({}^{204}\)Hg & \({}^{28}\)Mg & 74.32 & 5.41 & 0 & 22.26 & 25.01 & 9.08 & -13.18 & -15.93 \\ \({}^{233}\)U & \({}^{209}\)Pb & \({}^{24}\)Ne & 60.50 & 4.91 & 2 & 24.82 & 23.71 & 11.86 & -12.96 & -11.85 \\ \({}^{233}\)U & \({}^{208}\)Pb & \({}^{25}\)Ne & 60.75 & 4.91 & 2 & 24.82 & 23.97 & 11.86 & -12.96 & -12.12 \\ \({}^{233}\)U & \({}^{205}\)Hg & \({}^{28}\)Mg & 74.24 & 4.91 & 3 & 27.59 & 26.38 & 11.86 & -15.73 & -14.53 \\ \({}^{234}\)U & \({}^{210}\)Pb & \({}^{24}\)Ne & 58.84 & 4.86 & 0 & 25.88 & 25.06 & 12.19 & -13.69 & -12.87 \\ \({}^{234}\)U & \({}^{208}\)Pb & \({}^{26}\)Ne & 59.47 & 4.86 & 0 & 25.88 & 25.46 & 12.19 & -13.69 & -13.27 \\ \({}^{234}\)U & \({}^{206}\)Hg & \({}^{28}\)Mg & 74.13 & 4.86 & 0 & 25.14 & 25.86 & 12.19 & -12.95 & -13.67 \\ \({}^{235}\)U & \({}^{211}\)Pb & \({}^{24}\)Ne & 57.36 & 4.68 & 1 & 27.42 & 26.95 & 13.37 & -14.05 & -13.58 \\ \({}^{235}\)U & \({}^{210}\)Pb & \({}^{25}\)Ne & 57.83 & 4.68 & 3 & 27.42 & 27.81 & 13.37 & -14.05 & -14.43 \\ \({}^{235}\)U & \({}^{207}\)Hg & \({}^{28}\)Mg & 72.20 & 4.68 & 1 & 28.09 & 27.81 & 13.37 & -14.72 & -14.44 \\ \({}^{235}\)U & \({}^{206}\)Hg & \({}^{29}\)Mg & 72.61 & 4.68 & 3 & 28.09 & 28.70 & 13.37 & -14.72 & -15.32 \\ \({}^{236}\)U & \({}^{212}\)Pb & \({}^{24}\)Ne & 55.96 & 4.57 & 0 & 25.90 & 28.50 & 14.04 & -11.86 & -14.46 \\ \({}^{236}\)U & \({}^{210}\)Pb 
& \({}^{26}\)Ne & 56.75 & 4.57 & 0 & 25.90 & 28.73 & 14.04 & -11.86 & -14.69 \\ \({}^{236}\)U & \({}^{208}\)Hg & \({}^{28}\)Mg & 71.69 & 4.57 & 0 & 27.58 & 28.40 & 14.04 & -13.54 & -14.36 \\ \({}^{236}\)U & \({}^{206}\)Hg & \({}^{30}\)Mg & & & & & & & & \\ \hline \end{tabular}
\end{table}
Table 4: The calculated logarithmic half-lives and branching ratios of the experimentally known cluster decays from trans-lead nuclei, compared with the experimental values [7; 9; 26; 29]. Cluster decay and \(\alpha\)-decay half-lives are calculated by using the MBKAG formula (Eqn. (10)) and the NMHF formula [20], respectively.
been demonstrated in Ref. [20]. The first, second, and third columns of Table 4 list the parent, daughter, and cluster nuclei, respectively. The next two columns give the disintegration energies for cluster decay and \(\alpha\)-decay, taken from Refs. [7; 9; 26; 29] and from AME2020 [37], respectively. The sixth column lists the angular momentum carried away by the cluster after emission, calculated using the selection rules of Eqn. (7). The logarithmic cluster decay half-lives calculated with Eqn. (10) are tabulated in the eighth column and compared with the experimental values (seventh column). It is clear from Table 4 that the cluster emission half-lives calculated with the MBKAG formula (present work) are very close to the experimental results. The branching ratio (BR), which quantifies the competition between cluster decay and \(\alpha\)-decay, is defined as the ratio of the \(\alpha\)-decay half-life (listed in the ninth column) to the cluster decay half-life:
\[BR=log_{10}b_{c}=log_{10}(\lambda_{c}/\lambda_{\alpha})=log_{10}(T_{\alpha}/T_{ c}) \tag{20}\]
where \(\lambda_{\alpha}\) and \(\lambda_{c}\) are the decay constants of \(\alpha\)-decay and cluster emission, respectively. The calculated branching ratios, shown in the last column, are indeed close to the experimental branching ratios [7; 9; 26; 29] (second-to-last column). In fact, the excellent match for almost all clusters listed in Table 4 validates the applicability of the MBKAG formula. Furthermore, one can note that the experimental cluster decay half-lives extend up to about \(10^{30}\) sec.; clusters with half-lives below \(10^{30}\) sec. should therefore be within experimental reach.
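As an illustration, the MBKAG half-life of Eqn. (10) with the Table 2 coefficients and the branching ratio of Eqn. (20) can be evaluated as below; for the first entry of Table 4 (\({}^{14}\)C from \({}^{221}\)Fr, Q=31.28 MeV, \(l\)=3) this reproduces log\({}_{10}\)T\({}_{1/2}\)=15.44.

```python
import numpy as np

def log_t_mbkag(A, Z, A_c, Z_c, Q, l):
    """Eq. (10) with the Table 2 coefficients; A, Z refer to the parent."""
    A_d, Z_d = A - A_c, Z - Z_c
    I = (A - 2 * Z) / A                  # isospin asymmetry (N - Z)/A of the parent
    a, b, c, d, e = 6.5279, 89.2684, 0.0798, 70.0439, -100.4122
    return ((a * A_c * (A_d - A_c) / A + b * Z_c * (Z_d - Z_c) / Z) / np.sqrt(Q)
            + c * l * (l + 1) + d * I * (I + 1) + e)

def branching_ratio(log_t_alpha, log_t_cluster):
    """Eq. (20): BR = log10(T_alpha / T_c)."""
    return log_t_alpha - log_t_cluster

print(log_t_mbkag(221, 87, 14, 6, 31.28, 3))   # -> 15.44 (221Fr -> 207Tl + 14C)
```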
In the next step of our study, we utilize the demonstrated accuracy of the MBKAG formula, exhibited in Table 4, to predict the logarithmic half-lives of as-yet-unobserved cluster emissions in the trans-lead region. For this estimation, the \(Q\)-values are calculated from the following relation:
\[Q(MeV)=B.E.(d)+B.E.(c)-B.E.(p)+k[Z_{p}^{\epsilon}-Z_{d}^{\epsilon}] \tag{21}\]
where the term \(k[Z_{p}^{\epsilon}-Z_{d}^{\epsilon}]\) represents the screening effect caused by the electrons surrounding the nucleus [38], with k=8.7 eV [8.7 \(\times\)\(10^{-6}\) MeV] and \(\epsilon\)=2.517 for proton number Z \(\geq\) 60, and k=13.6 eV [13.6 \(\times\)\(10^{-6}\) MeV] and \(\epsilon\)=2.408 for Z \(<\) 60, as deduced from the data of Huang _et al._ [39]. For an accurate prediction of the theoretical \(Q\)-values, we selected an effective and reliable treatment from among several theoretical approaches, viz. relativistic mean-field theory (RMF) [32; 40; 41; 42; 43; 44], the Finite-Range Droplet Model (FRDM) [31], nonrelativistic Skyrme Hartree-Fock-Bogoliubov (HFB) theory [33], and the Weizsacker-Skyrme mass model (WS4) [30]. For each of these approaches, we calculated the RMSE, listed in Table 5, over the 121 known \(Q\)-values for cluster emission [7; 9; 26; 29]. Table 5 establishes that the WS4 mass model provides excellent agreement with the minimum RMSE among all the considered approaches, which justifies calculating the \(Q\)-values for cluster emission with binding energies (for the daughter (d), cluster (c), and parent (p) nuclei) taken from this mass model [30].
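A sketch of Eqn. (21) with the screening term follows; it assumes, as our reading of the text, that each nucleus uses the (k, \(\epsilon\)) pair corresponding to its own proton number.

```python
def q_cluster(be_d, be_c, be_p, Z_p, Z_d):
    """Eq. (21): Q-value in MeV from binding energies plus electron screening."""
    def screen(Z):
        k, eps = (8.7e-6, 2.517) if Z >= 60 else (13.6e-6, 2.408)
        return k * Z**eps
    return be_d + be_c - be_p + screen(Z_p) - screen(Z_d)
```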
\begin{table}
\begin{tabular}{l c} \hline \hline Theory & RMSE \\ \hline WS4 & 0.43 \\ FRDM & 0.78 \\ HFB & 1.17 \\ RMF & 3.61 \\ \hline \hline \end{tabular}
\end{table}
Table 5: RMSE of various mass models for \(Q\)-value data for cluster emission.
Figure 2: (Colour online) Variation of half-lives of various cluster emissions from experimentally known isotopes of trans-lead nuclei (86\(\leq\)Z\(\leq\)96) as a function of neutron number of daughter nuclei (considering proton number \(Z_{d}\)=82). These half-lives are calculated by using MBKAG formula and the \(Q\)-values are taken from the WS4 mass model[30].
After selecting the most effective empirical formula and the source of theoretical \(Q\)-values, we consider all parent-cluster combinations in this extensive study to find the possible clusters emitted from the \({}^{211-231}\)Rn, \({}^{213-226}\)Fr, \({}^{214-235}\)Ra, \({}^{215-233}\)Ac, \({}^{216-237}\)Th, \({}^{218-241}\)Pa, \({}^{228-243}\)U, \({}^{226-245}\)Np, \({}^{226-245}\)Pu, \({}^{227-248}\)Am, and \({}^{231-252}\)Cm isotopes leading to the \({}^{208}\)Pb daughter (doubly magic) and neighbouring nuclei. We plot our results (up to T=\(10^{100}\) sec.) in Fig. 2, where the minima of log\({}_{10}\)T\({}_{1/2}\) in several panels (Ra isotopes to U isotopes) correspond to the \({}^{208}\)Pb daughter, i.e., doubly magic (Z=82, N=126), or to daughters close to it. These minima identify the most probable clusters emitted from the respective isotopes. However, cluster emission always competes with \(\alpha\)-decay, a competition quantified by the branching ratio of Eqn. (20).
On the other side, in the panels from Np-isotopes to Cm-isotopes in Fig. 2, in-spite of a clear minima, there is incessantly some probability of emission of clusters since many of the clusters own half-lives less than \(10^{30}\) sec. (experimental limit of half-lives of cluster emissions). For examples,
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Parent & Daughter & Emitted & \(Q\) & \(Q_{\alpha}\) & \(l\) & \(\log_{10}\)T\({}_{1/2}\)(sec.) & BR \\ nucleus & nucleus & cluster & (MeV) & (MeV) & & MBKAG & NMHF & \\ & & & & & & (Cluster) & (\(\alpha\)) & & \\ \hline \({}^{216}\)Rn & \({}^{208}\)Pb & \({}^{8}\)Be & 17.13 & 8.20 & 0 & 6.65 & -2.84 & -9.49 \\ \({}^{222}\)Fr & \({}^{207}\)Pb & \({}^{14}\)B & 21.56 & 5.85 & 0 & 20.23 & 5.24 & -14.99 \\ \({}^{221}\)Ra & \({}^{208}\)Pb & \({}^{13}\)C & 31.70 & 6.88 & 3 & 13.13 & 1.74 & -11.39 \\ \({}^{223}\)Ra & \({}^{208}\)Pb & \({}^{15}\)C & 29.22 & 5.98 & 2 & 19.15 & 5.17 & -13.98 \\ \({}^{222}\)Ac & \({}^{208}\)Pb & \({}^{14}\)N & 35.64 & 7.14 & 1 & 17.93 & 1.03 & -16.90 \\ \({}^{222}\)Ac & \({}^{207}\)Pb & \({}^{15}\)N & 39.10 & 7.14 & 1 & 14.09 & 1.03 & -13.06 \\ \({}^{224}\)Ac & \({}^{208}\)Pb & \({}^{16}\)N & 36.43 & 6.33 & 2 & 19.44 & 3.99 & -15.45 \\ \({}^{225}\)Ac & \({}^{208}\)Pb & \({}^{17}\)N & 35.64 & 5.94 & 2 & 21.68 & 5.70 & -15.98 \\ \({}^{224}\)Th & \({}^{208}\)Pb & \({}^{16}\)O & 46.63 & 7.30 & 0 & 15.11 & 0.81 & -14.30 \\ \({}^{225}\)Th & \({}^{208}\)Pb & \({}^{17}\)O & 45.02 & 6.92 & 2 & 18.39 & 2.22 & -16.17 \\ \({}^{226}\)Th & \({}^{208}\)Pb & \({}^{18}\)O & 45.88 & 6.45 & 0 & 17.98 & 3.79 & -14.19 \\ \({}^{227}\)Th & \({}^{208}\)Pb & \({}^{19}\)O & 44.36 & 6.15 & 2 & 21.19 & 5.16 & -16.03 \\ \({}^{228}\)Th & \({}^{208}\)Pb & \({}^{20}\)O & 44.87 & 5.52 & 0 & 21.12 & 7.96 & -13.16 \\ \({}^{229}\)Th & \({}^{208}\)Pb & \({}^{21}\)O & 43.41 & 5.17 & 0 & 23.84 & 9.77 & -14.37 \\ \({}^{230}\)Th & \({}^{208}\)Pb & \({}^{22}\)O & 43.48 & 4.77 & 0 & 24.73 & 11.91 & -12.82 \\ \({}^{231}\)Th & \({}^{208}\)Pb & \({}^{23}\)O & 41.08 & 4.21 & 2 & 29.26 & 15.75 & -13.51 \\ \({}^{228}\)Pa & \({}^{208}\)Pb & \({}^{20}\)F & 50.90 & 6.26 & 2 & 22.42 & 5.13 & -17.29 \\ \({}^{229}\)Pa & \({}^{208}\)Pb & \({}^{21}\)F & 51.83 & 5.84 & 0 & 21.94 & 6.74 & -15.20 \\ \({}^{231}\)Pa & \({}^{208}\)Pb & \({}^{23}\)F & 52.01 & 5.15 & 1 & 23.75 & 10.11 & -13.64 \\ \({}^{231}\)U & \({}^{208}\)Pb & \({}^{23}\)Ne & 60.99 & 5.58 & 0 & 21.55 & 8.53 & -13.02 \\ \({}^{231}\)U & \({}^{206}\)Pb & \({}^{25}\)Ne & 59.91 & 5.58 & 2 & 23.95 & 8.53 & -15.42 \\ \hline \end{tabular}
\end{table}
Table 6: The calculated logarithmic half-lives and branching ratios of probable clusters emitted from various isotopes of trans-lead nuclei (86\(\leq\)Z\(\leq\)96). Cluster decay and \(\alpha\)-decay half-lives are calculated by using MBKAG formula (Eqn. 10) and NMHF formula [20], respectively. Disintegration energies (\(Q\)-values) for the cluster decay and \(\alpha\)-decay are taken from WS4 mass model [30] and AME2020 [37], respectively. For the \(l\) values, spin and parity of parent, daughter, and cluster nuclei are used from NUBASE2020 [36].
\({}^{21}\)Na from \({}^{226-229}\)Np, \({}^{22}\)Na from \({}^{226-230}\)Np, \({}^{23}\)Na from \({}^{226-233}\)Np, \({}^{24}\)Na from \({}^{226-234}\)Np, \({}^{25,27}\)Na from \({}^{226-237}\)Np, \({}^{26}\)Na from \({}^{226-236}\)Np, and \({}^{28}\)Na from \({}^{224-236}\)Np. Similarly, some possible clusters (Mg isotopes) emitted from various Pu isotopes (Z\({}_{p}\)=94) are \({}^{23}\)Mg from \({}^{226-231}\)Pu, \({}^{24,25}\)Mg from \({}^{226-235}\)Pu, \({}^{26}\)Mg from \({}^{226-238}\)Pu, \({}^{27}\)Mg from \({}^{226-239}\)Pu, and \({}^{28,29}\)Mg from \({}^{226-241}\)Pu. Among the Am isotopes, the potential clusters are \({}^{24}\)Al from \({}^{227-230}\)Am, \({}^{25}\)Al from \({}^{227-233}\)Am, \({}^{26}\)Al from \({}^{227-236}\)Am, \({}^{27}\)Al from \({}^{227-239}\)Am, \({}^{28}\)Al from \({}^{227-240}\)Am, \({}^{29}\)Al from \({}^{227-241}\)Am, and \({}^{30-32}\)Al from \({}^{227-242}\)Am, as well as \({}^{26-33}\)Si from the \({}^{231-252}\)Cm isotopes. In the emission of odd-mass clusters, odd-even staggering is noticeable in Fig. 2, which is usually attributed to nucleonic pairing correlations [49]. This detailed survey of favorable clusters with T\({}_{1/2}<10^{30}\) sec. is expected to provide useful input for future experiments.
## 4 Conclusions
Several empirical formulas have been investigated by adding angular-momentum and isospin dependence, turning them into the MBKAG, MRenA, MHoroi, MNRDX, MUDL, and MUNIV formulas. Experimental data for a total of 61 decays were used for the fitting, and all the modified formulas give improved results compared with their earlier versions. Among these six modified formulas, a comparison of several statistical parameters shows the MBKAG formula to be the most precise, and it is used to examine cluster decay half-lives in the trans-lead region: the \({}^{211-231}\)Rn, \({}^{213-226}\)Fr, \({}^{214-235}\)Ra, \({}^{215-233}\)Ac, \({}^{216-237}\)Th, \({}^{218-241}\)Pa, \({}^{228-243}\)U, \({}^{226-245}\)Np, \({}^{226-245}\)Pu, \({}^{227-248}\)Am, and \({}^{231-252}\)Cm isotopes leading to the \({}^{208}\)Pb daughter (doubly magic) and neighbouring nuclei. We find a considerable probability for the emission of various isotopes of Be, B, C, N, O, F, Ne, Na, Mg, and Si from the above trans-lead nuclei, many of which are favorable for measurement (T\({}_{1/2}<10^{30}\) sec.). This study reveals that doubly magic daughter nuclei play a crucial role in the cluster decay process and could serve as a stimulus for experiments targeting cluster radioactivity.
## 5 Acknowledgement
AJ and GS acknowledge the support provided by SERB (DST), Govt. of India under CRG/2019/001851, and SIR/2022/000566, respectively.
|
2309.12721 | Metrology of Rydberg states of the hydrogen atom | We present a method to precisely measure the frequencies of transitions to
high-$n$ Rydberg states of the hydrogen atom which are not subject to
uncontrolled systematic shifts caused by stray electric fields. The method
consists in recording Stark spectra of the field-insensitive $k=0$ Stark states
and the field-sensitive $k=\pm2$ Stark states, which are used to calibrate the
electric field strength. We illustrate this method with measurements of
transitions from the $2\,\text{s}(f=0\text{ and } 1)$ hyperfine levels in the
presence of intentionally applied electric fields with strengths in the range
between $0.4$ and $1.6\,$Vcm$^{-1}$. The slightly field-dependent $k=0$ level
energies are corrected with a precisely calculated shift to obtain the
corresponding Bohr energies $\left(-cR_{\mathrm{H}}/n^2\right)$. The energy
difference between $n=20$ and $n=24$ obtained with our method agrees with
Bohr's formula within the $10\,$kHz experimental uncertainty. We also
determined the hyperfine splitting of the $2\,\text{s}$ state by taking the
difference between transition frequencies from the $2\,\text{s}(f=0 \text{ and
}1)$ levels to the $n=20,k=0$ Stark states. Our results demonstrate the
possibility of carrying out precision measurements in high-$n$ hydrogenic
quantum states. | Simon Scheidegger, Josef A. Agner, Hansjürg Schmutz, Frédéric Merkt | 2023-09-22T09:11:55Z | http://arxiv.org/abs/2309.12721v1 | # Metrology of Rydberg states of the hydrogen atom
###### Abstract
We present a method to precisely measure the frequencies of transitions to high-\(n\) Rydberg states of the hydrogen atom which are not subject to uncontrolled systematic shifts caused by stray electric fields. The method consists in recording Stark spectra of the field-insensitive \(k=0\) Stark states and the field-sensitive \(k=\pm 2\) Stark states, which are used to calibrate the electric field strength. We illustrate this method with measurements of transitions from the \(2\mathrm{s}(f=0\) and \(1)\) hyperfine levels in the presence of intentionally applied electric fields with strengths in the range between \(0.4\) and \(1.6\,\mathrm{V}\,\mathrm{cm}^{-1}\). The slightly field-dependent \(k=0\) level energies are corrected with a precisely calculated shift to obtain the corresponding Bohr energies \(\left(-cR_{\mathrm{H}}/n^{2}\right)\). The energy difference between \(n=20\) and \(n=24\) obtained with our method agrees with Bohr's formula within the \(10\,\mathrm{kHz}\) experimental uncertainty. We also determined the hyperfine splitting of the \(2\mathrm{s}\) state by taking the difference between transition frequencies from the \(2\mathrm{s}(f=0\) and \(1)\) levels to the \(n=20,k=0\) Stark states. Our results demonstrate the possibility of carrying out precision measurements in high-\(n\) hydrogenic quantum states.
## I Introduction
The hydrogen atom is a fundamental two-body quantum system. Studies of its spectrum by experiment and theory have played a key role in the development of the quantum theory [1; 2; 3; 4; 5] and of quantum electrodynamics [6; 7; 8; 9]. Spectroscopic measurements of energy intervals between the quantum states of the hydrogen atom have reached exceptional precision and the results can be exactly explained by first-principles calculations and accurately known physical constants such as the Rydberg constant \(R_{\infty}\), the fine-structure constant \(\alpha\) and the proton charge radius \(r_{\rm p}\). The theoretical treatment of the H atom by relativistic quantum mechanics and quantum electrodynamics is indeed so accurate that the comparison with the results of precision measurements in the H atom can serve to determine the values of these constants [10].
In the past years, a significant revision of the values of \(R_{\infty}\) and \(r_{\rm p}\) became necessary after a new measurement of the Lamb shift in muonic hydrogen [11; 12; 13] challenged earlier results from H-atom spectroscopy, a challenge that was referred to as the proton-radius puzzle. This challenge essentially results from the correlation between the \(R_{\infty}\) and \(r_{\rm p}\) values which necessitates the combination of at least two transition frequencies in the H atom to determine these constants. The latest CODATA values of \(R_{\infty}\) and \(r_{\rm p}\) are based on a combination of multiple results, in which the 1s-2s interval in H [14; 15] and the Lamb shift in muonic hydrogen [11] play a central role. Several recent precision measurements in H confirmed the revised values [16; 17] whereas others cover the range between the old and the new values of \(R_{\infty}\) and \(r_{\rm p}\)[18; 19; 20]. Measurement of quantities that are only sensitive to either \(r_{\rm p}\), such as electron-scattering measurements [21; 22; 23; 24; 25; 26; 27; 28; 29], or \(R_{\infty}\), such as measurements in non-penetrating Rydberg series of H, have regained interest.
Early, remarkable experiments designed to determine \(R_{\infty}\) from transition frequencies between circular states, _i.e._, states with orbital-angular-momentum quantum number \(\ell=n-1\) and magnetic quantum number \(m_{\ell}=\pm\ell\), of high principal quantum numbers in the H atom were carried out in the group of D. Kleppner at MIT [30; 31; 32], giving values of \(R_{\infty}\) compatible with the recommended CODATA values available at the time [33]. In that work, the frequencies of \(\Delta n=1\) transitions between circular states of H were measured with 2-3 Hz accuracy at \(n\) values around 30. These transition frequencies scale as \(2R_{\infty}/n^{3}\) and are completely insensitive to the proton size because the Rydberg electron does not penetrate in the core region. The \(2/n^{3}\) sensitivity factor to \(R_{\infty}\) of these measurement is only \(\approx 1\times 10^{-4}\) for the transition between the \(n=27,\ \ell=26,m_{\ell}=26\) and \(n=28,\ \ell=27,m_{\ell}=27\), but this disadvantage could be compensated by the fact that circular states are not sensitive to stray electric fields to first order, and through the exceptional control of all aspects of the millimeter-wave-spectroscopic experiments by the MIT team. An \(R_{\infty}\) value with an absolute uncertainty of 69 kHz and a relative uncertainty of \(2.1\times 10^{-11}\) was determined [32], close to the \(R_{\infty}\) uncertainty value of \(7.6\times 10^{-12}\) of the 1998 CODATA adjustment. Since this pioneering work, circular Rydberg states of Rb have been proposed as an alternative system to determine \(R_{\infty}\)[34]. The properties of circular Rydberg states of any atom or molecule are indeed ideally suited to metrology, as illustrated by the use of such states as ultrasensitive electric-field sensors [35].
If circular Rydberg states are excepted, high Rydberg states are usually not considered to be suitable for precision measurements because of their high sensitivity to stray electric fields (see discussion in, e.g., Refs. [36; 37]). In the context of metrology in the H atom, this sensitivity has implied that almost all precision experiments involving Rydberg states of H with \(n\geq 3\) have targeted states with \(n\) values below 12 [16; 38; 18] and that the measurements required a careful evaluation of the Stark effect on the level structure induced by stray electric fields.
We introduce here an alternative method to determine \(R_{\infty}\) which relies on measuring the spectra of \(|m_{\ell}|=1\) Rydberg states of the H atom in the presence of intentionally applied electric fields. Stark states of the H atom exhibit shifts of \(\approx 1.5a_{0}ekn\mathcal{F}\) that are linear in the field strength \(\mathcal{F}\) at low fields and proportional to the integer difference \(k=n_{1}-n_{2}\) between the quantum numbers \(n_{1}\) and \(n_{2}\) that arise in the solution of the Schrodinger equation in parabolic coordinates (\(k=0,\pm 1,\pm 2,\ldots,\pm(n-1-|m_{\ell}|)\), where \(m_{\ell}\) is the magnetic quantum number associated with the electron orbital motion) [9; 39]. Consequently, even-\(n\), \(k=0,|m_{\ell}|=1\) states are to first order field insensitive, like circular Rydberg states. Their magnetic moments are, however, much smaller than for circular states, which makes them less sensitive to Zeeman shifts by magnetic stray fields. \(|m_{\ell}|=1\) Stark states do not possess any \(s\) character and their \(\ell\)-mixed wavefunctions are dominated by nonpenetrating high-\(\ell\) components; consequently, their spectral positions are also insensitive to the proton size. Experimentally, we measure the frequencies of transitions from the 2s(\(f=0\) and 1) states to \(n=20,k=0,\pm 2,|m_{\ell}|=1\) Stark states and use the separation between the \(k=\pm 2\) states to precisely determine the value of the applied field. We then extract the position of the \(k=0\) state to determine the Bohr energy \(\left(-hcR_{\rm H}n^{-2}\right)\) after correcting for the quadratic Stark shifts. To obtain a value of \(R_{\infty}\) without having to consider its correlation with \(r_{\rm p}\), the position of the 2s levels and the \(n=20,k=0,\pm 2,|m_{\ell}|=1\) Stark states can be related to the position of the 2p levels using the 2s Lamb shift determined by Bezginov _et al._[17]. The sensitivity factor of the measurement to \(R_{\infty}\) is thus 1/4, _i.e._, more than 3000 times higher than for the measurement based on circular states at \(n\approx 30\). Consequently, an accuracy of about 20 kHz would make this measurement competitive with the MIT measurements and we believe that this is achievable. The price to pay for this advantage is that the transition frequencies are in the UV range of the electromagnetic spectrum rather than in the millimeter-wave range
and, therefore, compensation of the Doppler effect becomes much more critical.
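As an illustration of the calibration scheme, the first-order Stark splitting between the \(k=+2\) and \(k=-2\) states of the same \(n\) follows directly from the \(1.5a_{0}ekn\mathcal{F}\) shift quoted above; the sketch below evaluates it to first order only, without the quadratic correction discussed later.

```python
import scipy.constants as const

a0 = const.physical_constants['Bohr radius'][0]

def k_splitting_hz(n, F, dk=4):
    """First-order interval between k = +2 and k = -2 Stark states; F in V/m."""
    return 1.5 * n * dk * const.e * a0 * F / const.h

print(k_splitting_hz(20, 100.0) / 1e6)   # ~153 MHz at n = 20 and 1 V/cm
```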
In this article, we present several of the key aspects of this method of determining \(R_{\infty}\). We are still in the middle of the data-acquisition process, and use subsets of the data to discuss systematic uncertainties in the measurements of \(nkm_{\ell}\)\(\leftarrow\) 2s transition frequencies originating from the Stark effect. We also present the determination of (i) the \(f=0-f=1\) hyperfine interval in the 2s state, which we obtain by combining two sets of measurements, from 2s(\(f=0\)) and 2s(\(f=1\)) to \(n=20\) Stark states, and (ii) the difference between the \(n=20\) and \(n=24\) Bohr energies by combining measurements from the 2s(\(f=1\)) hyperfine state to \(n=20\) and 24 Stark states. The article is structured as follows: Section II describes the experimental setup and provides details on the laser systems used to prepare H atoms selectively in the 2s(\(f=0\) and 1) hyperfine states and to record spectra of the \(nkm_{\ell}\)\(\leftarrow\) 2s(\(f\)) transitions, as well as the detection system and the procedure we follow to cancel the Doppler shifts. Section III describes how we calculate the energies of the Stark states of H and draws attention to the aspects that are most relevant for the determination of the Bohr energies. Section IV illustrates the current status of our measurements by using small data sets to compare spectra recorded at different electric fields from the two hyperfine components of the 2s state and to \(n=20\) and 24 Stark states. The results we present here only concern small energy intervals (\(\sim 177\,\)MHz for the 2s(\(f=1\gets f=0\)) interval and \(2.51\,\)THz for the difference between the Bohr energies at \(n=20\) and 24) obtained by building differences of (currently still blinded) UV laser frequencies. Absolute transition frequencies will be reported when the analysis of the systematic errors related to the Doppler effect is completed. In the last section, we draw several conclusions concerning our new approach.
## II Experimental setup
The experimental setup is presented schematically in Fig. 1. It consists of (i) a differentially pumped set of vacuum chambers in which the H atoms are produced and entrained in a pulsed supersonic beam and subsequently photoexcited to Rydberg states via the metastable 2s state within a double-layer mu-metal magnetic shield; (ii) a pulsed near-Fourier-transform-limited laser system delivering radiation at 243 nm to drive the 2s \(\leftarrow\) 1s transition; and (iii) an SI-traceable single-mode continuous-wave (cw) UV laser to further excite the H atoms to Rydberg states. The experiment is run in a pulsed mode at a repetition rate of 25 Hz.
The hydrogen atom source has been described in Ref. [40] to which we refer for details. The hydrogen atoms are produced by dissociating molecular hydrogen in a dielectric-barrier discharge near the orifice of a pulsed cryogenic valve and are entrained in a supersonic beam of H\({}_{2}\). The temperature (\(T_{0}\)) of the valve can be adjusted between 45 K and 160 K to vary the forward velocity of the supersonic expansion between 970 m s\({}^{-1}\) and 1800 m s\({}^{-1}\). The final longitudinal temperature of the supersonic beam (\(\approx\)12 mK) and its forward velocity (\(v_{x}\approx\sqrt{2k_{\mathrm{B}}T_{0}\gamma/[m_{\mathrm{H}_{2}}(\gamma-1)]}\)) can be well approximated using the model of an adiabatic expansion [41]. At valve temperatures below the characteristic rotational temperature of the carrier gas H\({}_{2}\) (\(\theta_{\mathrm{rot}}\approx\) 90 K), the heat capacity ratio \(\gamma\) can be approximated by that of a monatomic gas, _i.e._, \(\gamma=\nicefrac{{5}}{{3}}\). The central part of the supersonic beam is selected by two skimmers with diameters of 2 mm and 3 mm placed at distances of 45 cm and 135 cm from the nozzle orifice, respectively.
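As a consistency check of the quoted velocity range, here is a minimal sketch (Python with scipy) evaluating the adiabatic-expansion formula at the two extreme valve temperatures:

```python
import numpy as np
import scipy.constants as const

k_B = const.k
m_H2 = 2 * const.physical_constants["proton mass"][0] + 2 * const.m_e  # ~ H2 mass

def beam_velocity(t0_kelvin, gamma=5.0 / 3.0):
    """Terminal velocity of a supersonic expansion, v = sqrt(2 kB T0 g / (m (g - 1)))."""
    return np.sqrt(2 * k_B * t0_kelvin * gamma / (m_H2 * (gamma - 1)))

for t0 in (45, 160):
    print(f"T0 = {t0:>3d} K  ->  v = {beam_velocity(t0):.0f} m/s")
# prints ~963 m/s and ~1817 m/s, consistent with the quoted 970-1800 m/s range
```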
The skimmed supersonic beam enters a magnetically shielded chamber in which the H atoms are excited to Rydberg states in a sequential three-photon absorption process. The 2s \(\leftarrow\) 1s transition is first induced between two copper plates kept at the same electric potential of 4V\({}_{\mathrm{DC}}\) by the third-harmonic (\(\lambda=243\) nm) beam of a pulse-amplified near-Fourier-transform-limited Ti:Sa laser [42] which crosses the supersonic beam at right angles. The molecular beam then traverses a region with a weak homogeneous electric field \(\mathcal{F}_{\mathrm{DC}}\approx V_{\mathrm{DC}}\,\mathrm{cm}^{-1}\), where it intersects a single-mode cw UV laser (\(\lambda\approx 368\) nm) used to excite the metastable H(2s) atoms to specific Rydberg-Stark states. These states are field ionized by a large pulsed electric field (up to 6 kV cm\({}^{-1}\)) and the resulting protons are accelerated towards a microchannel-plate (MCP) detector. The different components are discussed in more detail in the following subsections. Spectra of Rydberg-Stark states are recorded by monitoring the H\({}^{+}\) field-ionization yield as a function of the UV laser frequency.
### Laser system for the 2s \(\leftarrow\) 1s transition
The 243-nm radiation used to excite the H atoms to the 2s state by nonresonant two-photon excitation is generated by amplification of the 120-ns-long chopped output of a titanium-sapphire (Ti:Sa) seed laser at 729 nm using a Nd:YAG-pumped Ti:Sa multipass amplifier, as described in Ref. [42]. The output pulses, with pulse energies of \(\approx 15\) mJ, are frequency tripled in two successive \(\beta\)-barium-borate (BBO) crystals, resulting in 40-ns-long pulses at 243 nm with typical pulse energies of 800 µJ. The 243-nm laser beam is focused slightly beyond the supersonic beam
using a 30-cm-focal-length lens. The use of two skimmers reduces the Doppler width of the 2s \(\leftarrow\) 1s transition and enables the full resolution of the \(f=0\gets f=0\) and \(f=1\gets f=1\) hyperfine components.
Because the 243-nm laser beam propagates along the \(x\) axis (see Fig. 1), perpendicularly to both the supersonic beam and the cw UV laser, the focus selects a narrow cylinder (diameter of 0.1 mm) of H atoms with a reduced velocity distribution along the \(y\) axis (see axis system in Fig. 1). This selection narrows down the Doppler width of the Rydberg-excitation spectra from the 2s level. The photoexcitation only excites H(1s) atoms in a very restricted longitudinal phase-space volume. Consequently, the H(2s)-atom cloud remains compact and hardly expands as the beam propagates through the 4-cm-long distance separating the 2s \(\leftarrow\) 1s excitation region from the \(\mathit{nkm}\leftarrow\) 2s excitation region. However, the spatial and velocity selection can lead to a nonthermal velocity distribution, potentially resulting in asymmetric Doppler profiles in the Rydberg-excitation spectra.
The 243-nm laser unavoidably ionizes a significant fraction of the H(2s) atoms [43]. To avoid stray fields from the generated protons, these protons are accelerated out of the H(2s) cloud by the electric field \(\mathcal{F}_{\mathrm{DC}}\) resulting from the potentials applied between the different electrodes within the mu-metal magnetic shield (see Fig. 1). To eliminate line broadening caused by interactions between closely spaced Rydberg atoms in the sample volume, the measurements are carried out in a regime where at most one Rydberg atom is in the excitation volume and on average much less than one field-ionization event is detected per experimental cycle.
### Laser system for the _nkm\(\leftarrow\)_ 2s excitation
The primary laser used for the precision spectroscopy of the _nkm_\(\leftarrow\) 2s transition is a commercial continuous-wave (cw) Ti:Sa ring laser (Coherent, 899-21) pumped by a 12 W solid-state laser (Coherent, Verdi V-12). The Ti:Sa ring laser is operated in the range 729\(-\)736 nm and provides 1 W of output power. In addition to the standard actuators of the ring laser, an intra-cavity electro-optic modulator (EOM) (QUBIG, PS3D-BC) is used as a fast actuator to maintain a phase lock to an ultrastable reference laser, as discussed below. Around 98 % of the optical power is sent to a home-built second-harmonic-generation enhancement cavity (SHG) equipped with a 12-mm-long lithium
Figure 1: Schematic representation of the experimental setup. Upper part: laser system and geometry of the photoexcitation from the metastable 2s state of H to Rydberg states. Lower part: vacuum chambers in which the supersonic beam of H atoms is generated, these atoms are photoexcited to Rydberg states and the Rydberg states are detected by pulsed field ionization. Top right inset: Configuration of laser and supersonic beams used for the determination of Doppler-free frequencies. See text for details.
triborate (LBO) crystal cut at Brewster's angle. The SHG cavity is stabilized using a Hänsch-Couillaud scheme [44]. The typical conversion efficiency to the second harmonic is 20 %. The 368-nm output of the SHG cavity is coupled into an optical fiber and guided to an actively stabilized retroreflector (AFR) setup for Doppler-shift compensation (see below). The forward-propagating and retroreflected laser beams cross the molecular beam at right angles 4 cm downstream of the 2s \(\leftarrow\) 1s excitation spot.
The remaining 2 % of the fundamental laser power is used for the frequency calibration and stabilization. The light is tightly collimated and sent through an acousto-optic modulator (AOM) (Isomet, M1260-T350L). The first-order diffraction is retro-reflected and its polarization turned by 90\({}^{\circ}\), as illustrated in the upper left part of Fig. 1. The double-pass configuration induces a shift of the fundamental frequency \(\nu_{\mathrm{L}}\) by 2\(\nu_{\mathrm{aom}}\) which can be adjusted up to 320 MHz. A polarizing beam splitter then deflects the frequency-shifted radiation and sends it through an optical fiber to an amplified, spectrally broadened, and frequency-doubled optically stabilized ultra-low-noise frequency comb (MenloSystems, FC1500-ULN & M-VIS). The repetition rate of the frequency comb is locked to an ultrastable laser, the frequency of which is referenced to an SI-traceable frequency standard, as characterized in Ref. [45]. The output of the spectrally broadened frequency comb is dispersed with a reflective grating and the spectral components around \(\nu_{\mathrm{L}}\) are selected and spatially overlapped with the laser. The beat, with frequency
\[\nu_{b}=\nu_{c}-\nu_{\mathrm{L}^{\prime}} \tag{1}\]
between the shifted laser frequency \(\nu_{\mathrm{L}^{\prime}}=\nu_{\mathrm{L}}+2\nu_{\mathrm{aom}}\) and the spectrally closest frequency-comb tooth \(\nu_{c}\) is recorded using a balanced photodiode (Thorlabs, PDB425A-AC) and processed using the electronic circuit depicted in Fig. 2. A bandpass filter centered at 60 MHz is used to suppress beat frequencies originating from neighboring comb teeth. The RF beat signal is amplified with an automatic-gain-control (AGC) amplifier and sent to a frequency counter (K+K Messtechnik, FXM50). A fraction of the RF-signal is used to establish a phase-lock of the Ti:Sa laser to the frequency comb. To this end, the beat signal is amplified again and fed to a phase-frequency detector (PFD) (Analog Devices, HMC403) where \(\nu_{b}\) is compared to a 60 MHz local oscillator. The error signal is transmitted to the control box of the ring laser [46] via an isolation amplifier (IA). The frequency components in the range \(0-20\) MHz are isolated with a diplexer, pre-amplified and distributed to an inverting bipolar high-voltage amplifier (APEX Microtechnology, PA90) and an amplifier (Comlinear, CLC103). The amplified signals are applied to the intracavity EOM as shown in Fig. 2. This frequency-offset-locking scheme provides a phase lock of the Ti:Sa ring laser to the ultra-low-noise frequency comb and makes \(\nu_{\mathrm{L}}\) SI traceable.
### Detection of the _nkm\(\leftarrow\)_ 2s transition
The \(nkm\gets 2\)s excitation is carried out in the center of two electro-polished stainless-steel plates separated by \(\approx 2.1\) cm and designed for the application of homogeneous electric fields. A ring electrode consisting of four segments is inserted between the two plates to eliminate all line-of-sight trajectories of charged particles to insulators. This measure effectively prevents accumulation of charges near the excitation volume and is crucial to reduce stray electric fields. The segmented geometry enables one to apply transverse electric fields for stray-field compensation. A short plate distance of 2.1 cm between the ion-repeller plate and the grounded extraction plate was chosen to be able to generate electric fields up to 6 kV cm\({}^{-1}\) in less than 29 ns with a home-built 12.5 kV low-noise high-voltage switch. With such fields, Rydberg states with principal quantum number as low as 20 can be efficiently field ionized (see Fig. 7 below).
The electronic circuit was conceived to combine the high voltage pulse with low-noise DC potentials (2V\({}_{\mathrm{DC}}\)) on the repeller plate using a 20-bit digital-to-analogue low-noise voltage source. This enabled us to either minimize stray-electric-field components or to apply well-defined electric fields in the \(z\) direction. The only openings in the electrode structure surrounding the photoexcitation region are 5-mm-diameter holes along the molecular-beam axis and 9-mm-diameter holes for the UV laser beam.
### Doppler-shift cancellation
The inset of Fig. 1 schematically describes the photoexcitation geometry, where \(\vec{v}\) is the H(2s)-atom velocity and \(\vec{k}\) the wavevector of the forward-propagating (blue) and reflected (red) UV radiation. Any deviation \(\delta\alpha\) from 90\({}^{\circ}\) of the angle between the laser beam and the supersonic beam leads to a first-order Doppler shift. To cancel this shift, we choose \(\delta\alpha\) to be large enough so that the spectral lines from the forward-propagating and reflected UV laser
beams do not overlap. In addition, a 180\({}^{\circ}\) reflection angle is enforced through an active-stabilization feedback system, based on a design introduced in Refs. [37; 47; 48]. This procedure resulted in a mirror-symmetric double-line profile with center at the first-order Doppler-free frequency [49]. Choosing \(\delta\alpha\) as close to zero as possible, as advocated in Refs. [47; 48], turned out not to be practical in our case because the nonthermal nature of the H(2s)-atom velocity distribution made it challenging to extract the central frequency from the lineshapes under conditions where the fine structure is not fully resolved.
An aberration-free set of four antireflection-coated lenses [50] with an effective focal length of 21.35 mm is used to collimate the diverging beam emerging from a pure-silica-core, polarization-maintaining, single-mode optical fiber (mode-field diameter 2.3 µm), resulting in a parallel beam with an M\({}^{2}\) value of \(\approx 1.02\). The focus of the resulting Gaussian beam is located \(\approx 20\) mm beyond the chamber. Consequently, the reflected beam almost exactly retraces the incoming beam and the change of wavefront curvature is negligible.
The active stabilization of the alignment of the 180\({}^{\circ}\) reflecting mirror is achieved by dithering its tip and tilt angles by applying sinusoidal electric potentials to piezo-electric elements installed at the back of the mirror holder (see Fig. 1). The dithering leads to a modulation of the incoupling efficiency of the reflected beam into the silica-core fiber beyond the lens system. These modulations are detected with an auto-balanced photodiode (PD). The dithering frequencies are selected to minimize cross talk between the motions of the tip and tilt axes. The error signal used to correct the mirror position is produced by lock-in amplifiers (LIA) (Femto, LIA-MVD-200L) connected to a proportional-integral controller (PI). To compensate slow drifts, the time constant of the feedback loop was chosen to be 0.1 s.
## III Theoretical description of Rydberg states of the H atom in electric fields
The energy levels of the H atom in a static homogeneous electric field \(\vec{\mathcal{F}}=(0,0,\mathcal{F})\) are eigenvalues of the Hamiltonian
\[\hat{\mathcal{H}}=\hat{\mathcal{H}}_{0}+e\mathcal{F}\hat{z}, \tag{2}\]
where \(\hat{\mathcal{H}}_{0}\) is a diagonal matrix containing the field-free energies of the \(|nljfm_{f}\rangle\) states with principal quantum number \(n\), orbital angular momentum quantum number \(l\), total angular momentum quantum number without nuclear spin \(j\)
Figure 2: Schematic electric-circuit diagram of the laser-stabilization electronics (see text for details). Color-shaded inset: Spectral density (SD) of the in-loop beat note \(\nu_{b}\) recorded with a bandwidth of 3 kHz with (black) and without (gray) active stabilization using the intracavity EOM.
total angular momentum quantum number \(f\), and associated magnetic quantum number \(m_{f}\). The field-free hyperfine-centroid energies, including terms arising from relativistic, quantum-electrodynamics (QED) and finite-nuclear-size corrections, can be accurately calculated using Eqs. \(7-41\) of Ref. [10] and the latest recommended physical constants (2018 CODATA, see Ref. [51]). To obtain the field-free energy-level structure at high \(n\) values, we used Bethe logarithms tabulated in Ref. [52] and included the hyperfine splittings using the analytical expressions provided in Ref. [53]. The calculated structure of the \(m_{l}=0\) levels at \(n=20\) is depicted in the inset of Fig. 3b).
The operator \(e{\cal F}\hat{z}\) in Eq. 2 describes the effect of the external field. The perturbation can be treated to excellent approximation in a nonrelativistic framework, and relativistic corrections to the Stark effect as discussed in Ref. [54] become negligible as \(n\) increases. \(e{\cal F}\hat{z}\) only contributes off-diagonal elements connecting zero-field states differing in \(l\) by \(\pm 1\). These matrix elements can be expressed in analytic form using standard angular-momentum algebra (see, e.g., Refs. [55; 56]) as
\[\left\langle n^{\prime}l^{\prime}j^{\prime}f^{\prime}m_{f}^{ \prime}\right|\hat{z}\left|nljfm_{f}\right\rangle=(-1)^{\Delta f+\Delta j+ \Delta l-m_{f}^{\prime}+I+S}\times\] \[\left(\begin{matrix}l^{\prime}&1&l\\ 0&0&0\end{matrix}\right)\left(\begin{matrix}f^{\prime}&1&f\\ -m_{f}^{\prime}&0&m_{f}\end{matrix}\right)\left\{\begin{matrix}j^{\prime}&f^ {\prime}&1\\ f&j&1\end{matrix}\right\}\left\{\begin{matrix}l^{\prime}&j^{\prime}&S\\ j&l&1\end{matrix}\right\}\times\] \[\sqrt{\Theta(f^{\prime})\Theta(f)\Theta(j^{\prime})\Theta(j) \Theta(l^{\prime})\Theta(l)}\left\langle n^{\prime}l^{\prime}\right|r\left| nl\right\rangle, \tag{3}\]
where the expressions in parentheses and curly braces are Wigner 3j and 6j symbols, respectively, \(\Theta(x)=2x+1\),
Figure 3: Stark effect in the \(n=20,\,m_{f}=0\) manifold of the H atom. a) Field dependence of the \(k=0\) state revealing a quadratic shift below \(50\,\mathrm{mV}\,\mathrm{cm}^{-1}\) caused by the intramanifold mixing of different orbital-angular-momentum components, and a smaller quadratic shift at larger fields arising from the interaction between different \(n\) manifolds. b) Overview of the field dependence of all \(m_{l}=0\) Stark states, which is essentially linear. c) Calculated spectra for different electric-fields strengths and electric-field vectors \(\vec{\cal F}\) pointing parallel or perpendicular to the laser polarization \(\vec{\epsilon}_{\mathrm{p}}\).
\(\Delta x=x^{\prime}-x\), and \(\left\langle n^{\prime}l^{\prime}\right|r\left|nl\right\rangle\) are radial integrals connecting the \(r\)-dependent parts of the solutions of the Schrödinger equation of the H atom (see Eqs. 63.2 and 63.5 of Ref. [9]). Restricting the calculations of the Stark effect to a single \(n\) value, one obtains an intra-manifold quadratic Stark effect at low fields and a linear Stark effect at intermediate fields, as depicted in Fig. 3. The Stark states are commonly labeled by the parabolic quantum numbers \(n_{1}\) and \(n_{2}\) or by their difference \(k=n_{1}-n_{2}\) [9; 57]. At intermediate field strengths, the states can approximately be described by their \(k\) and \(m_{l}\) values. States of a given value of \(k\) form near-degenerate groups with \(m_{l}\) values ranging from \(-(n-\left|k\right|-1)\) to \((n-\left|k\right|-1)\) in steps of 2. The \(k=0\) states, highlighted in red in Fig. 3, are the only states retaining almost pure parity \(\left[(-1)^{n-1}\right]\). They have a zero electric dipole moment and are insensitive to the field over a large range of fields, which makes them attractive for precision measurements, except at fields very close to zero. All other states exhibit a dipole moment in the field. At intermediate to high field strengths, the coupling between states of different \(n\) values induced by the field becomes significant and the states start exhibiting an inter-manifold quadratic Stark effect. This behavior is displayed on an enlarged vertical scale for \(m_{f}=0\) in Fig. 3a). To reliably calculate Stark shifts in this field range, it is necessary to include basis states of neighboring \(n\) values until convergence with the size of the basis set is reached.
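The angular factor of Eq. 3 can be evaluated directly with a computer-algebra package. Here is a minimal sketch (Python with SymPy) transcribing the printed expression, with the radial integral \(\left\langle n^{\prime}l^{\prime}\right|r\left|nl\right\rangle\) factored out and \(S=I=1/2\) for the H atom:

```python
from sympy import Rational, sqrt
from sympy.physics.wigner import wigner_3j, wigner_6j

S, I = Rational(1, 2), Rational(1, 2)   # electron and proton spins of H

def theta(x):
    """Degeneracy factor Theta(x) = 2x + 1."""
    return 2 * x + 1

def z_angular(lp, jp, fp, mfp, l, j, f, mf):
    """Angular factor of Eq. 3, with <n'l'|r|nl> excluded, as printed in the text."""
    phase = (-1) ** int((fp - f) + (jp - j) + (lp - l) - mfp + I + S)
    return (phase
            * wigner_3j(lp, 1, l, 0, 0, 0)
            * wigner_3j(fp, 1, f, -mfp, 0, mf)
            * wigner_6j(jp, fp, 1, f, j, 1)
            * wigner_6j(lp, jp, S, j, l, 1)
            * sqrt(theta(fp) * theta(f) * theta(jp) * theta(j) * theta(lp) * theta(l)))

# Example: p(j=1/2, f=1) <-> d(j=3/2, f=2) coupling at mf = 0
print(z_angular(2, Rational(3, 2), 2, 0, 1, Rational(1, 2), 1, 0))
```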
Figure 4 presents the decomposition of the \(n=20\), \(k=0\) Stark states with \(m_{f}=0-2\) in the \(\left|ljfm_{f}\right\rangle\) basis. For each \(m_{f}\) value, the eigenstates possess contributions from up to four hyperfine-structure components, as indicated by the color labels. The intensity of transitions from the 2s level corresponds to the coherent squared sum of the p characters in the evaluation of electric-dipole-moment matrix elements.
Figure 3c depicts calculated intensity distributions in spectra of the \(n=20\gets 2s\) transitions at field strength below \(1\,\mathrm{V}\,\mathrm{cm}^{-1}\) and for laser polarizations parallel and perpendicular to the DC electric field. At fields below \(20\,\mathrm{mV}\,\mathrm{cm}^{-1}\), corresponding to typical stray fields, the center of gravity of the distribution depends on the polarization and varies strongly and nonlinearly with the field strength, making precision measurements prone to systematic uncertainties. This behavior explains why high-\(n\) Rydberg states are usually avoided in precision measurements. However, in the linear regime of the Stark effect, _i.e._, above \(0.2\,\mathrm{V}\,\mathrm{cm}^{-1}\) at \(n=20\), the spectra regain a regular intensity pattern and the spacings between the Stark states encode the field strength. When the polarization is parallel to the field (\(\pi\) transitions), the intensity is strongest at the outer edges of the manifold and vanishes at \(k=0\), for even \(n\) values
Figure 4: Expansion coefficients of the \(k=0\), \(\left|m_{f}\right|=0,1\) and 2 Rydberg-Stark wavefunctions in the \(\left|ljfm_{f}\right\rangle\) angular-momentum basis as labeled in the figure. Only basis states with odd orbital angular momentum quantum number make significant contributions.
because \(k=0,m_{l}=0\) states do not exist, and for odd \(n\) values because \(k=0\) states have vanishing p character. When the polarization is perpendicular to the field, the opposite behavior is observed (see right panel of Fig. 3c).
Consideration of Fig. 3 leads to the following conclusions concerning precision spectroscopy in high-\(n\) states of hydrogen-like systems:
* Because of the nontrivial field dependence of the line profiles, precision measurements are not attractive in the region of the intra-manifold quadratic Stark effect.
* In the linear regime of the Stark effect, regular spectral patterns are restored and the states with \(|k|>0\) form pairs of levels with Stark shifts of opposite sign. The positions of the \(k\neq 0\) states can be used for the electric-field calibration, as will be demonstrated in Section IV.
* If an easily calculable shift from the Bohr energy \(\left(-hcR_{\text{H}}n^{-2}\right)\) arising from the quadratic Stark effect is disregarded, the \(k=0\) Stark states are essentially field-independent. Consequently, spectra of \(k=0\) Stark states in the linear regime are not subject to broadening by inhomogeneous fields and their positions can be converted into the Bohr energy by adding the calculated Stark shift (see red curves in Fig. 3a)).
* The linear Stark manifold is thus perfectly suited for metrological purposes, in particular for precise determination of the Bohr energy. It has previously been used to determine the binding energy of Rydberg states of H\({}_{2}\)[58].
The wavefunctions of the Stark states can be used to estimate their magnetic moments and systematic shifts arising from the Zeeman effect caused by residual magnetic fields, as illustrated in Fig. 5 with the example of the \(k=0,|m_{l}|=1\) Stark states. In this case, the electric field splits the structure into two \(m_{f}=0\), two \(m_{f}=1\) and one \(m_{f}=2\) components and a total of eight states. The magnetic moments are given by the relative orientations of the electron orbital angular momentum, electron spin, and nuclear spin vectors. A magnetic field parallel to the electric field further splits these components according to their magnetic moments, as displayed schematically on the right-hand side of Fig. 5. Because the Zeeman shifts are symmetric and extremely small in a magnetically shielded environment (less than 2.4 kHz for \(\mu=2\mu_{\text{B}}\) and \(|\text{B}|\leq\)100 nT), we conclude that the Zeeman effect in low-\(m_{l}\) states can be neglected in metrological applications relying on Stark states in the linear regime. This is also the case for perpendicular magnetic-field components because the corresponding Zeeman effect couples states with \(\Delta m_{l}=\pm 1\) which are located in different \(k\) manifolds and thus energetically too distant for significant mixing to occur.
As explained in Section II, the maximal electric-field strength we apply to record Stark spectra is 2 V cm\({}^{-1}\). The applied fields also induce shifts of the 2s level energies, which need to be considered when extracting the absolute positions of the Rydberg-Stark states. The Stark shifts of the 2s levels can be calculated in the same manner as explained above for higher \(n\) values. The calculated shifts are displayed in Fig. 6. They are positive and quadratic
Figure 5: Energy level structure of the eight \(n=20,\,k=0\) Stark states with \(m_{l}=1\) character, calculated at an electric field strength \(\mathcal{F}=0.8\) V cm\({}^{-1}\). These states split into two groups of four states each separated by \(\approx 600\) kHz. The Zeeman effect induced by a magnetic field pointing along the quantization axis is schematically illustrated on the right side and lifts all remaining degeneracies.
for small electric fields because the dominant interactions are with the 2p\({}_{\nicefrac{{1}}{{2}}}\) states, which are located energetically just below the 2s states. When determining the absolute positions of the \(nkm\) Rydberg-Stark states from spectra of the \(nkm\)\(\leftarrow\) 2s transitions, the 2s Stark shifts must be added to the measured transition frequencies.
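The order of magnitude of these shifts can be recovered from a two-level sketch (Python with scipy) that couples 2s only to 2p\(_{\nicefrac{{1}}{{2}}}\) across the Lamb shift and ignores all fine- and hyperfine-structure angular factors, which the full calculation includes:

```python
import scipy.constants as const

a0, e, h = const.physical_constants["Bohr radius"][0], const.e, const.h
lamb_shift_hz = 1057.8e6   # 2s1/2 - 2p1/2 interval of H
z_2s2p = 3 * a0            # |<2s|z|2p, m=0>| of hydrogen, angular factors ignored

def quad_stark_2s_hz(field_v_per_cm):
    """Two-level estimate (e F z)^2 / (h^2 * Lamb shift); positive because 2p1/2 lies below 2s."""
    coupling_hz = e * field_v_per_cm * 100 * z_2s2p / h
    return coupling_hz ** 2 / lamb_shift_hz

for f in (0.5, 1.0, 2.0):
    print(f"F = {f:3.1f} V/cm  ->  ~{quad_stark_2s_hz(f) / 1e3:5.1f} kHz")
```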
## IV Results
Figure 7 displays pulse-field-ionization (PFI) spectra of the \(n=20\) Stark manifold recorded from the 2s(\(f=1\)) hyperfine level using laser radiation polarized linearly in the direction orthogonal to the applied DC electric field \(\mathcal{F}_{\mathrm{DC}}\). The upper (lower) trace was recorded by field ionizing the Rydberg states with a pulsed field \(\mathcal{F}_{\mathrm{PFI}}\) pointing in the same (opposite) direction as the DC field \(\left[\mathcal{F}_{\mathrm{PFI}}=5.7\,\mathrm{kV}\,\mathrm{cm}^{-1},\mathcal{F}_{\mathrm{DC}}=0.2\,\mathrm{V}\,\mathrm{cm}^{-1}(-0.2\,\mathrm{V}\,\mathrm{cm}^{-1})\right]\). The orthogonal laser-polarization arrangement led to the observation of dominant transitions to Stark states of even \(k\) values, as assigned at the top of the figure. The intensity distributions in both spectra are very similar, except at the edges of the manifold. Whereas the intensities of the transitions to the highest \(k\) states (\(k\geq 14\)) are strongly depleted in the upper spectrum, the lowest \(k\) states (\(k\leq-14\)) are depleted in the lower spectrum. The reason for the disappearance of the intensities at the edges of the Stark manifold is twofold: First, the transition dipole moment gradually decreases with increasing \(|k|\) value. Second, the ionization rates of the Stark states that are shifted to higher energies by the pulsed field rapidly decrease with increasing \(k\) value. In the case of the upper spectrum, these states are those observed at the highest frequencies. For the lower spectrum, they are observed at the lowest frequencies because of the reversal of the sign of \(k\) when the field polarity changes upon application of the pulsed field, which diabatically inverts the Stark manifold, as schematically illustrated in the inset. This interpretation is fully supported by calculations of the spectral intensities, as depicted in the red and blue stick spectra in Fig. 7. These intensities were obtained by multiplying the squared transition dipole moments calculated as explained in Section III with the field-ionization probabilities over the 80-ns-long detection window calculated using the analytical expressions reported by Damburg and Kolosov [59].
Before recording the lower spectrum in Fig. 7, the transverse stray fields were carefully compensated. Consequently, the laser polarization was almost perfectly perpendicular to the DC field. Under these conditions, transitions to Stark states of odd \(k\) values have zero intensity. In the case of the upper spectrum, a weak transverse stray field made the Stark states with odd \(k\) values optically accessible. Transitions to these states are strongest at the edges of the manifold and weakest at the center. The calculated intensities of transitions to odd \(k\) states in the presence of the transverse stray field (\(\sim 10\,\mathrm{mV}\,\mathrm{cm}^{-1}\)) are depicted as gray sticks in Fig. 7. They are only observable at the low-frequency edge of the Stark manifold because the Stark states at the high-frequency edge are not efficiently ionized by the pulsed field, as explained above. The good agreement between measured and calculated intensity distributions enables us to conclude that the Rydberg-Stark states located near the center of the \(n=20\) manifold are fully ionized by the \(5.7\,\mathrm{kV}\,\mathrm{cm}^{-1}\) pulsed field used in the experiments.
Figure 6: Stark shifts of the metastable 2s levels of the H atom calculated for electric fields in the range between 0 and \(2\,\mathrm{V}\,\mathrm{cm}^{-1}\).
Figure 8 displays a typical spectrum of transitions to the \(k=0,\pm 2\) Stark states of the \(n=20\) manifold recorded from the H(2s,\(f=1\)) state using laser radiation with linear polarization orthogonal to the 0.8 V cm\({}^{-1}\) DC field. The spectrum was recorded at an angle deviation \(\delta\alpha=1.1\) mrad from exact orthogonality between the H-atom beam and the laser beam, leading to two Doppler components per \(k\) state, separated by 6.28 MHz. The two Doppler components are slightly asymmetric with mirror-symmetric lineshapes (opposite sign of \(\gamma\) in Eq. 4 below). To optimize the data acquisition rate when recording the Stark spectra, the frequency was scanned in steps of 400 kHz within the line profiles and of 2 MHz between the lines. In addition, the data points within the spectral lines were obtained by averaging over 500 experimental cycles (_i.e._, over 20 s) whereas only 100 cycles were averaged for data points between the lines. The central frequency, the electric field strength and additional parameters were determined in a least-squares fit to the experimental data (black dots) based on the following line profile for each \(k\) value
\[g_{k}(\nu)=\sum_{i=1}^{2}\sum_{m_{f}=-2}^{2}\mathrm{I}^{i}\mathrm{I}^{m_{f}}( \mathcal{F})\exp\left\{\frac{-\left[\nu-\nu_{0}^{i,m_{f}}(\mathcal{F},\gamma )\right]^{2}}{2\left(\sigma_{\mathrm{D}}^{2}+|k|\sigma_{\mathrm{S}}^{2}\right) }\right\}\times\left[1+\mathrm{erf}\left((-1)^{i}\gamma\frac{\left(\nu-\nu_{ 0}^{i,m_{f}}(\mathcal{F},\gamma)\right)}{\sqrt{2}\sigma_{\mathrm{D}}}\right) \right], \tag{4}\]
with
\[\nu_{0}^{i,m_{f}}(\mathcal{F},\gamma)=\nu_{0}+\nu_{\mathrm{S}}^{m_{f}}( \mathcal{F})+(-1)^{i}\left\{\nu_{\mathrm{D}}-\delta\nu(\gamma)\right\}. \tag{5}\]
In Eqs. 4 and 5, \(i\,(=1,2)\) is an index specifying the Doppler component, \(\nu_{0}\) is the transition frequency to the reference position (\(-cR_{\mathrm{H}}/n^{2}\)) of the calculated Stark map of the \(n=20\) levels (see Fig. 3), \(\nu_{\mathrm{S}}^{m_{f}}(\mathcal{F})\) is the field-dependent Stark shift of the \(m_{f}\) level, \(\nu_{\mathrm{D}}\) is the Doppler shift arising from the angle deviation \(\delta\alpha\), and \(\delta\nu(\gamma)\) is a frequency offset used to compensate the shift of the intensity maximum of the asymmetric line profiles from the centers of the hypothetical symmetric profiles. This shift is introduced to reduce the correlation between the asymmetry parameter \(\gamma\) and \(\nu_{\mathrm{D}}\) in the least-squares fit. \(\sigma_{\mathrm{D}}\) is the Doppler width and \(\sigma_{\mathrm{S}}\) accounts for the broadening of the \(|k|=2\) lines arising from weak field inhomogeneities in the photoexcitation volume. As mentioned in Section II.1, the asymmetry of the line profiles originates from the nonthermal velocity distribution caused by the 2s \(\leftarrow\) 1s excitation.
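A direct implementation of this fit model is straightforward. The following sketch (Python with scipy) evaluates Eqs. 4 and 5 for a single \(m_{f}\) component, with the Stark shift \(\nu_{\mathrm{S}}\) and the offset \(\delta\nu(\gamma)\) passed in as numbers and the Table 1 values used for illustration:

```python
import numpy as np
from scipy.special import erf

def line_profile(nu, k, nu0, nu_S, nu_D, dnu_gamma, sigma_D, sigma_S, amp, gamma):
    """Skewed double-Doppler profile of Eqs. 4-5 for a single k and m_f component.

    amp collects the product I^i I^{m_f}(F); the sum over m_f components with
    their individual Stark shifts nu_S^{m_f}(F) is omitted for brevity.
    """
    width_sq = sigma_D ** 2 + abs(k) * sigma_S ** 2
    total = np.zeros_like(nu)
    for i in (1, 2):                                  # the two Doppler components
        center = nu0 + nu_S + (-1) ** i * (nu_D - dnu_gamma)
        gauss = amp * np.exp(-(nu - center) ** 2 / (2 * width_sq))
        skew = 1 + erf((-1) ** i * gamma * (nu - center) / (np.sqrt(2) * sigma_D))
        total += gauss * skew
    return total

# Illustrative evaluation with the Table 1 parameters (MHz units, nu0 set to 0)
nu = np.linspace(-8, 8, 1601)
model = line_profile(nu, k=0, nu0=0.0, nu_S=0.0, nu_D=3.16,
                     dnu_gamma=0.0, sigma_D=1.56, sigma_S=0.27, amp=1.0, gamma=0.65)
```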
Figure 7: PFI spectra of the \(n=20\) Rydberg-Stark states of H recorded from the 2s\((f=1)\) hyperfine component in an electric field \(\mathcal{F}_{\mathrm{DC}}\approx 200\) mV cm\({}^{-1}\). The direction of the strong pulsed electric-field (\(\mathcal{F}_{\mathrm{PFI}}=5.7\) kV cm\({}^{-1}\)) used for ionization was set parallel to \(\mathcal{F}_{\mathrm{DC}}^{\uparrow\uparrow}\) to record the upper spectrum and antiparallel \(\mathcal{F}_{\mathrm{DC}}^{\uparrow\downarrow}\) to record the lower, inverted spectrum. The red and blue stick spectra represent the calculated intensity distributions. Inset: The alignment of the two fields leads to ionization without change of the field polarity (red) or to ionization after a diabatic state inversion upon reversal of the field polarity.
The fit of the line profiles depicted in Fig. 8 resulted in the parameters listed in Table 1.
These parameters are helpful in characterizing the experimental conditions. For instance, the homogeneous component of the field is found to correspond closely to the 0.8 V cm\({}^{-1}\) applied experimentally with an uncertainty of only 0.04% or 300 µV cm\({}^{-1}\). The electric field inhomogeneity leads to a broadening of the \(k=\pm 2\) Stark components and is well represented by a field gradient of 12(3) mV cm\({}^{-2}\), which corresponds to a field change of 2.4(6) mV cm\({}^{-1}\) over the 2 mm diameter of the UV laser beam. The Doppler shift \(\nu_{\mathrm{D}}\) reflects the deviation angle \(\delta\alpha\) which, in this case, is 1.1 mrad. \(\sigma_{\mathrm{D}}\) is a measure of the transversal velocity distribution, which in the present case corresponds to a temperature of 40 µK and is the result of the geometric constraints along the supersonic beam imposed by the skimmers and the 2s \(\leftarrow\) 1s excitation. The asymmetry parameter is alignment specific and typically varied between -2 and 4. The central frequency was arbitrarily set to zero because the absolute frequency determination is still in a blinded phase. The weights used for the least-squares fits are determined in an iterative procedure to approach a normal distribution of the residuals.
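The quoted transverse temperature can be checked against the fitted Doppler width. A minimal sketch (Python with scipy), assuming a thermal one-dimensional velocity spread:

```python
import numpy as np
import scipy.constants as const

m_H = const.physical_constants["proton mass"][0] + const.m_e  # H-atom mass
nu_uv = const.c / 368e-9                                      # ~815 THz transition frequency

def doppler_sigma_hz(temp_kelvin):
    """1-sigma Doppler width for a thermal 1D velocity spread at temperature T."""
    return nu_uv * np.sqrt(const.k * temp_kelvin / (m_H * const.c ** 2))

print(f"T = 40 uK  ->  sigma_D = {doppler_sigma_hz(40e-6) / 1e6:.2f} MHz")
# ~1.56 MHz, matching the fitted sigma_D in Table 1
```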
The overall data set collected so far involves more than 500 individual spectra of transitions recorded from the 2s(\(f=1\)) hyperfine state and 113 from the 2s(\(f=0\)) hyperfine state to \(n=20\) Rydberg states, and 35 spectra from the 2s(\(f=1\)) state to \(n=24\) Rydberg states. These spectra were recorded for different valve temperatures, electric-field strengths and deviation angles \(\delta\alpha\) to investigate possible sources of systematic uncertainties.
The main objective of the study presented here was to verify that the central frequencies extracted from the spectra do not depend on the strength of the applied electric field. A typical set of four measurements recorded at nominal
Figure 8: a) Typical experimental (dots) and fitted (blue) spectra of the three (\(k=0,\pm 2\)) Rydberg-Stark states near the center of the \(n=20\) Stark manifold of H, each exhibiting two Doppler components. b) Weighted residuals (see text for details).
| Parameter | Value |
| --- | --- |
| \(\nu_{0}\) / kHz | 0(26) (blinded) |
| \(\mathcal{F}\) / V cm\({}^{-1}\) | 0.8076(3) |
| \(\nu_{\mathrm{D}}\) / MHz | 3.16(5) |
| \(\sigma_{\mathrm{D}}\) / MHz | 1.56(7) |
| \(\sigma_{\mathrm{S}}\) / MHz | 0.27(6) |
| \(\gamma\) | 0.65(18) |

Table 1: Fit results obtained in the least-squares fit of the line profiles based on Eqs. 4 and 5.
field strengths of 0.4, 0.8, 1.2 and 1.6 V cm\({}^{-1}\) under otherwise identical experimental conditions (beam velocity of 1060 m s\({}^{-1}\) and deviation angle \(\delta\alpha\) of 1.1 mrad) is presented in Fig. 9. At the scale of the figure, the Stark effect appears essentially linear. Table 2 summarizes the relevant lineshape parameters (see Eqs. 4 and 5) extracted from the fits of the lineshapes to the experimental data.
The central frequencies corrected for the Stark shift of the 2s state agree within the combined uncertainties and do not reveal any systematic dependence on the field strength within the 20 kHz accuracy of the measurements. The field strength corresponds to the applied electric potential within the expected uncertainties resulting from the geometry of the electrode plates and the electronic circuits used to apply the potentials. The field-dependent line broadening does not reveal a significant dependence on the applied field strength, which suggests that the applied field distribution does not contribute to the observed field inhomogeneity. The slight variations in the values of \(\nu_{\mathrm{D}}\) and \(\sigma_{\mathrm{D}}\) reflect small changes in the day-to-day alignments of the beams and the supersonic-beam properties.
The data set collected so far was used to determine the hyperfine splitting in the 2s level as well as the difference between the Bohr energies of the \(n=20\) and \(n=24\) Rydberg states. Figure 10 presents spectra of the transitions to the \(n=20\), \(k=0,\pm 2\) Stark states recorded from the 2s(\(f=0\)) (red) and 2s(\(f=1\)) (blue) states as illustration. Taking the difference in the central frequencies \(\nu_{0}\) (see Eq. 5) for the two sets of data (197 spectra and 50 spectra for \(f=1\)
| Parameter | 0.4 V cm\({}^{-1}\) | 0.8 V cm\({}^{-1}\) | 1.2 V cm\({}^{-1}\) | 1.6 V cm\({}^{-1}\) |
| --- | --- | --- | --- | --- |
| \(\nu_{0}\) / kHz | 0(21) | 21(18) | -20(21) | -0(20) |
| \(\mathcal{F}\) / V cm\({}^{-1}\) | 0.4012(4) | 0.7990(3) | 1.1882(3) | 1.5794(3) |
| \(\sigma_{\mathrm{S}}\) / MHz | 0.31(10) | 0.22(9) | 0.17(14) | 0.23(6) |
| \(\nu_{\mathrm{D}}\) / MHz | 4.26(5) | 4.61(4) | 5.20(5) | 5.18(3) |
| \(\sigma_{\mathrm{D}}\) / MHz | 2.02(10) | 1.78(5) | 2.12(5) | 1.81(5) |

Table 2: Lineshape parameters extracted from fits to the spectra of the \(n=20,k=0\), \(\pm 2\), \(|m_{l}|=1\gets 2\)s(\(f=1\)) transitions recorded at nominal electric fields of 0.4, 0.8, 1.2 and 1.6 V cm\({}^{-1}\), respectively.
Figure 9: Spectra of the \(n=20,k=0\), \(\pm 2\), \(|m_{l}|=1\gets 2\)s(\(f=1\)) transitions measured when applying nominal electric fields of 0.4, 0.8, 1.2 and 1.6 V cm\({}^{-1}\), respectively. Each spectrum represents the sum of three independent scans as described in Section II. Right: Relative positions of the line center \(\nu_{0}\) with respect to the line center measured at a nominal field strength of 0.4 V cm\({}^{-1}\). The error bars represent 1\(\sigma\) uncertainties.
and \(f=0\), respectively) yields a value of \(177.546(11)\,\mathrm{MHz}\) for the 2s hyperfine splitting, which corresponds within the \(1\sigma\) uncertainty to the much more precise value of \(177.55683887(85)\,\mathrm{MHz}\) determined by Ramsey spectroscopy in the \(n=2\) manifold [60].
The difference in the Bohr energies of the \(n=20\) and 24 Rydberg states was determined in an analogous manner from spectra of the \(n=20\) and 24 Stark states recorded from the 2s(\(f=1\)) state as illustrated in Fig. 11. The difference of the two \(\nu_{0}\) values is \(2\,511\,705.793(10)\,\mathrm{MHz}\) and also agrees within the experimental uncertainty with the value \(cR_{\mathrm{H}}\left(\nicefrac{{1}}{{20^{2}}}-\nicefrac{{1}}{{24^{2}}}\right)= 2\,511\,705.802\,\mathrm{MHz}\). The uncertainty of \(10\,\mathrm{kHz}\) results from the addition in quadrature of the \(7\,\mathrm{kHz}\) uncertainties of the blinded \(\nu_{0}\) values extracted from the experimental data.
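This reference value can be reproduced from the CODATA constants. A minimal sketch (Python with scipy), approximating \(R_{\mathrm{H}}\) by the reduced-mass scaling \(R_{\infty}/(1+m_{e}/m_{p})\):

```python
import scipy.constants as const

cR_inf = const.physical_constants["Rydberg constant times c in Hz"][0]
cR_H = cR_inf / (1 + const.m_e / const.m_p)   # reduced-mass Rydberg frequency of H

interval_hz = cR_H * (1 / 20 ** 2 - 1 / 24 ** 2)
print(f"cR_H (1/20^2 - 1/24^2) = {interval_hz / 1e6:.3f} MHz")
# ~2 511 705.8 MHz, within tens of kHz of the value quoted in the text; the small
# residual comes from corrections beyond the simple reduced-mass scaling.
```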
## V Conclusion
In this article, we have outlined an experimental approach to determine \(R_{\infty}\) from \(k=0\), \(\pm 2\), \(|m_{l}|=1\) Rydberg-Stark spectra of H. We have demonstrated that systematic errors resulting from the Stark effect are insignificant within the \(\sim 11\,\mathrm{kHz}\) precision of the four data sets used as illustrations (see Fig. 9). We have also demonstrated that the differences between the Bohr energy at \(n=20\) and the positions of the \(f=0\) and 1 hyperfine components of the 2s state are consistent within the \(11\,\mathrm{kHz}\) statistical uncertainty of the present determination with the more precise value of the 2s \((f=0)-(f=1)\) interval determined recently by Ramsey microwave spectroscopy [60].
Finally, we have determined the difference between the Bohr energies at \(n=20\) and 24 and found the results to agree with Bohr's formula using the CODATA 2018 recommended value for \(R_{\mathrm{H}}\)[10]. The data presented in this article was collected over a period of several months with frequent realignment of the optical system and supersonic beam. We did not observe inconsistencies in any of the relative frequencies determined for this article over this time.
The \(2\mathrm{s}(f=0)-(f=1)\) and \(\nu_{0}(n=24)-\nu_{0}(n=20)\) intervals presented in this article correspond to differences of large frequencies, and systematic errors largely cancel out when building the differences. The main potential source of systematic errors in our method originates from the Doppler effect and a possible imperfect cancellation of the Doppler shifts. To characterize such uncertainties, measurements of absolute frequencies are underway, in which we systematically vary the velocity of the supersonic beam and the deviation angle \(\delta\alpha\). Absolute transition frequencies will be reported when these measurements are completed.
Figure 10: Spectra of the \(n=20,k=0\), \(\pm 2\), \(|m_{l}|=1\gets 2\mathrm{s}(f=1)\) (blue) and \(n=20,k=0\), \(\pm 2\), \(|m_{l}|=1\gets 2\mathrm{s}(f=0)\) (red) transitions. The difference of the two central frequencies \(\nu_{0}\) corresponds to the hyperfine interval of the 2s state.
## Acknowledgments
We thank Dominik Husmann (METAS, Bern) for his help in maintaining the SI-traceable frequency dissemination network and Gloria Clausen for helpful discussions. We also thank Prof. Klaus Ensslin and Peter Märki for the low-noise DC voltage source used in the measurements of the Stark spectra.
This work was supported by the Swiss National Science Foundation through the Sinergia-program (Grant No. CRSII5-183579) and a single-investigator grant (Grant No. 200020B-200478).
|
2310.00246 | A hybrid quantum-classical conditional generative adversarial network
algorithm for human-centered paradigm in cloud | As an emerging field that aims to bridge the gap between human activities and
computing systems, human-centered computing (HCC) in cloud, edge, fog has had a
huge impact on the artificial intelligence algorithms. The quantum generative
adversarial network (QGAN) is considered to be one of the quantum machine
learning algorithms with great application prospects, which also should be
improved to conform to the human-centered paradigm. The generation process of
QGAN is relatively random and the generated model does not conform to the
human-centered concept, so it is not quite suitable for real scenarios. In
order to solve these problems, a hybrid quantum-classical conditional
generative adversarial network (QCGAN) algorithm is proposed, which is a
knowledge-driven human-computer interaction computing mode that can be
implemented in cloud. The purposes of stabilizing the generation process and
realizing the interaction between human and computing process are achieved by
inputting artificial conditional information in the generator and
discriminator. The generator uses the parameterized quantum circuit with an
all-to-all connected topology, which facilitates the tuning of network
parameters during the training process. The discriminator uses the classical
neural network, which effectively avoids the "input bottleneck" of quantum
machine learning. Finally, the BAS training set is selected to conduct
experiment on the quantum cloud computing platform. The result shows that the
QCGAN algorithm can effectively converge to the Nash equilibrium point after
training and perform human-centered classification generation tasks. | Wenjie Liu, Ying Zhang, Zhiliang Deng, Jiaojiao Zhao, Lian Tong | 2023-09-30T04:31:23Z | http://arxiv.org/abs/2310.00246v1 | A hybrid quantum-classical conditional generative adversarial network algorithm for human-centered paradigm in cloud
###### Abstract
As an emerging field that aims to bridge the gap between human activities and computing systems, human-centered computing (HCC) in cloud, edge, and fog has had a huge impact on artificial intelligence algorithms. The quantum generative adversarial network (QGAN) is considered to be one of the quantum machine learning algorithms with great application prospects, and it too should be improved to conform to the human-centered paradigm. The generation process of QGAN is relatively random and the generated model does not conform to the human-centered concept, so it is not well suited to real scenarios. To solve these problems, a hybrid quantum-classical conditional generative adversarial network (QCGAN) algorithm is proposed, which is a knowledge-driven human-computer interaction computing mode in cloud. The goals of stabilizing the generation process and realizing the interaction between humans and the computing process are achieved by inputting artificial conditional information into both the generator and the discriminator. The generator uses a parameterized quantum circuit with an all-to-all connected topology, which facilitates the tuning of network parameters during the training process. The discriminator uses a classical neural network, which effectively avoids the "input bottleneck" of quantum machine learning. Finally, the BAS training set is selected to conduct experiments on a quantum cloud computing platform. The results show that the QCGAN algorithm can effectively converge to the Nash equilibrium point after training and perform human-centered classification generation tasks.
Quantum generative adversarial network; Conditional generative adversarial network; Human-centered computing; Cloud computing; Parameterized quantum circuits
## 1 Introduction
With the development of wireless communications and networking, human-centered computing (HCC) in cloud, edge, and fog attempts to effectively integrate various computing elements related to humans [1, 2], and it has become a common focus of attention in academia and industry. Unlike ordinary computing, HCC pays more attention to the status of humans in computing technology and to the interaction of humans with cyberspace and the physical world [3]. Therefore, the design of HCC systems and algorithms needs to take into account the individual's ability and subjective initiative [4, 5]. Among these approaches, cloud computing uses super-large-scale distributed computing to meet the large sample sizes and complex calculation requirements of current artificial intelligence (AI) algorithms,
and it has become a widely adopted computing method [6; 7]. Against the background of HCC and big data, many interesting and practical applications are emerging [8; 9; 10]. Privacy is also an important norm that computing models must respect, especially with regard to privacy perception and privacy protection [11; 12; 13].
Quantum cloud computing allows users to test and develop their quantum programs on local personal computers and run them on actual quantum devices, thereby reducing the distance between humans and the mysterious quantum world [14]. Under the influence of the AI wave, many technology companies are committed to establishing quantum cloud computing platforms that enable users to implement quantum machine learning algorithms. Of the two major models of machine learning, the generative model and the discriminative model, the generative model is more capable of exerting human subjective initiative, so it has the potential to develop into the HCC paradigm. Therefore, we consider the highly creative quantum generative adversarial network model a promising breakthrough for HCC computing in cloud.
The generative adversarial network (GAN) [15] trains generative models through a pair of adversarial neural networks and has been a hot topic in generative machine learning in recent years. The GAN algorithm is based on a game-theoretic scenario: the generator aims to learn the mapping from a simple input distribution to the complex training-sample space by competing with the discriminator. As the adversary, the discriminator should judge as accurately as possible whether the input data come from the training set or from the generator. Both participants in the game try to minimize their own loss, so that the adversarial framework finally reaches the Nash equilibrium [16]. In recent years, GAN has been successfully used in image, audio, and natural language processing, achieving functions such as clear image generation [17; 18], video prediction [19], text summarization [20], and semantic image generation [21]. In practice, however, it is difficult to ensure stable GAN training. Researchers have drawn on results from deep learning to improve GAN, including designing new network structures [22], adding regularization constraints [23], ensemble learning [24], and improving optimization algorithms [25]. However, these improved algorithms are not human-centered, because the rules learned by the GAN algorithm are implicit: it is difficult to generate data that meets specific requirements by changing the structure or input of a trained generator. In 2014, Mirza et al. proposed the conditional generative adversarial network (CGAN) [26]. This method guides GAN to learn to sample from a conditional distribution by adding conditional constraints to the latent variables of the input layer, so that the generated data can be steered by conditional inputs, thereby expanding the application scenarios of the GAN algorithm. Because the conditional constraints are set by people, human subjective initiative plays a role in the construction, so CGAN can be regarded as an HCC algorithm. Based on the CGAN algorithm, many human-centered applications have been constructed, such as object detection [27] and medical image processing and synthesis [28; 29].
The quantum generative adversarial network (QGAN) is a data-driven quantum-circuit machine learning algorithm that combines the classical GAN with quantum computing [30]. In 2018, Lloyd proposed the concept of QGAN [31], analyzed the effectiveness of three different QGAN frameworks from a theoretical perspective, and demonstrated that quantum adversarial learning can also reach the Nash equilibrium point when the generative distribution can fit the real distribution. In the same year, Pierre's team discussed QGAN in more detail, giving the general structure of the parameterized quantum circuit (PQC) used as a generator and the method for estimating parameter gradients when training the network [32]. In 2019, Hu et al. used superconducting-circuit physics experiments to prove the feasibility of QGAN on current noisy intermediate-scale quantum (NISQ) devices [33]. Additionally, the optimization of the quantum generator structure is one of the research priorities. For example, matrix product states [34] and tree tensor networks [35] have been used to construct the PQCs serving as the generator and discriminator of GAN, respectively, and the convergence and robustness to noise of these methods were verified through experiments on quantum hardware.
In terms of generating quantum data, quantum supremacy means that classical information processors or neural networks sometimes cannot fit the data generated by quantum systems, and only a quantum generator can complete such tasks. For the generation of classical data, the output of a quantum generator always satisfies the differentiability constraint, and classical discrete data can be obtained by sampling it. In contrast, a classical GAN cannot directly generate discrete data because of this differentiability constraint. Therefore, QGAN, which complements the classical GAN with the ability to generate discrete data, and the combination of other known GAN variants with quantum mechanical mechanisms are of great research value.
Similar to the classical GAN, QGAN suffers from an uncontrollable training process and random generative output. In practical applications, however, obtaining an intended output by changing the input is the more common requirement, so plain QGAN is less practical. To address the lack of human-oriented design in the QGAN algorithm, this paper proposes a hybrid quantum-classical scheme based on the conditional generative adversarial network: conditional constraints are added to the QGAN algorithm to guide the training process. This method has both the controllability of CGAN and the discrete-data generation capability of QGAN. By analyzing the performance of different GAN variants, it is shown that the algorithm is better than the classical CGAN in terms of time complexity and functionality. Through modeling and training experiments in cloud on a classical data generation problem, the convergence of the model and the accuracy of the generated data verify the feasibility of applying quantum computing to the CGAN structure.
The rest of the paper is organized as follows. Section II describes the preliminaries of classical GAN and QGAN. Section III presents the design of QCGAN, including the method of constructing the PQCs and estimating the parameter gradients. The performance analysis of QCGAN and the comparison with other related algorithms are given in Section IV. In Section V, experiments are performed on a quantum cloud computing platform to verify the feasibility of the proposed QCGAN algorithm. Section VI summarizes our findings and the prospects for future research.
## 2 Principles of generative adversarial network algorithm
### Generative adversarial network
The core idea of the classical GAN is to construct a zero-sum game between the generator and the discriminator. Through an adversarial learning strategy, the generator and discriminator are trained alternately to obtain a better generative model. The structure and algorithm flowchart of GAN are shown in Fig. 1.
Specifically, the first step is to provide training samples as the generation target, assuming that the real data come from a fixed and unknown distribution \(p_{real}\left(x\right)\). The generator is a neural network that can map a low-dimensional distribution to a high-dimensional space, and the discriminator is a neural network with a classification function. The parameters of the generator and discriminator are denoted as \(\overrightarrow{\theta}_{G}\) and \(\overrightarrow{\theta}_{D}\), respectively. The input of the generator is a noise vector \(z\), which is generally sampled from a normal or a uniform distribution; \(x=G\left(\overrightarrow{\theta}_{G},z\right)\) is the output of the generator, which is transformed from the noise vector and constitutes the generative distribution \(p_{G}\left(x\right)\). Once the ideal adversarial training is completed, the discriminator will not be able to distinguish whether the input comes from the real distribution \(p_{real}\left(x\right)\) or the generative distribution \(p_{G}\left(x\right)\). Therefore, the goal of training the generator is to make the discriminator classify the generator's output as real data as often as possible. On the other hand, when training the discriminator, its input contains real data \(x\sim p_{real}\left(x\right)\) and the output of the generator \(x\sim p_{G}\left(x\right)\); the training goal is then to accurately distinguish the two categories of input data. Combining these two aspects, the optimization of GAN can be described as the following minimax game problem
\[\min_{G}\max_{D}V\left(D,G\right)=E_{x\sim p_{real}}\left[\log D\left(x\right) \right]+E_{x\sim p_{G}}\left[\log\left(1-D\left(x\right)\right)\right]. \tag{1}\]
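In practice, the alternating optimization of this minimax problem is often implemented with two binary-cross-entropy objectives. A minimal single-step sketch (Python with PyTorch; the network sizes and random data are illustrative stand-ins):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))   # generator
D = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 1))   # discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(64, 4)      # stand-in for samples from p_real
z = torch.randn(64, 8)         # noise vector z

# Discriminator step: maximize log D(x) + log(1 - D(G(z)))
fake = G(z).detach()
loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
opt_D.zero_grad()
loss_D.backward()
opt_D.step()

# Generator step: make D label G(z) as real (non-saturating form of Eq. 1)
loss_G = bce(D(G(z)), torch.ones(64, 1))
opt_G.zero_grad()
loss_G.backward()
opt_G.step()
```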
### Conditional generative adversarial network
To address the uncontrollability of the GAN training process, the CGAN algorithm adds conditional variables to the inputs of both the generator and the discriminator to play a constraining and guiding role. The structure and flowchart of the CGAN algorithm are shown in Fig. 2. The condition variables \(y\) are generally known information with specific semantics, such as feature labels. Under the CGAN framework, the generator pays more attention to sample features that are closely related to the conditional constraints and ignores other, less relevant local features. Therefore, the addition of condition variables can control the training process to generate
Figure 1: Schematic diagram of classical generative adversarial network.
higher quality data. The output of the generator can be regarded as sampling from the conditional distribution \(p_{G}\left(x\left|y\right.\right)\), so the objective function of CGAN can be rewritten on the basis of the original GAN as
\[\min_{G}\max_{D}V\left(D,G\right)=E_{x\sim p_{real}}\left[\log D\left(x\left|y \right.\right)\right]+E_{x\sim p_{G}}\left[\log\left(1-D\left(x\left|y\right. \right)\right)\right]. \tag{2}\]
CGAN samples from the noise vector and the condition variable simultaneously, so choosing reasonable condition variables for the generation target plays a crucial role in the generator's ability to fit the real distribution. The most common method is to extract the conditional variables directly from the training data, so that the generator and discriminator receive some prior knowledge about the training set along with their inputs. For example, the category label can be used as a conditional variable and attached to the input layer of the adversarial network [26]. CGAN can then be regarded as an improvement of the unsupervised GAN model into a weakly supervised or supervised model.
### Quantum generative adversarial network
In principle, a QGAN is also a zero-sum game constructed between a generator and a discriminator. If at least one of the real data, the generator, or the discriminator obeys quantum mechanics, the constructed scheme belongs to the QGAN concept. In general, a quantum data set is expressed as a density matrix, which corresponds to the covariance matrix of a classical data set. Quantum generators and discriminators are composed of PQCs. The selection, arrangement, and depth of the quantum gates in a PQC affect its performance, so these are also the parts that can be optimized.
When QGAN is used for classical data generation tasks, if the goal of the generator is to reproduce high-dimensional statistics, a QGAN with a quantum generator has the potential to converge to the Nash equilibrium exponentially faster [31]. Using a classical neural network as the discriminator in adversarial learning avoids the input bottleneck of quantum machine learning, because it
Figure 2: Schematic diagram of classical conditional generative adversarial network.
reduces the computation and resource consumption of quantum state encoding when discriminating real classical data. Combining these two aspects, the QCGAN algorithm proposed in this paper adopts the basic setting of a quantum generator and a classical discriminator to generate classical data. The structure and algorithm flowchart of this kind of QGAN algorithm are shown in Fig. 3.
## 3 Quantum conditional generative adversarial network algorithm
The QCGAN algorithm proposed in this paper is a generative adversarial network model suitable for fitting classical data distributions, with a controllable generation process. The generator of QCGAN is constructed as a parameterized quantum circuit, and the discriminator uses a classical neural network to complete the classification task. Unlike the unconstrained QGAN algorithm, QCGAN adds conditional variables to the inputs of both the generator and the discriminator to guide the training process. The basic flow of the algorithm can be summarized as follows (see Fig. 4): the first step is to prepare classical samples and introduce appropriate conditional constraints according to the data characteristics and the goal of the generation task; these two parts are combined to form the training data set of the network. The classical conditional constraints, which reflect the statistical characteristics of the training data set, are encoded into an entangled quantum state through a well-designed quantum circuit. The next step is to construct the PQC of the generator and the classical neural network of the discriminator. Finally, the generative distribution and the real distribution are sampled separately, these data are input to the discriminator for classification, and an adversarial strategy is formulated for training. If the objective function converges, the best quantum generator has been found. The output of the generator can be sampled to obtain a set of classical data that not only fits the target distribution but also meets the constraints.
### Entangled state coding of conditional information and circuit design
For a quantum scheme of CGAN, an important question is how to input the classical conditional variables into the quantum generator, which involves the quantum state encoding of the conditional variables and the circuit design for preparing that quantum state. In this paper, taking the representative case of category labels as the conditional variables, the method of encoding the conditional information into an entangled state and designing the corresponding circuit is explained in detail.
Figure 3: Schematic diagram of quantum generative adversarial network.
As shown in Fig. 4, the real data input to the discriminator are the data pairs \(\left(x,y\right)\) sampled from the classical training set, where \(y\) represents the conditional variable. The generator obtains the representation of the conditional variables and the probability distribution of the sample categories in the training set through \(\left|y\right\rangle\). Therefore, \(\left|y\right\rangle\) is a quantum state entangling the \(m\) categories of conditional variables according to the probability distribution of the real samples
\[\left|y\right\rangle=\sum\limits_{j=1}^{m}\frac{1}{\alpha_{j}}\left|y_{j} \right\rangle, \tag{3}\]
where \(1/\alpha_{j}=\left(p\left(y_{j}\right)\right)^{1/2}\), and the amplitudes \(1/\alpha_{j}\) meet the normalization condition \(\sum\limits_{j=1}^{m}\left|1/\alpha_{j}\right|^{2}=1\).
The category labels of classical data samples used for machine learning tasks are generally encoded with the one-hot method. Assume that three categories of data are to be generated, with classical binary label representations \(001,010,100\). Since the classical discriminator will classify the generative distribution and the real distribution, it is most reasonable to encode \(\left|y_{j}\right\rangle\) with the same one-hot method. The resulting state is formally similar to the quantum three-particle \(W\) state, \(\left|W\right\rangle_{3}=\frac{1}{\sqrt{3}}\left(\left|001\right\rangle+\left|010\right\rangle+\left|100\right\rangle\right)\). When designing a quantum circuit to prepare \(\left|y\right\rangle\), the quantum circuit for preparing a multi-particle \(W\) state can be used as a template, which reduces the complexity of the circuit design to a certain extent.
Take \(\left|y\right\rangle=\left|W\right\rangle_{3}\) as an example, where \(m=3\) and \(\alpha_{j}=\sqrt{3}\left(j=1,2,3\right)\), meaning that the training set contains three categories of uniformly distributed data. The preparation of \(\left|W\right\rangle_{3}\) can be divided into two steps, with the corresponding quantum circuit shown in Fig. 5. The first step uses a combination of single-qubit rotation gates and a CNOT gate. By adjusting the rotation angles, the qubits are prepared in a special state containing only three terms, i.e.,
\[\left|Q_{b}Q_{c}\right\rangle:\left|00\right\rangle\rightarrow\frac{1}{\sqrt{ 3}}\left(\left|00\right\rangle+\left|01\right\rangle+\left|10\right\rangle \right). \tag{4}\]
According to the calculation rule for cascaded quantum circuits, the following equation holds, where \(A\) through \(E\) denote the matrices of the successive gate operations acting on \(\left|Q_{b}Q_{c}\right\rangle\) in Fig. 5:
\[EDCBA[1,0,0,0]^{\mathrm{T}}=\frac{1}{\sqrt{3}}[1,1,1,0]^{\mathrm{T}}\:. \tag{5}\]
Figure 4: Schematic diagram of quantum conditional generative adversarial network.
Solving this equation yields the circuit parameters \(\theta_{1}=\theta_{3}=0.55357436,\theta_{2}=-0.36486383\). The second step uses quantum gates without parameters. First, apply NOT gates (i.e., Pauli \(X\) gates) to \(\left|Q_{b}\right\rangle\) and \(\left|Q_{c}\right\rangle\); then apply a Toffoli gate that sets \(\left|Q_{a}\right\rangle\) to \(\left|1\right\rangle\) when both \(\left|Q_{b}\right\rangle\) and \(\left|Q_{c}\right\rangle\) equal \(\left|1\right\rangle\). Finally, apply NOT gates to \(\left|Q_{b}\right\rangle\) and \(\left|Q_{c}\right\rangle\) again to restore the state reached at the end of the first step. After these operations, the initial state \(\left|000\right\rangle\) evolves into \(\left|W\right\rangle_{3}\).
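As a cross-check, the target state can be prepared and verified numerically. The following is a minimal Cirq sketch assuming an alternative, standard \(\left|W\right\rangle_{3}\) construction with anti-controlled rotations, not the exact gate sequence of Fig. 5 (whose angles solve Eq. 5); the resulting state is the same.

```python
import numpy as np
import cirq

# Prepare |W>_3 = (|001> + |010> + |100>)/sqrt(3) on three qubits.
q0, q1, q2 = cirq.LineQubit.range(3)

theta = 2 * np.arcsin(1 / np.sqrt(3))  # puts amplitude 1/sqrt(3) on |100>
circuit = cirq.Circuit(
    cirq.ry(theta).on(q0),
    # split the remaining q0=|0> amplitude equally between q1 and q2
    cirq.ry(np.pi / 2).on(q1).controlled_by(q0, control_values=[0]),
    cirq.X(q2).controlled_by(q0, q1, control_values=[0, 0]),
)

state = cirq.Simulator().simulate(circuit).final_state_vector
print(np.round(np.abs(state) ** 2, 4))  # weight 1/3 on |001>, |010>, |100>
```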
Encoding the conditional information with the one-hot method requires relatively more quantum resources, but it reduces the work of converting the data into other encodings during classical post-processing. When designing the circuit that prepares the quantum state of the conditional information, one only needs to follow the fixed template: the parameter values are obtained by changing the probability amplitudes on the right-hand side of Eq. 5, so multi-class label information satisfying any probability distribution can be represented.
### Circuit design of quantum generator
Quantum computing builds a quantum circuit from the arrangement and combination of wires and basic quantum gates, which act on a quantum state to realize the evolution of the system. A parameterized quantum circuit is one composed of parameterized quantum rotation gates together with other quantum logic gates. Single-qubit gates realize qubit rotations, while multi-qubit gates mainly realize entanglement between qubits. Representing quantum states as vectors and quantum gates as unitary matrices shows that a quantum gate performs a linear transformation, similar to classical machine learning; in this sense, the role of the parameters in PQCs and in classical neural networks is consistent.
Due to the unitary constraints of quantum gates, generating \(N\) bits of data requires \(N=N_{d}+N_{c}\) qubits, where \(N_{d}\) channels process sample data and \(N_{c}\) channels receive conditional information. For the quantum generator, the input \(\left|0\right\rangle^{\otimes N_{d}}\left|y\right\rangle\) is converted into the final state \(\left|x\right\rangle_{G}\left|y\right\rangle\) after \(L_{G}\) layers of unitary operations, where \(\left|x\right\rangle_{G}\) represents the generative distribution. Sampling the final state of the generator collapses the quantum state to classical
Figure 5: The quantum circuit for preparation of three-particle W-state quantum circuit.
data. The quantum generator is realized by a PQC in the quantum gate model, composed of alternating rotation layers and entanglement layers. Because the gate set is unitary, if rotation and entanglement layers alternate in a sufficiently long sequence, in theory any unitary transformation can be performed on the initial state.
According to the decomposition theorem for single-qubit unitary operations, a single rotation layer is composed of two \(R_{z}\) gates and one \(R_{x}\) gate applied in sequence, that is \(\prod\limits_{i=1}^{N}R_{z}\left(\theta_{l,3}^{i}\right)R_{x}\left(\theta_{l,2}^{i}\right)R_{z}\left(\theta_{l,1}^{i}\right)\). The superscript \(i\) indicates that the quantum gate acts on the \(i\)-th qubit, and the subscript \(l\) indicates that the operation belongs to the \(l\)-th layer. The matrix representations of the \(R_{x}\) and \(R_{z}\) gates are
\[R_{x}\left(\theta\right)=\left[\begin{array}{cc}\cos\left(\theta/2\right)&-i \sin\left(\theta/2\right)\\ -i\sin\left(\theta/2\right)&\cos\left(\theta/2\right)\end{array}\right],R_{z} \left(\theta\right)=\left[\begin{array}{cc}e^{-i\theta/2}&0\\ 0&e^{i\theta/2}\end{array}\right].\]
A single entanglement layer generally uses two-qubit controlled rotation gates (such as CRX, CRY, and CRZ) and general two-qubit logic gates (such as CNOT) in some permutation and combination. The arrangement of these gates reflects the connectivity among qubits and thus affects the expressiveness and entangling capability of the PQC. There are three common connection topologies among qubits: circle, star, and all-to-all connectivity [36, 37]. For circle or star connectivity, entanglement between certain qubit pairs cannot occur within a single layer, so more layers are required to fit complex target distributions, which undoubtedly increases the difficulty of parameter optimization. All-to-all connectivity is the ideal topology among qubits: although a single layer then has more parameters than the other two choices, a shallow all-to-all circuit can achieve better generative results at a cheaper computational overhead.
When designing the PQC of the quantum generator, it is necessary to ensure that the qubits are fully connected. Following the above rules, the quantum generator circuit of QCGAN is shown in Fig. 6. The "XX" in Fig. 6 represents an operation involving two qubits, where either one is the control qubit and the other the target qubit. When the control qubit is \(\left|1\right\rangle\) or \(\left|0\right\rangle\) (as specified by the operation), the corresponding operation is applied to the target qubit. The \(N_{c}\) qubits are only responsible for transmitting conditional information to the other \(N_{d}\) qubits and for passing the conditional information on to the discriminator during post-processing. Therefore, no rotation operations are performed on them; they serve only as control qubits that influence the data-generation circuit.
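A single generator layer of this kind can be sketched as follows. This is an illustrative Cirq/SymPy construction, assuming CNOT as the two-qubit gate (the paper also allows controlled rotations) and omitting the conditional-control wiring of Fig. 6.

```python
import cirq
import sympy

def generator_layer(qubits, layer_index, symbols):
    """One PQC layer: per-qubit Rz-Rx-Rz rotations, then all-to-all entanglement."""
    ops = []
    for i, q in enumerate(qubits):
        t1, t2, t3 = (sympy.Symbol(f"theta_{layer_index}_{i}_{k}") for k in range(3))
        symbols += [t1, t2, t3]
        ops += [cirq.rz(t1).on(q), cirq.rx(t2).on(q), cirq.rz(t3).on(q)]
    # all-to-all connectivity: one two-qubit gate per qubit pair
    for i in range(len(qubits)):
        for j in range(i + 1, len(qubits)):
            ops.append(cirq.CNOT(qubits[i], qubits[j]))
    return ops

data_qubits = cirq.GridQubit.rect(1, 4)   # N_d = 4 for BAS(2,2)
symbols = []
circuit = cirq.Circuit(generator_layer(data_qubits, 0, symbols))
```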
### Adversarial training strategy
The training of QCGAN is a parameter-optimization quantum algorithm with a feedback loop. The parameters of the quantum generator and the classical discriminator are denoted by \(\theta\) and \(\phi\), respectively. Similar to the classical CGAN, the objective function of QCGAN is
\[\min_{G_{\theta}}\max_{D_{\phi}}V\left(D,G\right)=E_{x\sim p_{real}}\left[\log D\left(x\left|y\right.\right)\right]+E_{x\sim p_{\theta}}\left[\log\left(1-D\left(x_{G}\left|y\right.\right)\right)\right]. \tag{6}\]
At the beginning of training, all parameters of the quantum circuit and of the binary-classification neural network are given random initial values. During adversarial training, the parameters of the generator and discriminator are optimized alternately. First, the parameters of the quantum generator circuit are fixed and the discriminator parameters are optimized. The discriminator judges both randomly sampled batches of training data and data sampled from the quantum generator; its output represents the probability that the corresponding input comes from the real distribution, and the gradient is taken in the direction that maximizes the discriminator objective to optimize \(\phi\). Updating the discriminator parameters and repeating this optimization lets the discriminator learn the characteristics of the real data distribution while retaining the ability to discriminate data from the generative distribution. Then the parameters of the discriminator are fixed, and the discriminator input consists only of samples from the generator. The larger the discriminator output, the smaller the gap between the generative distribution and the previously learned real distribution; the gradient is therefore taken in the direction that maximizes the generator objective to optimize \(\theta\). The generator's ability to fit the true distribution is improved continuously by updating the parameters and re-executing the circuit on the quantum computing device. The alternating optimization of generator and discriminator parameters is iterated until the generator can reconstruct the distribution of the training set.
Following this interpretation of adversarial training, Eq. 6 is decomposed into the non-saturating maximization objectives obeyed by the generator and the discriminator, respectively,
\[\left\{\begin{array}{l}\max V_{D_{\phi}}=E_{x\sim p_{real}}\left[\log D\left( x\left|y\right.\right)\right]+E_{x\sim p_{\theta}}\left[\log(1-D\left(x_{G} \left|y\right.\right))\right]\\ \max V_{G_{\theta}}=E_{x\sim p_{\theta}}\left[\log\left(D\left(x_{G}\left|y \right.\right)\right)\right]\end{array}\right.. \tag{7}\]
Figure 6: The template of quantum generator circuit.
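In practice, the two objectives in Eq. 7 are implemented as the usual cross-entropy minimizations. A minimal TensorFlow sketch (the function names are illustrative) is:

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()

def discriminator_loss(d_real, d_fake):
    # max V_D  <=>  min [BCE(1, D(x|y)) + BCE(0, D(x_G|y))]
    return bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)

def generator_loss(d_fake):
    # non-saturating objective: max E[log D(x_G|y)]  <=>  min BCE(1, D(x_G|y))
    return bce(tf.ones_like(d_fake), d_fake)
```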
During the training process, gradient descent is used to optimize the parameters, which requires the gradient information \(\nabla_{\theta}V_{G_{\theta}}\) and \(\nabla_{\phi}V_{D_{\phi}}\). For classical neural networks, backpropagation computes the gradient of the objective function directly and efficiently. For quantum devices, however, only measurement results can be obtained, so the required quantities cannot be accessed directly. The gradient estimation of a parameterized quantum circuit therefore follows this theorem: for a circuit containing parameterized unitary gates \(U\left(\eta\right)=e^{-\frac{i\eta}{2}\Sigma}\), the gradient of the expectation value of an observable \(B\) with respect to the parameter \(\eta\) reads
\[\frac{\partial\langle B\rangle_{\eta}}{\partial\eta}=\frac{1}{2}\left(\langle B \rangle_{\eta^{+}}-\langle B\rangle_{\eta^{-}}\right). \tag{8}\]
Here \(\langle\cdot\rangle_{\eta^{\pm}}\) in Eq. 8 denotes the expectation value of the observable with respect to the output wave function generated by the same circuit with parameter \(\eta^{\pm}=\eta\pm\frac{\pi}{2}\)[38]. This is an unbiased estimation method for the gradient of a PQC. According to this theorem, the gradient of the discriminator output with respect to the parameters \(\theta\) can be calculated as
\[\frac{\partial V_{G_{\theta}}}{\partial\theta_{i}}=\frac{1}{2}E_{x\sim p_{ \theta^{+}}}\left[\log D\left(x\left|y\right.\right)\right]-\frac{1}{2}E_{x \sim p_{\theta^{-}}}\left[\log D\left(x\left|y\right.\right)\right], \tag{9}\]
where \(\theta^{\pm}=\theta\pm\frac{\pi}{2}e^{i}\) and \(e^{i}\) represents the \(i\)-th unit vector in parameter space, i.e., \(\theta_{i}\leftarrow\theta_{i}\pm\frac{\pi}{2}\). To estimate the gradient of every parameter, the circuit must be modified and evaluated repeatedly, once per shift and parameter. In small-scale numerical simulations, the wave function can be used to compute the expectation value directly; alternatively, the probability distribution can be computed from the wave function and the gradient estimated by sampling [39].
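A generic sketch of this parameter-shift estimator, for any callable that returns an expectation value, is:

```python
import numpy as np

def param_shift_gradient(expectation, theta):
    """Unbiased gradient of expectation(theta) via the parameter-shift rule (Eq. 8)."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[i] = np.pi / 2
        grad[i] = 0.5 * (expectation(theta + shift) - expectation(theta - shift))
    return grad

# toy check: for <B> = cos(theta_0), the rule returns exactly -sin(theta_0)
g = param_shift_gradient(lambda t: np.cos(t[0]), np.array([0.3]))
```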
## 4 Performance evaluation
To evaluate the performance of the algorithm proposed in this paper, the classical GAN [15] and CGAN [26], QGAN [31], and QCGAN are compared from the perspectives of time complexity and algorithm functionality. The performance comparison of the four generative adversarial algorithms is shown in Table 1.
In the classical CGAN algorithm, optimizing the generator parameters can be viewed as gradient descent in the convex set of normalized covariance matrices of the data set to fit the real distribution. The time complexity of generating data fitting an \(N\)-dimensional classical distribution is therefore \(O(N^{2})\). In contrast, the time complexity for a quantum information processor to perform a linear transformation on an \(N\)-dimensional vector is \(O(N)\). Even though optimizing each parameter requires modifying and executing the PQC twice, the computational time complexity of QCGAN remains lower than that of CGAN when the same parameter optimization strategy is adopted (neglecting the time cost of preparing classical data as quantum states). On the other hand, the classical CGAN algorithm cannot directly generate discrete data because of the differentiability constraints on parameter optimization, while QGAN can directly generate discrete data and also retains the ability to generate continuous distributions [40]. In addition, the
QCGAN algorithm proposed in this paper encodes classical data directly in quantum states, so its resource consumption is \(N_{d}+N_{c}\), the same as classical CGAN (where \(N_{d}\) is the resource consumption for generating the target data and \(N_{c}\) that for the conditional information). The resource consumption of the unsupervised GAN and QGAN algorithms is \(N\), equal to the size of the generation target.
Compared with unconstrained QGAN, the conditional input brings prior knowledge about the training set to the model, turning unsupervised QGAN into a weakly supervised or supervised adversarial learning model and thereby making the generation process controllable. The learning results of unconstrained QGAN tend to present the average of all data in the training set, whereas the conditional information gives QCGAN a corresponding advantage in how well the generated results fit the real distribution. Moreover, a generator trained by QGAN still generates purposelessly: it can only guarantee the authenticity of the generated data and cannot be extended to other functions. QCGAN, by contrast, can carry out generation tasks with different purposes by introducing different conditional information, fully reflecting human intent and realizing interaction between people and the algorithm; QCGAN can thus be considered a human-centered algorithm. From a functional perspective, generators trained by QCGAN therefore have broader application scenarios and higher efficiency.
## 5 Experimental
In this paper, the synthetic \(\text{BAS}(2,2)\) (Bars and Stripes) data set is used for the experiments and analysis of the classical data generation task. TensorFlow Quantum (TFQ), an open-source quantum cloud computing platform for the rapid prototyping of hybrid quantum-classical models for classical or quantum data [41], is used to realize the simulation experiments.
### \(\text{BAS}\) data set
The \(\text{BAS}(m,n)\) data are composite images containing only horizontal bars or vertical stripes on a two-dimensional grid. For \(m\times n\)-pixel images, only \(2^{m}+2^{n}-2\) of all \(2^{m\times n}\) cases are valid BAS images. This defines the target probability distribution: the probabilities of valid images are specified constants, and the probabilities of invalid images are zero. The generation goal of the experiment is the classical \(\text{BAS}(2,2)\) data, which intuitively seems insufficiently challenging for quantum computers. However, the valid quantum states represented by the \(\text{BAS}(2,2)\) data set have a minimum entanglement entropy of \(S_{BAS(2,2)}=1.25163\) and a maximum achievable entropy of \(1.79248\), the known maximum entanglement entropy of the set of four-qubit states [42]. The data therefore have rich entanglement properties and are well suited as a generation target for quantum adversarial training.
\begin{table}
\begin{tabular}{c c c c c} \hline Algorithm name & GAN & CGAN & QGAN & QCGAN \\ \hline Time complexity & \(O(N^{2})\) & \(O(N^{2})\) & \(O(N)\) & \(O(N)\) \\ Generator resource consumption & \(N\) bits & \(N_{d}+N_{c}\) bits & \(N\) qubits & \(N_{d}+N_{c}\) qubits \\ Generated data type & Continuous & Continuous & Continuous \& Discrete & Continuous \& Discrete \\ Human-centered algorithm & No & Yes & No & Yes \\ \hline \end{tabular}
\end{table}
Table 1: Performance comparison of the four generative adversarial network algorithms
The BAS\((2,2)\) images in the training set are divided into three categories: the horizontal bar images form one category, the vertical stripe images another, and the images with pixel values all 0 or all 1 the third. The valid BAS images follow a uniform distribution. According to this classification, the category labels are one-hot encoded and added to the basic data set as the conditional information. The generator hence requires 7 qubits: processing the pixel information of the BAS data requires 4 qubits, and receiving the conditional information requires 3 qubits.
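A sketch of such a training-set synthesis is shown below; the exact sampling code of the experiment is not given in the paper, so the function and its defaults are assumptions.

```python
import numpy as np

def bas22_dataset(n_samples=6000, rng=np.random.default_rng(0)):
    """Synthesize BAS(2,2) samples with one-hot category labels (assumed encoding)."""
    bars = [np.array([a, a, b, b]) for a in (0, 1) for b in (0, 1)]      # horizontal bars
    stripes = [np.array([a, b, a, b]) for a in (0, 1) for b in (0, 1)]   # vertical stripes
    uniform = [np.zeros(4, int), np.ones(4, int)]
    # three categories: pure bars, pure stripes, all-0/all-1 images
    cats = [[img for img in bars if img.sum() == 2],
            [img for img in stripes if img.sum() == 2],
            uniform]
    labels = np.eye(3, dtype=int)
    xs, ys = [], []
    for _ in range(n_samples):
        c = rng.integers(3)                              # uniform over categories
        xs.append(cats[c][rng.integers(len(cats[c]))])   # uniform within category
        ys.append(labels[c])
    return np.array(xs), np.array(ys)
```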
### Experimental setup
The code synthesizes 6000 samples to form the training set, comprising the three categories of BAS data (six valid images in total) that meet the above requirements, together with their category labels. During training, all data are first shuffled and then drawn by batch. For pre-training on the BAS data set, the discriminator and generator are trained alternately, once each per iteration. The batch size is 40 and training runs for 100 epochs in total; within each epoch the network is iterated 150 times, so that the discriminator traverses the entire training set. Because an improperly set learning rate can make the network gradient vanish or explode, the learning rate is multiplied by 0.1 every 10 epochs of training. The Adam (Adaptive Moment Estimation) optimizer provided by the open-source library is used for both generator and discriminator, with an initial learning rate of 0.001.
After each epoch of training, the output of the generator is sampled to inspect the quality of the current generative distribution. The inspection covers three points:
(1) whether the generated pixel data constitutes a valid BAS image;
(2) whether the generated pixel data matches the conditional information;
(3) whether all generated data conform to the uniform distribution.
Since the training process of an adversarial network is relatively unstable, training may be terminated early if the combined accuracy on the three points above reaches a preset threshold of 95%. If the threshold is never reached, the 100 epochs of alternating training are completed as preset, and the convergence of the objective function over the whole training process is analyzed. The adversarial network can then be trained again after reasonable adjustments to the training strategy and hyperparameters, informed by the reasons for the unsatisfactory results.
## 6 Results and discussion
In the simulation, a series of comparative experiments was first conducted on generator performance using circle, star, and all-to-all connected quantum circuits. The results verified the superiority of the all-to-all connected topology for the quantum generator in this scheme. Based on these comparisons, the PQC structure shown in Fig. 7 is used as the generator of QCGAN. The input \(\left|y\right\rangle\) of the generator is \(\left|W\right\rangle_{3}\), prepared in advance with the circuit shown in Fig. 5.
The discriminator is classical, so it is implemented in the classical deep learning framework TensorFlow, which forms a hybrid quantum-classical model with TFQ. The discriminator has an input layer of dimension \(N_{\mathrm{d}}+N_{c}=7\), one hidden layer of 4 neurons, and one output neuron. Since the discriminator directly judges the expectation values output by the generator, the hidden layer uses the ReLU activation function.
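A sketch of such a discriminator follows; the sigmoid output activation is an assumption, chosen so that the output can be read as the probability that the input is real.

```python
import tensorflow as tf

discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(7,)),                      # N_d + N_c = 7 expectation values
    tf.keras.layers.Dense(4, activation="relu"),     # single hidden layer of 4 neurons
    tf.keras.layers.Dense(1, activation="sigmoid"),  # assumed probability output
])
discriminator.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
                      loss="binary_crossentropy")
```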
As shown in Fig. 8, the overall trend is that the loss of the discriminator gradually decreases while the loss of the generator gradually increases. After training, both losses converge near the expected equilibrium point. As the training epochs accumulate, the model gradually stabilizes and the adversarial interplay between generator and discriminator intensifies, which is why Fig. 8 still shows sizable oscillations around the expected value after convergence. This behavior is also related to the noise affecting quantum systems accessed through the cloud platform.
After pre-training on the BAS data set is completed, the quantum generator output is sampled \(10,000\) times to analyze the generative distribution. The probability distribution of the generated data is shown in Fig. 9(a). Most of the generated data fall on the six valid BAS mode images, and the three categories of BAS images conform to the uniform distribution with \(97.15\%\) accuracy. Fig. 9(b) visualizes the first 100 generated samples as pixel maps at epochs 1, 70, and 100, showing that the quantum generator gradually acquires the ability to generate \(\mathrm{BAS}(2,2)\) images over the course of pre-training.
After pre-training, the quantum-gate parameters of the optimal generator are extracted, and the generator circuit shown in Fig. 7 is then used for the task of generating images of a given class. The parameters of the PQC in Fig. 5 are adjusted to set the input \(\ket{y}\) to \(\ket{001}\), and the generator output \(\ket{x}_{G}\) is sampled. The result shows that the two kinds of horizontal stripe images occur with uniform distribution, meaning that the quantum generator can generate data of multiple categories satisfying the conditional constraints under the guidance of the conditional information.
Figure 7: The quantum generator circuit diagram in this QCGAN experiment.
## 7 Conclusion
Combining the classical CGAN algorithm with ideas from quantum computing, this paper proposes a quantum conditional generative adversarial network algorithm for the human-centered paradigm: a general scheme suited to fitting classical data distributions. The paper gives a detailed account of our design focus, including the configuration of the PQC used as the generator, the parameter-gradient estimation method of the adversarial training strategy, and the specific steps of the algorithm's cloud computing implementation.
The effect of the QCGAN algorithm is that adding conditional constraints related to the training data set at the input layer effectively guides the network
Figure 8: The discriminator (in orange) and generator (in blue) loss with respect to iterations.
Figure 9: \(2\times 2\) Bars-and-Stripes samples generated from QCGAN. (a) The final probability distribution of the generated BAS data. (b) BAS samples generated from QCGAN at different epochs (for illustrative purposes, only 10 samples are shown for each case).
to generate data that meet specific requirements. This increases the controllability of the generation process and is also more in line with current human-centered requirements for machine learning algorithms. Compared with classical CGAN, the time complexity of the proposed QCGAN algorithm is lower, better matching the needs of practical application scenarios. Experiments on the quantum cloud computing platform show that QCGAN can generate the BAS data distribution effectively and that its generator outputs correct data guided by the conditional constraint in the cloud.
Given that QGAN can generate discrete data and has the potential to uncover data distributions that classical computation cannot efficiently summarize, QGAN and classical GAN are functionally complementary. Many known GAN variants can generate very realistic images, audio, and video, so combining these algorithms with quantum mechanics is undoubtedly the icing on the cake. Our future work will focus on quantum schemes for classical GAN variant algorithms and on constructing quantum machine learning algorithms that conform to the HCC paradigm, together with the corresponding cloud computing implementations.
## Abbreviations
QGAN: Quantum generative adversarial network; QCGAN: quantum conditional generative adversarial network; NISQ: Noisy Intermediate-Scale Quantum; CGAN: Conditional generative adversarial network; HCC: human-centered computing; GAN: Generative adversarial network; PQC: Parameterized quantum circuit; TFQ: TensorFlow Quantum; BAS: Bars and stripes
###### Acknowledgements.
This work is supported by National Natural Science Foundation of China (Grant Nos. 62071240 and 61802002); the Natural Science Foundation of Jiangsu Province (Grant No. BK20171458); the Graduate Research and Practice Innovation Program of Jiangsu Province (Grant No. KYCX20,0969); the Natural Science Foundation of Jiangsu Higher Education Institutions of China under Grant No.19RXBS20028; the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
|
2310.20196 | Further Development of Event-Based Analysis of X-ray Polarization Data | An event-based maximum likelihood method for handling X-ray polarimetry data
is extended to include the effects of background and nonuniform sampling of the
possible position angle space. While nonuniform sampling in position angle
space generally introduces cross terms in the uncertainties of polarization
parameters that could create degeneracies, there are interesting cases that
engender no bias or parameter covariance. When including background in
Poisson-based likelihood formulation, the formula for the minimum detectable
polarization (MDP) has nearly the same form as for the case of Gaussian
statistics derived by Elsner et al. (2012) in the limiting case of an
unpolarized signal. A polarized background is also considered, which
demonstrably increases uncertainties in source polarization measurements. In
addition, a Kolmogorov-style test of the event position angle distribution is
proposed that can provide an unbinned test of models where the polarization
angle in Stokes space depends on event characteristics such as time or energy. | Herman L. Marshall | 2023-10-31T05:43:43Z | http://arxiv.org/abs/2310.20196v2 | # Further Development of Event-Based Analysis of X-Ray Polarization Data
###### Abstract
An event-based maximum likelihood method for handling X-ray polarimetry data is extended to include the effects of background and nonuniform sampling of the possible position angle space. While nonuniform sampling in position angle space generally introduces cross terms in the uncertainties of polarization parameters that could create degeneracies, there are interesting cases that engender no bias or parameter covariance. When including background in Poisson-based likelihood formulation, the formula for the minimum detectable polarization (MDP) has nearly the same form as for the case of Gaussian statistics derived by Elsner et al. (2012) in the limiting case of an unpolarized signal. A polarized background is also considered, which demonstrably increases uncertainties in source polarization measurements. In addition, a Kolmogorov-style test of the event position angle distribution is proposed that can provide an unbinned test of models where the polarization angle in Stokes space depends on event characteristics such as time or energy.
Polarimetry, methods
## 1 Introduction
The goal of this paper is to extend the maximum likelihood formulation developed earlier for analysis of unbinned X-ray polarimetry data (Marshall, 2021) to circumstances that were not considered there. The method was developed specifically for application to data from the Imaging X-ray Polarization Explorer (IXPE, Weisskopf et al., 2022) but can be applied generally to instruments that yield events with associated polarization information, such as a soft X-ray polarimeter (Marshall et al., 2018) that is now in development, or instruments that must be rotated to obtain polarization information. In the case of IXPE, there is an angle \(\psi\) associated with every event based on the track produced by the photoelectron ejected by the incident X-ray. For the soft X-ray polarimeter, each event is associated with a "channel" according to the position angle of its Bragg reflector relative to the sky.
By design, the gas pixel detectors on _IXPE_(Rankin et al., 2023) and PolarLight (Feng et al., 2019) have uniform sensitivity with \(\psi\). This is not generally true for systems based on Bragg reflection (e.g. OSO-8, Weisskopf et al., 1976), Thomson scattering (e.g. POLIX on XPoSat, Paul, 2022), or Compton scattering (e.g. X Calibur, Beilicke et al., 2014). Such instruments usually require rotation to obtain uniform azimuthal exposure. See the review of instruments based on Compton scattering by Del Monte et al. (2022). Thus, in section 2, exposure nonuniformities are examined and characterized by two observation-based parameters that can be used to determine the impact of such asymmetries.
Every instrument has a background signal, so in section 3, a background term is added to the unbinned likelihood model. The basic case of an unpolarized signal is covered in section 3.1 and augmented to include the impact of unpolarized background in section 3.2.
Given a model with its best fit parameters, it is necessary to test it. A Kolmogorov test of the counts with time or energy would not be sensitive to the polarization model. Previous tests of polarization models generally examined only the significances of the estimates of the polarization fraction for a full observation (e.g. Liodakis et al., 2022) or perhaps when binned by energy or pulse phase (e.g. Taverna et al., 2022). In section 4, a new test is proposed that is specifically designed to be sensitive to whether the distribution of the event \(\psi\) values matches the model. This sort of test can be used to examine the validity of a pulsar rotating vector model, such as fit by the unbinned method developed by Gonzalez-Caniulef et al. (2023). This test method can also be useful in cases where the electric vector position angle (EVPA) rotates with time as in two observations of the BL Lac object Mk 421 (Di Gesu et al., 2023) in order to test whether the rotation occurs at a uniform rate without binning EVPA measurements in time.
A short review of the maximum likelihood formalism is in order, following Marshall (2021). For this analysis, consider a simple case of a fixed energy band over which the polarization is constant so that the data consist of counts in \(\psi\) space. At energy \(E\), the modulation factor of the instrument is \(\mu_{E}\), the instrument effective area is \(A_{E}\), and the intrinsic source photon flux is \(f_{E}\) based on the spectral model of the source. Both \(\mu_{E}\) and \(A_{E}\) are assumed to be known _a priori_. The event density in a differential energy-phase element \(dEd\psi\) about \((E,\psi)\) is
\[\lambda(E,\psi)=\frac{1}{2\pi}[1+\mu_{E}(q\cos 2\psi+u\sin 2\psi)]f_{E}A_{E}T \tag{1}\]
where \(T\) is the exposure time and the (normalized) Stokes parameters are \(q\equiv Q/I\) and \(u\equiv U/I\) for Stokes fluxes \(I\), \(Q\), and \(U\). (Circular polarization, \(V\), is ignored here, as there is currently no practical way to measure it in the X-ray band.)
Assuming that there are \(N\) events, with energies and instrument angles \((E_{i},\psi_{i})\), then the log-likelihood for a Poisson probability distribution of events, \(S=-2\ln L\), is
\[S = -2\sum_{i}^{N}\ln\lambda(E_{i},\psi_{i})+\frac{T}{\pi}\int f_{E}A _{E}dE\int_{0}^{2\pi}[1+\mu_{E}(q\cos 2\psi+u\sin 2\psi)]d\psi \tag{2}\] \[= -2\sum_{i}^{N}\ln f_{i}-2\sum_{i}^{N}\ln(1+q\mu_{i}\cos 2\psi_{i} +u\mu_{i}\sin 2\psi_{i})+2T\int f_{E}A_{E}dE \tag{3}\]
where \(f_{i}\equiv f(E_{i})\) and \(\mu_{i}\equiv\mu(E_{i})\), after dropping terms independent of \(q\), \(u\), and \(f\). In this case, the log-likelihood for the polarization parameters alone (such as when the polarization is independent of \(E\)) is relatively simple:
\[S(q,u)=-2\sum_{i}^{N}\ln(1+q\mu_{i}\cos 2\psi_{i}+u\mu_{i}\sin 2\psi_{i})=-2\sum_{ i}^{N}\ln(1+qc_{i}+us_{i}) \tag{4}\]
where \(c_{i}=\mu_{i}\cos 2\psi_{i}\) and \(s_{i}=\mu_{i}\sin 2\psi_{i}\). For a weakly polarized source, the best estimates of \(q\) and \(u\) are well approximated as \(\sum_{i}c_{i}/\sum_{i}c_{i}^{2}\) and \(\sum_{i}s_{i}/\sum_{i}s_{i}^{2}\), respectively. See Marshall (2021) for details.
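As a concrete illustration, these weak-polarization estimators take only the event angles and per-event modulation factors as input. The following NumPy sketch (with illustrative names) implements them:

```python
import numpy as np

def stokes_weak(psi, mu):
    """Approximate ML estimates of (q, u) for a weakly polarized source."""
    c = mu * np.cos(2 * psi)
    s = mu * np.sin(2 * psi)
    return c.sum() / (c ** 2).sum(), s.sum() / (s ** 2).sum()

# example with simulated unpolarized events, assuming a constant mu = 0.3
rng = np.random.default_rng(1)
psi = rng.uniform(0, 2 * np.pi, 100_000)
q_hat, u_hat = stokes_weak(psi, np.full(psi.size, 0.3))
```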
## 2 Nonuniform Exposure
Now, consider the case of a nonuniform exposure in an observation of an unvarying source. The exposure function, \(w(\psi)\) with units of radians\({}^{-1}\), can be defined as the fraction of the exposure spent with sensitivity to phase angle \(\psi\). If the total exposure is \(T\), then the exposure function can be normalized such that it integrates to unity for \(0\leq\psi<2\pi\). In this case, the event density is
\[\lambda(E,\psi)=[1+\mu_{E}(q\cos 2\psi+u\sin 2\psi)]f_{E}A_{E}Tw(\psi) \tag{5}\]
and the log-likelihood for a Poisson probability distribution of events, \(S=-2\ln L\), is
\[S = -2\sum_{i}\ln\lambda(E_{i},\psi_{i})+2T\int f_{E}A_{E}dE\int_{0}^{2 \pi}[1+\mu_{E}(q\cos 2\psi+u\sin 2\psi)]w(\psi)d\psi \tag{6}\]
To simplify some results, now assume that the spectrum has a spectral shape with uninteresting spectral shape parameters \(\xi\) that are not related to the polarization so that \(f_{E}=f_{0}\eta(E;\xi)\) and define \(K=T\int\eta(E;\xi)A_{E}dE\) and \(K_{\mu}=T\int\eta(E;\xi)A_{E}\mu_{E}dE\) as conversion constants (from flux units to counts or modulated counts), giving
\[\begin{split} S(f_{0},q,u)=&-2N\ln f_{0}-2\sum_{i} \ln(1+q\mu_{i}w_{i}\cos 2\psi_{i}+u\mu_{i}w_{i}\sin 2\psi_{i})\\ &+2Kf_{0}+2K_{\mu}f_{0}q\int_{0}^{2\pi}w(\psi)\cos 2\psi d\psi+2K_{ \mu}f_{0}u\int_{0}^{2\pi}w(\psi)\sin 2\psi d\psi\end{split} \tag{7}\]
(dropping terms independent of \(f_{0}\), \(q\), or \(u\)). Note that when \(\mu\) is independent of \(E\), \(K_{\mu}=\mu K\).
Redefining the weights with trigonometric factors, we can simplify Eq. 7:
\[S(f_{0},q,u) = -2N\ln f_{0}-2\sum_{i}\ln(1+q\mu_{i}\alpha_{i}+u\mu_{i}\beta_{i} )+2Kf_{0}+2K_{\mu}f_{0}Aq+2K_{\mu}f_{0}Bu \tag{8}\]
where \(\alpha(\psi)\equiv w(\psi)\cos 2\psi\) and \(\beta(\psi)\equiv w(\psi)\sin 2\psi\), so \(\alpha_{i}=\alpha(\psi_{i})\) and \(\beta_{i}=\beta(\psi_{i})\) and the integrals of \(\alpha\) and \(\beta\) over \(\psi\) are A and B, respectively. The quantities \(A\) and \(B\) are unitless, with absolute values less than or of order unity. Note that \(f_{0}\) is covariant with \(u\) and \(q\) via the exposure weighting terms \(A\) and \(B\). These quantities are both zero when \(w(\psi)\) is constant over \([0,\pi]\) or \([0,2\pi]\) but either or both can be nonzero otherwise.
The best estimate of \(f_{0}\) is readily determined by setting \(\partial S/\partial f_{0}\) to zero and solving for \(f_{0}\), giving
\[\hat{f}_{0}=\frac{N}{K+K_{\mu}(Aq+Bu)}\ \ . \tag{9}\]
When \(A\) and \(B\) are zero or the polarization, \(\Pi\equiv(q^{2}+u^{2})^{1/2}\) is zero, then \(f_{0}\) is just \(N/K\), as expected. Setting \(\partial S/\partial u=0\) and \(\partial S/\partial q=0\) to find the best estimates of \(q\) and \(u\) gives
\[AK_{\mu}\hat{f}_{0} = \sum_{i}\frac{\mu_{i}\alpha_{i}}{1+\hat{q}\mu_{i}\alpha_{i}+\hat{ u}\mu_{i}\beta_{i}}=\sum_{i}W_{i}\mu_{i}\alpha_{i} \tag{10}\] \[BK_{\mu}\hat{f}_{0} = \sum_{i}\frac{\mu_{i}\beta_{i}}{1+\hat{q}\mu_{i}\alpha_{i}+\hat{ u}\mu_{i}\beta_{i}}=\sum_{i}W_{i}\mu_{i}\beta_{i} \tag{11}\]
where \(W_{i}\equiv(1+\hat{q}\mu_{i}\alpha_{i}+\hat{u}\mu_{i}\beta_{i})^{-1}\). As before, these two equations apply under quite general circumstances but require numerical solution. However, as in Marshall (2021), for \(\hat{q}\ll 1\) and \(\hat{u}\ll 1\), a simple approximate solution may be found, noting that \(A\) and \(B\) are generally of order unity, so
\[\hat{q} \approx \frac{\sum_{i}\mu_{i}\alpha_{i}-ANK_{\mu}/K}{\sum_{i}\mu_{i}^{2} \alpha_{i}^{2}} \tag{12}\] \[\hat{u} \approx \frac{\sum_{i}\mu_{i}\beta_{i}-BNK_{\mu}/K}{\sum_{i}\mu_{i}^{2} \beta_{i}^{2}}\ . \tag{13}\]
At this point, the uncertainties in \(q\) and \(u\) can be derived. All second derivatives of Eq. 8 are nonzero:
\[\frac{\partial^{2}S}{\partial f_{0}^{2}} = \frac{2N}{f_{0}^{2}} \tag{14}\] \[\frac{\partial^{2}S}{\partial f_{0}\partial q} = 2K_{\mu}A\] (15) \[\frac{\partial^{2}S}{\partial f_{0}\partial u} = 2K_{\mu}B\] (16) \[\frac{\partial^{2}S}{\partial q^{2}} = \sum_{i}W_{i}^{2}\mu_{i}^{2}\alpha_{i}^{2}\approx\sum_{i}\mu_{i}^ {2}\alpha_{i}^{2}\] (17) \[\frac{\partial^{2}S}{\partial u^{2}} = \sum_{i}W_{i}^{2}\mu_{i}^{2}\beta_{i}^{2}\approx\sum_{i}\mu_{i}^ {2}\beta_{i}^{2}\] (18) \[\frac{\partial^{2}S}{\partial q\partial u} = \sum_{i}W_{i}^{2}\mu_{i}^{2}\beta_{i}\alpha_{i}\approx\sum_{i}\mu _{i}^{2}\alpha_{i}\beta_{i} \tag{20}\]
where, again, the approximations hold for \(\hat{q}\ll 1\) and \(\hat{u}\ll 1\).
We are most interested in the uncertainty in the polarization, \(\Pi\). We can make the coordinate transformation from \((q,u)\) to \((\Pi,\varphi)\), where \(\varphi\) = \(\frac{1}{2}\tan^{-1}(u/q)\) and determine \(S(f_{0},\Pi,\varphi)\):
\[S(\hat{f}_{0},\Pi,\varphi) = 2N\ln[K+K_{\mu}\Pi(A\cos 2\varphi+B\sin 2\varphi)]-2\sum_{i} \ln[1+\Pi w_{i}\mu_{i}\cos(2\psi_{i}-2\varphi)] \tag{22}\]
for which the second derivative with respect to \(\Pi\) is
\[\frac{\partial^{2}S}{\partial\Pi^{2}} = \frac{-2NK_{\mu}^{2}(A\cos 2\varphi+B\sin 2\varphi)^{2}}{[K+K_{ \mu}\Pi(A\cos 2\varphi+B\sin 2\varphi)]^{2}}+2\sum_{i}\frac{w_{i}^{2}\mu_{i}^{2} \cos^{2}(2\psi_{i}-2\varphi)}{[1+\Pi w_{i}\mu_{i}\cos(2\psi_{i}-2 \varphi)]^{2}} \tag{23}\]
with a limit as \(\Pi\longrightarrow 0\) giving
\[\frac{1}{\sigma_{\Pi}^{2}}\approx\sum_{i}w_{i}^{2}\mu_{i}^{2}\cos^{2}(2\psi_{ i}-2\varphi)-NK_{\mu}^{2}(A\cos 2\varphi+B\sin 2\varphi)^{2}/K^{2} \tag{24}\]
The first term on the right hand side is the "normal", expected term that depends on the modulation factor and the cosines of the phase angles. The second term, however, is of great concern because it is negative definite, causing the uncertainty in \(\Pi\) to increase arbitrarily, and because it depends on the true but unknown phase. If either \(A\) and \(B\) are nonzero, then the uncertainty in \(\Pi\) depends upon this phase in a way that can render statistical uncertainties difficult to compute and irregular. Thus, an important characteristic of a good polarimeter is designing it so that \(A\) and \(B\) are as close to zero as possible. As stated in the introduction, the gas pixel detectors on _IXPE_(Rankin et al., 2023) have uniform sensitivity to phase angle for the entire exposure, so \(A\) = \(B\) = 0. The case of a set of Bragg reflectors is worth examining. A single reflector has an ideal angular response that is a delta function in \(\psi\): \(w(\psi)=\delta(\psi-\psi_{0})\). If there are \(n_{B}\) reflectors, then \(w(\psi)=1/n_{B}\sum_{i}^{n_{B}}\delta(\psi-\psi_{i})\). It can be shown that when \(\psi_{i}=\psi_{0}+\pi i/n_{B}\), then \(A\) and \(B\) are identically zero for arbitrary \(\psi_{0}\) when \(n_{B}>2\) and the solution to Eqs. 9 to 11 is not degenerate.1 For the broad-band soft X-ray polarimeter with 3 Bragg reflectors at 120\({}^{\circ}\) to each other (Marshall et al., 2018), \(A\) = \(B\) = 0 if all three channels are operated for the same time period.
Footnote 1: For \(n_{B}\) = 2, \(A\) = \(B\) = 0 also, but then the system of equations becomes degenerate and no unique solution is possible. For example, Eq. 11 is 0 = 0 for \(\psi_{0}\) = 0.
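This claim is easy to verify numerically. The following sketch (illustrative code, with an arbitrary \(\psi_{0}\)) computes \(A\) and \(B\) for \(n_{B}\) ideal reflectors spaced \(\pi/n_{B}\) apart, each contributing a delta function of weight \(1/n_{B}\) to \(w(\psi)\):

```python
import numpy as np

def bragg_asymmetry(n_B, psi0=0.3):
    """Exposure asymmetry terms A, B for n_B ideal reflectors spaced pi/n_B apart."""
    psi = psi0 + np.pi * np.arange(n_B) / n_B
    # delta-function exposure: the integrals A, B reduce to sums with weight 1/n_B
    return np.mean(np.cos(2 * psi)), np.mean(np.sin(2 * psi))

for n in (2, 3, 4):
    print(n, np.round(bragg_asymmetry(n), 12))   # A = B = 0 in each case
```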
## 3 Adding a background term
There are two cases to consider. The easier case is when the background is unpolarized. This case helps set the stage for the case of polarized background, which is important for situations such as when measuring a pulsar inside a pulsar wind nebula or a source in the wings of a brighter, polarized source.
Regardless of whether the background is polarized, a background region of solid angle \(\Omega\) is chosen that is source free and the source region covers a solid angle \(\zeta\Omega\) that is presumed to have the same background characteristics. There are \(N\) events in the source region labeled with index \(i\) and \(N_{B}\) events in the background region labeled with index \(j\). This case is similar to that considered by Elsner et al. (2012) for the case of Gaussian counting statistics. To compare to their analysis more directly, we expect \(C_{B}\equiv\zeta N_{B}\) counts in the source region to be due to background, giving \(N-C_{B}\equiv C_{S}\)_net_ counts in the source region. In this analysis, the exposure is uniform over \(\psi\).
### Unpolarized Background
If the background is unpolarized, the event density is relatively simple:
\[\lambda_{S}(\psi)=\frac{1}{2\pi}\{N_{0}[1+\mu(q\cos 2\psi+u\sin 2\psi)]+\zeta B\} \tag{25}\]
for the source region and \(\lambda_{B}(\psi)=\frac{B}{2\pi}\) for the background region. Here, the notation is simplified by defining \(N_{0}=f_{0}\,T\int\eta(E;\xi)A_{E}dE\), which is just the expected number of counts from the source under some spectral model \(f_{0}\eta(E;\xi)\). Then, the log-likelihood for a Poisson probability distribution of source and background events, \(S=-2\ln L\), is
\[S = -2\sum_{i=1}^{N}\ln\lambda_{S}(\psi_{i})+\frac{1}{\pi}\int_{0}^ {2\pi}[N_{0}(1+\mu q\cos 2\psi+\mu u\sin 2\psi)+\zeta B]d\psi-2\sum_{j=1}^{N_{B}} \ln B+2B \tag{26}\] \[= -2\sum_{i=1}^{N}\ln[N_{0}(1+qc_{i}+us_{i})+\zeta B]+2N_{0}+2B(1+ \zeta)-2N_{B}\ln B \tag{27}\]
(dropping terms independent of \(B,N_{0},\,q,\) or \(u\)). Setting partial derivatives to zero gives
\[\hat{N_{0}} = \sum_{i=1}^{N}\frac{1+\hat{q}c_{i}+\hat{u}s_{i}}{1+\hat{q}c_{i}+ \hat{u}s_{i}+\frac{\zeta\hat{B}}{\hat{N_{0}}}}=\sum w_{i}+\hat{q}\sum w_{i}c_{ i}+\hat{u}\sum w_{i}s_{i}=\sum w_{i} \tag{28}\] \[\hat{B} = \frac{N_{B}}{1+\zeta(1-\frac{\sum w_{i}}{\hat{N_{0}}})}=N_{B}\] (29) \[0 = \sum w_{i}c_{i}\] (30) \[0 = \sum w_{i}s_{i} \tag{31}\]
for \(N_{0}\neq 0\) and defining \(w_{i}=[1+\hat{q}c_{i}+\hat{u}s_{i}+\zeta\hat{B}/\hat{N_{0}}]^{-1}\). Eqs. 30 and 31 have been used to simplify Eq. 28 and Eq. 28 is used to simplify Eq. 29. Substituting \(N_{B}\) for \(B\) in Eq. 28 and transforming from \((q,u)\) to \((\Pi,\varphi)\) gives
\[\hat{N_{0}}=\sum_{i=1}^{N}[1+\hat{\Pi}\mu_{i}\cos(2\psi_{i}+2\hat{\varphi})+ \frac{\zeta N_{B}}{\hat{N_{0}}}]^{-1}, \tag{32}\]
which can be solved for \(\hat{N_0}\) for trial values of \(\hat{\Pi}\) and \(\hat{\varphi}\), making it simpler to minimize \(S\) by substituting \(\hat{N_0}\) and \(\hat{B}=N_{B}\) into Eq. 27. As \(\hat{\Pi}\longrightarrow 0\), \(\hat{N_0}\longrightarrow N-\zeta N_{B}=C_{S}\), as expected, providing a good starting point for estimating \(\hat{N_0}\).
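A fixed-point iteration is one simple way to solve Eq. 32. The following NumPy sketch (an assumption about the numerical scheme, not the author's implementation) starts from the net counts:

```python
import numpy as np

def solve_N0(psi, mu, Pi, phi, zeta, N_B, n_iter=200, tol=1e-8):
    """Fixed-point solution of Eq. 32 for trial (Pi, phi)."""
    N0 = psi.size - zeta * N_B                      # starting point: C_S = N - zeta*N_B
    mod = Pi * mu * np.cos(2 * psi + 2 * phi)       # per-event modulation term
    for _ in range(n_iter):
        N0_new = np.sum(1.0 / (1.0 + mod + zeta * N_B / N0))
        if abs(N0_new - N0) < tol:
            break
        N0 = N0_new
    return N0
```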
The minimum detectable polarization (MDP) for this case can be estimated by computing the uncertainty in \(\Pi\), \(\sigma_{\Pi}\), by
\[\frac{\partial^{2}S}{\partial\Pi^{2}}=\frac{2}{\sigma_{\Pi}^{2}}=2\sum_{i=1}^{ N}w_{i}^{2}\mu_{i}^{2}\cos^{2}(2\psi_{i}+2\hat{\varphi}) \tag{33}\]
Then, as \(\hat{\Pi}\longrightarrow 0\), \(w_{i}\longrightarrow[1+\zeta N_{B}/\hat{N_{0}}]^{-1}\), so
\[\sigma_{\Pi}\longrightarrow\frac{1+\zeta N_{B}/\hat{N_{0}}}{[\sum_{i=1}^{N} \mu_{i}^{2}\cos^{2}(2\psi_{i}+2\hat{\varphi})]^{1/2}}=\frac{\sqrt{2}(1+\zeta N_ {B}/\hat{N_{0}})}{[N\langle\mu_{i}^{2}\rangle]^{1/2}}=\frac{\sqrt{2N}}{(N- \zeta N_{B})\sqrt{\langle\mu_{i}^{2}\rangle}}=\frac{\sqrt{2(C_{S}+C_{B})}}{C_{S }\sqrt{\langle\mu_{i}^{2}\rangle}} \tag{34}\]
where the first step follows as \(\mu_{i}\) and \(\psi_{i}\) are uncorrelated and the second step follows from the asymptotic value of \(\hat{N_{0}}\). Finally, the MDP at 99% confidence is
\[\mathrm{MDP}_{99}=3.03\sigma_{\Pi}=\frac{4.29\sqrt{C_{S}+C_{B}}}{C_{S}\sqrt{\langle\mu_{i}^{2}\rangle}}\ \, \tag{35}\]
just as found by Elsner et al. (2012) for Gaussian statistics with the exception of the substitution of the rms of \(\mu_{i}\) for \(\mu\).
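For reference, Eq. 35 translates directly into code; a one-line sketch is:

```python
import numpy as np

def mdp99(mu, C_S, C_B):
    """Eq. 35: minimum detectable polarization at 99% confidence."""
    return 4.29 * np.sqrt(C_S + C_B) / (C_S * np.sqrt(np.mean(mu ** 2)))
```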
### Polarized Background
It is more likely that the X-ray background is partially polarized as it often contains some fraction of the source as well (due to the extent of the telescope's point spread function). The background is assumed to be primarily due to photons, essentially indistinguishable from source events, susceptible to the same modulation factor as source events are. If the background is polarized, the event density has added terms giving the normalized \(u\) and \(q\) of the background, denoted by \(q_{b}\) and \(u_{b}\):
\[\lambda_{S}(\psi) = \frac{1}{2\pi}\left\{N_{0}[1+\mu(q\cos 2\psi+u\sin 2\psi)]+ \zeta B[1+\mu(q_{b}\cos 2\psi+u_{b}\sin 2\psi)]\right\} \tag{36}\] \[\lambda_{B}(\psi) = \frac{B}{2\pi}\left[1+\mu(q_{b}\cos 2\psi+u_{b}\sin 2\psi)\right] \tag{37}\]
for the source and background regions, respectively. Then,
\[S = -2\sum_{i=1}^{N}\ln\lambda_{S}(\psi_{i})+2\int_{0}^{2\pi}\lambda_{S}(\psi)d\psi-2\sum_{j=1}^{N_{B}}\ln\lambda_{B}(\psi_{j})+2\int_{0}^{2\pi}\lambda_{B}(\psi)d\psi \tag{38}\] \[= -2\sum_{i=1}^{N}\ln[N_{0}(1+qc_{i}+us_{i})+\zeta B(1+q_{b}c_{i}+u _{b}s_{i})]+2N_{0}+2B(1+\zeta)-2N_{B}\ln B-2\sum_{j=1}^{N_{B}}\ln(1+q_{b}c_{j}+u_{b}s_{j}) \tag{39}\]
(dropping terms independent of \(B\), \(N_{0}\), \(q\), \(u\), \(q_{b}\), or \(u_{b}\)) and again defining \(c_{i}=\mu_{i}\cos 2\psi_{i}\) and \(s_{i}=\mu_{i}\sin 2\psi_{i}\). Setting partial derivatives to zero gives
\[\hat{N}_{0} = \sum_{i=1}^{N}\frac{1+\hat{q}c_{i}+\hat{u}s_{i}}{1+\hat{q}c_{i}+ \hat{u}s_{i}+\frac{\zeta\hat{B}}{\hat{N}_{0}}(1+\hat{q}_{b}c_{i}+\hat{u}_{b}s_{i})}= \sum_{i}^{N}W_{i} \tag{40}\] \[\hat{B} = N_{B} \tag{41}\] \[0 = \sum W_{i}c_{i} \tag{42}\] \[0 = \sum W_{i}s_{i} \tag{43}\] \[0 = \sum V_{j}c_{j} \tag{44}\] \[0 = \sum V_{j}s_{j}\ \ \, \tag{45}\]
defining \(W_{i}=[1+\hat{q}c_{i}+\hat{u}s_{i}+\zeta\hat{B}(1+\hat{q}_{b}c_{i}+\hat{u}_{b }s_{i})/\hat{N}_{0}]^{-1}\) and now \(V_{j}=[1+\hat{q}_{b}c_{j}+\hat{u}_{b}s_{j}]^{-1}\). As before, Eqs. 42, 43, and 40 have been used to derive Eq. 41. Eqs. 44 and 45 can be solved for \(\hat{q}_{b}\) and \(\hat{u}_{b}\) as in Marshall (2021), giving
\[\hat{q}_{b} \approx \sum_{j}c_{j}/\sum_{j}c_{j}^{2} \tag{46}\] \[\hat{u}_{b} \approx \sum_{j}s_{j}/\sum_{j}s_{j}^{2} \tag{47}\]
when the background is weakly polarized, where the sums run over the background-region events. Not surprisingly, the optimal Stokes parameters for the background are derived from the background region alone. Now the background Stokes parameters can be used in Eq. 40 (via the definition of \(W_{i}\)) to derive an equation involving the source Stokes parameters, similar to Eq. 32, that can be solved iteratively for \(\hat{N}_{0}\) for trial values of \(\hat{\Pi}\) and \(\hat{\varphi}\).
Finally, Eq. 33 is modified to be
\[\frac{2}{\sigma_{\Pi}^{2}}=2\sum_{i=1}^{N}W_{i}^{2}\mu_{i}^{2}\cos^{2}(2\psi_{ i}+2\hat{\varphi})\ \ . \tag{48}\]
and taking the limiting case as \(\Pi\longrightarrow 0\) gives
\[\sigma_{\Pi}^{2}\longrightarrow\frac{(1+\zeta B/\hat{N}_{0})^{2}}{\langle\mu^{2} \rangle\sum_{i}\frac{\cos^{2}(2\psi_{i}+2\hat{\varphi})}{[1+\zeta B\Pi_{B}\mu_{i}\cos(2\psi_{i}+2\varphi_{B})/N]^{2}}}=\frac{(C_{S}+C_{B})^{2}}{C_{S}^{2 }\langle\mu^{2}\rangle\sum_{i}\frac{\cos^{2}(2\psi_{i}+2\hat{\varphi})}{[1 +C_{B}\Pi_{B}\mu_{i}\cos(2\psi_{i}+2\varphi_{B})/(C_{S}+C_{B})]^{2}}} \tag{49}\]
after transforming from \(q_{b},u_{b}\) to \(\Pi_{B},\varphi_{B}\). Without the term in the denominator in the sum, the sum would average to \(N/2\) = \((C_{S}+C_{B})/2\), matching Eq. 34. Because the extra term is positive definite, it will reduce the sum, thereby increasing \(\sigma_{\Pi}\), making the estimate of \(\Pi\) more uncertain when there is polarized background, as expected. The magnitude of the increase in the uncertainty depends on the ratio of the expected polarized counts to the total counts in the source region but also on the correlation between the source and background polarization phases.
## 4 An Unbinned Model Test
Consider a Kolmogorov test of conditional probabilities for a model where \(q\) and \(u\) depend on \(\xi\), representing time, spatial location, or energy. For example, a model where the polarization fraction is constant with time while the EVPA rotates uniformly with rate \(\omega\) could be specified as
\[q(t) = \Pi\cos 2(\phi_{0}+\omega t) \tag{50}\] \[u(t) = \Pi\sin 2(\phi_{0}+\omega t) \tag{51}\]
where \(\phi_{0}\) and \(\omega\) are (fitted) parameters of the model to be tested, \(\xi\) = \(t\), and each event has a specified value of \(t\) given by \(t_{i}\). This model was applied to _IXPE_ data from Mk 421, finding rotation rates of \(\omega\) = \(80\pm 9^{\circ}\)/d in one observation and \(\omega\) = \(91\pm 8^{\circ}\)/d in another (Di Gesu et al., 2023).
Generally, using the source region event density given by Eq. 25, the conditional probability that \(\psi\leq\psi_{i}\) for event \(i\) given that \(\xi\) = \(\xi_{i}\) is
\[C(\leq\psi_{i}\mid q[\xi_{i}],\;u[\xi_{i}],\;\hat{N}_{0},\;\hat{B}) = \frac{\int_{0}^{\psi_{i}}\lambda(\psi;\xi_{i})d\psi}{\int_{0}^{2\pi}\lambda(\psi;\xi_{i})d\psi} \tag{52}\] \[= \frac{\psi_{i}(1+\zeta N_{B}/\hat{N}_{0})+\mu_{i}([q_{i}\sin 2\psi_{ i}]/2+u_{i}\sin^{2}\psi_{i})}{2\pi(1+\zeta N_{B}/\hat{N}_{0})} \tag{53}\]
where \(q(\xi_{i})\equiv q_{i}\) and \(u(\xi_{i})\equiv u_{i}\). As \(\Pi\longrightarrow 0\), \(C(\leq\psi_{i})\) approaches the uniform distribution, as expected. Under the hypothesis that the model is correct, though, we expect Eq. 53 to give values that are uniformly distributed between 0 and 1 even if \(\Pi\) is non-zero. Thus, a Kolmogorov test of the cumulative distribution of \(C(\leq\psi_{i})\) values should provide a valid unbinned test of the event angles.
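Before turning to applications, here is a minimal Python sketch of the test (the actual implementation described below is in IDL; this version is illustrative only, with hypothetical array names, and assumes the unpolarized-background form of Eq. 53):

```python
import numpy as np
from scipy.stats import kstest

def conditional_cdf(psi, mu, q, u, bkg_ratio):
    """Eq. 53: C(<= psi_i | q_i, u_i), with bkg_ratio = zeta*N_B/N0_hat
    and psi in [0, 2*pi)."""
    b = 1.0 + bkg_ratio
    num = psi * b + mu * (q * np.sin(2 * psi) / 2.0 + u * np.sin(psi) ** 2)
    return num / (2.0 * np.pi * b)

def unbinned_model_test(psi, mu, q_of_xi, u_of_xi, bkg_ratio):
    """K-S test: under a correct model q(xi), u(xi), the C values
    should be uniform on [0, 1]."""
    c_vals = conditional_cdf(psi, mu, q_of_xi, u_of_xi, bkg_ratio)
    return kstest(c_vals, "uniform")  # returns (D statistic, p-value)
```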
This test was implemented in Interactive Data Language (IDL) and applied to several different data sets from _IXPE_. In each case, events in the 2-8 keV band were used, the source region was 60\({}^{\prime\prime}\) in radius, and the background was taken from an annulus 200\({}^{\prime\prime}\) to 300\({}^{\prime\prime}\) from the point source. The first source, Mk 501 (_IXPE_ data set 01004501), was found to be 10 \(\pm\) 2% polarized (Liodakis et al., 2022). For the null hypothesis that Mk 501 is unpolarized, the distribution of \(C(\leq\psi_{i})\) deviated from the uniform distribution by 0.0085 with a set of 85,388 events in the source region; thus, the null hypothesis is rejected at a chance probability of less than \(8\times 10^{-6}\). A likelihood ratio test rejects the null hypothesis at a chance probability of \(7\times 10^{-7}\) in this case, providing a somewhat better result for a simple test that the source is polarized. Under the hypothesis that the source is polarized, with parameters determined using the maximum likelihood method in § 3, the deviation dropped to 0.00196, for a K-S probability of 0.90; thus, the constant polarized model with fixed \(\Pi\) and \(\varphi\) is acceptable, a conclusion that was not available to Liodakis et al. (2022). Similarly, constant rotation models for the second and third _IXPE_ observations of Mk 421 (data sets 01003801 and 01003901, reported by Di Gesu et al. (2023)) are accepted with probabilities of 0.97 and 0.78, respectively. Finally, the test was run on data from Cen A (_IXPE_ data set 01004301), for which no polarization was detected; the upper limit to the polarization was 6.5% at 99% confidence (Ehlert et al., 2022). For Cen A, the null hypothesis (that the source is unpolarized) is not rejected, giving a maximum deviation of 0.0039 with 28,078 events and a K-S probability of 0.79. In summary, while an analysis may provide parameters of a polarization model, this test can be used on unbinned data to test the validity of the model, providing the user with a diagnostic that could indicate whether the model is inadequate.
## 5 Summary
The unbinned likelihood method for X-ray polarimetry data analysis has been extended in several ways:
1. Because many X-ray polarimeters must be rotated in order to be sensitive to arbitrary polarization position angles, an exposure weighting approach was added. A simple diagnostic term was also developed that can inform the user when polarization measurements may be deleteriously affected.
2. A way of accounting for background has been added to the basic formalism. The background can be unpolarized but it may be more common to have a polarized background, such as when observing a point source in a polarized nebula or near a brighter polarized source.
3. An unbinned test using event phase angles was proposed that can be used to determine whether a time- or energy-dependent model may be rejected. The test was applied successfully to several _IXPE_ data sets.
Funding for this work was provided in part by contract 80MSFC17C0012 from the MSFC to MIT in support of the _IXPE_ project. This research used data products provided by the IXPE Team (MSFC, SSDC, INAF, and INFN) and distributed with additional software tools by the High-Energy Astrophysics Science Archive Research Center (HEASARC), at NASA Goddard Space Flight Center (GSFC). Support for this work was provided in part by the National Aeronautics and Space Administration (NASA) through the Smithsonian Astrophysical Observatory (SAO) contract SV3-73016 to MIT for support of the Chandra X-Ray Center (CXC), which is operated by SAO for and on behalf of NASA under contract NAS8-03060.
_Facilities: IXPE_
_Software:_ Interactive Data Language (IDL)
|
2309.14350 | Training neural mapping schemes for satellite altimetry with simulation
data | Satellite altimetry combined with data assimilation and optimal interpolation
schemes have deeply renewed our ability to monitor sea surface dynamics.
Recently, deep learning (DL) schemes have emerged as appealing solutions to
address space-time interpolation problems. The scarcity of real altimetry
datasets, in terms of space-time coverage of the sea surface, however impedes
the training of state-of-the-art neural schemes on real-world case-studies.
Here, we leverage both simulations of ocean dynamics and satellite altimeters
to train simulation-based neural mapping schemes for the sea surface height and
demonstrate their performance for real altimetry datasets. We analyze further
how the ocean simulation dataset used during the training phase impacts this
performance. This experimental analysis covers both the resolution from
eddy-present configurations to eddy-rich ones, forced simulations vs.
reanalyses using data assimilation and tide-free vs. tide-resolving
simulations. Our benchmarking framework focuses on a Gulf Stream region for a
realistic 5-altimeter constellation using NEMO ocean simulations and 4DVarNet
mapping schemes. All simulation-based 4DVarNets outperform the operational
observation-driven and reanalysis products, namely DUACS and GLORYS. The more
realistic the ocean simulation dataset used during the training phase, the
better the mapping. The best 4DVarNet mapping was trained from an eddy-rich and
tide-free simulation datasets. It improves the resolved longitudinal scale from
151 kilometers for DUACS and 241 kilometers for GLORYS to 98 kilometers and
reduces the root mean squared error (RMSE) by 23% and 61%. These results open
research avenues for new synergies between ocean modelling and ocean
observation using learning-based approaches. | Quentin Febvre, Julien Le Sommer, Clément Ubelmann, Ronan Fablet | 2023-09-19T14:32:25Z | http://arxiv.org/abs/2309.14350v1 | # Training neural mapping schemes for satellite altimetry with simulation data
Correspondence to: Quentin Febvre, [email protected]
**Key Points:**
* We propose to train neural mapping schemes for real altimeter data from ocean simulation data.
* The trained neural schemes significantly outperform the operational mapping of real altimetry data for a Gulf Stream case-study.
* More realistic simulation datasets improve the performance of the trained neural mapping with a 20% improvement in the spatial scales.
###### Abstract
Satellite altimetry combined with data assimilation and optimal interpolation schemes have deeply renewed our ability to monitor sea surface dynamics. Recently, deep learning (DL) schemes have emerged as appealing solutions to address space-time interpolation problems. The scarcity of real altimetry datasets, in terms of space-time coverage of the sea surface, however impedes the training of state-of-the-art neural schemes on real-world case-studies. Here, we leverage both simulations of ocean dynamics and satellite altimeters to train simulation-based neural mapping schemes for the sea surface height and demonstrate their performance for real altimetry datasets. We analyze further how the ocean simulation dataset used during the training phase impacts this performance. This experimental analysis covers both the resolution from eddy-present configurations to eddy-rich ones, forced simulations vs. reanalyses using data assimilation and tide-free vs. tide-resolving simulations. Our benchmarking framework focuses on a Gulf Stream region for a realistic 5-altimeter constellation using NEMO ocean simulations and 4DVarNet mapping schemes. All simulation-based 4DVarNets outperform the operational observation-driven and reanalysis products, namely DUACS and GLORYS. The more realistic the ocean simulation dataset used during the training phase, the better the mapping. The best 4DVarNet mapping was trained from an eddy-rich and tide-free simulation dataset. It improves the resolved longitudinal scale from 151 kilometers for DUACS and 241 kilometers for GLORYS to 98 kilometers and reduces the root mean squared error (RMSE) by 23% and 61%. These results open research avenues for new synergies between ocean modelling and ocean observation using learning-based approaches.
## Plain Language Summary
For an artificial intelligence (AI) to learn, one needs to describe a task using data and an evaluation procedure. Here we aim at constructing images related to the ocean surface currents. The satellite data we use provide images of the ocean surface with a lot of missing data (around 95% of missing pixels for a given day), and we aim at finding the values of the missing pixels. Because we don't know the full image, it is challenging to train an AI on this task using only the satellite data. However, today's physical knowledge makes it possible to numerically simulate oceans on big computers. For these simulated oceans, we have access to the gap-free image, so we can train AI models by first hiding some pixels and checking if the model fills the gaps with the correct values. Here, we explore under which conditions AIs trained on simulated oceans are useful for the real ocean. We show that today's simulated oceans work well for training an AI on this task and that training on more realistic simulated oceans improves the performance of the AI!
## 1 Introduction
Satellite altimeters have brought a great leap forward in the observation of sea surface height on a global scale since the 80's.
Altimetry data have greatly contributed to the monitoring and understanding of key processes such as the sea-level rise and the role of mesoscale dynamics. The scarce and irregular sampling of the measurements presents a challenge for training deep neural networks. The retrieval of mesoscale-to-submesoscale sea surface dynamics for horizontal scales smaller than 150 km however remains a challenge for operational systems based on optimal interpolation (Taburet et al., 2019) and data assimilation (Lellouche et al., 2021) schemes. This has motivated a wealth of research to develop novel mapping schemes (Ballarotta et al., 2020; Ubelmann et al., 2021; Guillou et al., 2021).
In this context, data-driven and learning-based approaches (Alvera Azcarate et al., 2005; Barth et al., 2022; Lguensat et al., 2017; Fablet, Amar, et al., 2021; Martin et al.,
2023) appear as appealing alternatives to make the most of the available observation and simulation datasets. Especially, Observing System Simulation Experiments (OSSE) have stressed the potential of neural schemes trained through supervised learning for the mapping of satellite-derived altimetry data (Fablet, Amar, et al., 2021; Beauchamp et al., 2023). Their applicability to real datasets has yet to be assessed and recent studies have rather explored learning strategies from real gappy multi-year altimetry datasets (Martin et al., 2023). Despite promising results, schemes trained with unsupervised strategies do not reach the relative improvement of the operational processing suggested by OSSE-based studies.
Here, we go beyond using OSSEs as benchmarking-only testbeds. We explore their use for the training of neural mapping schemes and address the space-time interpolation of real satellite altimetry observations. Through numerical experiments on a Gulf Stream case-study with a 5-nadir altimeter constellation, our main contributions are three-fold. We demonstrate the relevance of the simulation-based learning of neural mapping schemes and their generalization performance for real nadir altimetry data. We benchmark the proposed approach with state-of-the-art operational products as well as neural schemes trained from real altimetry datasets. We also assess how the characteristics of the training datasets, especially in terms of resolved ocean processes, drives the mapping performance. To ensure the reproducibility of our results, our code is made available through an open source license along with the considered datasets and the trained models (Febvre, 2023).
The content of this paper is organized as follows. Section 2 offers background information on related work, Section 3 presents our method, Section 4 reports our numerical experiments, and Section 5 elaborates on our main contributions.
## 2 Background
### Gridded satellite altimetry products
The ability to produce gridded maps from scattered along-track nadir altimeter measurements of sea surface height is key to the exploitation of altimeter data in operational services and science studies (Abdalla et al., 2021). As detailed below, we can distinguish three categories of approaches to produce such maps: reanalysis products (Lellouche et al., 2021) using data assimilation schemes, observation-based products (Taburet et al., 2019) and learning-based approaches (Fablet, Amar, et al., 2021).
Reanalysis products such as the GLORYS12 reanalysis (Lellouche et al., 2021) leverage the full expressiveness of state-of-the-art ocean models. They aim at retrieving ocean state trajectories close to observed quantities through data assimilation methods including among others Kalman filters and variational schemes (Carrassi et al., 2018). Such reanalyses usually exploit satellite-derived and in situ data sources. For instance, GLORYS12 reanalysis assimilates satellite altimetry data, but also satellite-derived observations of the sea surface temperature, sea-ice concentration as well as in situ ARGO data (Wong et al., 2020).
The second category involves observation-based products. In contrast to reanalyses, they only rely on altimetry data and address a space-time interpolation problem. They usually rely on simplifying assumptions on sea surface dynamics. In this category, optimal-interpolation-based product DUACS (Data Unification and Altimeter Combination System) (Taburet et al., 2019) exploits a covariance-based prior, while recent studies involve quasi-geostrophic dynamics to guide the interpolation scheme (Guillou et al., 2021; Ballarotta et al., 2020).
Data-driven and learning-based approaches form a third category of SSH mapping schemes. Similarly to observation-based methods, they are framed as interpolation schemes.
Especially deep learning schemes have gained some attention. Recent studies have explored different neural architectures both for real and OSSE altimetry datasets (Archambault et al., 2023; Beauchamp et al., 2021; Martin et al., 2023). These studies investigate both different training strategies as well as different neural architectures from off-the-shelf computer vision ones such as convolutional LSTMs and UNets (Ronneberger et al., 2015) to data-assimilation-inspired ones (Beauchamp et al., 2021; Fablet, Chapron, et al., 2021).
### Ocean Modeling and OSSE
Advances in modeling and simulating ocean physics have largely contributed to a better understanding of the processes involved in the earth system and to the development of operational oceanography (Barnier et al., 2006; Ajayi et al., 2020). High-resolution simulations used in Observing System Simulation Experiments (OSSE) also provide a great test-bed for the design and evaluation of new ocean observation systems (Benkiran et al., 2021). The availability of numerical model outputs enables the computation of interpretable metrics directly on the quantities of interest. This avoids challenges met when working solely with observation data that may be incomplete, noisy or indirectly related to the desired quantity. For example, in the case of the recently launched SWOT mission, OSSEs combined ocean and instrument simulations to address calibration issues and interpolation performance for SWOT altimetry data (Dibarboure et al., 2022). Such OSSEs have also promoted novel developments for the interpolation of satellite altimetry such as the BFN-QG and 4DVarNet schemes (Guillou et al., 2021; Beauchamp et al., 2023).
In OSSE settings, we can train learning-based mapping schemes in a supervised manner using model outputs as the "ground truth" during the training phase. Nonetheless, these training methods cannot be straightforwardly applied to Observing System Experiments (OSEs) due to a lack of comprehensive groundtruthed observation datasets. Applied machine learning practitioners often grapple with insufficient amounts of labelled data during the training of supervised learning schemes, as the collection of large annotated datasets for a specific task can be costly or unattainable. Proposed solutions include the exploitation of large existing datasets (such as ImageNet Deng et al. (2009)) to train general purpose models (like He et al. (2016)). Another approach involves the generation of synthetic datasets to facilitate the creation of groundtruthed samples (Gomez Gonzalez et al., 2017; Dosovitskiy et al., 2015). OSSEs, which combine ocean model outputs and observing system simulators (Boukabara et al., 2018), can deliver such large synthetic groundtruthed datasets. We propose to investigate how OSSE-based training strategies apply to the analysis of real satellite altimetry datasets. Recent results of an SSH super-resolution model trained on simulation datasets and evaluated on real ones (Buongiorno Nardelli et al., 2022) support the relevance of such strategies.
### Physics-aware deep-learning
In the last decades, DL advances combined with the rise in computational resources and amount of data have shown the power of extracting knowledge from data in domains ranging from computer vision to language processing (LeCun et al., 2015). Yet, despite the universality of DL architectures (Hornik et al., 1989), a central challenge persists in learning from data: the generalization performance beyond the distribution of the training data. To tackle this problem, the literature includes a variety of strategies such as data augmentation (Shorten and Khoshgoftaar, 2019) and regularization techniques, including dropout layers (Srivastava et al., 2014) and weight decay schemes (Krizhevsky et al., 2012). This is of critical importance for physical systems, where models trained on past data will be challenged when the system evolves and reaches dynamics absent from the training data. We can see evidence of this shortcoming in the instability challenges faced by neural closures for climate models (Brenowitz et al., 2020).
There have been a variety of approaches to harness physical priors within learning schemes to address this issue. Some inject trainable components into classical integration schemes of physical models, such as Yin et al. (2021). Others leverage physical priors within their learning setups, which can be used in the training objective (Raissi et al., 2019; Greydanus et al., 2019), as well as in the architecture (Li et al., 2020; Wang et al., 2020). However, most of these works have focused on relatively simple physical models and it remains challenging to combine current state-of-the-art ocean models with such methods. Obstacles include the complexity and cost of running the physical models, the differences in programming tools and the computing infrastructures used in each domain, as well as the availability of automatic differentiation tools for state-of-the-art ocean models.
The proposed simulation-based training strategy offers another way to benefit from the advances in high-resolution ocean modeling in the design of deep neural models for ocean reanalysis problems.
## 3 Method
### Overview
We designate our approach as "simulation-based": it consists in leveraging ocean models and simulations of observing systems to design supervised training environments. In this section, we describe the proposed method for assessing the potential of simulation-based neural schemes for the mapping of real altimetry tracks. We describe the architecture considered in our study, as well as the different datasets used for training purposes. We also detail our simulation-based training setup and the proposed evaluation framework on real altimetry.
Figure 1: **Overview of the experimental setup**. On the left side we display the simulation-based training strategy based on an ocean simulation which will be used for 1) generating synthetic observations and 2) computing the training objective of the neural mapping scheme. On the right side we show the evaluation principle of splitting the available satellite observations to evaluate the method on data that were not used for the inference.
### Neural mapping scheme
The neural mapping scheme considered for this study is the 4DVarNet framework (Fablet et al., 2021). We choose this scheme due to its performance in the OSSE setup. As reported in Beauchamp et al. (2023), it significantly outperforms the DUACS product (Taburet et al., 2019) in the targeted Gulf Stream region. 4DVarNet relies on a variational data assimilation formulation. The reconstruction results from the minimization of a variational cost. This cost encapsulates a data fidelity term and a regularization term. It exploits a prior on the space-time dynamics through a convolutional neural network inspired by Fablet et al. (2018), and an iterative gradient-based minimization based on a recurrent neural network as introduced by Andrychowicz et al. (2016). The overall architecture and components are similar to those presented in Beauchamp et al. (2023). We adapt some implementation details based on cross-validation experiments to improve the performance and reduce the training time. We refer the reader to the code for more details (Febvre, 2023).
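For intuition, here is a deliberately simplified PyTorch sketch of such a trainable variational solver; the class names, the form of the variational cost, and the update network are illustrative assumptions, not the released architecture (which is available in Febvre, 2023):

```python
import torch
import torch.nn as nn

class VarCost(nn.Module):
    """Variational cost: observation fidelity plus a learned prior term."""
    def __init__(self, prior: nn.Module):
        super().__init__()
        self.prior = prior  # a small conv-net standing in for the dynamical prior

    def forward(self, x, obs, mask):
        data_term = ((x - obs)[mask] ** 2).mean()       # fit observed pixels only
        prior_term = ((x - self.prior(x)) ** 2).mean()  # penalize states far from the prior
        return data_term + prior_term

class UnrolledSolver(nn.Module):
    """Unrolled gradient descent: a trainable net maps cost gradients to updates."""
    def __init__(self, cost: VarCost, update: nn.Module, n_steps: int = 15):
        super().__init__()
        self.cost, self.update, self.n_steps = cost, update, n_steps

    def forward(self, obs, mask):
        # initialize with the (zero-filled) observations, then iterate
        x = torch.where(mask, obs, torch.zeros_like(obs)).requires_grad_(True)
        for _ in range(self.n_steps):
            grad, = torch.autograd.grad(self.cost(x, obs, mask), x, create_graph=True)
            x = x - self.update(grad)  # learned descent step (a recurrent net in 4DVarNet)
        return x
```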
### SSH Data
We use numerical simulations of ocean general circulation models (OGCM) to build our reference SSH datasets. Such simulations involve a multitude of decisions that affect the resulting simulated SSH. Here we consider NEMO (Nucleus for European Modelling of the Ocean) (Gurvan et al., 2022), which is among the state-of-the-art OGCMs in operational oceanography (Ajayi et al., 2020) as well as in climate studies (Voldoire et al., 2013). The selected SSH datasets reported in Table 1 focus on three main aspects: the added-value of high-resolution eddy-rich simulations, the impact of reanalysis datasets and the relevance of tide-resolving simulations.
In order to evaluate the impact of eddy-rich simulations, we consider NATL60, GLORYS12-f and ORCA025 free runs, respectively with a horizontal grid resolution of \(1/60^{\circ}\), \(1/12^{\circ}\), and \(1/4^{\circ}\). Finer grids allow for more processes to be simulated. We therefore expect higher-resolution simulations to exhibit structures closer to the real ocean and the associated trained deep learning model to perform better. Regarding the impact of reanalysis data, we compare numerical experiments with the GLORYS12-r reanalysis and the associated free run GLORYS12-f. This reanalysis dataset relies on the assimilation of temperature, sea level and sea ice concentration observations. Besides, the recent eNATL60 twin simulations eNATL60-t and eNATL60-0 allow us to evaluate the impact of tide-resolving simulations. We summarize in Table 1 the characteristics of the different datasets.
\begin{table}
\begin{tabular}{l l||c c c c} \hline \hline & & Resolution & Reanalysis & Tide & DAC \\ \hline NATL60 & (Ajayi et al., 2020) & \(1/60^{\circ}\) & No & No & No \\ eNATL60-t & (Brodeau et al., 2020) & \(1/60^{\circ}\) & No & Yes & Yes \\ eNATL60-0 & (Brodeau et al., 2020) & \(1/60^{\circ}\) & No & No & Yes \\ GLORYS12-r & (Lellouche et al., 2021) & \(1/12^{\circ}\) & Yes & No & No \\ GLORYS12-f & (Lellouche et al., 2021) & \(1/12^{\circ}\) & No & No & No \\ ORCA025 & (Barnier et al., 2006) & \(1/4^{\circ}\) & No & No & No \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Summary table of the different synthetic SSH fields used for training**. The last column indicates whether the Dynamic Atmospheric Correction was applied to the synthetic SSH. It justifies the presence of both eNATL60-0 and NATL60, which isolates the impacts of resolution and tide.
### OSSE-based training setup
We sketch the proposed OSSE-based training setup on the left side of Figure 1. In order to fairly evaluate the datasets' quality as a training resource, we standardize the training procedure. We regrid all simulations to the same resolution (\(1/20^{\circ}\)) and we use daily-averaged SSH fields as training targets. We generate noise-free pseudo-observations by sampling values of the daily-averaged fields corresponding to realistic orbits of a 5-altimeter constellation. We train all models from a one-year dataset in a Gulf Stream domain from (66\({}^{\circ}\)W, 32\({}^{\circ}\)N) to (54\({}^{\circ}\)W, 44\({}^{\circ}\)N) in which we keep the same two months for validation. The hyper-parameters of the model and training procedure, such as the number of epochs and the learning-rate scheduler, are the same for all the experiments. The detailed configuration can be found by the reader in the available implementation. As the training objective, we combine the mean square errors for the SSH fields and the amplitude of the gradients as well as a regularization loss for the prior model.
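To make the pseudo-observation step concrete, here is a minimal sketch under simplifying assumptions (hypothetical variable names and a crude nearest-neighbour assignment onto the common grid; the actual pipeline details are in the released code):

```python
import numpy as np

def make_pseudo_obs(ssh_daily, grid_lat, grid_lon, track_day, track_lat, track_lon):
    """Mask a (time, lat, lon) daily-averaged SSH array along altimeter ground
    tracks: the returned field is NaN everywhere except at sampled points."""
    obs = np.full_like(ssh_daily, np.nan)
    i = np.clip(np.searchsorted(grid_lat, track_lat), 0, grid_lat.size - 1)
    j = np.clip(np.searchsorted(grid_lon, track_lon), 0, grid_lon.size - 1)
    obs[track_day, i, j] = ssh_daily[track_day, i, j]
    return obs
```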
### OSE-based evaluation setup
As sketched on the right side of Figure 1, the evaluation setup relies on real altimetry data from the constellation of 6 satellites from 2017 (SARAL/AltiKa, Jason 2, Jason 3, Sentinel 3A, Haiyang-2A and Cryosat-2). We apply the standardized setup presented in a data-challenge [https://github.com/ocean-data-challenges/2021a_SSH_mapping_OSE](https://github.com/ocean-data-challenges/2021a_SSH_mapping_OSE). We use the data from the first five satellites as inputs for the mapping and the last one (Cryosat-2) for computing the performance metrics. We compute these metrics in the along-track geometry. The evaluation domain spans from (65\({}^{\circ}\)W, 33\({}^{\circ}\)N) to (55\({}^{\circ}\)W, 43\({}^{\circ}\)N) and the evaluation period from January 1\({}^{st}\) to December 31\({}^{st}\) 2017. Given \(\eta_{c2}\) and \(\hat{\eta}\) the measured SSH and the reconstructed SSH respectively, we compute the following two metrics:
* \(\mu_{ssh}\) is a score based on the normalized root mean squared (nRMSE) error computed as \(1-\dfrac{RMS(\hat{\eta}-\eta_{c2})}{RMS(\eta_{c2})}\)
* \(\lambda_{x}\) is the wavelength at which the power spectrum density (PSD) score \(1-\dfrac{PSD(\hat{\eta}-\eta_{c2})}{PSD(\eta_{c2})}\) crosses the 0.5 threshold, which characterizes the scales resolved by the reconstruction (the error below that wavelength makes up more than half of the total signal); a minimal numerical sketch of both scores follows this list.
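The sketch below is illustrative only; it assumes the along-track error and reference PSDs have already been computed, with wavenumbers in cycles per kilometer, sorted in increasing order and with a single 0.5 crossing:

```python
import numpy as np

def mu_score(eta_hat, eta_c2):
    """nRMSE score: 1 - RMS(eta_hat - eta_c2) / RMS(eta_c2)."""
    return 1.0 - np.sqrt(np.mean((eta_hat - eta_c2) ** 2)) / np.sqrt(np.mean(eta_c2 ** 2))

def resolved_scale_km(wavenumber, psd_err, psd_ref):
    """Wavelength (km) at which the PSD score 1 - PSD(err)/PSD(ref) drops to 0.5."""
    score = 1.0 - psd_err / psd_ref
    resolved = score >= 0.5
    return 1.0 / wavenumber[resolved].max() if resolved.any() else np.inf
```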
In Table 3, we also consider the root mean square error (RMSE) as well as the nRMSE score of the sea level anomaly \(\mu_{sla}\) obtained by subtracting the mean dynamic topography from the SSH. Lastly, we assess the performance degradation resulting from the transition from simulated to real data by quantifying the improvement relative to DUACS in the resolved scale \(\lambda_{x}\) on our OSE setup as well as on the OSSE benchmarking setup proposed in Guillou et al. (2021). This benchmarking setup relies on the NATL60-CJM165 OSSE dataset. We refer the reader to [https://github.com/ocean-data-challenges/2020a_SSH_mapping_NATL60](https://github.com/ocean-data-challenges/2020a_SSH_mapping_NATL60) for a detailed description of this experimental setup.
Figure 2: **Sample kinetic energy and relative vorticity of the training and reconstruction data on January 6\({}^{th}\)**. The reconstructed year is 2017 while the training year varies depending on the simulation. The first two columns (a) and (b) show the training data while columns (c) and (d) show the associated 4DVarNet reconstruction. The kinetic energy is displayed in columns (a) and (c) and the relative vorticity normalized by the local Coriolis parameter in columns (b) and (d). Each row shows the experiment using respectively: ORCA025 (I), GLORYS12-f (II), GLORYS12-r (III), NATL60 (IV), eNATL60-t (V) and eNATL60-0 (VI).
## 4 Results
This section details our numerical experiments for the considered real altimetry case-study for a Gulf Stream region as described in Section 3.5. We first report the benchmarking experiments to assess the performance of the proposed learning-based strategy with respect to (w.r.t.) state-of-the-art mapping schemes. We then analyse how the characteristics of the training datasets drive the mapping performance.
### Benchmarking against the state of the art
We report in Table 2 the performance metrics of state-of-the-art approaches including both operational observation products (Taburet et al., 2019; Ubelmann et al., 2021), deep-learning-based schemes trained on observation data (Archambault et al., 2023; Martin et al., 2023) as well as methods explicitly using a model-based prior on sea surface dynamics (Guillou et al., 2021; Ballarotta et al., 2020; Lellouche et al., 2021). We compare those methods with a 4DVarNet trained on the eNATL60-0 OSSE dataset. The latter outperforms all other methods on the two metrics considered (22% improvement in RMSE w.r.t. the DUACS product and 33% improvement in the resolved scale). We report a significantly worse performance for the GLORYS12 reanalysis. This illustrates the challenge of combining large ocean general circulation models and observation data for the mapping of the SSH.
The last column indicates that the 4DVarNet scheme leads to the best mapping scores for both the OSE and OSSE setups. For the latter, the reported improvement of 47% is more than twice the second best at 22%. The performance of the 4DVarNet drops by 11% when considering the former (OSE) setup. By contrast, other methods do not show such differences between the OSE and OSSE case-studies. This suggests that the finer-scale structures that are well reconstructed in the OSSE setup are not as beneficial in the OSE setup. While one could question the representativeness of the OSSE datasets for the fine-scale patterns in the true ocean, real nadir altimetry data may also involve multiple processes which could impede the reconstruction and evaluation of horizontal scales below 100 km.
Figure 3: **Space-time spectral densities of the training datasets (first row) and of their associated reconstructions (second row).** Darker blue in the lower left corner indicates higher energy at larger wavelengths and periods. The different SSH fields exhibit different energy cascades when moving to finer temporal (upward) or spatial (rightward) scales.
\begin{table}
\begin{tabular}{l||c c c c|c c c c} \hline \hline & SSH & Deep & Calibrated on & Physical & RMSE & \(\mu_{ssh}\) & \(\lambda_{x}\) & \(1-\frac{\lambda_{x}}{\lambda_{xref}}\) \\ & Only & Learning & data from & Model & (cm) & () & (km) & (\% ose, osse) \\ \hline (a) **4DVarNet** & Yes & Yes & Simulation & – & **5.9** & **0.91** & **100** & **33, 47** \\ (b) MUSTI & No & Yes & Satellite & – & 6.3 & 0.90 & 112 & 26, 22 \\ (c) ConvLstm-SST & No & Yes & Satellite & – & 6.7 & 0.90 & 108 & 28, – \\ (d) ConvLstm & Yes & Yes & Satellite & – & 7.2 & 0.89 & 113 & 25, – \\ (e) DYMOST & Yes & No & Satellite & QG & 6.7 & 0.90 & 131 & 13, 11 \\ (f) MIOST & Yes & No & Satellite & – & 6.8 & 0.90 & 135 & 11, 10 \\ (g) BFN-QG & Yes & No & Satellite & QG & 7.6 & 0.89 & 122 & 19, 21 \\ (h) DUACS & Yes & No & Satellite & – & 7.7 & 0.88 & 151 & 0, 0 \\ (i) GLORYS12 & No & No & Satellite & NEMO & 15.1 & 0.77 & 241 & -60, – \\ \hline \hline \end{tabular}
\end{table}
Table 2: **SSH reconstruction performance of the benchmarked methods: (a) 4DVarNet from this study trained on eNATL60-0, (b) MUSTI from Archambault et al. (2023), (c and d) ConvLstm-SST and ConvLstm from Martin et al. (2023), (e) DYMOST from Ballarotta et al. (2020), (f) MIOST from Ubelmann et al. (2021), (g) BFN-QG from Guillou et al. (2021), (h) DUACS from Taburet et al. (2019), (i) GLORYS12 from Lellouche et al. (2021). The columns indicate from left to right: whether the mapping schemes rely only on SSH data or also exploit additional data such as gap-free SST products; if the method uses deep learning architectures; the data used to calibrate (or train) the mapping scheme; the numerical model of the ocean used for the mapping if any (QG stands for quasi-geostrophic); \(\mu\) and \(\lambda_{x}\) are the metrics as described in Section 3.5**
\begin{table}
\begin{tabular}{l||c c c c c} \hline \hline Training Data & RMSE & \(\mu_{ssh}\) & \(\mu_{sla}\) & \(\lambda_{x}\) & \(1-\frac{\lambda_{x}}{\lambda_{xref}}\) \\ & (cm) & & & (km) & (\% ose, osse) \\ \hline NATL60 & **5.9** & **0.91** & **0.80** & **98** & **(35, –)** \\ eNATL60-t & **5.9** & **0.91** & **0.80** & 100 & (33, 48) \\ eNATL60-0 & **5.9** & **0.91** & **0.80** & 100 & (33, 47) \\ GLORYS12-r & 6.3 & 0.90 & 0.78 & 106 & (30, 28) \\ GLORYS12-f & 6.7 & 0.90 & 0.77 & 119 & (21, 23) \\ ORCA025 & 7.1 & 0.89 & 0.76 & 126 & (17, 17) \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Performance of 4DVarNet mapping schemes trained on different simulated datasets**. The first column shows the source of the training dataset as described in Table 1; the subsequent columns indicate the reconstruction metrics described in Section 3.5. Note that the NATL60 could not be evaluated on the OSSE setup since the evaluation data were used for validation during the training stage.
### Eddy-present datasets versus eddy-rich ones
We analyse here in more detail the impact of the spatial resolution of the training dataset on the reconstruction performance. In Table 3, as expected, the higher resolution grid in the ocean run simulation leads to a better mapping with a 22% improvement in \(\lambda_{x}\) and a 17% improvement in the RMSE score between the experiments with the coarsest (ORCA025) and finest (NATL60) resolutions. We also observe qualitative differences in the relative vorticity fields in Figure 2. Residual artifacts due to the altimetry tracks appear at (60\({}^{\circ}\)W, 39\({}^{\circ}\)N) for the two lower-resolution training datasets. They are greatly diminished when considering the NATL60 dataset. Despite these differences, the reconstructed vorticity and kinetic energy fields in Figure 2 look very similar for the different 4DVarNet schemes, whatever the training datasets. By contrast, the vorticity and kinetic energy fields in the training datasets clearly depict fewer fine-scale structures and weaker gradients for the lower-resolution simulation datasets, namely ORCA025 and GLORYS12-f. These results support the generalization skills of 4DVarNet schemes to map real altimetry tracks despite being trained on SSH noticeably different from the reconstruction.
We draw similar conclusions from the analysis of the spectral densities shown in Figure 4. The differences in the energy distribution of the training data are significantly reduced in the reconstructions. 4DVarNet schemes trained from higher-resolution datasets however result in more faithful reconstructions at all scales. The patterns observed for the temporal PSD are slightly different in Figure 3. We do not observe the same homogenization as for the spatial PSD. Lower-resolution training datasets involve a significant drop of an order of magnitude for periods greater than 10 days and wavelengths greater than 200 km.
### Forced simulation datasets versus reanalysis ones
We now look more specifically at the effect of ocean reanalysis by comparing the two experiments GLORYS12-f and GLORYS12-r. We can first note the impact of observation data assimilation in Figure 3, where the power spectrum of the reanalysis is significantly raised compared to the free run and is closer to those of the higher-resolution simulations. Visually, we also clearly see stronger gradients in the kinetic energy in Figure 2.
Figure 4: **Spectral analysis of the training and reconstructed SSH datasets**. We display the PSD of the training dataset (left plot), the reconstructed SSH field (center plot), as well as the associated PSD score (right plot).
In Figure 5 we observe a behavior similar to Section 4.2: the gap in spectral density between the training and reconstruction data is diminished, and the PSD score indicates a lower error energy at all scales for the reanalysis-based experiment.
Quantitatively, Table 3 shows an improvement of 11% in both the RMSE and the resolved scale; besides, training on a reanalysis increases the relative gain w.r.t. DUACS significantly more on real data (+9%) than on simulated data (+5%), as shown in the rightmost column. This suggests that the reanalysis dataset conveys information on real world observations which improves the generalization performance.
### Tide-free datasets versus tide-resolving ones
We assess here the impact of tide-resolving simulations used as training data, using the twin eNATL60 runs eNATL60-t and eNATL60-0. Contrary to the other runs, those simulations contain barometric and wind forcing; we therefore remove the Dynamic Atmospheric Correction (Carrere et al., 2016) from the SSH fields. Additionally, since the barotropic tide signals are removed from real altimetry tracks prior to interpolation, we also remove this signal from the training data by subtracting the spatial mean over the training domain for each hourly snapshot before calculating the daily averages.
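A minimal xarray sketch of this preprocessing, assuming hourly SSH with dimensions named time, lat and lon (the actual pipeline is in the released code), could read:

```python
import xarray as xr

def preprocess_enatl60(ssh_hourly: xr.DataArray) -> xr.DataArray:
    """Subtract the domain spatial mean of each hourly snapshot (a proxy for
    the barotropic tide over the training domain), then average to daily SSH."""
    detided = ssh_hourly - ssh_hourly.mean(dim=["lat", "lon"])
    return detided.resample(time="1D").mean()
```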
Given those processing steps, the two training datasets exhibit very similar wavenumber spectra, as shown in Figure 3. We also find that training on those two datasets produces little difference in the reconstructions, both quantitatively (see Table 3) and qualitatively (Fig. 2). The resulting performance is comparable to that of the NATL60 experiment.
We identify two hypotheses for explaining why tide-resolving simulation do not lead to better mapping schemes:
* The preprocessing applied to the training fields removes the main tide signals. We therefore effectively measure the impact of tide modeling on other ocean processes, which may be less significant;
* The evaluation procedure applied to altimetry tracks from which the barotropic tide has been filtered may not be interpretable enough to measure the reconstruction of residual tide signals. New instruments like the KaRIn interferometer deployed in the SWOT mission may provide new ways to better quantify those effects.
Figure 5: **Spectral impact of model reanalysis**. We display the PSD of the training dataset (left plot), the reconstructed SSH field (center plot), as well as the associated PSD score (right plot).
These findings provide motivation for carefully considering the purpose of the learning-based model when making decisions about the training data. In our case, explicitly modeling tide processes that are removed from the observations in the evaluation setup added overheads in the computational cost of running the simulation as well as in the preprocessing of the training data. Additionally, given the considered evaluation data and metrics, we were not able to quantify any significant differences between the two trained mapping schemes.
## 5 Discussion
This study has been greatly facilitated by the standardized tasks and evaluation setups proposed in data-challenges [https://ocean-data-challenges.github.io/](https://ocean-data-challenges.github.io/). Data-challenges are used to specify a targeted problem of interest to domain experts through datasets and relevant evaluation metrics. This preliminary work has been instrumental in constituting the comprehensive benchmark and in combining methods from different teams and institutions around the world. Additionally, it also constitutes a strong basis for a trans-disciplinary collaboration between the ocean and machine learning research communities.
Moreover, the results presented in this study introduce a use of ocean simulations for developing altimetry products. This opens new ways for ocean physicists, modelers and operational oceanographers to collaborate. In order to assess the range of these new synergies, it would be interesting to explore whether the approach proposed here, of training neural schemes using simulation data, would generalize to other tasks such as forecasting or sensor calibration, and to other quantities like surface temperature, currents, salinity or biochemical tracers.
If the simulation-based training approach introduced here is successfully extended to other ocean problems, one could envision training large foundation deep learning models (Brown et al., n.d.) capturing the inner structure of high resolution ocean simulations, which could then be used in many downstream applications. This could be a way to capitalize on all the advancements in ocean modeling without having to run OGCM numerical simulations for each downstream product.
Furthermore, we would like to highlight the cost consideration when running numerical simulations intended for training learning-based schemes. Indeed, given that the eNATL60 run took 2700x the CPU hours and 350x the memory of the ORCA025 run, for a smaller domain, a trade-off arises between generating multiple "cheap" trajectories and generating a single, more realistic trajectory.
To conclude, we have shown in this study that training machine learning models on simulation datasets leads to good performance on real altimetry data mapping and outperforms current state-of-the-art approaches. The model trained on NATL60 reduces the RMSE by 18% compared to neural schemes trained on observation data and improves the resolved scales by 33% compared to the DUACS operational product. Even the coarsest simulation considered, ORCA025, provides results competitive with current operational methods. We have shown that using more realistic SSH fields, from reanalyses or higher-resolution simulations, increases the performance of the trained model. This is an exciting result that shows the potential for training operational products from ocean simulations and how advances in ocean modeling in operational oceanography can be beneficial. The results shown here are limited to the interpolation problem on a regional domain but the robustness of the performance shown is encouraging for further developing these results on a larger domain.
## Open Research Section
The authors provide the training data, source code, reconstructed maps and trained model for each experiment of the manuscript at [https://doi.org/10.5281/zenodo.8064114](https://doi.org/10.5281/zenodo.8064114).
This work was supported by ANR Projects Melody and OceaniX and CNES. It benefited from HPC and GPU resources from GENCI-IDRIS (Grant 2020-101030) and Ifremer.
|
2309.04143 | On several problems in p-Bergman theory | In this paper, we first answer Chen-Zhang's problem on $p$-Bergman metric
proposed in \cite{CZ22}. Second, we prove the off-diagonal p-Bergman kernel
function $K_p(z,w)$ is H\"older continuous of order (1-$\varepsilon$) in the
second component when $p>1$ for any $\varepsilon>0$, which improves the
corresponding result of Chen-Zhang. Moreover, we prove the asymptotic behavior
of the maximizer of $p$-Bergman kernel as $p\rightarrow 1^-$. Finally, we give
a characterization of a class of holomorphic functions on $\mathbb{B}^1$ to be
$L^p$-integrable. | Yinji Li | 2023-09-08T05:55:10Z | http://arxiv.org/abs/2309.04143v1 | # On several problems in P-Bergman theory
###### Abstract.
In this paper, we first answer Chen-Zhang's problem on the \(p\)-Bergman metric proposed in [2]. Second, we prove the off-diagonal p-Bergman kernel function \(K_{p}(z,w)\) is Holder continuous of order (1-\(\varepsilon\)) in the second component when \(p{>}1\) for any \(\varepsilon>0\), which improves the corresponding result of Chen-Zhang. Moreover, we prove the asymptotic behavior of the maximizer of the \(p\)-Bergman kernel as \(p\to 1^{-}\). Finally, we give a characterization of a class of holomorphic functions on \(\mathbb{B}^{1}\) to be \(L^{p}\)-integrable.
###### Contents
* 1 Introduction
* 2 Chen-Zhang's problem
* 3 Holder continuity of \(m_{p}(z,\cdot)\)
* 3.1 The case of \(1{<}p\leq 2\)
* 3.2 The case of \(p{>}2\)
* 4 Asymptotic Behavior of Maximizers of \(K_{p}(z)\) as \(p\to 1^{-}\)
* 5 Characterization of \(L^{p}\)-integrability of a class of holomorphic functions on \(\mathbb{B}^{1}\)
## 1. Introduction
The \(L^{2}\) Bergman theory, established by Stefan Bergman in the 1920s, is one of the fundamental theories in several complex variables and complex geometry. The \(L^{2}\) Bergman space on a domain in \(\mathbb{C}^{n}\) is the space of \(L^{2}\) holomorphic functions on that domain, which can be easily shown to be a Hilbert space using the theory of normal families. The \(L^{2}\) Bergman kernel, as the integral kernel of the evaluation functional on the \(L^{2}\) Bergman space, enjoys good properties such as real analyticity and the reproducing property. The \(L^{2}\) Bergman kernel function is obtained by evaluating the kernel on the diagonal. On a bounded domain in \(\mathbb{C}^{n}\), the \(L^{2}\) Bergman kernel function is smooth, strictly plurisubharmonic and non-vanishing, and thus induces an invariant Kahler metric on that domain, which is known as the \(L^{2}\) Bergman metric. The \(L^{2}\) Bergman metric plays an important role in the study of bounded domains. The \(L^{2}\) Bergman theory can be extended to the framework of Hermitian holomorphic vector bundles over complex manifolds, and has important applications in the study of various important problems in complex geometry and algebraic geometry.
In comparison with the \(L^{2}\) Bergman theory, the \(L^{p}\) Bergman theory has not been well studied. In [16], Ning-Zhang-Zhou initiated a systematic study of the \(L^{p}\) Bergman theory and obtained the deep result that a bounded domain is pseudoconvex if and only if the \(L^{p}\) Bergman kernel is exhaustive for some \(p\in(0,2)\). Recently, Deng-Wang-Zhang-Zhou [14] proved the fundamental result that two bounded hyperconvex domains in \(\mathbb{C}^{n}\) are biholomorphically equivalent if and only if the normed \(L^{p}\) Bergman spaces associated to them are linearly isometric for some \(p\in(0,2)\). This shows that the \(L^{p}\) Bergman space is a complete biholomorphic linear isometric invariant of bounded hyperconvex domains in \(\mathbb{C}^{n}\). However, it is well-known that the \(L^{2}\) Bergman space cannot determine the complex structure of bounded hyperconvex domains, the punctured disc, for example. Thus the result by Deng-Wang-Zhang-Zhou indicates that the \(L^{p}\) Bergman space is a very important research object and the \(L^{p}\) Bergman theory deserves further development. However, unlike the \(L^{2}\) Bergman theory, the \(L^{p}\) spaces are generally not Hilbert spaces, which poses essential difficulties for research. A basic problem such as computing the \(L^{p}\) Bergman kernel is highly challenging, and even the \(L^{p}\) Bergman kernel on the punctured disk in the complex plane cannot be computed so far. Therefore, new methods and tools need to be developed.
For a bounded domain \(\Omega\subset\mathbb{C}^{n}\), we define \(A^{p}(\Omega)\) to be the \(p\)-Bergman space of \(L^{p}\) holomorphic functions on \(\Omega\) (throughout this paper the integrals are with respect to Lebesgue measure). As introduced in [16], the \(p\)-Bergman kernel \(K_{p}(z)\) is defined as
\[K_{p}(z)=\sup_{f\in A^{p}(\Omega)\setminus\{0\}}\frac{|f(z)|^{p}}{\|f\|_{p}^{ p}},\]
where \(\|f\|_{p}=(\int_{\Omega}|f|^{p})^{1/p}\). The \(p\)-Bergman kernel can also be defined via a minimizing problem which was first introduced by Bergman himself in the case \(p=2\):
\[m_{p}(z):=\inf\{||f||_{p}:f\in A^{p}(\Omega),f(z)=1\}.\]
By a normal family argument, we know that there exists at least one minimizer for \(p>0\), and exactly one minimizer \(m_{p}(\cdot,z)\) for \(p\geq 1\). It is easy to see that \(K_{p}(z)=m_{p}(z)^{-p}\) for \(p>0\). The off-diagonal \(p\)-Bergman kernel is defined as \(K_{p}(z,w):=m_{p}(z,w)K_{p}(w)\) for \(p\geq 1\).
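As a simple worked example (a short verification of a fact used again in Section 2, stated here under explicit hypotheses): suppose \(\Omega\) is a bounded complete circular domain. For \(f\in A^{p}(\Omega)\), \(z\in\Omega\) and \(0{<}\rho{<}1\), the subharmonicity of \(t\mapsto|f(tz)|^{p}\) gives

\[|f(0)|^{p}\leq\frac{1}{2\pi}\int_{0}^{2\pi}|f(\rho e^{i\theta}z)|^{p}d\theta.\]

Integrating over \(z\in\Omega\), using the rotation invariance \(e^{i\theta}\Omega=\Omega\), changing variables and letting \(\rho\to 1^{-}\) yields \(|f(0)|^{p}\operatorname{vol}(\Omega)\leq\int_{\Omega}|f|^{p}\). Hence the constant function \(1\) is a minimizer, and

\[m_{p}(0)=\operatorname{vol}(\Omega)^{\frac{1}{p}},\qquad K_{p}(0)=\frac{1}{\operatorname{vol}(\Omega)}\quad\text{for all }p{>}0.\]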
Recently, Chen-Zhang [10] explored further fundamental aspects of the \(L^{p}\) Bergman theory using variational methods. They derived a reproducing formula for \(L^{p}\) Bergman kernels and showed that the off-diagonal \(L^{p}\) Bergman kernel (\(p\)-Bergman kernel for short) \(K_{p}(z,\cdot)\) is Holder continuous of order \(\frac{1}{2}\) for \(p>1\) and of order \(\frac{1}{2(n+2)}\) for \(p=1\). They also defined the \(p\)-Bergman metric \(B_{p}(z;X)\) and showed that the \(p\)-Bergman metric \(B_{p}(z;X)\) tends to the Caratheodory metric \(C(z;X)\) as \(p\to\infty\) and the generalized Levi form \(i\partial\bar{\partial}\log K_{p}(z;X)\) is no less than \(B_{p}(z;X)^{2}\) for \(p\geq 2\) and \(C(z;X)^{2}\) for \(p\leq 2\).
Since it is well-known that \(i\partial\bar{\partial}\log K_{p}(z;X)=B_{p}(z;X)^{2}\) for \(p=2\), Chen-Zhang raised the following
**Problem 1.1** ([14, Problem 8]).: Is it possible to conclude that \(i\partial\bar{\partial}\log K_{p}(z;X)=B_{p}(z;X)^{2}\) for \(2<p<+\infty\)?
In this paper, we first answer Problem 1.1 by establishing the following
**Theorem 1.1**.: Let \(\Omega\) be a complete circular and bounded homogeneous domain in \(\mathbb{C}^{n}\). Then for \(X\neq 0\),
\[i\partial\bar{\partial}\log K_{p}(0;X){>}B_{p}(0;X)^{2},\ p{>}2,\]
\[i\partial\bar{\partial}\log K_{p}(0;X){<}B_{p}(0;X)^{2},\ p{<}2.\]
Second, by introducing a new iteration technique, we are able to improve the regularity of the off-diagonal \(p\)-Bergman kernels, namely we improve the order of the Holder continuity from \(\frac{1}{2}\) to \(1-\varepsilon\) for any \(\varepsilon>0\) and \(p>1\).
**Theorem 1.2**.: Let \(p{>}1\), \(\varepsilon{>}0\) and \(S\subset\subset\Omega\). There exists \(C=C(\varepsilon,S)\) such that for \(z^{\prime},z,w\in S\)
\[|m_{p}(z^{\prime},z)-m_{p}(z^{\prime},w)|\leq C|z-w|^{1-\varepsilon}.\]
Moreover, the off-diagonal \(p\)-Bergman kernel \(K_{p}(z,\cdot)\) is Holder continuous of order \(1-\varepsilon\).
It is proved in [14, Proposition 2.4, Proposition 2.5] that for \(p\geq 1\) the maximizer \(f\) of \(K_{p}(z)\) is unique under the condition \(f(z)=1\). Actually, it is precisely \(m_{p}(\cdot,z)\). But the uniqueness of the maximizer of \(K_{p}(z)\) for \(0<p<1\) is not known. We study the asymptotic behavior of the maximizers of \(K_{p}(z)\) as \(p\to 1^{-}\) and get the following
**Theorem 1.3**.: Let \(p{<}1\), we define the metric \(d(f,g):=\int_{\Omega}|f-g|^{p}\) on \(A^{p}(\Omega)\). Denote \(d_{p}(z):=\sup\{d(f_{p},g_{p})\}\), where sup is taken over all pairs of maximizers \(f_{p},g_{p}\) of \(K_{p}(z)\) satisfying \(f_{p}(z)=g_{p}(z)=1\). Then, it holds that
\[\forall z\in\Omega,\lim_{p\to 1^{-}}d_{p}(z)=0.\]
Finally, we study the \(L^{p}\) Bergman space \(A^{p}(\mathbb{B}^{1})\) on the unit disk \(\mathbb{B}^{1}\). A characterization for a class of holomorphic functions on \(\mathbb{B}^{1}\) to be \(L^{p}\)-integrable is established as follows.
**Theorem 1.4**.: Let \(p{>}0\); there exists \(C=C(p,A)\) such that if \(f\in\mathcal{O}(\mathbb{B}^{1})\) and \(f(z)=\sum_{k=1}^{\infty}a_{\lambda_{k}}z^{\lambda_{k}}\) for some lacunary sequence \(\{\lambda_{k}\}\) with constant \(A\), then
\[C(p,A)^{-1}\int_{0}^{1}\Big(\sum_{k=1}^{\infty}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}}\Big)^{\frac{p}{2}}dr\leq\int_{\mathbb{B}^{1}}|f|^{p}\leq C(p,A)\int_{0}^{1}\Big(\sum_{k=1}^{\infty}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}}\Big)^{\frac{p}{2}}dr.\]
In particular, a holomorphic function \(f(z)=\sum_{k=1}^{\infty}a_{\lambda_{k}}z^{\lambda_{k}}\) for some lacunary sequence \(\{\lambda_{k}\}\) with constant \(A\) is \(L^{p}\)-integrable if and only if the integral \(\int_{0}^{1}(\sum_{k=1}^{\infty}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}})^{\frac{p}{2}}dr\) is finite.
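For illustration, take the classical gap sequence \(\lambda_{k}=2^{k}\), which is lacunary with \(\lambda_{k+1}/\lambda_{k}=A=2\), and \(a_{\lambda_{k}}=1\) for all \(k\). As \(r\to 1^{-}\),

\[\sum_{k=1}^{\infty}r^{2\lambda_{k}}\asymp\log\frac{1}{1-r},\]

since the terms with \(2\lambda_{k}\leq(1-r)^{-1}\) are bounded below and there are about \(\log_{2}\frac{1}{1-r}\) of them. Consequently \(\int_{0}^{1}(\sum_{k}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}})^{\frac{p}{2}}dr\lesssim\int_{0}^{1}(\log\frac{1}{1-r})^{\frac{p}{2}}dr{<}\infty\) for every \(p{>}0\), so by Theorem 1.4 the function \(f(z)=\sum_{k=1}^{\infty}z^{2^{k}}\) belongs to \(A^{p}(\mathbb{B}^{1})\) for all \(p{>}0\), although \(\sum_{k}|a_{\lambda_{k}}|^{2}=\infty\) and \(f\) is therefore not even in the Hardy space \(H^{2}\).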
_Remark 1.1_.: Theorem 1.4 can also be used to give a similar characterization of a class of holomorphic functions on the punctured disk to be \(L^{p}\)-integrable by considering the Laurent expansions.
The structure of this paper is organized as follows. In §2, we answer the open problem raised by Chen-Zhang, and prove Theorem 1.1. In §3, we prove the off-diagonal \(p\)-Bergman kernel is Holder continuous of order \(1-\varepsilon\), i.e. Theorem 1.2. In §4, we study the asymptotic behavior of the maximizer of the \(p\)-Bergman kernel as \(p\to 1^{-}\), i.e. Theorem 1.3. Finally, in §5, we give a characterization of a class of holomorphic functions on the unit disk to be \(L^{p}\)-integrable, i.e. Theorem 1.4.
**Acknowledgements**. The author would like to express his sincere gratitude to Professor Zhiwei Wang and Professor Xiangyu Zhou for their guidance and encouragements. This research is supported by National Key R&D Program of China (No. 2021YFA1002600).
## 2. Chen-Zhang's problem
In this section, we answer Problem 1.1 raised by Chen-Zhang.
**Definition 2.1.** A domain \(\Omega\subseteq\mathbb{C}^{n}\) is said to be complete circular and bounded homogeneous, if \(\forall z\in\Omega,t\in\mathbb{C},|t|\leq 1\), we have \(tz\in\Omega\).
We restate Theorem 1.1 as follows.
**Theorem 2.1.** Let \(\Omega\) be a complete circular and bounded homogeneous domain in \(\mathbb{C}^{n}\). Then for \(X\neq 0\),
\[i\partial\bar{\partial}\log K_{p}(0;X){>}B_{p}(0;X)^{2},\ p{>}2,\]
\[i\partial\bar{\partial}\log K_{p}(0;X){<}B_{p}(0;X)^{2},\ p{<}2.\]
Proof. It follows from [16, Theorem 2.3, Remark 2.1] that, on \(\Omega\), we have \(K_{p}(\cdot)=K_{2}(\cdot),\forall p{>}0\). In particular, \(K_{p}(0)=K_{2}(0)=\frac{1}{\operatorname{vol}(\Omega)}\). It is clear that
\[i\partial\bar{\partial}\log K_{p}(z;X)=i\partial\bar{\partial}\log K_{2}(z;X).\]
In the following, we prove that
\[B_{p}(0;X){<}B_{2}(0;X),\ p{>}2,\]
\[B_{p}(0;X){>}B_{2}(0;X),\ p{<}2.\]
Recall the definition \(B_{p}(z;X):=K_{p}(z)^{-\frac{1}{p}}\cdot\sup_{f\in A^{p}(\Omega),f(z)=0,||f||_{p}>0}\frac{|Xf(z)|}{||f||_{p}}\). By a normal family argument, there exists a maximizer of \(B_{p}(0;X)\); denote it by \(f_{p}\). It follows from Holder's inequality that
\[||f_{p}||_{p}^{2}\cdot||1||_{p}^{p-2}\geq||f_{p}||_{2}^{2},\ p{>}2.\]
However, equality cannot be achieved: equality in Holder's inequality would force \(|f_{p}|\) to be constant, which is impossible since \(f_{p}(0)=0\) and \(f_{p}\not\equiv 0\). Thus we get that
\[B_{p}(0;X)^{2} =K_{p}(0)^{-\frac{2}{p}}\cdot\frac{|Xf_{p}(0)|^{2}}{||f_{p}||_{p}^{2}}\] \[< K_{2}(0)^{-1}\cdot\frac{|Xf_{p}(0)|^{2}}{||f_{p}||_{2}^{2}}\] \[\leq B_{2}(0;X)^{2}.\]
The case that \(p{<}2\) can be proved by the same method.
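For completeness, here is a sketch of that case. For \(p{<}2\), Holder's inequality gives \(||f||_{p}^{2}\leq||f||_{2}^{2}\cdot||1||_{p}^{2-p}\) for every \(f\in A^{2}(\Omega)\), with equality only if \(|f|\) is constant. Taking \(f_{2}\) to be a maximizer of \(B_{2}(0;X)\) (so that \(f_{2}(0)=0\) and \(f_{2}\not\equiv 0\), whence \(|f_{2}|\) is not constant and the inequality above is strict), we obtain

\[B_{p}(0;X)^{2}\geq K_{p}(0)^{-\frac{2}{p}}\cdot\frac{|Xf_{2}(0)|^{2}}{||f_{2}||_{p}^{2}}>K_{2}(0)^{-1}\cdot\frac{|Xf_{2}(0)|^{2}}{||f_{2}||_{2}^{2}}=B_{2}(0;X)^{2},\]

which is the claimed inequality \(B_{p}(0;X){>}B_{2}(0;X)\) for \(p{<}2\).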
## 3. Holder continuity of \(m_{p}(z,\cdot)\)
In this section, we prove the off-diagonal \(p\)-Bergman kernel is Holder continuous of order \(1-\varepsilon\) for any \(\varepsilon>0\). More precisely, we prove the following
**Theorem 3.1**.: Let \(p{>}1\), \(\varepsilon{>}0\) and \(S\subset\subset\Omega\). There exists \(C=C(\varepsilon,S)\) such that for \(z^{\prime},z,w\in S\)
\[|m_{p}(z^{\prime},z)-m_{p}(z^{\prime},w)|\leq C|z-w|^{1-\varepsilon}.\]
Let us introduce an important function as follows
\[H_{p}(z,w):=K_{p}(z)+K_{p}(w)-\mathrm{Re}\{K_{p}(z,w)+K_{p}(w,z)\}.\]
### The case of \(1{<}p\leq 2\)
In this section, we assume \(1{<}p\leq 2\) and prove Theorem 3.1.
Proof.: It follows from the proof of [2, Lemma 4.5] that
\[\int_{\Omega}|m_{p}(\cdot,z)-m_{p}(\cdot,w)|^{p}\leq\frac{C_{p}}{K_{p}(z)K_{p} (w)}[K_{p}(z)+K_{p}(w)]^{1-\frac{p}{2}}H_{p}(z,w)^{\frac{p}{2}}.\]
This leads to \(||m_{p}(\cdot,z)-m_{p}(\cdot,w)||_{p}\leq C(p,S)H_{p}(z,w)^{\frac{1}{2}}\). Next, we are going to establish an estimate for \(H_{p}(z,w)\).
\[\frac{|H_{p}(z,w)|}{|z-w|} =\frac{|\mathrm{Re}\{K_{p}(z)[m_{p}(w,w)-m_{p}(w,z)]+K_{p}(w)[m_{p}(z,z)-m_{p}(z,w)]\}|}{|z-w|}\] \[\leq K_{p}(z)\frac{|[m_{p}(w,w)-m_{p}(w,z)]-[m_{p}(z,w)-m_{p}(z,z)]|}{|z-w|}\] \[+|m_{p}(z,z)-m_{p}(z,w)|\frac{|K_{p}(z)-K_{p}(w)|}{|z-w|}.\]
Since \(K_{p}(\cdot)\) is locally Lipschitz by [2, Proposition 2.11], we know that \(\frac{|K_{p}(z)-K_{p}(w)|}{|z-w|}\leq C(S)\). It follows from the sub-mean-value property of plurisubharmonic functions that \(|m_{p}(z,z)-m_{p}(z,w)|\leq C||m_{p}(\cdot,z)-m_{p}(\cdot,w)||_{p}\), for some \(C=C(S)\). In view of the Cauchy integral formula, the derivative of \(m_{p}(\cdot,z)-m_{p}(\cdot,w)\) is locally controlled by its \(L^{1}\) norm, therefore we get
\[\frac{|[m_{p}(w,w)-m_{p}(w,z)]-[m_{p}(z,w)-m_{p}(z,z)]|}{|z-w|} \leq C||m_{p}(\cdot,z)-m_{p}(\cdot,w)||_{1}\] \[\leq C||m_{p}(\cdot,z)-m_{p}(\cdot,w)||_{p}.\]
All the facts above imply that, for some \(C=C(S)\)
\[H_{p}(z,w)\leq C||m_{p}(\cdot,z)-m_{p}(\cdot,w)||_{p}\cdot|z-w|.\]
Combine this result with the fact that
\[||m_{p}(\cdot,z)-m_{p}(\cdot,w)||_{p}\leq C(p,S)H_{p}(z,w)^{\frac{1}{2}},\]
and iterate: each pass improves the exponent \(\delta\) in \(||m_{p}(\cdot,z)-m_{p}(\cdot,w)||_{p}\leq C|z-w|^{\delta}\) to \(\frac{1+\delta}{2}\), so that, starting from the trivial bound \(\delta=0\), the attainable exponents \(\frac{1}{2},\frac{3}{4},\frac{7}{8},\dots\) accumulate at \(\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots=1\). Hence for any \(\delta{<}1\) there exists \(C=C(\delta,S)\) such that
\[H_{p}(z,w)=o(|z-w|^{1+\delta}),\]
\[||m_{p}(\cdot,z)-m_{p}(\cdot,w)||_{p}=o(|z-w|^{\delta}).\]
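In other words, starting from the trivial bound \(||m_{p}(\cdot,z)-m_{p}(\cdot,w)||_{p}\lesssim 1\) on \(S\) and substituting the two estimates into each other, the admissible exponent improves at each step:
\[\delta_{0}=0,\qquad\delta_{n+1}=\frac{1+\delta_{n}}{2},\qquad\delta_{n}=1-2^{-n}\nearrow 1,\]
which is precisely the partial-sum sequence of \(\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+\cdots\).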
The desired result follows from \(|m_{p}(z^{\prime},z)-m_{p}(z^{\prime},w)|\leq C||m_{p}(\cdot,z)-m_{p}(\cdot,w) ||_{p}\leq C|z-w|^{\delta}\).
### The case of \(p{>}2\)
In this subsection, we assume \(p{>}2\) and prove Theorem 3.1.
Proof.: It follows from the proof of [2, Theorem 4.7] that there exist an open set \(U\) with \(S\subset U\subset\subset\Omega\) and constants \(\alpha=\alpha(p,S,U)\), \(C=C(p,S,U)\) such that
\[\int_{U}|m_{p}(\cdot,z)-m_{p}(\cdot,w)|^{\alpha}\leq CH_{p}(z,w)^{\frac{\alpha }{2}}.\]
This leads to \(||m_{p}(\cdot,z)-m_{p}(\cdot,w)||_{L^{\alpha}(U)}\leq CH_{p}(z,w)^{\frac{1}{2}}\). The rest of the proof is similar to the case \(1{<}p\leq 2\): we get that for every \(\delta{<}1\) there exists \(C=C(\delta,S,U)\) such that
\[||m_{p}(\cdot,z)-m_{p}(\cdot,w)||_{L^{\alpha}(U)}\leq C|z-w|^{\delta},\]
\[H_{p}(z,w)\leq C||m_{p}(\cdot,z)-m_{p}(\cdot,w)||_{L^{\alpha}(U)}|z-w|.\]
The desired result follows from \(|m_{p}(z^{\prime},z)-m_{p}(z^{\prime},w)|\leq C||m_{p}(\cdot,z)-m_{p}(\cdot,w) ||_{L^{\alpha}(U)}\leq C|z-w|^{\delta}\).
## 4. Asymptotic Behavior of Maximizers of \(K_{p}(z)\) as \(p\to 1^{-}\)
We know that when \(p\geq 1\), the maximizer \(f\) of \(K_{p}(z)\) is unique under the condition \(f(z)=1\). Actually, it is precisely the minimizer \(m_{p}(\cdot,z)\) of \(m_{p}(z)\). However, the uniqueness of the maximizer is not known for \(p{<}1\). Nevertheless, we can prove the following:
**Theorem 4.1**.: For \(p{<}1\), \(A^{p}(\Omega)\) is a metric space with the metric \(d(f,g):=\int_{\Omega}|f-g|^{p}\). Define \(d_{p}(z):=\sup\{d(f_{p},g_{p})\}\), where the supremum is taken over all pairs of maximizers \(f_{p},g_{p}\) of \(K_{p}(z)\) satisfying \(f_{p}(z)=g_{p}(z)=1\). Then it holds that
\[\lim_{p\to 1^{-}}d_{p}(z)=0,\quad\forall z\in\Omega.\]
Proof.: We have \(K_{p}(z)^{-1}=\int_{\Omega}|f_{p}|^{p}\leq\int_{\Omega}1=|\Omega|\). Therefore, we know that for every \(p_{0}{<}1\), \(\{f_{p}\}_{p_{0}{<}p{<}1}\) is a normal family. Thus, there exists a subsequence \(\{f_{p_{n}}\}\), with \(p_{n}\to 1^{-}\), that converges uniformly on compact subsets of \(\Omega\) to some \(f\). For any \(p_{0}{<}s{<}1\), by Fatou's lemma, Hölder's inequality and [1, Proposition 6.1(1)], we get that
\[\int|f|^{s} \leq\liminf_{n\to\infty}\int|f_{p_{n}}|^{s}\] \[\leq\lim_{n\to\infty}\Big(\int|f_{p_{n}}|^{p_{n}}\Big)^{\frac{s}{p_{n}}}| \Omega|^{1-\frac{s}{p_{n}}}\] \[=\lim_{n\to\infty}K_{p_{n}}(z)^{-\frac{s}{p_{n}}}|\Omega|^{1- \frac{s}{p_{n}}}\] \[=K_{1}(z)^{-s}|\Omega|^{1-s}.\]
It follows that \(\int|f|=\lim_{s{\to}1^{-}}\int|f|^{s}\leq\lim_{s{\to}1^{-}}K_{1}(z)^{-s}|\Omega|^{1-s}=K_{1}(z)^{-1}.\) Since \(f(z)=1\), this implies that \(f\) is a maximizer of \(K_{1}(z)\).
Next, we prove \(\lim_{n\to\infty}\int_{\Omega}|f_{p_{n}}-f|^{p_{n}}=0.\)
For any \(\varepsilon{>}0\), there exists \(U\subset\subset\Omega\) such that \(\int_{U}|f|{>}K_{1}(z)^{-1}-\varepsilon\), which means \(\int_{\Omega-U}|f|{<}\varepsilon.\) Moreover, since \(f_{p_{n}}\) converges uniformly to \(f\) on any compact subset of \(\Omega\), for sufficiently large \(n\) we have \(\int_{U}|f_{p_{n}}-f|^{p_{n}}{<}\varepsilon.\) On the other hand, by \(|f_{p_{n}}-f|^{p_{n}}\leq|f_{p_{n}}|^{p_{n}}+|f|^{p_{n}}\), we can see that
\[\int_{\Omega-U}|f_{p_{n}}-f|^{p_{n}} \leq\int_{\Omega-U}(|f_{p_{n}}|^{p_{n}}+|f|^{p_{n}})\] \[\leq K_{p_{n}}(z)^{-1}-\int_{U}|f_{p_{n}}|^{p_{n}}+(\int_{\Omega -U}|f|)^{p_{n}}|\Omega|^{1-p_{n}}\] \[\leq K_{p_{n}}(z)^{-1}-(\int_{U}|f|^{p_{n}}-\varepsilon)+ \varepsilon^{p_{n}}|\Omega|^{1-p_{n}}.\]
Notice that \(\lim_{n\to\infty}\int_{U}|f|^{p_{n}}=\int_{U}|f|{>}K_{1}(z)^{-1}-\varepsilon\) and \(\lim_{n\to\infty}K_{p_{n}}(z)=K_{1}(z)\) ([1, Proposition 6.1(1)]). Therefore, we can conclude that \(\limsup_{n\to\infty}\int_{\Omega-U}|f_{p_{n}}-f|^{p_{n}}\leq 3\varepsilon.\) Since \(\varepsilon\) is arbitrary, it follows that \(\lim_{n\to\infty}\int_{\Omega}|f_{p_{n}}-f|^{p_{n}}=0.\)
Below, we prove the theorem by contradiction. Suppose there exist \(\delta{>}0\), a sequence \(\{p_{n}\}\) converging to \(1\), and maximizer pairs with \(\int_{\Omega}|f_{p_{n}}-g_{p_{n}}|^{p_{n}}=d(f_{p_{n}},g_{p_{n}}){>}\delta\). By taking subsequences twice, we may assume that \(f_{p_{n}}\) and \(g_{p_{n}}\) both converge to the maximizer \(m_{1}(\cdot,z)\) of \(K_{1}(z)\), as described above. However, this leads to \(\int_{\Omega}|f_{p_{n}}-g_{p_{n}}|^{p_{n}}\leq\int_{\Omega}|f_{p_{n}}-m_{1}( \cdot,z)|^{p_{n}}+\int_{\Omega}|g_{p_{n}}-m_{1}(\cdot,z)|^{p_{n}}\to 0\), which is a contradiction.
## 5. Characterization of \(L^{p}\)-integrability of a class of holomorphic functions on \(\mathbb{B}^{1}\)
Let \(\Omega=\mathbb{B}^{1}=\{z\in\mathbb{C}:|z|{<}1\}\). In this section, we give a characterization of the holomorphic functions \(f\in\mathcal{O}(\mathbb{B}^{1})\) that are \(L^{p}\)-integrable.
**Definition 5.1**.: A sequence \(\{\lambda_{k}\}_{k\in\mathbb{N}^{*}}\) is called lacunary with constant \(A{>}1\) if \(\lambda_{k+1}\geq A\lambda_{k}\) for all \(k\).
The main theorem of this section is the following.
**Theorem 5.1**.: Let \(p{>}0\). There exists \(C=C(p,A)\) such that if \(f\in\mathcal{O}(\mathbb{B}^{1})\) satisfies \(f(z)=\sum_{k=1}^{\infty}a_{\lambda_{k}}z^{\lambda_{k}}\) for some lacunary sequence \(\{\lambda_{k}\}\) with constant \(A{>}1\), then
\[C(p,A)^{-1}\int_{0}^{1}\Big(\sum_{k=1}^{\infty}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}}\Big)^ {\frac{p}{2}}dr\leq\int_{\mathbb{B}^{1}}|f|^{p}\leq C(p,A)\int_{0}^{1}\Big(\sum_{k =1}^{\infty}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}}\Big)^{\frac{p}{2}}dr.\]
We need the following lemma ([1, Theorem 3.6.4]).
**Lemma 5.2**.: Let \(T=[0,1]\) and let \(1\leq\lambda_{1}{<}\lambda_{2}{<}...\) be a lacunary sequence with constant \(A{>}1\). Set \(\Gamma=\{\lambda_{k}:k\in\mathbb{N}^{*}\}\). Then for all \(1\leq p{<}\infty\), there exists a constant \(C_{p}(A)\) such that for all \(f\in L^{1}(T)\) with \(\hat{f}(k)=0\) when \(k\in\mathbb{N}^{*}-\Gamma\), we have
\[||f||_{L^{p}(T)}\leq C_{p}(A)||f||_{L^{1}(T)}.\]
Moreover, the converse inequality is also valid; hence all \(L^{p}\) norms of a lacunary Fourier series are equivalent for \(1\leq p{<}\infty\).
Proof.: We write \(z\in\mathbb{B}^{1}\) as \(z=re^{2\pi it},0\leq r<1,t\in T\). For a given \(0\leq r<1\), \(f(z)=f(re^{2\pi it})=\sum_{k=1}^{\infty}a_{\lambda_{k}}r^{\lambda_{k}}e^{2\pi\lambda_{k}it}\). Since \(f\) is continuous with respect to \(t\in T\), it is \(L^{p}\)-integrable over \(T\) for all \(p>0\). From Lemma 5.2 above, we know that the \(L^{p}(T)\) norms of \(f|_{\{|z|=r\}}\) are equivalent for all \(p\geq 1\). For any \(q{<}1\), however, by Hölder's inequality we obtain
\[(\int_{T}|f|^{q})^{\frac{1}{2}}(\int_{T}|f|^{\alpha})^{\frac{1}{2}}\geq\int|f|\]
where \(\alpha=2-q>1\). Together with the equivalence of the \(L^{1}(T)\) and \(L^{\alpha}(T)\) norms, this shows that \(||f||_{L^{q}(T)}\) is also comparable to \(||f||_{L^{1}(T)}\); therefore all the \(L^{p}(T)\) norms of \(f|_{\{|z|=r\}}\), \(p>0\), are equivalent. This allows us to calculate the \(L^{p}\) norm of \(f\) using its \(L^{2}\) norm as follows:
\[\int_{\mathbb{B}^{1}}|f|^{p}\approx\int_{0}^{1}||f|_{\{|z|=r\}}||_{p}^{p}\,dr\approx\int_{0}^{ 1}||f|_{\{|z|=r\}}||_{2}^{p}\,dr=\int_{0}^{1}\Big(\sum_{k=1}^{\infty}|a_{\lambda_{k}}|^{2}r^{2\lambda_{k}}\Big)^{\frac {p}{2}}dr.\]
This completes the proof.
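As a quick illustration of Theorem 5.1, the following sketch numerically compares the two sides for a truncated lacunary series. The coefficients, the value of \(p\), and the grid resolutions are arbitrary choices, and the \(2\pi r\) Jacobian of the area element only shifts the comparability constant \(C(p,A)\).

```python
import numpy as np

p = 1.5
lam = 2 ** np.arange(1, 7)                     # lacunary exponents 2, 4, ..., 64 (A = 2)
a = np.array([1.0, -0.5, 0.3, 0.7, -0.2, 0.4])

r = np.linspace(1e-3, 1 - 1e-3, 1500)
dr = r[1] - r[0]
t = np.arange(1024) / 1024.0                   # angular variable in T = [0, 1)

# Left-hand side: area integral of |f|^p over the unit disk in polar coordinates.
z = r[:, None] * np.exp(2j * np.pi * t[None, :])
f = sum(ak * z**lk for ak, lk in zip(a, lam))
lhs = np.sum(2 * np.pi * r * np.mean(np.abs(f) ** p, axis=1)) * dr

# Right-hand side: int_0^1 (sum_k |a_k|^2 r^(2*lambda_k))^(p/2) dr.
s = (np.abs(a[None, :]) ** 2 * (r[:, None] ** (2 * lam)[None, :])).sum(axis=1)
rhs = np.sum(s ** (p / 2)) * dr

print(f"lhs/rhs = {lhs / rhs:.3f}")            # stays O(1) as p and the a_k vary
```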
Now we fix a lacunary sequence \(\{\lambda_{k}\}\), say \(\{2^{k}\}\), and consider the subspace of \(A^{p}(\mathbb{B}^{1})\): \(A^{p}_{c}(\mathbb{B}^{1}):=\{f\in A^{p}(\mathbb{B}^{1}):f(z)=\sum_{k=1}^{ \infty}a_{k}z^{2^{k}}\}\). We can prove the following.
**Theorem 5.3**.: \(A^{p}_{c}(\mathbb{B}^{1})\) is a closed subspace of \(A^{p}(\mathbb{B}^{1})\).
Proof.: Let \(\{f_{n}\}_{n=1}^{\infty}\) be any Cauchy sequence in \(A^{p}_{c}(\mathbb{B}^{1})\) with respect to the distance function of \(A^{p}(\mathbb{B}^{1})\), where \(f_{n}(z)=\sum_{k=1}^{\infty}a_{n,k}z^{2^{k}}\). From the above theorem, it is easy to see that for every \(k\), the sequence \(\{a_{n,k}\}\) converges to a complex number \(a_{k}\). We will now prove that \(f(z):=\sum_{k=1}^{\infty}a_{k}z^{2^{k}}\in A^{p}(\mathbb{B}^{1})\) and that it is the limit of the sequence \(\{f_{n}\}\) in \(A^{p}(\mathbb{B}^{1})\). From the above theorem, we have
\[\int|f|^{p} \approx\int_{0}^{1}(\sum_{k=1}^{\infty}|a_{k}|^{2}r^{2^{k+1}})^{ \frac{p}{2}}dr\] \[=\lim_{N\to\infty}\int(\sum_{k=1}^{N}|a_{k}|^{2}r^{2^{k+1}})^{ \frac{p}{2}}dr\] \[=\lim_{N\to\infty}[\lim_{n\to\infty}\int(\sum_{k=1}^{N}|a_{n,k}|^ {2}r^{2^{k+1}})^{\frac{p}{2}}dr]\] \[\leq\lim_{n\to\infty}\int(\sum_{k=1}^{\infty}|a_{n,k}|^{2}r^{2^{k +1}})^{\frac{p}{2}}dr\] \[=\lim_{n\to\infty}\int|f_{n}|^{p}.\]
Therefore, \(f\in A^{p}(\mathbb{B}^{1}).\) Next, we prove that in \(A^{p}(\mathbb{B}^{1})\), uniformly in \(n\),
\[\sum_{k=1}^{N}a_{n,k}z^{2^{k}}\to\sum_{k=1}^{\infty}a_{n,k}z^{2^{k}}=f_{n}\quad(N\to\infty).\]
It is sufficient to prove that for any \(\varepsilon>0\) there exists \(N=N(\varepsilon)\) such that \(\int_{0}^{1}(\sum_{k=N}^{\infty}|a_{n,k}|^{2}r^{2^{k+1}})^{\frac{p}{2}}dr{<}\varepsilon\) for all sufficiently large \(n\). In fact, since \(\{f_{n}\}\) is a Cauchy sequence, there exists \(M_{0}\) such that when \(n\geq M_{0}\),
\[\int_{0}^{1}(\sum_{k=1}^{\infty}|a_{n,k}-a_{M_{0},k}|^{2}r^{2^{k+1}})^{\frac{ p}{2}}dr{<}\varepsilon\]
Also note that there exists \(N_{0}\) such that
\[\int_{0}^{1}\Big(\sum_{k=N_{0}}^{\infty}|a_{M_{0},k}|^{2}r^{2^{k+1}}\Big)^{\frac{p}{ 2}}dr{<}\varepsilon.\]
Combining these two facts, we can conclude that when \(n\geq M_{0}\),
\[\int_{0}^{1}(\sum_{k=N_{0}}^{\infty}|a_{n,k}|^{2}r^{2^{k+1}})^{\frac{p}{2}}dr \leq C\varepsilon,\]
where \(C\) is a positive constant depending only on \(p\); this establishes the uniform convergence. Finally, we prove that \(f_{n}\to f\) in \(A^{p}(\mathbb{B}^{1})\). Since the finitely many terms with \(k{<}N\) tend to \(0\) as \(n\to\infty\) (because \(a_{n,k}\to a_{k}\)), it suffices to prove that for any \(\varepsilon>0\), there exists \(N\) such that for all sufficiently large \(n\),
\[\int_{0}^{1}(\sum_{k=N}^{\infty}|a_{n,k}-a_{k}|^{2}r^{2^{k+1}})^{\frac{p}{2}}dr\leq\varepsilon\]
We notice that
\[\int_{0}^{1}(\sum_{k=N}^{\infty}|a_{n,k}-a_{k}|^{2}r^{2^{k+1}})^{ \frac{p}{2}}dr\] \[\leq 2^{\frac{p}{2}}[\int_{0}^{1}(\sum_{k=N}^{\infty}|a_{n,k}|^{2}r ^{2^{k+1}})^{\frac{p}{2}}dr+\int_{0}^{1}(\sum_{k=N}^{\infty}|a_{k}|^{2}r^{2^{k+1 }})^{\frac{p}{2}}dr],\text{ if }p\leq 2\] \[\leq 2^{p-1}[\int_{0}^{1}(\sum_{k=N}^{\infty}|a_{n,k}|^{2}r^{2^{k+1 }})^{\frac{p}{2}}dr+\int_{0}^{1}(\sum_{k=N}^{\infty}|a_{k}|^{2}r^{2^{k+1}})^{ \frac{p}{2}}dr],\text{ if }p\geq 2.\]
This together with the uniform convergence yields the desired result.
_Remark 5.1_.: Theorem 5.1 can also be used to give a similar characterization of a class of holomorphic functions on the punctured disk to be \(L^{p}\)-integrable by considering the Laurent expansions.
|
2309.16547 | Controlling spin polarization of gapless states in defected trilayer
graphene with a gate voltage | Trilayer graphene exhibits valley-protected gapless states when the stacking
order changes from ABC to CBA and a gate voltage is applied to outer layers.
Some of these states survive strong distortions of the trilayer. For example,
they persist when the outer layers are partially devoid yielding a system of
two trilayers of different stacking order connected by a strip of a single
graphene layer. Here we investigate how these states respond to another
perturbation, i.e., the presence of magnetic defects, which we model as
pi-vacancies. We show that the gap states hybridize with the defect states and
strongly spin-split. More importantly, it is demonstrated that by changing the
gate voltage value one can change the spin density of the gap states and the
corresponding currents at the Fermi level. | Wlodzimierz Jaskolski | 2023-09-28T15:58:58Z | http://arxiv.org/abs/2309.16547v1 | # Controlling spin polarization of gapless states in defected trilayer graphene
###### Abstract
Trilayer graphene exhibits valley-protected gapless states when the stacking order changes from ABC to CBA and a gate voltage is applied to outer layers. Some of these states survive strong distortions of the trilayer. For example, they persist when the outer layers are partially devoid yielding a system of two trilayers of different stacking order connected by a strip of a single graphene layer. Here we investigate how these states respond to another perturbation, i.e., the presence of magnetic defects, which we model as \(\pi\)-vacancies. We show that the gap states hybridize with the defect states and strongly spin-split. More importantly, it is demonstrated that by changing the gate voltage value one can change the spin density of the gap states and the corresponding currents at the Fermi level.
trilayer graphene; topological states; defects in graphene
## I Introduction
Multilayer graphene is still attracting attention due to the strongly correlated states and superconductivity reported both in systems with twisted layers [1; 2; 3; 4; 5; 6] and, more recently, in non-twisted Bernal stacked bilayer and rhombohedral trilayer graphene under special conditions [7; 8; 9]. Multilayers also attract interest for electronic applications due to the opening of a tunable energy gap when the systems are gated [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22].
Another interesting property of gated Bernal stacked bilayer or rhombohedral trilayer graphene is the appearance of valley-protected gap states of topological character when the stacking order changes from AB to BA in the bilayer or from ABC to CBA in the trilayer [23; 24; 25; 26; 27]. The stacking order change usually occurs when one of the layers is stretched, corrugated, or delaminated [28; 29; 30; 31]. The gapless states are important since they provide one-dimensional conducting channels at the Fermi level (\(E_{F}\)) along the stacking domain walls. An important feature of these states is their robustness against structural deformations of multilayers. They largely survive in the presence of atomic-scale defects [32; 33], which introduce defect states into the energy gap and thus may disrupt topological states. Some of them persist even when the multilayer is partially stripped of one or two layers [34; 35; 36].
In this work, we consider strongly defected trilayer graphene, i.e., devoid of the outer layers in the region of the stacking domain wall. This system was recently studied in Ref. [36], but here we add another perturbation, i.e., \(\pi\)-vacancy defects. Since vacancies in graphene lead to the appearance of localized states and magnetic moments, we use them here as simple models of magnetic defects [33; 37; 38; 39; 40]. Our aim is to investigate how such defects influence the gapless states, in particular how they remove the spin degeneracy of these states, which may be important for applications in spintronic devices. We find that the spin polarization and spin density of the gap states and the corresponding one-dimensional currents at the Fermi level depend strongly on the value of the gate voltage applied to the outer layers of the trilayer.
## II System description and method of calculation
The system under investigation is schematically shown in Fig. 1. It consists of two graphene trilayers connected by a single layer strip. The stacking order of the trilayers on the left and right sides is ABC and CBA, respectively. Therefore, the system can be also seen as a trilayer graphene with ABC/CBA stacking domain wall and the outer layers devoid in the region of the domain wall. It is worth noticing that because both outer layers are torn and pulled apart, the stacking domain wall area extends into the central region, i.e., into the single-layer strip.
The system is infinite in both the \(x\) (armchair) and the \(y\) (zigzag) directions, but is fully periodic only in the zigzag
Figure 1: Schematic representation of the investigated system. The left and right trilayers have ABC and CBA arrangement of layers, respectively. They are connected by a strip of single graphene layer, i.e., the middle layer of trilayers. The system extends to infinity in the \(x\) (armchair) and \(y\) (zigzag) directions but is fully periodic only in the \(y\) direction. A single vacancy representing magnetic impurity, marked as a red dot and arrow, is located periodically along the \(y\) direction in the region of the single graphene layer.
(\(y\)) direction. The width of the system unit cell in the periodic (\(y\)) direction is \(W_{y}=4\), measured as the number of graphene unit cells along this direction. The width of the central region along the \(x\) (armchair) direction, i.e., the width of the single graphene strip connecting the two trilayers, is taken as \(W_{C}=4\), measured in the same units. Each unit cell of the system contains a single vacancy (as shown in Fig. 1), which in bipartite systems introduces a magnetic moment and can thus model a magnetic defect [40].
It is important to note that although we study a model of uniform distortion of the trilayer graphene (i.e., the single-layer strip has a constant width and the vacancies are periodically distributed), the robustness of the gapless states to different perturbations allows us to assume that the obtained results and conclusions also apply to less uniformly perturbed systems.
We use in the calculations a one-orbital \(\pi\)-electron tight-binding (TB) approximation. This approach has proven to properly model the electronic properties of graphene systems around the Fermi energy. The electron-electron interaction is taken into account by including a Hubbard term, which is adequate for the description of spin and magnetic effects in graphene within the TB model [40]. The Hubbard Hamiltonian in the mean-field approximation is
\[H=t_{i/e}\sum_{\langle i,j\rangle,\sigma}c_{i\sigma}^{\dagger}c_{j\sigma}+H.c.+U\sum_{i}(n_{i\uparrow}\langle n_{i\downarrow}\rangle+\langle n_{i\uparrow} \rangle n_{i\downarrow}),\]
where \(c_{i\sigma}^{\dagger}\) (\(c_{i\sigma}\)) are the creation (annihilation) operators for electrons with spin \(\sigma\) at site \(i\); the index \(i\) runs over all the nodes in the unit cell; the summation \(\langle i,j\rangle\) is restricted to nearest neighbors; the arrows indicate spin-up and spin-down \(\sigma\) states; and \(\langle n_{i\sigma}\rangle=\langle c_{i\sigma}^{\dagger}c_{i\sigma}\rangle\) is the spin-resolved density at site \(i\). The first term in \(H\) is the TB Hamiltonian, while the last one represents the on-site Coulomb repulsion. Intra-layer and inter-layer hopping parameters \(t_{i}=2.7\) eV and \(t_{e}=0.27\) eV are used, respectively [10; 11], and the on-site Coulomb repulsion parameter \(U\) is set to 2.8 eV [33; 41; 42].
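As an illustration of this mean-field scheme, the following minimal sketch runs the self-consistency loop on a toy bipartite chain rather than the actual trilayer geometry (a vacancy would simply remove one site from \(H_{0}\)). The chain length, filling, mixing factor, and tolerance are illustrative choices, while the values of \(t_{i}\) and \(U\) follow the text.

```python
import numpy as np

L, t_i, U = 12, 2.7, 2.8                     # sites, hopping (eV), Hubbard U (eV)
H0 = np.zeros((L, L))
for i in range(L - 1):                       # nearest-neighbour hopping
    H0[i, i + 1] = H0[i + 1, i] = -t_i

rng = np.random.default_rng(0)
n_up, n_dn = rng.random(L), rng.random(L)    # initial spin-resolved densities
N_up = N_dn = L // 2                         # half filling

for _ in range(500):
    # Spin-sigma electrons feel the mean field U * <n_{-sigma}> on each site.
    _, v_up = np.linalg.eigh(H0 + U * np.diag(n_dn))
    _, v_dn = np.linalg.eigh(H0 + U * np.diag(n_up))
    new_up = (np.abs(v_up[:, :N_up]) ** 2).sum(axis=1)   # fill lowest states
    new_dn = (np.abs(v_dn[:, :N_dn]) ** 2).sum(axis=1)
    if max(np.abs(new_up - n_up).max(), np.abs(new_dn - n_dn).max()) < 1e-9:
        break
    n_up = 0.5 * new_up + 0.5 * n_up         # linear mixing for stability
    n_dn = 0.5 * new_dn + 0.5 * n_dn

print("local moments <n_up> - <n_dn>:", np.round(n_up - n_dn, 3))
```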
To calculate the local density of states (LDOS) we use the Green function matching technique [43]. The Hamiltonians \(H_{C}\), \(H_{L}\) and \(H_{R}\) of the central region (i.e., the single-layer square [\(W_{C}\times W_{y}\)] shown in Fig. 1) and of the left and right trilayers are calculated self-consistently, since the densities \(\langle n_{i\sigma}\rangle\) depend on the eigenvalues of the Hamiltonians. Knowing the \(H_{L/R/C}\) Hamiltonians, the transfer matrix technique [44] is employed to find the Green function \(G_{C}\) of the central region, and the corresponding LDOS is calculated as LDOS \(=-\left(\frac{1}{\pi}\right)\mathrm{Im}\,\mathrm{Tr}\,G_{C}\) [45]. Since the system is periodic in the \(y\) (zigzag) direction, the LDOS is \(k\)-dependent, where \(k\) is the wave vector corresponding to this periodicity. Therefore, the entire procedure for finding \(H_{L/R/C}\), \(G_{C}\) and the LDOS has to be performed for each \(k\) value in the Brillouin zone, i.e., from \(k=0\) to \(k=\pi/a\), where \(a=W_{y}\).
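The essence of this procedure can be sketched for a single one-dimensional tight-binding channel at fixed \(k\); the real calculation uses the full self-consistent \(H_{L/R/C}\) blocks and repeats over the Brillouin zone. The broadening \(\eta\), the chain parameters, and the energy grid are illustrative choices.

```python
import numpy as np

t, eta = -2.7, 0.05                          # hopping (eV), small broadening (eV)

def surface_g(E, eps=0.0):
    """Surface Green function of a semi-infinite chain by fixed-point iteration."""
    g = 0.0 + 0.0j
    for _ in range(5000):
        g_new = 1.0 / (E + 1j * eta - eps - t * g * t)
        if abs(g_new - g) < 1e-12:
            break
        g = g_new
    return g

Nc = 4                                       # sites of the central region
Hc = np.diag(np.full(Nc - 1, t), 1)
Hc = Hc + Hc.T

for E in np.linspace(-1.0, 1.0, 5):
    gs = surface_g(E)
    sigma = np.zeros((Nc, Nc), complex)
    sigma[0, 0] = t * gs * t                 # left-lead self-energy
    sigma[-1, -1] = t * gs * t               # right-lead self-energy
    Gc = np.linalg.inv((E + 1j * eta) * np.eye(Nc) - Hc - sigma)
    print(f"E = {E:+.2f} eV, LDOS = {-np.trace(Gc).imag / np.pi:.4f}")  # -(1/pi) Im Tr G_C
```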
## III Results and discussion
We consider two values of the gate voltage \(\pm V\) applied to the outer layers, namely \(V=0.1\) eV and \(V=0.4\) eV. As shown in Ref. [36], values of \(V\) larger or smaller than \(t_{e}\) lead to a different number and behavior of the gap states in trilayer graphene partially devoid of the outer layers. This is visualized in Figs. 2 (a) and (b), where the results for the case without vacancies are presented. The LDOS is calculated in the central part of the system and only the LDOS close to the energy cone, i.e., close to \(k=\frac{2}{3}\pi\) and the Fermi energy (\(E=0\)), is visualized. Although the LDOS is calculated in the region of the single graphene layer, one can clearly identify the gap states characteristic of multilayer graphene with a stacking order change. The LDOS also shows some traces of the electronic structure of the neighboring gated trilayers, i.e., the band continua and the energy gap.
For \(V=0.1\) eV, two states with similar and monotonic behavior of \(E(k)\) are present in the energy gap. As shown in Ref. [36], there are in fact three gap states, since the right one is doubly degenerate in energy. This right, degenerate pair of gap states couples to a pair of degenerate zigzag edge states localized in the lower half-layers (blue in Fig. 1) of the left and right trilayers [46]. For \(V=0.4\) eV, one of the gap states changes the slope of \(E(k)\) twice, but as explained in Ref. [36] the
Figure 2: LDOS visualized close to the energy cone, i.e., near the Fermi level (\(E=0\)) and for \(k\) around \(\frac{2}{3}\pi\). (a) and (c) \(V=0.1\) eV, (b) and (d) \(V=0.4\) eV. Upper panels: LDOS calculated for the case without vacancies. Lower panels: LDOS calculated for system with vacancies, but without Coulomb repulsion, i.e., setting \(U=0\). Pink solid line marks the position of the defect states for the case of gated trilayer without stacking order change.
rightmost part of this state overlaps with the third gap state.
We now analyze the influence of the vacancy defects. When the Coulomb interaction is not included, i.e., when we set \(U=0\) in the Hubbard Hamiltonian, the vacancies introduce a defect state at \(E=0\) (no gate is applied to the middle layer), which strongly interacts and hybridizes with the gap states. This is visualized in Figs. 2 (c) and (d) for \(V=0.1\) eV and \(V=0.4\) eV, respectively.
All these states are spin-degenerate, so when the Coulomb interaction is switched on they strongly spin-split. Figs. 3 (a) and (b) show the spin-down and spin-up gap states, respectively, calculated for the case of \(V=0.1\) eV. Two gap states connecting the valence and conduction band areas are clearly visible for both spin polarizations. The spin-splitting of the left gap state is larger than that of the right one because, as demonstrated in Ref. [36], this state is localized mainly in the single-layer region and is therefore more affected by the vacancies, which are also located in this layer.
The spin-down and spin-up states for the case of \(V=0.4\) eV are shown in panels (c) and (d) of Fig. 3, respectively. Of the two spin-down states, the right one follows the behavior of the right state of the vacancy-free case, while both spin-up states show a monotonic dependence of their energies on the wave vector, \(E(k)\), almost in the entire energy gap. The picture of the spin-splitting of the gap states is more complex than in the \(V=0.1\) eV case: the right spin-down state changes the slope of \(E(k)\) twice and thus crosses the Fermi level three times. This means that the density of the occupied spin-down gap states at \(E_{F}\) is much higher than the density of the spin-up states. This is visualized in Fig. 4 (b), where the spin-down and spin-up LDOS at the Fermi level are presented. For comparison, the LDOS at \(E_{F}\) for the \(V=0.1\) eV case is shown in panel (a) of this figure. In this case, the spin-down and spin-up densities are almost the same.
The gap states at \(E_{F}\) can carry one-dimensional, spin-polarized currents along the \(y\) direction when the system is additionally biased in this direction. The presented results show that by changing the value of the gate voltage one can change the density of the spin-polarized gap states and the corresponding currents at the Fermi level. This is the main message of this work: a change of the gate voltage from \(0.1\) eV to \(0.4\) eV can serve as a switch from a spin-unpolarized current to a polarized one.
The behavior of the gap states away from the cone is governed by the defect state, which for those values of \(k\) strongly splits into spin-down and spin-up states with energies below and above the cone, respectively. Since most of the vacancy-hybridized spin-up band lies above the Fermi level and is unoccupied, the magnetic moment (estimated from Fig. 4) of the central region is about \(0.9\)\(\mu_{B}\) and \(0.6\)\(\mu_{B}\) for \(V=0.1\) eV and \(V=0.4\) eV, respectively.
A comment is required about the barely visible gap state that appears at the right side of the energy cone in all panels of Fig. 3. This is the above-mentioned third gap state of the right degenerate pair of the vacancy-free case. This state is localized almost exclusively in the lower layers and on the sublattice defined by the zigzag edge nodes of the lower left half-layer. This sublattice does not couple to the vacancy-defined sublattice of the middle layer (see Ref. [46]). For this reason its LDOS in the middle layer is very small; it does not hybridize with the vacancy state and almost does not spin-split.
## IV Conclusions
We have studied the electronic structure of defected gated trilayer graphene with the stacking order of the layers changing from ABC to CBA. The defect comes down to the partial removal of the outer layers in the region of the stacking domain wall and the inclusion of vacancies,
Figure 4: Spin-resolved LDOS at the Fermi level. (a) \(V=0.1\) eV, (b) \(V=0.4\) eV. Spin-down and spin-up LDOS are marked in red and blue, respectively.
Figure 3: Spin-resolved LDOS calculated for the case with vacancies present in the central region of the system. (a) and (b) \(V=0.1\) eV, (c) and (d) \(V=0.4\) eV. (a) and (c) spin down, (b) and (d) spin up. The Fermi level is marked by a dashed line.
which mimic the presence of magnetic defects. We have investigated the role of vacancies in the spin-splitting of the gapless states. In particular, we have checked how this splitting, and thus the spin-resolved density of the gapless states at the Fermi level, depends on the value of the voltage applied to the outer layers.
The calculations have been performed within the tight binding approximation and the Hubbard model. The surface Green function matching technique has been used to calculate the local density of states in the defected region.
We have shown that the gapless states present in the trilayer system due to the stacking order change are strongly affected by the vacancy defects. The interaction of the vacancy state with the gapless states and their spin-splitting depend strongly on the value of the gate voltage. When the applied voltage is lower than the interlayer hopping energy \(t_{e}\), the pair of resulting spin-down and spin-up gap states has a similar and uniform slope of \(E(k)\), yielding zero net spin density at the Fermi level. In contrast, when the gate voltage is higher than \(t_{e}\), one of the spin-down states has a more complex curvature of \(E(k)\) than its spin-up counterpart. As a result, one spin density of the gap states dominates at the Fermi level. Therefore, the one-dimensional currents corresponding to the gap states are also spin-polarized, an effect of potential application in spintronics based on multilayer graphene systems.
|
2309.11228 | Towards Robust Few-shot Point Cloud Semantic Segmentation | Few-shot point cloud semantic segmentation aims to train a model to quickly
adapt to new unseen classes with only a handful of support set samples.
However, the noise-free assumption in the support set can be easily violated in
many practical real-world settings. In this paper, we focus on improving the
robustness of few-shot point cloud segmentation under the detrimental influence
of noisy support sets during testing time. To this end, we first propose a
Component-level Clean Noise Separation (CCNS) representation learning to learn
discriminative feature representations that separates the clean samples of the
target classes from the noisy samples. Leveraging the well separated clean and
noisy support samples from our CCNS, we further propose a Multi-scale
Degree-based Noise Suppression (MDNS) scheme to remove the noisy shots from the
support set. We conduct extensive experiments on various noise settings on two
benchmark datasets. Our results show that the combination of CCNS and MDNS
significantly improves the performance. Our code is available at
https://github.com/Pixie8888/R3DFSSeg. | Yating Xu, Na Zhao, Gim Hee Lee | 2023-09-20T11:40:10Z | http://arxiv.org/abs/2309.11228v1 | # Towards Robust Few-shot Point Cloud Semantic Segmentation
###### Abstract
Few-shot point cloud semantic segmentation aims to train a model to quickly adapt to new unseen classes with only a handful of support set samples. However, the noise-free assumption in the support set can be easily violated in many practical real-world settings. In this paper, we focus on improving the robustness of few-shot point cloud segmentation under the detrimental influence of noisy support sets during testing time. To this end, we first propose a Component-level Clean Noise Separation (CCNS) representation learning to learn discriminative feature representations that separate the clean samples of the target classes from the noisy samples. Leveraging the well-separated clean and noisy support samples from our CCNS, we further propose a Multi-scale Degree-based Noise Suppression (MDNS) scheme to remove the noisy shots from the support set. We conduct extensive experiments on various noise settings on two benchmark datasets. Our results show that the combination of CCNS and MDNS significantly improves the performance. Our code is available at [https://github.com/Pixie8888/R3DFSSeg](https://github.com/Pixie8888/R3DFSSeg).
## 1 Introduction
Few-shot point cloud semantic segmentation (3DFSSeg) [2, 3] is a pragmatic direction, as it is able to segment novel classes during the testing stage with only a few labeled samples. In contrast to fully-supervised methods [2, 3, 4], which only work for a closed set of classes, 3DFSSeg has better generalization ability. However, it assumes that the learning samples of the novel classes are correctly labeled during online testing.
Unfortunately, the assumption of completely clean data can be violated in practice for a variety of reasons. First, human labeling is error-prone. The irregular data structure, low resolution, and subtle inter-class geometric differences make it hard for human annotators to correctly recognize objects [2]. Crowdsourced labeling further strains the annotation quality [2]. As a consequence, ScanNet [3] still contains annotation mistakes [2] after manual refinement over an extended period of time. Second, the industry is actively seeking cheaper and more efficient annotation systems to replace human labeling, _e.g._ semi-automatic labeling [2], [3] and fully automatic annotation [3, 4, 5]. This further challenges the curation of high-quality data.
As shown in Fig. 1, we can refine the noisy annotations of the static base class dataset offline, either by manual checking or by a data-driven algorithm [], given enough time and budget. However, it is impossible to invest the same amount of human supervision to guarantee noise-free labels in every support set after the model is deployed, because the number of new classes in the real world is _infinite_ [, ]. Neither can we use a data-driven algorithm [] to automatically clean the noise, due to severe overfitting to the small number of training samples per new class (_cf._ Tab. 1).
To this end, we tackle noisy labels in the testing stage of 3DFSSeg, which is challenging but of high practical value. In 3DFSSeg, a few support point clouds are provided as learning samples for each new class during meta-testing. Each support sample (_i.e._ shot) is provided with a binary mask indicating the presence of the corresponding class. Based on the given support set, the model segments the new class in any unlabeled (_i.e._ query) point clouds. As pointed out by [], instance-level noise is the most common annotation error: objects of other classes are wrongly annotated as the target class and collected into the support set. We define shots with an incorrectly labeled foreground object as noisy shots. Thus, the goal of robust few-shot point cloud semantic segmentation (R3DFSSeg) is to learn a robust few-shot segmentor that is less influenced by the noisy shots.
In this paper, we first propose a Component-level Clean Noise Separation (CCNS) representation learning to learn robust representations that discriminate between the features of clean and noisy points. Inspired by [], we adopt the meta-learning paradigm for few-shot point cloud segmentation. During meta-training, we randomly inject noise into the support set by sampling point clouds containing foreground objects from other classes to mimic the noisy meta-testing environments. We introduce a class-wise supervised contrastive learning on the noisy support set to separate the clean samples of the target classes from the noisy samples. To obtain more fine-grained and diverse contrastive features, we further propose the use of farthest point sampling to decompose the masked points in the feature space into multiple components. Intuitively, our CCNS is designed to encourage features from different classes to be well separated, such that the clean shots in the support set form the largest cluster in the feature space when learning converges.
We further propose a Multi-scale Degree-based Noise Suppression (MDNS) scheme to remove the noisy shots from the support set during testing stage. Our MDNS separates clean from noisy samples by checking the degree of each sample in a fully connected pair-wise similarity graph. Clean samples tend to form well-defined clusters with higher degrees in the pair-wise similarity graph. In contrast, noisy samples are relatively scattered with lower degrees of connectivity in the feature space.
Our **main contributions** can be summarized as follows: **1)** To the best of our knowledge, we are the first to study the problem of robust few-shot point cloud semantic segmentation,
Figure 1: Comparison between noisy base and novel class dataset of 3DFSSeg. (a) Base class dataset is static with finite samples. (b) Novel class dataset is non-stationary as new classes are continuously collected in the online testing stage. An example where a sofa and a curtain are wrongly annotated in support set 1 and 2, respectively.
which is important in real-world applications since noisy labels are inevitable in practice. **2)** We propose a component-level clean noise separation method for representation learning to enhance the class-level discrimination in the embedding space. **3)** We propose a multi-scale degree-based noise suppression scheme that is able to effectively remove noisy samples from the small support set for each new class during testing. **4)** We conduct extensive experiments on two benchmark datasets (_i.e._ S3DIS and ScanNet) with various noise settings and show superior results over the baselines.
## 2 Related Work
Few-shot Learning.Few-shot learning aims to transfer knowledge learned from the abundant samples of the seen classes to a set of unseen classes with only a few labeled samples. One of the dominant approaches is the metric-based methods [11, 53], which meta-learn a transferable feature embedding that coincides with a fixed metric. The pioneering work ProtoNet [10] predicts query labels by finding the nearest class prototype under the Euclidean distance. The key to the metric-based methods is a discriminative feature embedding with compact class clusters [1, 2, 12, 13]. Ye _et al_. [11] apply a contrastive objective to align the training instances close to their own class centers after the embedding adaptation. Although we also use contrastive learning in the episodic training, we adopt a fine-grained contrastive objective (_i.e._ feature components) to better capture the diverse intra-class distribution of point clouds.
Few-shot Semantic Segmentation.Few-shot semantic segmentation segments semantic objects in an image [11, 53, 61] or a point cloud [1, 2, 3, 11] with only a few annotated samples. The 2D image methods can be categorized into relation-based methods [11, 53, 61, 60] and prototype-based methods [11, 53, 60]. Zhao _et al_. [11] propose the first work on 3D few-shot point cloud semantic segmentation. They generate multi-prototypes via farthest point sampling to better capture the complex data distribution of the point cloud. Transductive inference is then conducted between the multi-prototypes and the query points to infer the label of each query point. However, all these works assume that the annotations in the given support set are accurate during testing. In practice, this is a very strong assumption, given that pixel-level and point-level annotation is extremely tedious and error-prone. In view of this limitation, this paper studies the problem of robust few-shot point cloud semantic segmentation and proposes an effective model that can better adapt to real-world applications.
Learning with Noisy Labels.Learning with noisy labels is gaining increasing attention, as deep neural networks are shown to be extremely vulnerable to noisy labels [1, 2, 11]. There are three major approaches: label correction, which uses the prediction of the model as the new label [11, 12, 13]; sample selection, which uses the small-loss criterion to selectively update the model [1, 11, 12]; and learning robust representations []. In robust few-shot classification, Tra-NFS [] adopts an attention module inside the Transformer [11] to weigh down the noisy shots. Compared to 2D classification, 3D point cloud segmentation is more challenging, as it requires per-point classification and point clouds have much larger intra-class variance. Thus, the 2D methods, which only generate one robust prototype per class, fail on R3DFSSeg.
## 3 Our Method
Problem Formulation.Few-shot point cloud segmentation involves two datasets, \(\mathcal{T}_{base}\) and \(\mathcal{T}_{novel}\), sampled from the disjoint class sets \(\mathcal{C}_{base}\) and \(\mathcal{C}_{novel}\), respectively. The goal is to learn a model on \(\mathcal{C}_{base}\) that generalizes to \(\mathcal{C}_{novel}\). Following previous work [11], we adopt episodic training on \(\mathcal{C}_{base}\) to emulate the few-shot setting during testing. In each \(N\)-way \(K\)-shot episode, \(N\) is the number of classes to be learned and \(K\) is the number of labeled samples per class. The labeled samples are termed the support set: \(S=\left\{\left(P_{k}^{1},M_{k}^{1}\right)_{k=1}^{K},\ldots,\left(P_{k}^{N},M_{ k}^{N}\right)_{k=1}^{K}\right\}\). Each point cloud \(P_{k}^{n}\in\mathbb{R}^{m\times f_{0}}\) contains \(m\) points with an input feature dimension of \(f_{0}\), and \(M_{k}^{n}\in\mathbb{R}^{m\times 1}\) is the corresponding binary mask indicating the presence of class \(n\).
We are also given a set of \(T\) unlabeled point clouds, termed the query set: \(Q=\left\{\left(R_{i},L_{i}\right)\right\}_{i=1}^{T}\). Each query point cloud \(R_{i}\in\mathbb{R}^{m\times f_{0}}\) is associated with the ground truth label \(L_{i}\in\mathbb{R}^{m\times 1}\), available only in the training stage. During testing, \(M_{k}^{n}\) can wrongly assign an object of another class to class \(n\) due to instance-level labeling errors [11]. We denote the noisy mask \(\tilde{M}_{k}^{n}\) and the corresponding point cloud \(\tilde{P}_{k}^{n}\) as a noisy sample, and its correct class assignment as \(Y_{k}\). Consequently, the support set \(S\) becomes a mixture of clean and noisy shots. The goal of robust few-shot point cloud semantic segmentation is to correctly predict the query labels by learning from the noisy support set \(S\).
Framework Overview.Fig. 2 illustrates our proposed framework. We choose AttMPTI [11] as our few-shot segmentor since it achieves state-of-the-art performance in the few-shot point cloud segmentation. In addition, AttMPTI is potentially robust to the noise when a good feature embedding is guaranteed (Sec. 3.1). In view of this, we propose the Component-level Clean Noise Separation (CCNS) representation learning during meta-training to enhance the discrimination and generalization of the feature embedding for AttMPTI (Sec. 3.2). We further propose the multi-scale degree-based noise suppression (MDNS) to remove the noisy shots during meta-testing based on their similarity graph (Sec. 3.3).
Figure 2: **The architecture of our framework**. ‘S’ represents the support point cloud and ‘Q’ represents the query point cloud. The left figure shows the pipeline during meta-training, where we conduct component-level clean noise separation representation learning for each episode class. Components of different classes are pushed away from each other. The right figure shows the pipeline during meta-testing, where we perform multi-scale degree-based noise suppression to remove the noisy shots.
### Why Choose AttMPTI?
AttMPTI [] is the state-of-the-art few-shot point cloud segmentation method. It consists of a feature extractor that embeds the support and query point clouds into the same metric space, a multi-prototype generation module that generates prototypes from the support set, and a label propagation module that infers the query labels. Compared to ProtoNet [], AttMPTI has several unique components that give it the potential to be robust, in addition to its superior performance. **First**, AttMPTI generates multi-prototypes via FPS [], while ProtoNet uses the mean aggregation of all the relevant class features. The seed points sampled via FPS are able to represent the diversity of the feature space, and a local prototype is generated by clustering each point to its nearest seed point based on the Euclidean distance in the feature space. In this way, the multi-prototypes can inherently separate the clean and noisy points at the prototype level. As shown in Fig. 3, the clean ratio of a local prototype is either 1 (100% clean) or 0 (100% noise), and a half-clean prototype is seldom produced. In comparison, the global prototype used in ProtoNet leads to a clean-noise compound.
**Second**, AttMPTI infers query labels via label propagation [] in a transductive fashion, while ProtoNet infers each query point independently from the set of class prototypes. Label propagation is based on manifold smoothness, _i.e._, nearby samples in the feature space share the same label, and it has the ability to correct noisy labels []. In contrast, ProtoNet independently predicts the label of each query point based on global prototypes that are potentially noisy. The lack of reasoning about the relationships among the support and query samples prevents the model from correcting the support noise. Although the design of AttMPTI shows better potential than ProtoNet in resisting the noise existing in the support set, the performance of both the multi-prototype generation and the label propagation is subject to how discriminative the feature embedding is. To enhance the representation learning, we propose to perform component-level clean-noise separation.
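For reference, the closed-form label propagation on a similarity graph can be sketched as below; the Gaussian affinity, \(\alpha\), \(\sigma\), and the tensor shapes are illustrative assumptions and not necessarily AttMPTI's exact graph construction.

```python
import torch

def label_propagation(feats, y_onehot, alpha=0.99, sigma=1.0):
    """feats: [n, d] prototype and query features; y_onehot: [n, c] with zero
    rows for unlabelled query points.  Returns propagated soft labels [n, c]."""
    W = torch.exp(-torch.cdist(feats, feats) ** 2 / (2 * sigma ** 2))
    W.fill_diagonal_(0.0)
    d_inv_sqrt = W.sum(1).clamp(min=1e-12).rsqrt()
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]   # D^{-1/2} W D^{-1/2}
    n = feats.shape[0]
    # Closed form of the propagation fixed point: F* = (I - alpha * S)^{-1} Y.
    return torch.linalg.solve(torch.eye(n) - alpha * S, y_onehot)
```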
### Component-level Clean Noise Separation
Our component-level clean noise separation (CCNS) representation learning aims to enhance the class-wise discrimination in the feature space. We randomly replace some of the \(K\) support shots with shots sampled from other classes during episodic training and induce the model to differentiate clean and noisy shots in the feature space. With these synthesized support sets with noisy labels, we perform clean-noise separation representation learning for each way (_i.e._ class) by optimizing the model with the class-wise contrastive loss among the \(K\) support shots as follows:
\[\mathcal{L}_{\text{CNS}}=\frac{1}{K}\sum_{k=1}^{K}\left(\frac{-1}{|A(z_{k})|} \sum_{z_{g}\in A(z_{k})}\log\frac{\exp\left(z_{k}\cdot z_{g}/\tau\right)}{ \sum_{h\neq k}\exp\left(z_{k}\cdot z_{h}/\tau\right)}\right), \tag{1}\]
Figure 3: Comparison of prototype cleanness from different methods on a 5-shot with 40% out-episode noise setting. ‘1’ means the prototype only containing clean-labeled points, and ‘0’ means the prototype only containing points that are incorrectly labeled as the target class. Values in between 0-1 represent the portion of clean-labeled points in the prototype.
where \(z_{k}\in\mathbb{R}^{d}\) is the L2 normalized average foreground feature of the support point cloud \(P_{k}\) in the projection space. \(A(z_{k})=\left\{z_{g}\mid Y_{g}=Y_{k}\right\}\) is the set of positive samples \(z_{g}\) with its semantic label \(Y_{g}\) the same as the semantic label \(Y_{k}\) of \(z_{k}\). \(|A(z_{k})|\) is the cardinality and \(\tau\) is the temperature. By training with \(\mathcal{L}_{\text{CNS}}\), the shots with same foreground class are encouraged to stay together while staying away from samples of other classes.
Unfortunately, a simple mean aggregation of the foreground area tends to be sub-optimal in representing the class distribution, since the point features of each class are widely spread in the feature space, as shown in Fig. 4. To this end, we conduct the class-wise contrastive learning in a more fine-grained way by dividing the features in each foreground area into local components. The feature components aggregate local patterns that exhibit similar fine-grained semantics and cover the feature space better than the naive mean aggregation. Specifically, we first perform FPS in the feature space and then locally aggregate the point features into a set of feature components \(\left\{z_{k}^{1},\cdots,z_{k}^{R}\right\}\), which replace the original holistic \(z_{k}\). Consequently, the component-level clean noise separation loss \(\mathcal{L}_{\text{CCNS}}\) is formulated as:
\[\mathcal{L}_{\text{CCNS}}=\frac{1}{KR}\sum_{k=1}^{K}\sum_{i=1}^{R}\left(\frac{ -1}{|A(z_{k}^{i})|}\sum_{z_{g}^{j}\in A(z_{k}^{i})}\log\frac{\exp\left(z_{k}^{ i}\cdot z_{g}^{j}/\tau\right)}{\sum_{(h,b)\neq(k,i)}\exp\left(z_{k}^{i} \cdot z_{h}^{b}/\tau\right)}\right), \tag{2}\]
where \(A(z_{k}^{i})=\left\{z_{g}^{j}\mid Y_{g}=Y_{k}\right\}\) is the set of positive samples with the same semantic label as \(Y_{k}\), and \(|A(z_{k}^{i})|\) is its cardinality. As shown in Fig. 4, each component represents a different aspect of its corresponding shot in the feature space. Essentially, this forms a multi-view contrastive learning for each shot, where each 'view' is a local component in the feature space. Correspondingly, the components at the border of a class distribution automatically serve as hard negative samples for other classes and hard positive samples for their own class, which are the key to successful contrastive learning [10, 12].
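A minimal sketch of this component-level loss in PyTorch is given below. The FPS routine, the component aggregation, and the toy sizes (\(R\), \(\tau\), feature dimension) are illustrative assumptions rather than the exact implementation of the paper.

```python
import torch
import torch.nn.functional as F

def fps(x, R):
    """Farthest point sampling in feature space; returns R seed indices."""
    idx = [0]
    d = torch.cdist(x[idx[-1]].unsqueeze(0), x).squeeze(0)
    for _ in range(R - 1):
        idx.append(int(d.argmax()))
        d = torch.minimum(d, torch.cdist(x[idx[-1]].unsqueeze(0), x).squeeze(0))
    return torch.tensor(idx)

def ccns_loss(feats, labels, R=4, tau=0.1):
    """Eq. (2): feats[k] holds the projected foreground points of shot k."""
    comps, comp_labels = [], []
    for x, y in zip(feats, labels):
        seeds = x[fps(x, R)]                       # [R, d] FPS seeds
        assign = torch.cdist(x, seeds).argmin(1)   # nearest-seed assignment
        for r in range(R):
            comps.append(x[assign == r].mean(0))   # local feature component
            comp_labels.append(y)
    z = F.normalize(torch.stack(comps), dim=1)     # [K*R, d]
    y = torch.tensor(comp_labels)
    logits = z @ z.t() / tau - torch.eye(len(y)) * 1e9   # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos = (y[:, None] == y[None, :]) & ~torch.eye(len(y), dtype=torch.bool)
    return -((log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)).mean()

# Toy example: one way with K = 5 shots, two of them noisy (labels 1 and 2).
feats = [torch.randn(64, 32) for _ in range(5)]
print(ccns_loss(feats, labels=[0, 0, 0, 1, 2]))
```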
The final optimization objective during the training stage is given by:
\[\mathcal{L}=\mathcal{L}_{\text{CE}}+\lambda\mathcal{L}_{\text{CCNS}}, \tag{3}\]
where \(\lambda\) is a hyper-parameter to weigh the contribution of \(\mathcal{L}_{\text{CCNS}}\). \(\mathcal{L}_{CE}\) is the original cross-entropy loss in AttMPTI.
### Multi-scale Degree-based Noise Suppression
Although the clean and noisy points can be separated in the well-learned embedding space, the prototype generation and label propagation modules are still exposed to the mislabeled shots during testing. To reduce their negative influence, we design a degree-based noise suppression scheme to automatically remove the suspicious noisy shots. Specifically, we build a fully connected graph \(G\) on the \(K\) support shots of each way. We
Figure 4: t-SNE [12] visualization of the CCNS on a 5-shot support set with 2 noisy shots. Each dot represents a point in the feature space and each triangle represents a feature component. Different colors represent different classes with blue indicating the target class. The arrow shows the direction to pull the feature components.
average the foreground features \(x_{i}\in\mathbb{R}^{d}\) of the \(i\)-th shot as the feature of node \(i\). The weight \(W_{ij}\) of an edge encodes the affinity between its two end nodes \(i\) and \(j\) as follows:
\[W_{ij}:=\begin{cases}\left[x_{i}^{\top}x_{j}\right]_{+}^{\gamma},&\text{ if }i\neq j\\ 0,&\text{otherwise}\end{cases}. \tag{4}\]
We then compute the degree \(d_{i}=\sum_{j}W_{ij}\) for each node \(i\). Essentially, the degree reflects how strongly a node is connected in the graph. The noisy shots tend to have lower degrees, since the clean shots usually form the largest cluster while the noisy shots are scattered in the feature space. Consequently, we identify them based on the clean indicator:
\[I_{i}:=\begin{cases}1&\text{ if }d_{i}>thr\\ 0,&\text{ otherwise}\end{cases}, \tag{5}\]
where we set \(thr\) to the mean of \(\left\{d_{i}\right\}_{i=1}^{K}\). The shots with \(I=0\) are treated as noise and removed.
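A minimal sketch of this single-scale rule follows; the value of \(\gamma\) and the toy features are illustrative choices.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def noise_suppression(x, gamma=3):
    """Eqs. (4)-(5): x holds the K averaged foreground features of one way."""
    W = torch.clamp(x @ x.t(), min=0.0) ** gamma   # [x_i^T x_j]_+^gamma
    W.fill_diagonal_(0.0)                          # no self-edges
    d = W.sum(dim=1)                               # node degrees d_i
    return d > d.mean()                            # clean indicator I_i

# Toy 5-shot support set: three clean shots clustered around one direction,
# two noisy shots pointing elsewhere.
base = F.normalize(torch.randn(1, 32), dim=1)
clean = F.normalize(base + 0.1 * torch.randn(3, 32), dim=1)
noisy = F.normalize(torch.randn(2, 32), dim=1)
x = torch.cat([clean, noisy])
print(noise_suppression(x).tolist())               # expected: [True]*3 + [False]*2
```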
Some point clouds may have complex data distributions that cannot be sufficiently represented by a single global representation. To mitigate this problem, we extend the single-scale degree-based noise suppression scheme to multiple scales, yielding the Multi-scale Degree-based Noise Suppression (MDNS). Our MDNS is more robust to complex samples and consequently improves the accuracy of clean sample identification. Specifically, we add an additional level at which to perform noise suppression. We evenly split the foreground object along the x/y/z coordinates and denote the number of cuts along the x/y/z coordinates as \(n_{x}\)/\(n_{y}\)/\(n_{z}\). The foreground features in each sub-shot are locally aggregated, and the feature set of each shot is enlarged to \(\left\{x_{i,s}^{1},\cdots,x_{i,s}^{e}\right\}\), where \(e=n_{x}\times n_{y}\times n_{z}\). The single representation \(x_{i}\) corresponds to the case \(\left\{n_{x}=1,n_{y}=1,n_{z}=1\right\}\) and is considered the coarsest scale, \(s=1\). We then send the sub-shot features into the noise suppression module to get the clean indicators \(\left\{I_{i,s}^{1},\cdots,I_{i,s}^{e}\right\}\), on which majority voting is performed to obtain the shot-level indicator \(I_{i,s}\). Lastly, we assemble the final prediction \(I_{i}\) by majority voting over the per-scale predictions \(\left\{I_{i,1},\ldots,I_{i,s}\right\}\), as sketched below.
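The multi-scale voting can be sketched as follows, reusing the `noise_suppression` helper and the toy features `x` from the previous sketch; the x/y/z splitting is assumed to be done upstream, and the tie-breaking in the majority votes is an illustrative choice.

```python
import torch

def mdns(feats_by_scale, gamma=3):
    """feats_by_scale[s]: [K, e_s, d] aggregated sub-shot features at scale s."""
    per_scale = []
    for feats in feats_by_scale:
        K, e, dim = feats.shape
        sub = noise_suppression(feats.reshape(K * e, dim), gamma)
        per_scale.append(sub.reshape(K, e).float().mean(1) > 0.5)  # per-shot vote
    votes = torch.stack(per_scale).float()
    return votes.mean(0) >= 0.5                    # majority vote across scales

# Two scales: the holistic {1/1/1} features and a {2/2/1} split (e = 4).
scales = [x.unsqueeze(1),
          x.unsqueeze(1).expand(-1, 4, -1) + 0.05 * torch.randn(5, 4, 32)]
print(mdns(scales).tolist())
```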
## 4 Experiments
### Datasets and Noise Settings
Datasets.We conduct experiments on **S3DIS**[] and **ScanNet**[]. S3DIS contains point clouds of 272 rooms collected from six indoor areas, with annotations for 12 semantic classes. ScanNet contains point clouds of 1,513 scans from 707 unique indoor scenes, with annotations for 20 semantic classes. Following [], we split each room into non-overlapping blocks of size \(1\text{m}\times 1\text{m}\) on the xy-plane. Consequently, S3DIS and ScanNet contain 7,547 and 36,350 blocks, respectively. We sample \(m=2,048\) points from a block as the input point cloud. The input features \(f_{0}\) correspond to the XYZ, RGB and normalized XYZ values. During training, we randomly sample one episode by first sampling \(N\) classes from \(\mathcal{C}_{base}\) and then sampling \(NK\) point clouds as the support set and \(T\) point clouds as the query set. The support masks \(M\) and the query labels \(L\) are modified from the original annotations to indicate only the presence of the target classes, with irrelevant classes as background. The testing episodes are formed in a similar way, except that we exhaustively sample 100 episodes for each combination of \(N\) classes from \(\mathcal{C}_{novel}\). We use the data split 0 of [] as the test classes on both datasets. We adopt the mean Intersection over Union (mIoU) as the evaluation metric.
Noise Settings.We explore two types of label noise: 1) **In-episode noise** samples noisy shots from the other \(N-1\) classes of the current episode. It studies how mixing the \(N\) foreground classes affects the prediction of the query points. We test the models with in-episode noise ratios of 20% and 40%. 2) **Out-episode noise** samples noisy shots from \(\mathcal{C}_{novel}\) classes outside the \(N\) classes of the episode. It studies how outliers affect the prediction of the query points. We test the models with out-episode noise ratios of 40% and 60%.
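As a concrete illustration, noisy shots could be injected into a sampled episode as sketched below; `pool`, `support`, and the function name are hypothetical, and real point clouds with masks replace the placeholders.

```python
import random

def inject_noise(support, ways, pool, noise_rate, mode, novel_classes):
    """Replace round(noise_rate * K) shots of each way with mislabelled shots."""
    K = len(next(iter(support.values())))
    n_noisy = round(noise_rate * K)
    for c in ways:
        if mode == "in-episode":      # wrong shots come from the other N-1 ways
            candidates = [w for w in ways if w != c]
        else:                         # out-episode: novel classes outside the episode
            candidates = [u for u in novel_classes if u not in ways]
        for i in random.sample(range(K), n_noisy):
            support[c][i] = random.choice(pool[random.choice(candidates)])
    return support
```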
The noise rate is defined as the percentage of noisy shots among the \(K\) support shots, following the existing literature on learning with noisy labels [].
point while AttMPTI fails. We notice that our model is slightly worse than AttMPTI in the 0% setting in Tab. 2. We postulate that our method can predict correct labels, but that the noisy ground truths of ScanNet [] cannot reflect the true performance of our method. This postulation is evidenced by the clear superiority of our method over the baseline methods on S3DIS, which is a dataset with clean ground truths. It suggests that our method can adapt to an unknown test environment (both clean and noisy tests), which is important for model deployment in the real world.
The 2D robust few-shot learner Tra-NFS [] performs poorly on R3DFSSeg due to the severe modality gap, _i.e._, point clouds have much larger intra-class variance than 2D images, which makes it hard for Tra-NFS to detect the clean shots. The 3D robust point cloud segmentor PNAL also fails in the few-shot setting due to the small support set in each episode.
We further notice that in-episode noise has a larger negative influence than out-episode noise, _e.g._, 40% in-episode noise vs. 40% out-episode noise. We believe the reason is that the features of each foreground class usually form a compact cluster. In-episode noise causes the labels within this compact cluster to differ, which severely confuses the model as to which class the cluster belongs. In contrast, out-episode noise is usually separated from the foreground classes in the feature space and is less likely to influence them.
High-way setting.Tab. 4 shows the results of the 5-way 5-shot setting on ScanNet. Our model again significantly outperforms AttMPTI in all noise settings.
## 5 Conclusion
In this paper, we address the new task of robust few-shot point cloud segmentation, a more general setting that considers label noise in the support set. We design the Component-level Clean Noise Separation (CCNS) representation learning to learn a discriminative feature embedding. Our CCNS encourages the features of different classes to stay away from each other and concurrently induces the clean shots to form the largest cluster in the feature space. Leveraging the clean samples identified with our CCNS, we further propose the Multi-scale Degree-based Noise Suppression (MDNS) to remove the noisy shots before the prototype generation, based on their affinity with the other samples in the support set. Experimental results that outperform the baselines show the feasibility of our proposed method.
Acknowledgement.This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2021-024), and the Tier 2 grant MOE-T2EP20120-0011 from the Singapore Ministry of Education. This research is also supported by the SUTD-ZJU Thematic Research Grant RS-MEZJU-00031. The work is fully done at the National University of Singapore.
## Appendix A Ablation Study
Analysis of different R values.Tab. A1 shows the ablation study on the number of components per shot in the component-level clean noise separation. 'R=1' is the shot-level representation. It can be seen that the performance of 'R=1' is generally worse than that of the component-level contrastive learning, which verifies that the feature embedding is sub-optimal with a single holistic aggregation. By dividing into local components, we obtain more fine-grained and diverse positive and negative samples, with 'R=4' achieving the best performance.
Analysis of different noise ratios in CCNS.We analyze different combination of noise ratio in the episodic training since our component-level clean noise separation is conducted among the clean and noisy shots. '{0.2,0.4}' has large performance drop when comparing with '{0,0.2,0.4}', which suggests that it is very necessary to include noise-free episodes
\begin{table}
\begin{tabular}{c|c|c c|c c} \hline \multirow{2}{*}{model} & \multirow{2}{*}{0\%} & \multicolumn{2}{c|}{In-episode Noise} & \multicolumn{2}{c}{Out-episode Noise} \\ \cline{3-6} & & 20\% & 40\% & 40\% & 60\% \\ \hline AttMPTI & 32.75 & 27.96 & 20.72 & 23.89 & 17.54 \\
**Ours** & 32.74 & **30.79** & **26.73** & **28.13** & **21.22** \\ \hline \end{tabular}
\end{table}
Table 4: 5-way 5-shot setting on ScanNet.
during training. By further adding a noise ratio of 0.6 (with the restriction that the noisy shots of any single class should not outnumber the clean shots), there is again a significant drop in performance. We can conclude that only a mix of a proper portion of noisy and clean episodes during training brings a decent improvement in the noisy test.
Analysis of different scales in MDNS. Tab. A3 presents the analysis of different scales in the multi-scale degree-based noise suppression. Due to space limitations, we only provide a comparison of selected scales among the many possible combinations. We first analyze what constitutes a good scale. It is almost guaranteed that the holistic scale \(\{1/1/1\}\) gives decent performance, since the mean representation covers the general information. The performance varies a lot when the foreground objects are divided into fine-grained scales. By comparing \(\{2/2/1\}\), \(\{1/2/2\}\) and \(\{2/1/2\}\), we can see that a cut on the z-axis causes a significant drop in performance in the heavy noise setting. By comparing \(\{3/3/1\}\) with \(\{2/2/1\}\), we can see that cuts that are too fine-grained also cause a performance drop, due to the severe lack of global information in the sub-shots. Overall, \(\{1/1/1\}\) and \(\{2/2/1\}\) are the good scales, and their combination achieves the best performance.
## Appendix C Experiment Results on ScanNet
Effectiveness of CCNS and MDNS. We analyze the effectiveness of our proposed Component-level Clean Noise Separation (CCNS) and Multi-scale Degree-based Noise Suppression (MDNS) on ScanNet in Tab. C4. Both CCNS and MDNS are effective, and their combination achieves the best overall performance. It is worth highlighting that the robustness of AttMPTI is improved by simply adding our feature representation learning, _i.e._ CCNS. This verifies our claim that AttMPTI has the potential to be noise robust (through FPS-based multi-prototype generation and label propagation), yet is subject to how discriminative the feature embedding is.
Qualitative Results. Fig. C3 presents the qualitative comparison between our method and AttMPTI [5] under 2-way 5-shot point cloud segmentation with 40% out-episode noise on ScanNet. With the interference of the noisy shots, AttMPTI either fails to segment the target semantic object (see the result in the first row) or wrongly segments some background points as the target class (see the result in the second row). In contrast, our method gives reliable segmentation results with respect to the target classes.
## Appendix D Data split
We follow the data split of [5], and adopt split 0 as the testing classes, as shown in Tab. D5.
## Appendix E Clean Ratio Comparison
Tab. E6 lists the clean ratios of the original support set ('Original') and of the filtered support set produced by MDNS ('Ours') during meta-testing. The clean ratio in each noise setting is obtained by first computing the percentage of clean shots in the support set of each episode, and then averaging these percentages over all episodes. As can be clearly seen from Tab. E6, our method significantly improves the clean ratio in all noise settings.
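As a minimal sketch of this metric (the function name and types are ours; the paper gives no code for it), one list of clean/noisy flags per episode is reduced to a per-episode fraction and then averaged:

```haskell
-- Hypothetical helper mirroring the clean-ratio computation described
-- above; `episodes` holds one list of clean (True) flags per episode.
cleanRatio :: [[Bool]] -> Double
cleanRatio episodes = sum perEpisode / fromIntegral (length episodes)
  where
    perEpisode = [ fromIntegral (length (filter id ep))
                     / fromIntegral (length ep)
                 | ep <- episodes ]
```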
## Appendix F Baseline Setups
We compare our method with few-shot point cloud semantic segmentation (3DFSSeg) methods AttMPTI [5] and ProtoNet [5], robust few-shot learning (R2DFSL) method Tra-NFS
and robust point cloud semantic segmentation (R3DSeg) method PNAL [20]. All methods use the same feature extractor as AttMPTI for fair comparison.
We follow the official code of AttMPTI to train ProtoNet and AttMPTI. For Tra-NFS, we adopt a three-layer transformer encoder to generate robust prototypes. We also randomly inject noise into the support set by sampling point clouds containing foreground objects from other classes during meta-training. For PNAL, we apply its robust training algorithm on each noisy support set and then test the performance on the corresponding query point cloud in each episode during meta-testing. We do not carry forward the knowledge from one episode to the next, as suggested in [20].
|
2309.09467 | A model of stochastic memoization and name generation in probabilistic
programming: categorical semantics via monads on presheaf categories | Stochastic memoization is a higher-order construct of probabilistic
programming languages that is key in Bayesian nonparametrics, a modular
approach that allows us to extend models beyond their parametric limitations
and compose them in an elegant and principled manner. Stochastic memoization is
simple and useful in practice, but semantically elusive, particularly regarding
dataflow transformations. As the naive implementation resorts to the state
monad, which is not commutative, it is not clear if stochastic memoization
preserves the dataflow property -- i.e., whether we can reorder the lines of a
program without changing its semantics, provided the dataflow graph is
preserved. In this paper, we give an operational and categorical semantics to
stochastic memoization and name generation in the context of a minimal
probabilistic programming language, for a restricted class of functions. Our
contribution is a first model of stochastic memoization of constant Bernoulli
functions with a non-enumerable type, which validates data flow
transformations, bridging the gap between traditional probability theory and
higher-order probability models. Our model uses a presheaf category and a novel
probability monad on it. | Younesse Kaddar, Sam Staton | 2023-09-18T04:02:03Z | http://arxiv.org/abs/2309.09467v2 | A Model of Stochastic Memoization and Name Generation in Probabilistic Programming: Categorical Semantics via Monads on Presheaf Categories
###### Abstract
Stochastic memoization is a higher-order construct of probabilistic programming languages that is key in Bayesian nonparametrics, a modular approach that allows us to extend models beyond their parametric limitations and compose them in an elegant and principled manner. Stochastic memoization is simple and useful in practice, but semantically elusive, particularly regarding dataflow transformations. As the naive implementation resorts to the state monad, which is not commutative, it is not clear if stochastic memoization preserves the dataflow property - _i.e._ whether we can reorder the lines of a program without changing its semantics, provided the dataflow graph is preserved. In this paper, we give an operational and categorical semantics to stochastic memoization and name generation in the context of a minimal probabilistic programming language, for a restricted class of functions. Our contribution is a first model of stochastic memoization of constant Bernoulli functions with a non-enumerable type, which validates data flow transformations, bridging the gap between traditional probability theory and higher-order probability models. Our model uses a presheaf category and novel probability monad on it.
probabilistic programming, quasi-Borel spaces, synthetic measure theory, stochastic memoization, name generation, categorical semantics, commutative monads, nominal sets.
## 1 Introduction
Bayesian nonparametric models are a powerful approach to statistical learning. Unlike parametric models, which have a fixed number of parameters, nonparametric models can have an unbounded number of parameters that grows as needed to fit complex data. This flexibility allows them to capture subtle patterns in data that parametric models may miss, and it makes them more composable, because they are not arbitrarily truncated.
Prominent examples of nonparametric models include Dirichlet process models for clustering similar data points, and the Infinite Relational Model for automatically discovering latent groups and features, amongst others. These infinite-dimensional models can accommodate an unbounded number of components, clusters, or other features in order to fit observed data as accurately as possible.
Probabilistic programming is a powerful method for programming nonparametric models. _Stochastic memoization_[47, 57] has been identified as a particularly useful technique in this. This paper is about semantic foundations for stochastic memoization.
In deterministic memoization [38], the idea is to compute a function the first time it is called with a particular argument, and store the result in a memo-table. When the function is called again with the same argument, the memo-table is used, resulting in performance improvement but no semantic difference.
Stochastic memoization is this memoization applied to functions that involve random choices, and so a memoized function is semantically different from a non-memoized one, because the random choices will only be made once for each argument.
We illustrate this with a simple example; this is informal and we consider a precise language and semantics in Section 3. Consider a function \(f\) that returns a random number \([0,1]\) for each argument. It might be written \(f(x)=\texttt{uniform}\). One run of the program might call \(f\) with various arguments, and example runs are as follows:
\[\begin{array}{l|cccccc}\text{\it Calls to $f$ in a particular run of a program}:&f(0)&f(1)&f(0)&f(2)&f(1)&f(3)&\dots\\ \hline\text{\it Results of calls in a run without memoization:}&0.43&0.01&0.72&0.26&0.48&0.16&\dots\\ \text{\it Results of calls in a run with memoization:}&0.43&0.01&\textbf{0.43}&0.26&\textbf{0.01}&0.16&\dots \end{array}\]
Thus in the memoized version, when the function is called again with the same value, the previous result is recalled, and the random choices are not made again. (Note that although this is called'stochastic memoization', the terminology is perhaps confusing: the memoization always happens, and it is not 'randomly deciding whether or not to memoize'.)
From a semantic perspective, the role of stochastic memoization is clear when we use a monad-based interpretation with a probability monad \(\mathtt{Prob}\). This might be thought of as the Giry monad [15] or a probabilistic powerdomain [20, 25], or a Haskell monad (e.g. [10]).
A distribution on a type \(\mathtt{b}\) with parameters from \(\mathtt{a}\) has type \(\mathtt{a}\rightarrow\mathtt{Prob}(\mathtt{b})\). On the other hand, a random function is a probability distribution on the type of deterministic functions, having type \(\mathtt{Prob}(\mathtt{a}\rightarrow\mathtt{b})\). Whereas parameterized distributions are a key idea in parametric statistics, random functions are a key idea in nonparametric statistics. And stochastic memoization is a higher-order function with probabilistic effects, of type
\[\mathtt{mem}::(\mathtt{a}\rightarrow\mathtt{Prob}\mathtt{b})\rightarrow\mathtt{ Prob}(\mathtt{a}\rightarrow\mathtt{b})\]
that converts parameterized distributions into random functions, by making the random choice once for each argument. This mem combinator plays a crucial role in Church [17] and WebPPL [19], and appears with this type in our Haskell library LazyPPL [52]. Stochastic memoization also plays a role in Blog [39], Hansei [29], and many other languages (e.g. [5, 11]). It is not difficult to implement stochastic memoization, by using a memo-table. Nonetheless, its semantic properties remain elusive and developers have noted bugs and complications (e.g. [16, 30]). Moreover, the existing semantic models of probability (such as [20, 21, 25]) only support mem for very restricted domain types \(\mathtt{a}\) (see SS1). In particular our own Haskell library [52] supports stochastic memoization but the recent semantic analysis [10] only explains it at certain domain types. The point of this paper is to extend this semantic analysis of stochastic memoization to a broader class of domains.
**First example: White noise in a non-parametric clustering model.**
One common first example of stochastic memoization is as follows. Suppose we have a finite set of individuals, and we want to group them into an unknown number of clusters, and then assign attributes to the clusters. For example, we may want to form clusters and consider attributes on the clusters such as 'Brexit-supporters', 'mean geographic latitude/longitude', 'geographic variance', 'mean salary', and so on. A popular route is the 'Dirichlet process with memoization', as follows, for which a generative model has the following pseudocode (see e.g. [14, 18, 19, 47]):
1. We randomly decide which proportion of individuals are in each cluster. We assign a unique identifier to each cluster, from some space \(\mathbb{A}\) of identifiers. One might use the Dirichlet process with a diffuse base measure on \(\mathbb{A}\), for example the normal distribution on the real numbers.
2. Assign attributes to the cluster identifiers. For example, depending on whether that cluster supports Brexit, assign either true or false to the identifier. This particular assignment is a sample from a random function in \((\mathbb{A}\to 2)\). This distribution might come from memoizing a constant Bernoulli distribution, assigning 'true' to any cluster identifier with probability \(0.5\).
3. Steps (i)-(ii) are generative, and we could run them to get some synthetic data. The idea of Bayesian clustering is to start with steps (i)-(ii) as a reasonable _prior_ distribution, in generative form, and to combine this with actual data to arrive at a _posterior_ distribution. In this example the actual data might come from a telephone survey, and we use conditional probability (aka Bayesian inversion) to arrive at a posterior distribution on the cluster proportions and their attributes. We can then use this to make predictions. The constant Bernoulli memoization is a reasonable prior for Brexit support, but the posterior will typically be much more complicated, with various correlations, etc.
In this paper, we focus on step (ii), stochastic memoization: steps (i) and (iii) are studied extensively elsewhere (e.g. see [15] in the statistics literature, or [2, 6, 7] in the semantics literature, and references therein).
This simple example of a memoized constant Bernoulli function is easy to implement using a memoitable, but already semantically complicated. If we put \(\mathbb{A}=\mathbb{R}\), the real numbers, for the base measure, as is common in statistical modelling, then the memoized constant Bernoulli distribution on \((\mathbb{A}\to 2)\) is \(1\)-dimensional white noise: intuitively, for every \(x\in\mathbb{R}\) we toss a coin to pick true or false, making an uncountable number of independent random choices. (As an aside, we note that we could combine steps (i) and (ii), using a complicated base measure for the Dirichlet process that includes all the attributes. This model would not be compositional, and in any case, some kind of memoization would still be needed to implement the Dirichlet process.)
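To make step (ii) concrete, the prior of steps (i)-(ii) can be sketched in the same assumed interface; the combinator names dp (Dirichlet process), normal, and bernoulli are our assumptions about the library surface, not code from the paper:

```haskell
-- A sketch of the clustering prior, steps (i)-(ii).
clusterPrior :: Prob (Prob Double, Double -> Bool)
clusterPrior = do
  clusterOf <- dp 1.0 (normal 0 1)       -- (i) random cluster proportions,
                                         --     identifiers drawn from A = R
  brexit <- mem (const (bernoulli 0.5))  -- (ii) one attribute coin per identifier
  return (clusterOf, brexit)
```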
#### Challenge.
In this paper, we address the challenge of showing that the following items are consistent:
1. a type \(\mathbb{A}\) with a diffuse probability distribution (Def 2.2);
2. a type bool of Booleans with Bernoulli distributions (i.e. tossing coins, including biased coins);
3. a type of functions \([\mathbb{A}\to\mathsf{bool}]\), with function application (4);
4. stochastic memoization of the constant Bernoulli functions (3);
5. the language supports the dataflow property (Def. 2.3).
These items are together inconsistent with traditional measure theory, as we discuss in Section 2.3, where we also make the criteria precise. Nonetheless (1)-(4) are together easy to implement in a probabilistic programming language, and useful for Bayesian modelling. Item (5) is a very useful property for program reasoning and program optimization. Item (5) is also a fundamental conceptual aspect of axiomatic probability theory, since in the measure-theoretic setting it amounts to Fubini's theorem [33] and the fact that probability measures have mass \(1\), and in the categorical abstraction of Markov categories [14] it amounts to the interchange law of affine monoidal categories.
There _are_ measure-theoretic models where some of these items are relaxed (SS2.1-2.3). For example, if we drop the requirement of a diffuse distribution, then there are models using Kolmogorov extension (SS2.2).
A grand challenge is to further generalize these items, for example to allow memoization of functions \(A\to B\) for yet more general \(A\) and \(B\), and to allow memoization of all definable expressions. Since the above five items already represent a significant challenge, and our semantic model is already quite complicated, we chose to focus on a'minimal working example' for this paper.
To keep things simple and minimal, in this paper we side-step measure-theoretic issues by noticing that the equations satisfied by a diffuse probability distribution are exactly the equations satisfied by name generation (e.g. [51, SSVB]). Because of this, we can use categorical models for name generation (following e.g. [42, SS4.1.4], [50, SS3.5]) instead of traditional measure theory. Name generation can certainly be implemented using randomness, and there are no clashes of fresh names if and only if the names come from a diffuse distribution (see also e.g. [49]). On the other hand, if we keep things simple by regarding the generated names as _pure names_[41], we avoid any other aspects of measure theory, such as complicated manipulations of the real numbers.
**Contributions.**
To address the challenge of the consistency of items (1)-(5) above, our main contributions are then as follows.
1. We first provide an operational semantics for a minimal toy probabilistic programming language that supports stochastic memoization and name generation (SS4).
2. We then (SS5) construct a cartesian closed (for function spaces) categorical model of this language endowed with an affine commutative monad (Theorem 5.5). In common with other work on local state (e.g. [29, 45]), we use a functor category semantics, indexing sets by possible worlds. In this paper, those worlds are finite fragments of a memo-table.
3. We prove that our denotational semantics is sound with respect to the operational semantics, ensuring the correctness of our approach and validating that lines can be reordered in the operational semantics (Theorem 5.10). The class of functions that can be memoized includes constant Bernoulli functions. We call these functions _freshness-invariant_ (Definition 5.7).
The soundness theorem (5.10) is not trivial because the timing of the random choices differs between the operational and denotational semantics. In the operational semantics, the memo-table is partial, and populated lazily as needed, when functions are called with arguments. This is what happens in all implementations. However, this timing is intensional, and so by contrast, in the denotational semantics, the memo-table is always totally populated as soon as the current world is extended with any functions or arguments.
4. Finally, we present a practical Haskell implementation [27] which compares the small-step, big-step operational, and denotational semantics, demonstrating the applicability of our results (SS6).
## 2 Stochastic memoization by example
This section discusses the law of stochastic memoization and provides examples in finite, countable, and non-enumerable domain settings. We then address the challenges posed by the naive use of the state monad, and we clarify our objective: finding a model of probability that supports stochastic memoization over non-enumerable domains, satisfying the dataflow property, and that has function spaces.
In what follows, we use two calculi: (a) The internal metalanguage of a cartesian closed category with a strong monad Prob, for which we use Haskell notation, but which is roughly Moggi's monadic metalanguage [43, SS2.2]. (b) An ML-like programming language which is more useful for practical programming, but which would translate into language (a); this is roughly Moggi's'simple programming language' [43, SS2.3]. We assume passing familiarity with probability and monadic programming in this section, but the informal discussion here sets the context, and we move to more formal arguments in Section 3.
(Recall some Haskell notation: we write \x -> t for lambda abstraction; >>= for monadic bind, i.e. Kleisli composition; **return** for the unit; a **do** block allows a sequence of monadically bound instructions. We write **const** x for the constant function returning x, **const** x = \y -> x.)
**Memoization law.**
**Definition 2.1**: A strong monad _supports stochastic memoization of type_\(\mathsf{a}\to\mathsf{b}\) if it is equipped with a morphism \(\operatorname{\mathbf{mem}}::(\mathsf{a}\to\mathsf{Prob}\mathsf{b})\to \mathsf{Prob}(\mathsf{a}\to\mathsf{b})\) that satisfies the following equation in the metalanguage, for every \(\mathsf{x}_{0}::\mathsf{a}\) and \(\mathsf{f}::\mathsf{a}\to\mathsf{Prob}\mathsf{b}\):
\[
\mathbf{mem}\ \mathsf{f}\;=\;\mathsf{f}\ \mathsf{x_{0}}\gg\!\!=\big(\backslash\mathsf{y_{0}}\to\mathbf{mem}\ \mathsf{f}\gg\!\!=\big(\backslash\mathsf{fMem}\to\mathbf{return}\ (\backslash\mathsf{x}\to\mathbf{if}\ \mathsf{x}==\mathsf{x_{0}}\ \mathbf{then}\ \mathsf{y_{0}}\ \mathbf{else}\ \mathsf{fMem}\ \mathsf{x})\big)\big) \tag{1}
\]
As noted at the beginning of this section, we will pass between an internal metalanguage for strong monads, and an ML-like programming language that would be interpreted using strong monads. In Section 3 we introduce this programming language precisely, but for now we note that it has a special syntax \(\lambda_{\mathfrak{a}}\,x.\,u\), meaning **mem** (\x -> u), since this is a common idiom. The law of Definition 2.1 requires equations such as:

\[
\mathsf{let\ val}\ f\ \leftarrow\ \lambda_{\mathfrak{a}}\,x.\,u\ \mathsf{in}\ f@n \;\stackrel{\text{1 sample}}{=}\; u[n/x]
\]

and

\[
\begin{array}{lcl}
\mathsf{let\ val}\ f\ \leftarrow\ \lambda_{\mathfrak{a}}\,x.\,u\ \mathsf{in} & & \mathsf{let\ val}\ f\ \leftarrow\ \lambda_{\mathfrak{a}}\,x.\,u\ \mathsf{in}\\
\mathsf{let\ val}\ v_{1}\ \leftarrow\ f@n\ \mathsf{in} & = & \mathsf{let\ val}\ v\ \leftarrow\ u[n/x]\ \mathsf{in}\\
\mathsf{let\ val}\ v_{2}\ \leftarrow\ f@n\ \mathsf{in}\ \mathsf{return}\,(v_{1},v_{2}) & & \mathsf{return}\,(v,v)
\end{array}
\tag{2}
\]
```haskell
poissonPP :: Double -> Double -> Prob [Double]
poissonPP lower rate = do
  gaps <- mem (const (exponential rate))   -- one memoized gap per index
  return (scanl (+) lower (map gaps [1 ..]))
```
We implement memoization with enumerable a in the Haskell LazyPPL library [11] without using state, instead using Haskell's laziness and tries, following [23] (see [11]). We use the Poisson process extensively in the demonstrations for LazyPPL [53].
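A minimal sketch of the state-free idea, assuming the probability monad is lazy (as LazyPPL's is), so that a bind over an infinite list is only forced entry by entry; the library itself uses tries rather than list indexing for efficiency:

```haskell
-- State-free memoization on the naturals: draw a lazy infinite table
-- of results and index into it. A trie replaces (!!) in practice.
memNat :: (Int -> Prob b) -> Prob (Int -> b)
memNat f = do
  table <- mapM f [0 ..]      -- only evaluated lazily, entry by entry
  return (\n -> table !! n)
```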
Semantic interpretation with enumerable domains. Memoization with enumerable domains is supported by a denotational semantics using the category of measurable spaces and the Giry monad [16]. Although the category is not Cartesian closed, the function space \(B^{\mathbb{N}}\)_does_ exist for all standard Borel \(B\), and is given by the countable product of \(B\) with itself, \(\prod_{\mathbb{N}}B\). Memoization amounts to using Kolmogorov's extension theorem to define a map \((G\,B)^{\mathbb{N}}\to G(B^{\mathbb{N}})\) (see [46, SS4.8] and [10, Thm. 2.5]).
### Memoization with non-enumerable/diffuse domain
We now move beyond enumerable domains, to formalize the challenge from Section 1. In Section 1 we illustrated this with a clustering model. See [53] for the full implementation in our Haskell library, LazyPPL, along with other models that also use memoization, including a feature extraction model that uses the Indian Buffet Process, and relational inference with the infinite relational model (following [19]).
Rather than axiomatizing uncountability, we consider diffuse distributions.
**Definition 2.2**: [Diffuse distribution] Let a be an object with an equality predicate ((a,a)\(\to\) bool). A _diffuse distribution_2 is a term \(\mathfrak{p}\) such that
Footnote 2: Diffuse measures are often called ‘atomless’ in probability theory. We will also want to regard names in name generation as atomic, so we avoid this clash of terminology.
\[
\mathbf{do}\ \{\,\mathrm{x}\leftarrow\mathfrak{p}\ ;\ \mathrm{y}\leftarrow\mathfrak{p}\ ;\ \mathbf{return}\ (\mathrm{x}==\mathrm{y})\,\}\qquad\text{is semantically equal to}\qquad\mathbf{return}\ (\mathsf{false}).
\]
For example, in a probabilistic programming language over the real numbers, we can let a be the type of real numbers and let \(\mathfrak{p}\) be a uniform distribution on \([0,1]\), or a normal distribution, or an exponential distribution. These are all diffuse in the above sense. The Bernoulli distribution on the booleans is not diffuse, because there is always a chance that we may get the same result twice in succession.
For the reader familiar with traditional measure theory, we recall that if \(\mathfrak{p}\) is diffuse then a is necessarily an uncountable space, since any probability distribution on a countable discrete space must give non-zero measure to at least one singleton set.
The implementation trick using tries from Section 2.2 will not work for diffuse measures, because we cannot enumerate the domain of a diffuse distribution. It is still possible to implement memoization using state and a memo-table (e.g. [53]). Unlike a fully stateful effect, however, in this paper we argue that stochastic memoization is still compatible with commutativity/dataflow program transformations:
**Definition 2.3**: [Dataflow property] A programming language is said to have the _dataflow property_ if program lines can be reordered (commutativity) and discarded (discardability, or affineness) provided that the dataflow is preserved. In other words, the language satisfies the following commutativity and discardability equations:
\[
\mathbf{do}\ \{\mathrm{x1}\leftarrow\mathrm{t1}\ ;\ \mathrm{x2}\leftarrow\mathrm{t2}\ ;\ \mathrm{u}\}\ =\ \mathbf{do}\ \{\mathrm{x2}\leftarrow\mathrm{t2}\ ;\ \mathrm{x1}\leftarrow\mathrm{t1}\ ;\ \mathrm{u}\}\qquad\text{where }\mathrm{x1}\notin\mathrm{fv}(\mathrm{t2})\text{ and }\mathrm{x2}\notin\mathrm{fv}(\mathrm{t1}) \tag{5}
\]
\[
\mathbf{do}\ \{\mathrm{x1}\leftarrow\mathrm{t1}\ ;\ \mathrm{t2}\}\ =\ \mathrm{t2}\qquad\text{where }\mathrm{x1}\notin\mathrm{fv}(\mathrm{t2}) \tag{6}
\]
The dataflow property expresses the fact that, to give a meaning to programs, the only thing that matters is the topology of dataflow diagrams. These transformations are very useful for inference algorithms and program optimization. But above all, on the foundational side, dataflow is a fundamental concept
that corresponds to monoidal categories and is crucial to have a model of probability. As for monoidal categories, a strong monad is commutative (5) if and only if its Kleisli category is monoidal (commutativity is the monoidal interchange law), and affine (6) if the monoidal unit is terminal. In synthetic probability theory, dataflow is regarded by various authors as a fundamental aspect of the abstract axiomatization of probability: Kock [31] argues that any monad that is strong commutative and affine can be abstractly viewed as a probability monad, and affine monoidal categories are used as a basic setting for synthetic probability by several authors [7, 13, 55, 56]. The reader familiar with measure-theoretic probability will recall that the proof that the Giry monad satisfies (5) amounts to Fubini's theorem for reordering integrals (e.g. [51]).
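As a concrete illustration, in the same assumed interface as above, the dataflow property identifies each of the following pairs of programs (the names are ours):

```haskell
-- Commutativity (5): the two draws can be reordered.
pairA, pairB :: Prob (Double, Bool)
pairA = do { x <- uniform ; b <- bernoulli 0.5 ; return (x, b) }
pairB = do { b <- bernoulli 0.5 ; x <- uniform ; return (x, b) }

-- Discardability (6): an unused draw can be dropped.
dropA, dropB :: Prob Bool
dropA = do { _x <- uniform ; bernoulli 0.5 }
dropB = bernoulli 0.5
```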
Semantic interpretations for diffuse domains. The point of this paper is to provide the first semantic interpretation for memoization of the constant Bernoulli functions (3) with diffuse domain (Def. 2.2). We emphasize that although other models can support some aspects of this, there is no prior work that supports everything.
* With countable domain, there is a model in measurable spaces, as discussed in Section 2.2. But there can be no diffuse distribution on a countable space.
* In measurable spaces, we can form the uncountable product space \(\prod_{\mathbb{R}}2\) of \(\mathbb{R}\)-many copies of \(2\). We can then define a white noise probability measure on \(\prod_{\mathbb{R}}2\) via Kolmogorov extension (e.g. [45, 4.9(31)]). Moreover, there are diffuse distributions on \(\mathbb{R}\), such as the uniform distribution on \([0,1]\). However, it is known that there is no measurable evaluation map \(\mathbb{R}\times(\prod_{\mathbb{R}}2)\to 2\) (see [1]), and so we cannot interpret function application (4).
* In quasi-Borel spaces [21], there is a quasi-Borel space \([\mathbb{R}\to 2]\) of measurable functions, and a measurable evaluation map \(\mathbb{R}\times[\mathbb{R}\to 2]\to 2\), but there is no white noise probability measure on \([\mathbb{R}\to 2]\). The intuitive reason is that, in quasi-Borel spaces, a probability measure on \([\mathbb{R}\to 2]\) is given by a random element, i.e. a morphism \(\Omega\to[\mathbb{R}\to 2]\), which curries to a measurable function \(\Omega\times\mathbb{R}\to 2\). But there is no such measurable function representing white noise (e.g. [27, Ex 1.2.5]).
* There are domain-theoretic treatments of probability theory that support Kolmogorov extension, uniform distributions on \(\mathbb{R}\), and function spaces [20, 25]. However, these treatments regard the real numbers \(\mathbb{R}\) as constructive, and hence there are no non-trivial continuous morphisms \(\mathbb{R}\to 2\), and there is no equality test on \(\mathbb{R}\), so that we cannot regard \(\mathbb{R}\) with a diffuse distribution as formalized equationally in Definition 2.2. The same concern seems to apply to recent approaches using metric monads [36].
* The semantic model of beta-bernoulli in [53] is a combinatorial model that includes aspects of the beta distribution, which is diffuse in measure theory. That model does not support stochastic memoization, but as a presheaf-based model it is a starting point for the model in this paper.
* There is a straightforward implementation of stochastic memoization that uses local state, as long as the domain supports equality testing [52]. The informal idea is to make the random choices as they are needed, and remember them in a memo-table, and keep this memo-table in a local state associated with the function. Therefore one could use a semantic treatment of local state to analyze memoization. For example, one could build a state monad in quasi-Borel spaces. However, state effects in general do not support the dataflow property (Def. 2.3), since we cannot reorder memory assignments in general. Ideally, one could use a program logic to prove that this particular use of state does support the dataflow property. Although there are powerful program logics for local state and probability (e.g. [3]), we have not been able to use them to prove this.
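For illustration, here is a sketch of that straightforward stateful implementation, with IO and an IORef memo-table standing in for a probability monad with local state; note that the returned function is itself effectful, which is precisely why dataflow reasoning about it is delicate:

```haskell
import Data.IORef
import qualified Data.Map as Map

-- Memoization via a hidden memo-table; requires equality (Ord here)
-- on the domain, as noted above.
memState :: Ord a => (a -> IO b) -> IO (a -> IO b)
memState f = do
  tbl <- newIORef Map.empty
  return $ \x -> do
    m <- readIORef tbl
    case Map.lookup x m of
      Just y  -> return y                  -- choice already made for x
      Nothing -> do
        y <- f x                           -- make the random choices now
        modifyIORef tbl (Map.insert x y)
        return y
```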
There are other models of higher-order probability (e.g. [6, 8, 12]). These do not necessarily fit into the monad-based paradigm, but there may be other ways to use them to address the core challenge in Section 1.
## 3 A language for stochastic memoization and name generation
Our probabilistic programming language has a minimal syntax, emphasizing the following key features:
* **name generation**: we can generate fresh names (referred to as _atomic_ names or _atoms_, in the sense of Pitts' nominal set theory [44]) with constructs such as let \(x\ =\ \mathsf{fresh}()\,\mathrm{in}\,\cdots\). In the terminology of Def. 2.2, this is like a generic diffuse probability measure, since fresh names are distinct.
* basic **probabilistic effects**: for illustrative purposes, the only distribution we consider, as a first step, is the Bernoulli distribution (but it can easily be extended to other discrete distributions). Constructs like let \(b\ =\ \mathsf{flip}(\theta)\,\mathrm{in}\,\cdots\) amount to flipping a coin with bias \(\theta\) and storing its result in a variable \(b\).
* **stochastic memoization**: if a memoized function, defined with the new \(\lambda_{\mathfrak{a}}\) operator, is called twice on the same argument, it should return the same result (eq. (2)).
We have the following base types: \(\mathsf{bool}\) (booleans), \(\mathbb{A}\) (atomic names), and \(\mathbb{F}\) (which can be thought of as the type of memoized functions \(\mathbb{A}\to\mathsf{bool}\)). For the sake of simplicity, we do not have arbitrary function types. In fine-grained call-by-value fashion [34], there are two kinds of judgments: typed values, and typed computations. The grammar and typing rules of our language are given in Figure 1. The typing rules are standard, except for the \(\lambda_{\mathfrak{a}}\) operator, which is the key novelty of our language. The typing rule for \(\lambda_{\mathfrak{a}}\) is given in Figure 1 and is explained in the next section. (Also, equality \(v=w\) and memoized function application \(v@w\) are pure computations, _i.e._ in the categorical semantics (section 5.3), they will be composed by the unit of the monad.)
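As a hedged reconstruction from the surrounding prose (not the authors' Figure 1), the grammar can be transcribed as Haskell data types:

```haskell
-- A reconstruction of the grammar from the prose, not the authors'
-- Figure 1: types, values and computations of the toy language.
data Ty = TBool | TAtom | TFun | TProd Ty Ty   -- bool, A, F, products

data Val
  = Var String
  | BTrue
  | BFalse
  | Pair Val Val

data Comp
  = Return Val
  | LetVal String Comp Comp            -- let val x <- t in u
  | Flip Rational                      -- flip(theta)
  | Fresh                              -- fresh()
  | MemLam String Comp                 -- lambda_a x. u  (memoized abstraction)
  | App Val Val                        -- v @ w
  | Eq Val Val                         -- v = w
  | IfBool Val Comp Comp               -- elimination of booleans
  | MatchPair Val String String Comp   -- elimination of products
```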
## 4 Operational Semantics
We now present a small-step operational semantics for our language. The operational semantics defines the rules for reducing program expressions, which form the basis for understanding the behavior of programs written in the language. Henceforth, we fix a countable set of variables \(x,y,z,\ldots\in\mathsf{Var}\), and consider the terms up to \(\alpha\)-equivalence for the \(\lambda_{\mathfrak{a}}\) operator. Since we focus on functions with boolean codomain, our partial memo-tables are represented as partial bigraphs (bipartite graphs).
**Definition 4.1**: [Partial bigraph] A partial bigraph \(\mathfrak{g}\stackrel{{\mathrm{def}}}{{=}}(\mathfrak{g}_{L},\mathfrak{g}_{R},E)\) is a finite bipartite graph where the edge relation \(E\colon\mathfrak{g}_{L}\times\mathfrak{g}_{R}\to\{\mathsf{true},\mathsf{false},\bot\}\) is either true, false or undefined (\(\bot\)) on each pair of left and right nodes \((f,a)\in\mathfrak{g}_{L}\times\mathfrak{g}_{R}\). In the following, left nodes will be thought of as function labels and right nodes as atom labels. By abuse of notation, syntactic truth values will be conflated with semantic ones. For a partial graph \(\mathfrak{g}\), \(E(f,a)=\beta\in\{\mathsf{true},\mathsf{false},\bot\}\) will be written \(f\xrightarrow{\beta}a\) when \(\mathfrak{g}\) is clear from the context.
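Definition 4.1 transcribes directly into code; the following is a sketch with names of our choosing:

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

-- Edge labels: true, false, or not yet sampled (bottom).
data Edge = ETrue | EFalse | EBot deriving (Eq, Show)

-- A partial bigraph (Definition 4.1): function labels g_L, atom
-- labels g_R, and an edge relation into {true, false, bot}; keys
-- absent from the map read as bottom.
data Bigraph f a = Bigraph
  { leftNodes  :: Set.Set f
  , rightNodes :: Set.Set a
  , edgeMap    :: Map.Map (f, a) Edge
  }

edgeOf :: (Ord f, Ord a) => Bigraph f a -> f -> a -> Edge
edgeOf g f a = Map.findWithDefault EBot (f, a) (edgeMap g)
```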
### Extended expressions
We introduce extended expressions \(e\) by extending the grammar of computations (1) with an extra construct \(\{\!|u|\!\}_{\gamma}^{(f,a)}\), a thunk recording that the body \(u\) of the memoized function labelled \(f\) is currently being evaluated on the atom labelled \(a\), in the captured context value \(\gamma\), so that its result can be recorded in the memo-table. Context values assign to each variable a tree of booleans and labels, with internal nodes read as pairing.
**Definition 4.2**: If \(S\) is a finite set, \(\mathsf{Tree}(S)\cong\biguplus_{n\geq 0}C_{n}\,S^{n+1}\) (where \(C_{n}\) is the \(n\)-th Catalan number, and \(C_{n}\,S^{n+1}\) is a coproduct of \(n\) copies of \(S^{n+1}\), one for each possible bracketing) denotes the set of all possible non-empty trees with internal nodes the cartesian product and leaf nodes taken in \(S\).
**Example 4.3**: If \(S\stackrel{{\mathrm{def}}}{{=}}\{s_{1},s_{2}\}\), then \(s_{1}\in\mathsf{Tree}(S),(s_{2},s_{1})\in\mathsf{Tree}(S),(s_{1},(s_{1},s_{2}) )\in\mathsf{Tree}(S),\ldots\)
**Definition 4.4**: [Set-theoretic denotation of contexts.] Let \(\mathfrak{g}\) be a partial bigraph. The set-theoretic denotation \((\![-]\!]\) of a context \(\Gamma\) is defined as \((\![\mathsf{bool}]\!)\stackrel{{\mathrm{def}}}{{=}}2\cong\{ \mathsf{true},\,\mathsf{false}\}\), \((\![\mathbb{F}]\!)\stackrel{{\mathrm{def}}}{{=}}\!\mathfrak{g}_{L}\), \((\![\mathbb{A}]\!)\stackrel{{\mathrm{def}}}{{=}}\!\mathfrak{g}_ {R}\) and \((\![-]\!)\) is readily extended to every context \(\Gamma\). Moreover, in the following, \(\gamma\in(\![\Gamma]\!)\subseteq\mathsf{Tree}(2+\mathfrak{g}_{L}+\mathfrak{g} _{R})^{\mathsf{Var}}\) denotes a context value.
**Example 4.5**: If \(\Gamma\stackrel{{\mathrm{def}}}{{=}}(x:\mathsf{bool},y:\mathbb{F},z:((\mathbb{F}\times 2)\times\mathbb{A}))\), then \((\![\Gamma]\!)\stackrel{{\mathrm{def}}}{{=}}\{x\mapsto 2,\,y\mapsto\mathfrak{g}_{L},\,z\mapsto((\mathfrak{g}_{L}\times 2)\times\mathfrak{g}_{R})\}\) and an example of a context value is \(\gamma\stackrel{{\mathrm{def}}}{{=}}\{x\mapsto\mathsf{true},\,y\mapsto f_{0},\,z\mapsto((f_{1},\mathsf{true}),a_{0})\}\).
We now present terminal computations, redexes, reduction contexts, and configurations (table 3). Configurations encapsulate the computation state (a context value, an extended expression, a partial graph, and a map from the partial graph to closures), which helps keep track of different parts of the program as the computation proceeds.
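Configurations then transcribe as follows, in a sketch reusing Comp, Bigraph, and the container imports from the sketches above (again, the names are ours):

```haskell
-- Trees of booleans and labels (Definitions 4.2 and 4.4).
data Tree f a
  = LBool Bool
  | LFun f
  | LAtom a
  | TPair (Tree f a) (Tree f a)

-- A context value gamma assigns a tree to each variable.
type CtxVal f a = Map.Map String (Tree f a)

-- A closure: bound variable and body of a lambda_a term, together
-- with its captured context value (cf. Example 4.6).
type Closure f a = (String, Comp, CtxVal f a)

-- Extended expressions: a computation, or a thunk {|u|}_gamma^(f,a)
-- recording an in-progress memoized call of f on atom a.
data Ext f a
  = Plain Comp
  | Thunk (Ext f a) (CtxVal f a) (f, a)

-- A configuration: context value, extended expression, partial
-- bigraph, and the closures of the memoized functions created so far.
data Config f a =
  Config (CtxVal f a) (Ext f a) (Bigraph f a) (Map.Map f (Closure f a))
```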
### Reduction rules
Let \((\![-]\!]_{\gamma}\) be the function evaluating an expression value in a context value \(\gamma\) (_e.g._\((\![x]\!)_{\gamma}=\gamma(x),(\![\mathsf{true}]\!)_{\gamma}=\mathsf{true}\)).
We can define the operational semantics of the language using reduction rules. They provide a step-by-step description of how expressions are evaluated and transformed during execution, following a left-most outer-most strategy, with lexical binding. Given a configuration \((\gamma,u,\mathfrak{g},\lambda)\) (note that if \(u\) is of the form \(\{\!|u^{\prime}|\!\}_{\gamma}^{(f,a)}\), then it is assumed that the function-atom label pair \((f,a)\in\mathfrak{g}_{L}\times\mathfrak{g}_{R}\)), we will apply the following reduction rules:
**Example 4.6**: We now give an example showcasing how these reduction rules apply to a program combining name generation, a coin flip, function abstraction, and stochastic memoization. An atom \(x_{0}\) is generated and used as an argument for a function \(f_{1}\), which performs a coin flip if the argument matches \(x_{0}\). The outcome is memoized, and the result is returned again by the second application. There are two execution traces, depending on the outcome of the coin flip (\(\beta\in\{\mathsf{true},\mathsf{false}\}\)).
\[
\Big(\emptyset,\;\begin{array}{l}\mathsf{let\ val}\ x_{0}\ \leftarrow\ \mathsf{fresh}()\ \mathsf{in}\\ \mathsf{let\ val}\ f_{1}\ \leftarrow\ \lambda_{\mathfrak{a}}x.\ (\mathsf{let\ val}\ b\ \leftarrow\ (x=x_{0})\ \mathsf{in\ if}\ b\ \mathsf{then\ flip}(\tfrac{1}{2})\ \mathsf{else\ false})\ \mathsf{in}\\ \mathsf{let\ val}\ f_{2}\ \leftarrow\ \lambda_{\mathfrak{a}}y.\ f_{1}@y\ \mathsf{in}\ f_{2}@x_{0},\end{array}\;(\emptyset,\emptyset,\emptyset),\;\emptyset\Big)
\]

The first step generates a fresh atom label \(a_{0}\) for \(x_{0}\); the next two steps create the function labels \(f_{1}\) and \(f_{2}\), with all edges to \(a_{0}\) undefined, and store their closures in \(\lambda\):

\[
\rightarrow^{3}\Big(\overbrace{\{x_{0}\mapsto a_{0},\,f_{1}\mapsto f_{1},\,f_{2}\mapsto f_{2}\}}^{\stackrel{{\mathrm{def}}}{{=}}\,\gamma_{0}},\quad f_{2}@x_{0},\quad(\{f_{1},f_{2}\},\;\{a_{0}\},\;\{f_{1}\xrightarrow{\bot}a_{0},\,f_{2}\xrightarrow{\bot}a_{0}\}),\quad\lambda_{0}\Big)
\]

where \(\lambda_{0}\) maps \(f_{1}\) to the closure \((\lambda_{\mathfrak{a}}\,x.\ \mathsf{let\ val}\ b\ \leftarrow\ (x=x_{0})\ \mathsf{in\ if}\ b\ \mathsf{then\ flip}(\tfrac{1}{2})\ \mathsf{else\ false},\,\{x_{0}\mapsto a_{0}\})\) and \(f_{2}\) to \((\lambda_{\mathfrak{a}}\,y.\ f_{1}@y,\,\{x_{0}\mapsto a_{0},f_{1}\mapsto f_{1}\})\). Evaluating \(f_{2}@x_{0}\) finds the edge \(f_{2}\xrightarrow{\bot}a_{0}\) undefined, so the body of \(f_{2}\) is entered as a thunk over \((f_{2},a_{0})\); this triggers \(f_{1}@a_{0}\), whose edge is also undefined, so the body of \(f_{1}\) is entered, the test \(a_{0}=a_{0}\) succeeds, and the coin is flipped. The outcome \(\beta\) is recorded as \(f_{1}\xrightarrow{\beta}a_{0}\) and then, as the outer thunk returns, as \(f_{2}\xrightarrow{\beta}a_{0}\), so each trace ends by returning \(\beta\) with the memo-table \(\{f_{1}\xrightarrow{\beta}a_{0},\,f_{2}\xrightarrow{\beta}a_{0}\}\).
**Lemma 4.8**: _If a configuration of the form \((\gamma,\mathcal{C}[v@w],\mathfrak{g},\lambda)\) is accessible and \(E((\!(v)\!)_{\gamma},(\!(w)\!)_{\gamma})=\bot\), then \(\mathrm{J}(\gamma,\mathcal{C}[v@w],\mathfrak{g},\lambda)\stackrel{{\text{def}}}{{=}}\Gamma\mid\Delta\vdash\mathcal{C}[v@w]:A\) is such that the memoization stack \(\Delta\) does not contain a function-atom label pair with \((\!(v)\!)_{\gamma}\) as first component._
As a corollary, we can then prove that a configuration is accessible only if its memoization stack has no duplicates:
**Lemma 4.9**: _If a configuration \((\gamma,e,\mathfrak{g},\lambda)\) is accessible and \(\mathrm{J}(\gamma,e,\mathfrak{g},\lambda)\stackrel{{\text{def}}}{{=}}\Gamma\mid\Delta\vdash e:A\) is its corresponding configuration judgment, then there is no duplicate in \(\Delta\)._
This in turn enables us to ensure that the operational semantics satisfies the memoization equations:
**Proposition 4.10**: _If \(e_{1}\) and \(e_{2}\) are programs of the form_
\[e_{1}\stackrel{{\text{def}}}{{=}}\mathsf{let}\ \mathsf{val}\ x \ \leftarrow\ \mathsf{fresh}()\,\mathsf{in}\,\mathsf{let}\ \mathsf{val}\ f\ \leftarrow\ \lambda_{\mathfrak{D}}y.\ e\,\mathsf{in}\,\mathsf{let}\ \mathsf{val}\ v_{1}\ \leftarrow\ f@x\,\mathsf{in}\,\mathsf{let}\ \mathsf{val}\ v_{2}\ \leftarrow\ f@x\,\mathsf{in}\,\mathsf{return}(v_{1},v_{2})\] \[e_{2}\stackrel{{\text{def}}}{{=}}\mathsf{let}\ \mathsf{val}\ x \ \leftarrow\ \mathsf{fresh}()\,\mathsf{in}\,\mathsf{let}\ \mathsf{val}\ f\ \leftarrow\ \lambda_{\mathfrak{D}}y.\ e\,\mathsf{in}\,\mathsf{let}\ \mathsf{val}\ v_{1}\ \leftarrow\ f@x\,\mathsf{in}\,\mathsf{return}(v_{1},v_{1})\]
_the configurations \((\emptyset,e_{1},\emptyset,\emptyset)\) and \((\emptyset,e_{2},\emptyset,\emptyset)\) have the same big-step operational semantics._
## 5 Denotational Semantics
In this section we propose a denotational model that verifies the dataflow property (Def. 2.3, Theorem 5.5) and which supports memoization of constant Bernoulli functions (Theorem 5.8) and is sound with respect to the operational semantics of Section 4 (Theorem 5.10). Thus we show that criteria (1)-(5) of Section 1 are consistent.
The memo-tables in memoization are a kind of hidden or local state, and our semantic domain is similar to other models of local state [37, 44, 46, 28] in that it uses a possible worlds semantics in the guise of a functor category.
**Definition 5.1**: A _total bigraph_ is a partial bigraph (Def. 4.1) that does not have any undefined (\(\bot\)) edges. This represents a fully populated memo-table. We notate this \(g=(g_{L},g_{R},E^{g})\), omitting the superscript when it is clear. An _embedding_ between total bigraphs \(\iota\colon g\to g^{\prime}\) is a pair of injections \((\iota_{L}:g_{L}\to g^{\prime}_{L},\iota_{R}:g_{R}\to g^{\prime}_{R})\) that do not add or remove edges: \(E^{g}(f,a)=E^{g^{\prime}}(\iota_{L}(f),\iota_{R}(a))\) for all \((f,a)\in g_{L}\times g_{R}\). Total bigraphs and embeddings form the category \(\mathbf{BiGrph}_{\mathit{emb}}\).
The presheaf category \([\mathbf{BiGrph}_{\mathit{emb}},\mathbf{Set}]\) has products and coproducts, given pointwise [35]. In particular, the denotation of the type of booleans is the constant presheaf \(2\cong 1+1\).
The edge relations collect to form a natural transformation \(\mathcal{E}:[\![\mathbb{F}]\!]\times[\![\mathbb{A}]\!]\to 2\) given by \(\mathcal{E}_{g}(f,a)=E^{g}(f,a)\).
The category \([\mathbf{BiGrph}_{\mathit{emb}},\mathbf{Set}]\) is cartesian closed, as is any presheaf category. By currying \(\mathcal{E}\), we have an embedding of \([\![\mathbb{F}]\!]\) into the function space \(2^{[\![\mathbb{A}]\!]}\), i.e. \([\![\mathbb{F}]\!]\to 2^{[\![\mathbb{A}]\!]}\). In fact, to keep things simpler in this development, we will focus on \([\![\mathbb{F}]\!]\) rather than the full function space \(2^{[\![\mathbb{A}]\!]}\).
### Probabilistic local state monad
In the following, \(X,Y,Z\!:\mathbf{BiGrph}_{\mathit{emb}}\to\mathbf{Set}\) denote presheaves, \(g=(g_{L},g_{R},E^{g}),g^{\prime},h,h^{\prime}\in\mathbf{BiGrph}_{\mathit{emb}}\) bigraphs, and \(\iota,\iota^{\prime}\!:g\hookrightarrow g^{\prime}\) bigraph embeddings. We will omit subscripts when they are clear from the context.
Let \(P_{\!\mathrm{f}}\) be the finite distribution monad: \(P_{\!\mathrm{f}}(X)(g)=\big{\{}p:X(g)\;\to\;[0,1]\;\big{|}\;\mathsf{supp}(p)\) finite and \(\sum_{x}p(x)=1\big{\}}\). By considering the following 'node-generation' monad \(N(X)(g)\stackrel{{\mathrm{def}}}{{=}}\operatorname{colim}_{g \hookrightarrow h}X(h)\) on \([\mathbf{BiGrph}_{\mathit{emb}},\mathbf{Set}]\), one could be tempted to think that modeling name generation and stochastic memoization is a matter of composing these two monads. But this is not quite enough. We also need to remember, in the monadic computations, the probability of a function returning \(\mathsf{true}\) for a fresh, unseen atom. To do so, inspired from Plotkin and Power's local state monad [44] (which was defined on the covariant presheaf category \([\mathbf{Inj},\mathbf{Set}]\), where \(\mathbf{Inj}\) is the category of finite sets and injections), we model probabilistic and name generation effects by the following monad, defined using a coend [35], that we name 'probabilistic local state monad':
**Definition 5.2**: [Probabilistic local state monad] For all covariant presheaves \(X\!:\mathbf{BiGrph}_{\mathit{emb}}\to\mathbf{Set}\) and bigraphs \(g\in\mathbf{BiGrph}_{\mathit{emb}}\):
\[T(X)(g)\stackrel{{\mathrm{def}}}{{=}}\left(P_{\!\mathrm{f}}\int^{ g\hookrightarrow h}\left(X(h)\times[0,1]^{(h-g)_{L}}\right)\right)^{[0,1]^{g_{L}}}\]
The monad \(T\) is similar to the read-only local state monad, except that any fresh node can be initialized. Every \(\lambda\in[0,1]^{g_{L}}\) is thought of as the probability of the corresponding function/left node yielding true on a new fresh atom. We will refer to such a \(\lambda\) as a _state of biases_. The coend 'glues together' the extensions of the memo-table that are compatible with the constraints imposed by the current computation. The monad allows manipulating probability distributions over such extensions, while keeping track of the probability of new nodes.
Equivalence classes in \(\int^{g\hookrightarrow h}X(h)\times[0,1]^{(h-g)_{L}}\) are written \([x_{h},\lambda^{h}]_{g}\). In the coend, the quotient can be thought of as taking care of garbage collection: nodes that are not used in the bigraph environment can be discarded. We use Dirac's bra-ket notation\({}^{3}\) \(\big{|}[x_{h},\lambda^{h}]_{g}\big{\rangle}_{h}\) to denote a formal column vector of equivalence classes ranging over a finite set of \(h\)'s. As such, a formal convex sum \(\sum_{i}p_{i}[x_{h_{i}},\lambda^{h_{i}}]_{g}\in P_{\!\mathrm{f}}\int^{g\hookrightarrow h}X(h)\times[0,1]^{(h-g)_{L}}\) will be concisely denoted by \(\big{\langle}\boldsymbol{p}\,\big{|}\,[x_{h},\lambda^{h}]_{g}\big{\rangle}_{h}\).
Footnote 3: popularized by Bart Jacobs for finite probability distributions [24]
**Definition 5.3**: [Action of \(T(X)\) on morphisms]
\[
T(X)(g\stackrel{{\iota}}{{\hookrightarrow}}g^{\prime})\colon\ \Big(P_{\mathrm{f}}\int^{g\hookrightarrow h}X(h)\times[0,1]^{(h-g)_{L}}\Big)^{[0,1]^{g_{L}}}\longrightarrow\Big(P_{\mathrm{f}}\int^{g^{\prime}\hookrightarrow h^{\prime}}X(h^{\prime})\times[0,1]^{(h^{\prime}-g^{\prime})_{L}}\Big)^{[0,1]^{g^{\prime}_{L}}},\qquad\vartheta\mapsto\lambda^{\prime}\mapsto P_{\mathrm{f}}(\psi_{g,g^{\prime}})\big(\vartheta(\lambda^{\prime}\circ\iota_{L})\big)
\]
where:
* \(\iota_{L}\colon g_{L}\hookrightarrow g^{\prime}_{L}\) is the embedding restricted to left nodes, the maps \(\psi_{g,g^{\prime}}\) are given by: \[\left\{\begin{array}{l}X(h)\times[0,1]^{(h-g)_{L}}\to X(h\coprod_{g}g^{\prime} )\times[0,1]^{(h\coprod_{g}g^{\prime}-g^{\prime})_{L}}\to\int^{g^{\prime} \hookrightarrow h^{\prime}}X(h^{\prime})\times[0,1]^{(h^{\prime}-g^{\prime})_{L} }\\ (x_{h},\,\lambda^{h})\mapsto(X(h\hookrightarrow h\coprod_{g}g^{\prime})(x_{h}), \,\lambda^{h})\mapsto[X(h\hookrightarrow h\coprod_{g}g^{\prime})(x_{h}),\, \lambda^{h}]_{g^{\prime}}\end{array}\right.\] \(\int^{g\hookrightarrow h}X(h)\times[0,1]^{(h-g)_{L}}\stackrel{{ \psi_{g,g^{\prime}}}}{{\longrightarrow}}\int^{g^{\prime} \hookrightarrow h^{\prime}}X(h^{\prime})\times[0,1]^{(h^{\prime}-g^{\prime})_{L}}\) extranatural in \(h\)
* and \(h\coprod_{g}g^{\prime}\) is the pushout in the category of graphs regarded as an object of \(\mathbf{BiGrph}_{\mathit{emb}}\).
More concretely, with Dirac's bra-ket notation, \(T(X)(g\stackrel{{\iota}}{{\hookrightarrow}}g^{\prime})\) can be written as:
\[T(X)(\iota)=\left\{\begin{array}{l}\left(P_{\mathrm{f}}\int^{g\hookrightarrow h }X(h)\times[0,1]^{(h-g)_{L}}\right)^{[0,1]^{g_{L}}}\to\left(P_{\mathrm{f}}\int ^{g^{\prime}\hookrightarrow h^{\prime}}X(h^{\prime})\times[0,1]^{(h^{\prime}-g^ {\prime})_{L}}\right)^{[0,1]^{g^{\prime}_{L}}}\\ \vartheta\mapsto\lambda^{\prime}\mapsto\mathrm{let}\ \vartheta(\lambda^{\prime} \iota_{L})\ =\ \left\langle\boldsymbol{p}\,\big{|}\,[x_{h},\lambda^{h}]_{g}\right\rangle_{h} \text{ in }\left\langle\boldsymbol{p}\,\big{|}\,[X(h\hookrightarrow h\coprod_{g}g^{ \prime})(x_{h}),\lambda^{h}]_{g^{\prime}}\right\rangle_{h}\end{array}\right.\]
\(T\) can be endowed with the structure of a \([\mathbf{BiGrph}_{\mathit{emb}},\mathbf{Set}]\)-enriched monad, that is, since \([\mathbf{BiGrph}_{\mathit{emb}},\mathbf{Set}]\) is a (cartesian) monoidal closed category, a strong monad. Its enriched unit \(\eta_{X}\colon 1\to TX^{X}\) and bind \((-)^{*}\colon TY^{X}\to TY^{TX}\) are as follows.\({}^{4}\)
Footnote 4: following Fosco Loregian [34].
We have the desired dataflow property, meaning that \(T\) is an abstract model of probability [33]:
**Theorem 5.5**: _The monad \(T\) satisfies the dataflow property (2.3): it is strong commutative and affine._
**Proof (Sketch)** In the presheaf category, let \(Z^{Y}\times Y^{X}\stackrel{{\circ}}{{\to}}Z^{X}\) and \(Z^{Y}\times Y\stackrel{{\rm ev}}{{\longrightarrow}}Z\) denote the internal composition and evaluation, and \(f^{*}\stackrel{{\rm def}}{{=}}1\stackrel{{ f}}{{\to}} TY^{X}\stackrel{{(-)^{*}}}{{\longrightarrow}}TY^{TX}\) the internal Kleisli lifting of a global element \(f\). To prove that \(T\) is strong, we show, internally, the associativity (\((\Psi^{*}\times\Phi^{*})\;;\circ=((\Psi^{*}\times\Phi)\;;\circ)^{*}\)) of the bind, the left unit law (\(\eta^{*}=\lambda_{TX}.{\rm id}_{TX}\)), and the right unit law (\((\Phi^{*}\times\eta)\;;\circ=\Phi\)), for all \(\Phi\colon 1\to TY^{X},\Psi\colon 1\to TZ^{Y}\). Finally, affineness stems from Lemma 5.4, and commutativity is the equation \(a\gg\!\!=\lambda\,x.\,b\gg\!\!=\lambda\,y.\,\eta(x,y)\;=\;b\gg\!\!=\lambda\,y.\,a\gg\!\!=\lambda\,x.\,\eta(x,y)\) internally, for all \(a\colon 1\to TA,\ b\colon 1\to TB\), which amounts to showing:
\[\left(\left(\lambda_{A}.\!\left(\left((\lambda_{B}.\eta)^{*}\times b\right)\;; \,{\rm ev}\right)\right)^{*}\times a\right);{\rm ev}=\left(\left(\lambda_{B}. \!\left(\left((\lambda_{A}.\eta)^{*}\times a\right)\;;\,{\rm ev}\right)\right)^ {*}\times b\right);{\rm ev}\]
\(\Box\)
### Categorical semantics
In our language, the denotational interpretation of values, computations (return and let binding), and matching (elimination of \({\sf bool}\)'s and product types) is standard. We interpret computation judgements \(\Gamma\stackrel{{\sf g}}{{=}}t\colon A\) as morphisms \([\![\Gamma]\!]\to T([\![A]\!])\), by induction on the structure of typing derivations. The context \(\Gamma\) is built of \({\sf bool}\)'s, \(\mathbb{F}\), \(\mathbb{A}\) and products. Therefore, \([\![\Gamma]\!]\) is isomorphic to a presheaf of the form \(2^{k}\times{\bf BiGrph}_{emb}(\circ,-)^{\ell}\times{\bf BiGrph}_{emb}(\bullet,-)^{m}\), where \(k,\ell,m\) are the numbers of booleans, functions and atoms in \(\Gamma\), and \(X^{n}\) is the \(n\)-fold finite product in the category of presheaves. Computations of type \(\mathbb{A}\) and \(\mathbb{F}\) then have an intuitive interpretation:
**Proposition 5.6**: _A computation of type \(\mathbb{A}\) returns the label of an already existing atom or a fresh one with its connections to the already existing functions: \(T([\![\mathbb{A}]\!])(g)\,\cong\,P_{\!\!1}(g_{R}+2^{g_{L}})^{[0,1]^{g_{L}}}\). A computation of type \(\mathbb{F}\) returns the label of an already existing function or creates a new function with its connections to already existing atoms and a fixed probabilistic bias: \(T([\![\mathbb{F}]\!])(g)\,\cong\,P_{\!\!1}(g_{L}+2^{g_{R}}\times[0,1])^{[0,1]^{g_{L}}}\)._
For every bigraph \(g\), we denote by \(R_{g}\) (resp. \(L_{g}\)) the set of bigraphs \(h\in g/{\bf BiGrph}_{emb}\) having one more right (resp. left) node than \(g\), and that are the same otherwise. For every \(e\in 2^{g_{L}}\) (resp. \(e\in 2^{g_{R}}\)), we denote by \({g+_{e}\bullet}\in R_{g}\) (resp. \({g+_{e}\circ}\in L_{g}\)) the bigraph obtained by adding a new right (resp. left) node to \(g\) with connectivity \(e\) to the left (resp. right) nodes in \(g\). We now give the denotational semantics of various constructs in our language. Henceforth, we will denote normalization constants (that can easily be inferred from the context) by \(Z\).
**Denotation of \(\Gamma\stackrel{{\sf g}}{{=}}{\sf flip}(\theta):{\sf bool}\)**
First, by Lemma 5.4, we note that \(T([\![{\sf bool}]\!])g\,\cong\,P_{\!\!1}(2)^{[0,1]^{g_{L}}}\,\cong\,[0,1]^{[0,1]^ {g_{L}}}\). So naturally, the map \([\![{\sf flip}(\theta)]\!]_{g}\) is the constant function returning the bias \(\theta\).
**Denotations of \(\Gamma,v:\mathbb{F},w:\mathbb{A}\stackrel{{\sf g}}{{=}}v\mbox{$ \mathbb{\oplus}$}w:{\sf bool}\), and \(\Gamma,v:\mathbb{A},w:\mathbb{A}\stackrel{{\sf g}}{{=}}v=w:{\sf bool}\)**
The map \([\![v\mbox{$\mathbb{\oplus}$}w]\!]_{g}:[\![\Gamma,v:\mathbb{F},w:\mathbb{A}](g) \to[0,1]^{[0,1]^{g_{L}}}\) returns \(1\) if the left node corresponding to \(v\) is connected to the one of \(w\) in \(g\), \(0\) otherwise. Using the internal edge relation \(\mathcal{E}\), it is the internal composition:
\[[\![v\oplus w]\!]\stackrel{\mathrm{def}}{=}[\![\Gamma]\!]\times[\![\mathbb{F}]\!]\times[\![\mathbb{A}]\!]\xrightarrow{\pi}[\![\mathbb{F}]\!]\times[\![\mathbb{A}]\!]\xrightarrow{\mathcal{E}}1+1\xrightarrow{[\iota_{\mathsf{true}},\,\iota_{\mathsf{false}}]}[\![\mathsf{bool}]\!]\xrightarrow{\eta}T([\![\mathsf{bool}]\!])\]
where \([-,\,-]\) is the copairing and \(\iota_{\mathsf{true}},\iota_{\mathsf{false}}\colon 1\to[\![\mathsf{bool}]\!]\cong 2\) are the coprojections.
**Denotation of \(\Gamma\stackrel{{\sf g}}{{=}}\lambda_{\mathfrak{g}}x.\,t:\mathbb{F}\)**

The interpretation of memoized functions uses, for every bigraph \(g\), the map
\[\phi_{g}\colon\left\{\begin{array}{l}T(\mathbb{F})(g)\cong P_{\mathbb{I}}(g_{R}+2 ^{g_{L}})^{[0,1]^{g_{L}}}\rightarrow[0,1]^{[0,1]^{g}\times(g_{R}+2^{g_{L}})} \cong T\big{(}\llbracket\mathsf{bool}\rrbracket^{\llbracket\mathbb{A}\rrbracket} \big{)}(g)\\ \vartheta\mapsto(\lambda,a)\in[0,1]^{g}\times(g_{R}+2^{g_{L}})\mapsto\text{let } \vartheta(\lambda)\ =\ \sum_{a^{\prime}\in g_{R}+2^{g_{L}}}p_{a^{\prime}}\,|a^{\prime} \rangle\text{ in }p_{a}\end{array}\right.\]
to obtain \(\mathsf{mem}\colon T(\llbracket\mathsf{bool}\rrbracket)^{\llbracket\mathbb{A}\rrbracket}\to T(\llbracket\mathsf{bool}\rrbracket^{\llbracket\mathbb{A}\rrbracket})\), and then we show eq. (1) in the presheaf topos. \(\Box\)
**Example 5.9**: The denotation of \(\mathsf{let\;val}\ x\leftarrow\mathsf{fresh}()\ \mathsf{in}\ \mathsf{let\;val}\ f\leftarrow\lambda_{\mathfrak{g}}y.\,\mathsf{flip}(\theta)\ \mathsf{in}\ f\oplus x\) is the map
\[1\times 1\xrightarrow{\Big{(}\lambda_{\llbracket\mathbb{A}\rrbracket} \cdot\Big{(}\big{(}(\lambda_{\llbracket\mathbb{F}\rrbracket}\cdot f\!\oplus\!x )^{*}\times(\lambda_{\mathfrak{g}}y\text{. }\mathsf{flip}(\theta))\big{)}\big{)}^{*}\times\mathsf{ fresh}()}T(\llbracket\mathsf{bool}\rrbracket)^{T\llbracket\mathbb{A}\rrbracket}\times T( \llbracket\mathbb{A}\rrbracket)\xrightarrow{\mathrm{ev}}T(\llbracket\mathsf{bool }\rrbracket)\]
given by \(*,*\mapsto\lambda\mapsto\theta\,|\mathsf{true}\rangle+(1-\theta)\,|\mathsf{false}\rangle\), as desired.
### Soundness
Configurations are of the form \((\gamma,e,\mathfrak{g},\lambda)\), where \(e\) is of type \(A\), and can be denotationally interpreted as
\[\llbracket(\gamma,e,\mathfrak{g},\lambda)\rrbracket\stackrel{{\mathrm{def}}}{{=}}\sum_{\tilde{e}\in 2^{U_{\mathfrak{g}}}}\prod_{(\mathfrak{f},a)\in U_{\mathfrak{g}}}\lambda(\mathfrak{f})^{\tilde{e}(\mathfrak{f},a)}\big(1-\lambda(\mathfrak{f})\big)^{1-\tilde{e}(\mathfrak{f},a)}\,\llbracket e\rrbracket_{\mathfrak{g}_{\tilde{e}}}(\gamma,\lambda)\in T(A)_{\mathfrak{g}_{\tilde{e}}}(\gamma)(\lambda)\]
where \(U_{\mathfrak{g}}\stackrel{{\mathrm{def}}}{{=}}\big{\{}(\mathfrak{ f},a)\ |\ E(\mathfrak{f},a)=\bot\big{\}}\subseteq\mathfrak{g}_{L}\times \mathfrak{g}_{R}\) and \(\mathfrak{g}_{\tilde{e}}\) extends \(\mathfrak{g}\) according to \(\tilde{e}\): \(E(\mathfrak{f},a)=\tilde{e}(\mathfrak{f},a)\) for all \((\mathfrak{f},a)\in U_{\mathfrak{g}}\). We can then prove that the denotational semantics is sound with respect to the operational semantics:
**Theorem 5.10** (Soundness): \[\llbracket(\gamma,e,\mathfrak{g},\lambda)\rrbracket\cong\sum_{\begin{subarray} {c}(\gamma,e,\mathfrak{g},\lambda)\rightarrow(\gamma^{\prime},e^{\prime}, \mathfrak{g}^{\prime},\lambda^{\prime})\\ \text{with proba. }p\end{subarray}}p\cdot\llbracket(\gamma^{\prime},e^{\prime}, \mathfrak{g}^{\prime},\lambda^{\prime})\rrbracket\]
**Proof (Sketch)** As an intermediate step, we build a big-step semantics, and show that this is sound, _i.e._ making a small step of the operational semantics (§4) does not change the distributions in the final big-step semantics. Next, we show that the big step semantics of a configuration corresponds to the denotational semantics, for which the main thing to check is that the equivalence classes of the coend are respected. \(\Box\)
## 6 Haskell Implementation
We have a practical Haskell implementation comparing the small-step, big-step operational, and denotational semantics to showcase the soundness theorem with QuickCheck, in a setting analogous (albeit slightly different,⁵ to better suit the specificities of Haskell) to the theoretical one we presented. The artefact is openly available [26].
Footnote 5: Unlike our mathematical framework, where we can memoize all freshness-invariant functions (5.7), our implementation only memoizes constant Bernoulli functions. Another key difference is that we could not implement coends in Haskell, so we used a global state monad transformer to manage the memoization bigraph, keeping track of edges between left nodes (function labels) and right nodes (atom labels) that have been sampled.
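To make the footnote's strategy concrete, here is a minimal Python analogue (our illustration, not the authors' Haskell artefact [26]): function labels carry a fixed Bernoulli bias, and the "edge" between a function label and an atom label is sampled lazily on first application, then cached in a table playing the role of the state-managed memoization bigraph. All names (`fresh`, `mem_flip`, `apply_fun`) are hypothetical.

```python
import random

_edge_cache = {}   # (function id, atom label) -> sampled boolean "edge"
_counter = 0       # supply of fresh labels

def fresh():
    """Return a fresh atom label (a new right node of the bigraph)."""
    global _counter
    _counter += 1
    return ('atom', _counter)

def mem_flip(theta):
    """Memoize the constant Bernoulli function y |-> flip(theta):
    returns a function label (a new left node) carrying its bias."""
    global _counter
    _counter += 1
    return ('fun', _counter, theta)

def apply_fun(f, a):
    """f(a): sample the edge between f and a once, then reuse it."""
    key = (f[1], a)
    if key not in _edge_cache:
        _edge_cache[key] = random.random() < f[2]
    return _edge_cache[key]

# Example 5.9: one fresh atom, one memoized flip; f(x) is Bernoulli(theta),
# and repeated applications agree, as memoization demands.
x, f = fresh(), mem_flip(0.3)
assert apply_fun(f, x) == apply_fun(f, x)
```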
## 7 Summary
In conclusion, we have successfully tackled the open problem of finding a semantic interpretation of stochastic memoization for a class of functions with diffuse domain that includes the constant Bernoulli functions. Our contributions pave the way for further exploration and development of probabilistic programming and the sound application of stochastic memoization in Bayesian nonparametrics.
## 8 Acknowledgements
We are grateful to Nate Ackerman, Cameron Freer, Dan Roy and Hongseok Yang for various conversations over many years, relating to [54], name generation, stochastic memoization and subsequent developments. The presheaf category here is related to the Rado topos [4] that we have been exploring in ongoing work, with Jacek Karwowski and Sean Moss and the above four coauthors. Thanks to Dario Stein for discussions about name generation and for pointing out [27]. Thanks too to Swaraj Dash, Mathieu Huot, Ohad Kammar, Oleg Kiselyov, Alex Lew, and all in the Oxford group for many discussions about this topic. Finally, thank you to our reviewers for detailed feedback.
|
2309.10122 | Graph Threading | Inspired by artistic practices such as beadwork and himmeli, we study the
problem of threading a single string through a set of tubes, so that pulling
the string forms a desired graph. More precisely, given a connected graph
(where edges represent tubes and vertices represent junctions where they meet),
we give a polynomial-time algorithm to find a minimum-length closed walk
(representing a threading of string) that induces a connected graph of string
at every junction. The algorithm is based on a surprising reduction to
minimum-weight perfect matching. Along the way, we give tight worst-case bounds
on the length of the optimal threading and on the maximum number of times this
threading can visit a single edge. We also give more efficient solutions to two
special cases: cubic graphs and the case when each edge can be visited at most
twice. | Erik D. Demaine, Yael Kirkpatrick, Rebecca Lin | 2023-09-18T19:51:58Z | http://arxiv.org/abs/2309.10122v2 | # Graph Threading
###### Abstract
Inspired by artistic practices such as beadwork and himmeli, we study the problem of _threading_ a single string through a set of tubes, so that pulling the string forms a desired graph. More precisely, given a connected graph (where edges represent tubes and vertices represent junctions where they meet), we give a polynomial-time algorithm to find a minimum-length closed walk (representing a threading of string) that induces a connected graph of string at every junction. The algorithm is based on a surprising reduction to minimum-weight perfect matching. Along the way, we give tight worst-case bounds on the length of the optimal threading and on the maximum number of times this threading can visit a single edge. We also give more efficient solutions to two special cases: cubic graphs and when each edge can be visited only twice.
## 1 Introduction
Various forms of art and craft combine tubes together by threading cord through them to create a myriad of shapes, patterns, and intricate geometric structures. In beadwork [1], artists string together beads with thread or wire. In traditional 'straw mobile' crafts [14] -- from the Finnish and Swedish holiday traditions of himmeli [13, 1] to the Polish folk art of pająki [15] -- mobile decorations are made by binding straws together with string. Artist Alison Martin has shown experiments where bamboo connected by strings automatically forms polyhedral structures by pulling the strings with a weight [11].
For engineering structures, these techniques offer a promising mechanism for constructing reconfigurable or deployable structures, capable of transforming between distinct geometric configurations: a collection of tubes, loosely woven, can be stored in compact configurations, and then swiftly deployed into desired target geometric forms, such as polyhedra, by merely pulling a string taut. Figure 1 shows a prototype of such a structure, illustrating the potential of this approach. The popular 'push puppet' toy, originally invented by Walther Kourt Wals in Switzerland in 1926 [Rod], also embodies this mechanism.
In contrast to related work [10, 11], we study a _theoretical_ formulation of these ideas: threading a single string through a collection of tubes to mimic the connectivity of a given graph; refer to Figure 2. Consider a connected graph \(G=(V,E)\) with minimum vertex degree \(2\), where each edge \(e\in E\) represents a tube and each vertex \(v\in V\) represents the junction of tubes incident to \(v\). A _graph threading_\(T\) of \(G\) is a closed walk through \(G\) that visits every edge at least once, induces connected "junction graphs", and has no 'U-turns'. The _junction graph_\(J(v)\) of a vertex \(v\) induced by a closed walk has a vertex for each tube incident to \(v\), and has an edge between two vertices/tubes every time the walk visits \(v\) immediately in between traversing those tubes.
A threading \(T\) of \(G\) must have a connected junction graph \(J(v)\) for every vertex \(v\in V\), and must have no _U-turns_: when exiting one tube, the walk must next enter a different tube. Define the _length_\(|T|\) of \(T\) to be the total length of edges visited by \(T\). For simplicity, we assume for much of our study that edges (tubes) have unit length -- in which case \(|T|\) is the number of edge visits made by \(T\) -- and then generalize to the weighted case with arbitrary edge lengths.
Our Results. In this paper, we analyze and ultimately solve the Optimal Threading problem, where the goal is to find a minimum-length threading \(T\) of a given graph \(G\). Our results are as follows.
* In Section 2, we give a local characterization of threading, in terms of local (per-vertex and per-edge) constraints, that helps us structure our later algorithms and analysis.
* In Section 3, we prove tight worst-case bounds on two measures of an optimal threading \(T\). First, we analyze the minimum length \(|T|\) in a graph with unit edge lengths, proving that \(2m-n\leq|T|<2m\) where \(m\) and \(n\) are the number of edges and vertices, respectively, and that both of these extremes can be realized asymptotically. Second, we prove that \(T\) traverses any one edge at most \(\Delta-1\) times, where \(\Delta\) denotes the maximum vertex degree in \(G\), and that this upper bound can be realized. The second bound is crucial for developing subsequent algorithms.
Figure 1: A deployable structure made from disconnected 3D-printed elements (white) connected by string, which automatically shifts between soft (left) and rigid (right) states by pulling on the endpoints of the string beneath the platform (black). This design was developed by the third author in collaboration with Tomohiro Tachi.
* In Section 4, we develop a polynomial-time algorithm for Optimal Threading, even with arbitrary edge lengths, by a reduction to minimum-weight perfect matching.
* In Section 5, we develop more efficient algorithms for two scenarios: Optimal Threading on cubic graphs, and Double Threading, a constrained version of Optimal Threading where the threading \(T\) is allowed to visit each edge at most twice.
## 2 Problem Formulation
Let \(G=(V,E)\) be a graph with \(n=|V|\) vertices and \(m=|E|\) edges. Assume until Section 4.2.2 that \(G\)'s edges have unit length. Recall that a _threading_ of \(G\) is a closed walk through \(G\) that has no U-turns and induces a connected junction graph at each vertex. As an alternative to this 'global' definition (a closed walk), we introduce a more 'local' notion of threading consisting of constraints at each edge and vertex of the graph, and prove its equivalence to threading.
Before giving the formal definition of 'local threading', we give the intuition. A local threading assigns a nonnegative integer \(x_{uv}\in\mathbb{N}\) for each edge \(uv\in E\), which counts the number of times the threading visits or _threads_ edge \(uv\); we refer to \(x_{uv}\) as the _count_ of \(uv\). These integers are subject to four constraints, which we give an intuition for by arguing that they are necessary conditions for a threading. First, each \(uv\) must be threaded at least once, so \(x_{uv}\geq 1\) for all \(uv\in E\). Second, a threading increments the count of _two_ edges at junction \(v\) every time it traverses \(v\), so the sum of counts for all edges incident to \(v\) must be even. Third, forbidding U-turns implies that, if \(uv\) is threaded \(k\) times, then the sum of counts for the remaining edges incident to \(v\) must be at least \(k\) to supply these visits. Fourth, because the junction graph \(J(v)\) of \(v\) is connected, it has at least enough edges for a spanning tree -- \(d(v)-1\) where \(d(v)\) denotes the degree of \(v\) -- so the sum of counts of edges incident to \(v\) must be at least \(2(d(v)-1)\). More formally:
**Definition 2.1** (Local Threading).: _Given a graph \(G=(V,E)\), a **local threading** of \(G\) consists of integers \(\left\{x_{uv}\right\}_{uv\in E}\) satisfying the following constraints:_
Figure 2: (a) The closed walk (red) on the graph (black) of a tetrahedron induces junctions graphs (circled on the right) that are connected, and so it is a threading. (b) The union of junction graphs is called the _threading graph_ (Section 2.2).
**(C1)**: \(x_{uv}\geq 1\) _for all_ \(uv\in E\)_;_
**(C2)**: \(\sum_{u\in N(v)}x_{uv}\equiv 0\pmod{2}\) _for all_ \(v\in V\)_;_
**(C3)**: \(\sum_{w\in N(v)\setminus\{u\}}x_{wv}\geq x_{uv}\) _for all_ \(uv\in E\)_; and_

**(C4)**: \(\sum_{u\in N(v)}x_{uv}\geq 2(d(v)-1)\) _for all_ \(v\in V\)_._
_The **length** of \(\{x_{uv}\}\) is \(\sum_{uv\in E}x_{uv}\), and Optimal Local Threading is the problem of finding the minimum-length local threading._
Optimal Local Threading is in fact an integer linear program, though this is not helpful algorithmically because integer programming is NP-complete. Nonetheless, local threading will be a useful perspective for our later algorithms.
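Because Definition 2.1 is purely local, a candidate solution is easy to machine-check. Below is a minimal sketch (ours, not from the paper; the helper name `is_local_threading` is hypothetical) that verifies (C1)-(C4) directly.

```python
def is_local_threading(adj, x):
    """Check constraints (C1)-(C4) of Definition 2.1.

    adj maps each vertex v to its neighbour set N(v); x maps each
    undirected edge frozenset({u, v}) to its count x_uv."""
    for v, nbrs in adj.items():
        counts = [x[frozenset({u, v})] for u in nbrs]
        s = sum(counts)
        if any(c < 1 for c in counts):      # (C1) every edge threaded
            return False
        if s % 2 != 0:                      # (C2) even total at each junction
            return False
        if any(s - c < c for c in counts):  # (C3) no forced U-turns
            return False
        if s < 2 * (len(nbrs) - 1):         # (C4) room for a spanning tree
            return False
    return True

# The double threading of K4 (every edge twice, cf. Section 3.1) is valid:
adj = {v: {u for u in range(4) if u != v} for v in range(4)}
x = {frozenset({u, v}): 2 for u in range(4) for v in range(u + 1, 4)}
assert is_local_threading(adj, x)
```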
The observations above show that any threading \(T\) induces a local threading by setting each count \(x_{uv}\) to the number of times \(T\) visits edge \(uv\), with the same length: \(|T|=\sum_{uv\in E}x_{uv}\). In the following theorem, we show the converse, and thus the equivalence of threadings with local threadings:
**Theorem 2.2**.: _We can construct a threading \(T\) of \(G\) from a local threading \(\{x_{uv}\}\) of \(G\) such that \(T\) visits edge \(uv\) exactly \(x_{uv}\) times. Hence \(|T|=\sum_{uv\in E}x_{uv}\)._
We shall prove this theorem in two parts. First, we show that it is always possible to form a junction graph at every vertex given a local threading (Section 2.1). Then we show that a closed walk can be obtained from the resulting collection of junction graphs (Section 2.2).
### 2.1 Constructing a Connected Junction Graph
Forming a junction graph \(J(v)\) at vertex \(v\) reduces to constructing a connected graph on vertices \(t_{1},\ldots,t_{d(v)}\), where each vertex represents a tube incident with \(v\), with degrees \(x_{1},\ldots,x_{d(v)}\), respectively. We shall construct \(J(v)\) in two steps, first in the case where (C4) holds with equality (Lemma 2.3) and then in the general case (Lemma 2.4).
**Lemma 2.3**.: _We can construct a tree \(S\) consisting of \(d\) vertices with respective degrees \(x_{1},\ldots,x_{d}\geq 1\) satisfying \(\sum_{i=1}^{d}x_{i}=2(d-1)\) in \(O(d)\) time._
Proof.: We provide an inductive argument and a recursive algorithm. In the base case, when \(d=2\), \(x_{1}=x_{2}=1\) and the solution is a one-edge path. For \(d>2\), the average \(x_{i}\) value is \(\frac{2(d-1)}{d}\), which is strictly between \(1\) and \(2\). Hence there must be one vertex \(i\) satisfying \(x_{i}>1\) and another vertex \(j\) satisfying \(x_{j}=1\). Now apply induction/recursion to \(x^{\prime}\) where \(x^{\prime}_{k}=x_{k}\) for all \(k\notin\{i,j\}\), \(x^{\prime}_{i}=x_{i}-1\), and \(x_{j}\) does not exist (so there are \(d-1<d\) values), to obtain a tree \(S^{\prime}\). We can construct the desired tree \(S\) from \(S^{\prime}\) by adding the vertex \(j\) and edge \((i,j)\).
The recursive algorithm can be implemented in \(O(d)\) time as follows. We maintain two stacks: the first for vertices of degree \(>1\) and the second for vertices of degree \(1\). In each step, we pop vertex \(i\) from the first stack, pop vertex \(j\) from the second stack, and connect vertices \(i\) and \(j\). We then decrease \(x_{i}\) by \(1\) and push it back onto one of the stacks depending on its new value. This process continues until the stacks are empty. Each step requires constant time and we perform at most \(\sum_{i=1}^{d}x_{i}=O(d)\) steps, so the total running time is \(O(d)\).
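The two-stack procedure can be written out directly; the following sketch (our illustration, with the hypothetical name `degree_tree`) returns the tree's edge list.

```python
def degree_tree(x):
    """Build a tree on vertices 0..d-1 realizing the degree sequence x,
    given x[i] >= 1 and sum(x) == 2*(len(x) - 1), via the two stacks
    from the proof of Lemma 2.3 (a sketch)."""
    d = len(x)
    assert d >= 2 and all(c >= 1 for c in x) and sum(x) == 2 * (d - 1)
    x = list(x)
    big = [i for i in range(d) if x[i] > 1]    # residual degree > 1
    ones = [i for i in range(d) if x[i] == 1]  # residual degree == 1
    edges = []
    while big:
        i, j = big.pop(), ones.pop()
        edges.append((i, j))
        x[i] -= 1
        (big if x[i] > 1 else ones).append(i)
    edges.append((ones.pop(), ones.pop()))     # exactly two leaves remain
    return edges

print(degree_tree([3, 1, 1, 1]))   # a star on 4 vertices
```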
**Lemma 2.4**.: _Given a local threading \(\{x_{e}\}\) and a vertex \(v\in V\), we can construct a connected junction graph \(J(v)\) with no self-loops in \(O\left(d(v)\log d(v)+\sum_{u\in N(v)}x_{uv}\right)\) time._
Proof.: Algorithm 1 describes how to construct a connected junction graph \(J(v)\), assuming the notation introduced at the start of this section. This graph is characterized by its connectivity and the absence of self-loops, with the latter being ensured in Step 3b with \(\alpha\neq\beta\). To prove its connectivity, we demonstrate the proper application of the inductive procedure outlined in the proof of Lemma 2.3 in forming a tree (Step 4). We only need to validate that \(x^{\prime}_{1},\ldots,x^{\prime}_{d(v)}\geq 1\), as \(\sum_{i=1}^{d(v)}x^{\prime}_{i}=2(d(v)-1)\) is guaranteed upon the termination of the loop (Step 3). Suppose for contradiction that \(x^{\prime}_{k}<1\). It follows that \(x^{\prime}_{k}=1\) at the start of some iteration and was subsequently decremented, either via Step 3a or 3b. We consider these two cases:
* **Case 1** (Step 3a, \(k=\alpha\)): \(x^{\prime}_{k}\geq x^{\prime}_{i}\) for all \(i\in\{1,\ldots,d(v)\}\), so \[\sum_{i=1}^{d(v)}x^{\prime}_{i}\leq d(v)\times x^{\prime}_{k}=d(v)\leq 2(d(v)-1),\] a contradiction for any \(d(v)>1\), which is assumed.
* **Case 2** (Step 3b, \(k=\beta\)): As \(x^{\prime}_{k}\geq x^{\prime}_{i}\) for all \(i\in\{1,\ldots,d(v)\}\setminus\{\alpha\}\), so \[\sum_{i\in\{1,\ldots,d(v)\}\setminus\{\alpha\}}x^{\prime}_{i}\leq(d(v)-1) \times x^{\prime}_{k}=d(v)-1.\] Recall that \(\sum_{i=1}^{d(v)}x^{\prime}_{i}=x^{\prime}_{\alpha}+\sum_{i\in\{1,\ldots,d(v) \}\setminus\{\alpha\}}x^{\prime}_{i}\geq 2d(v)\) is required to enter the loop. Hence, applying the above deduction, \(x^{\prime}_{\alpha}>\sum_{i\in\{1,\ldots,d(v)\}\setminus\{\alpha\}}x^{\prime}_ {i}\), contradicting the below invariant (Equation 1) of the loop in Step 3.
Loop Invariant: The following invariant is maintained by the algorithm's loop (Step 3), established on initialization via (C3):
\[x^{\prime}_{i}\leq\sum_{j\in\{1,\ldots,d(v)\}\setminus\{i\}}x^{\prime}_{j}\text { for all }i\in\{1,\ldots,d(v)\} \tag{1}\]
We observe that \(\sum_{i=1}^{d(v)}x^{\prime}_{i}\) decreases by 2 with every iteration: either both sides of Equation 1 are reduced by 1, thereby maintaining the inequality, or the LHS remains unchanged while the RHS is reduced by 2. In the latter scenario, counts \(x^{\prime}_{\alpha},x^{\prime}_{\beta}\geq x^{\prime}_{i}\) are updated in Steps 3ab. Observe that \(x^{\prime}_{\alpha}\geq 2\) because \(\sum_{i=1}^{d(v)}x^{\prime}_{i}\geq 2d(v)\) is a prerequisite for loop entry. Letting \(x^{\prime\prime}_{i}\) denote the value of \(x^{\prime}_{i}\) at the beginning of the next iteration, we arrive at the desired conclusion:
\[x^{\prime\prime}_{i}=x^{\prime}_{i}\leq(x^{\prime}_{\alpha}-2)+x^{\prime}_{ \beta}\leq\sum_{j\in\{1,\ldots,d(v)\}\setminus\{i\}}x^{\prime}_{j}-2=\sum_{j \in\{1,\ldots,d(v)\}\setminus\{i\}}x^{\prime\prime}_{j}.\]
Running time: We sort the vertex degrees in \(O(d(v)\log d(v))\) time prior to Step 3 and preserve this ordering throughout the loop (e.g., by employing a binary search tree) for constant-time execution of Steps 3ab. Thus, Steps 3 and 4 together require \(O(\sum_{i=1}^{d(v)}x_{i})\) time (Lemma 2.3), and so the total algorithm running time is \(O(d(v)\log d(v)+\sum_{u\in N(v)}x_{uv})\).
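Since the listing of Algorithm 1 is given only through the proof's Steps 3-4, here is our reconstruction as a Python sketch (not the paper's pseudocode): while the counts exceed the tree budget \(2(d-1)\), connect the two tubes with the largest remaining counts and decrement both; the residual degree sequence then admits a tree via `degree_tree` from the Lemma 2.3 sketch above.

```python
import heapq

def junction_graph(x):
    """Sketch of the junction-graph construction in the proof of
    Lemma 2.4, assuming the counts x satisfy (C1)-(C4) at this vertex."""
    d = len(x)
    x = list(x)
    heap = [(-x[i], i) for i in range(d)]      # max-heap via negated counts
    heapq.heapify(heap)
    total = sum(x)
    edges = []
    while total > 2 * (d - 1):
        ca, a = heapq.heappop(heap)            # largest count (alpha)
        cb, b = heapq.heappop(heap)            # second largest (beta != alpha)
        edges.append((a, b))
        x[a] -= 1; x[b] -= 1
        total -= 2
        heapq.heappush(heap, (ca + 1, a))
        heapq.heappush(heap, (cb + 1, b))
    return edges + degree_tree(x)              # degree_tree: Lemma 2.3 sketch

print(junction_graph([3, 2, 2, 1]))   # one pass-through pair plus a tree
```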
### 2.2 Obtaining a Closed Walk
Now suppose we have a junction graph \(J(v)\) for every vertex \(v\), obtained by repeatedly applying Lemma 2.4 to a given local threading. Our goal is to find a closed walk in \(G\) that has no U-turns and corresponds to these junction graphs.
Define the _threading graph_ to be the graph whose vertices correspond to tubes and whose edges are given by the union of all junction graphs (joining at vertices corresponding to the same tube). See Figures 2 and 3 for examples.
In this threading graph, we find an _Euler cycle_: a closed walk that visits each edge of the graph exactly once. The presence of an Euler tour through a threading graph is guaranteed because each vertex has even degree [1], specifically twice the count \(x_{e}\) for vertex \(t_{e}\). The tour can be computed in time linear in the number of edges of the threading graph [10], which is \(O(\sum_{e\in E}x_{e})\).
To ensure that U-turns are avoided in the threading, we enforce that the Euler tour does not consecutively traverse two edges of the same junction graph, which can be done in linear time by a reduction to forbidden-pattern Euler tours [1].
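As an illustration (ours, assuming networkx is available), the threading graph can be assembled as a multigraph and a closed walk read off as an Eulerian circuit; this sketch does not implement the forbidden-pattern refinement that rules out U-turns.

```python
import networkx as nx

# Junction graphs for a unit triangle a-b-c, each tube threaded once;
# each threading-graph edge records which junction pairs two tubes.
junctions = {
    'a': [(('a', 'b'), ('a', 'c'))],
    'b': [(('a', 'b'), ('b', 'c'))],
    'c': [(('a', 'c'), ('b', 'c'))],
}
T = nx.MultiGraph()
for v, pairs in junctions.items():
    for t1, t2 in pairs:
        T.add_edge(t1, t2, junction=v)

assert nx.is_eulerian(T)             # every tube-vertex has even degree
walk = list(nx.eulerian_circuit(T))  # closed walk visiting each edge once
print(walk)
```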
Combining our results, we can convert a local threading \(\{x_{e}\}\) of \(G\) to a corresponding threading of \(G\) in time \(O(\sum_{v\in V}d(v)\log d(v)+\sum_{e\in E}x_{e})=O(n\log\Delta+\sum_{e\in E}x_{e})\), where \(\Delta\) is the maximum vertex degree in the graph. Later (in Section 3.1) we will show that the optimal threading satisfies \(\sum_{e\in E}x_{e}=O(m)\), in which case our running time simplifies to \(O(n\log\Delta+m)\).

Figure 3: The target model, a threading graph featuring junction graphs as cycles, and a threading of the input model following an Eulerian cycle of the threading graph.
**Theorem 2.5**.: _We can convert a local threading solution of \(G\) into a threading of \(G\) in \(O(n\log\Delta+\sum_{e\in E}x_{e})\) time, which for an optimal threading is \(O(n\log\Delta+m)\)._
## 3 Worst-Case Bounds
In this section, we prove tight worst-case upper and lower bounds on the total length of an optimal threading (Section 3.1) and on the maximum number of times one edge may be visited by an optimal threading (Section 3.2).
### 3.1 Total Length
Every graph \(G\) has a _double threading_ defined by assigning each junction graph \(J(v)\) to be a cycle of length \(d(v)\), as depicted in Figure 2(b). This threading results in each tube being traversed exactly twice, which totals a length of \(2m\). Thus an optimal threading has length at most \(2m\). We can approach this upper bound up to an additive constant by considering graphs with long sequences of bridges, such as the graph illustrated in Figure 4(a). We shall later tighten this upper bound by considering graph properties (Lemma 3.4).
Now we establish a lower bound on the total length of any threading:
**Lemma 3.1**.: _Any threading must have length at least \(2m-n\)._
Proof.: Each junction graph \(J(v)\) is connected, so contains at least \(d(v)-1\) edges, and every edge \(t_{i}t_{j}\) in \(J(v)\) necessitates visits to two tubes, \(t_{i}\) and \(t_{j}\). By summing these visits across all junctions, we double-count visits to tubes. Thus, any threading \(\{x_{uv}\}\) has length
\[\sum_{uv\in E}x_{uv}=\frac{1}{2}\sum_{v\in V}\sum_{u\in N(v)}x_{uv}\geq\frac{1}{2}\sum_{v\in V}2(d(v)-1)=2m-n.\]

In the ILP view, the inequality step follows from constraint (C4).
This lower bound is sometimes tight, such as in Figure 2(a), and we give threadings that achieve it a special name:
**Definition 3.2**.: _A **perfect threading** is a graph threading of length \(2m-n\)._
By the analysis in the proof of Lemma 3.1, we obtain equivalent definitions:
**Lemma 3.3**.: _The following are equivalent for a graph threading \(\{x_{uv}\}\):_
1. \(\{x_{uv}\}\) _is a perfect threading._
2. _Every junction graph_ \(J(v)\) _is a tree, i.e., has exactly_ \(d(v)-1\) _edges._
3. _Inequality (C4) holds with equality._
Not every graph has a perfect threading (Figure 4(b)). A key observation is that bridges must be threaded at least twice. If we were to remove a bridge, the graph would have two connected components, and any closed walk on the entire graph would have to enter and exit each component at least once. Because the only way to pass between the two connected components is through the bridge, the walk would have to traverse the bridge at least twice.
Hence, vertices whose incident edges are all bridges must have junction graphs containing at least \(d(v)\) edges. We call these vertices _London_ vertices. A tighter lower bound is \(2m-n+|L|\) where \(L\) is the set of London vertices in \(G\).
Next, we consider an improved upper bound on the length of an optimal threading. While \(2m\) edge visits always suffice to thread a graph, the following lemma demonstrates that this number is never necessary, as any graph without vertices of degree \(1\) contains a cycle.
**Lemma 3.4**.: _Let \(C\) be a set of vertex-disjoint simple cycles in \(G\) and let \(|C|\) denote the total number of edges contained in its cycles. In an optimal threading of \(G\), at most \(2m-|C|\) edge visits are needed._
Proof.: We use \(e\in C\) to denote edge \(e\) participating in some cycle in \(C\). Define the set of integers \(\{x_{e}\}\) where \(x_{e}=1\) if \(e\in C\) and \(x_{e}=2\) otherwise. By design, \(\sum_{e\in E}x_{e}=2m-|C|\), and so it suffices to show that \(\{x_{e}\}\) is a valid threading of \(G\), i.e., \(\{x_{e}\}\) satisfies constraints (C1)-(C4). Observe that each vertex \(v\) is either (1) covered once by a single cycle in \(C\), meaning that two of its incident edges are single-threaded while the others are threaded twice, or (2) left uncovered, in which case all of its incident edges are double-threaded. In both scenarios, all constraints are clearly met. Note that (C4) holds as an equality at a vertex covered once by a cycle in \(C\).
In Section 5.2, we provide an efficient algorithm for computing a threading that achieves the above bound by reduction to finding the largest set of vertex-disjoint cycles.
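The construction in this proof is one line of code; here is a sketch (ours, with hypothetical names), using K4 and a Hamiltonian cycle as a check that the bound \(2m-|C|\) can even yield a perfect threading.

```python
def threading_from_cycles(edges, cycle_edges):
    """Lemma 3.4 construction (a sketch): thread each cycle edge once and
    every other edge twice, for a total length of 2m - |C|."""
    return {e: (1 if e in cycle_edges else 2) for e in edges}

# K4 with the Hamiltonian cycle 0-1-2-3-0: length 2m - |C| = 12 - 4 = 8,
# which equals 2m - n, so this particular threading is in fact perfect.
K4 = [frozenset({u, v}) for u in range(4) for v in range(u + 1, 4)]
ham = {frozenset({0, 1}), frozenset({1, 2}),
       frozenset({2, 3}), frozenset({0, 3})}
x = threading_from_cycles(K4, ham)
assert sum(x.values()) == 8
```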
### 3.2 Maximum Visits to One Edge
Each edge is threaded at least once in a graph threading, but what is the maximum number of times an edge can be threaded by an optimal solution? In this section, we establish that no optimal threading exceeds \(\Delta-1\) visits to a single edge. This upper bound is tight, as demonstrated by edge \(uv\) in Figure 4(c): Constraint (C4) requires multiple visits to at least one edge connected to \(v\), and revisiting \(uv\) is the most economical when the loops incident to \(v\) are long. It is worth noting that bounding the visits to an edge by the maximum degree of its endpoints may not suffice for an optimal solution, as in the case of the left-most edge in Figure 4(c), which is traversed \(\frac{\Delta-1}{2}>2\) times despite both its endpoints having degree \(2\).
Figure 4: (a) A graph with a minimum threading length of \(2m-6\). (b) Each bridge incident to vertex \(v\) is at least double-threaded, and hence (C4) holds at \(v\) as strict inequality, so the graph has no perfect threading. (c) Edge \(uv\) is threaded \(\Delta-1\) times and the loops (dotted) incident to vertex \(v\) are of length \(\Delta\).
**Lemma 3.5**.: _An optimal threading visits a single edge at most \(\Delta-1\) times._
Proof.: If \(\Delta=2\), then \(G\) is a cycle, in which case the optimal threading traverses every edge once. Hence, for the remainder of this proof we may assume \(\Delta\geq 3\).
Suppose \(\{x_{e}\}\) is an optimal threading of a graph \(G\). Let \(uv=\arg\max_{e\in E}x_{e}\) denote the edge with the highest count and assume for a contradiction that \(x_{uv}\geq\Delta\). For simplicity, we first assume that \(d(u),d(v)\geq 3\) and handle the case where \(d(u)=2\) or \(d(v)=2\) at the end. We shall show that we can remove two threads from \(uv\) without violating the problem constraints. That is, the set \(\{\hat{x}_{e}\}\) is a valid threading when defined as \(\hat{x}_{e}=x_{uv}-2\) if \(e=uv\) and \(\hat{x}_{e}=x_{e}\), otherwise. This conclusion contradicts our assumption that \(\{x_{e}\}\) is optimal. The key to this proof is the following:
**(C4):** Let \(u_{1},\ldots,u_{d(v)-1}\) denote the neighbors of \(v\) other than \(u\). Because \(\{x_{e}\}\) satisfies (C3), \(\sum_{i=1}^{d(v)-1}x_{u_{i}v}\geq x_{uv}\geq\Delta\), and so
\[\sum_{w\in N(v)}\hat{x}_{wv}=\hat{x}_{uv}+\sum_{i=1}^{d(v)-1}x_{u_{i}v}\geq(\Delta-2)+\Delta\geq 2(d(v)-1).\]
By symmetry, \(u\) also satisfies (C4), and therefore (C4) is met by all vertices of \(G\). We are left to show that \(\{\hat{x}_{e}\}\) satisfies (C1)-(C3).
**(C1):** \(\hat{x}_{uv}=x_{uv}-2\geq\Delta-2\geq 1\). For any other edge, \(\hat{x}_{e}=x_{e}\geq 1\).
**(C2):** Constraint (C2) is met as we do not modify the parity of any count.
**(C3):** We now show (C3) is satisfied for \(v\) and, by symmetry, \(u\), and is therefore met by all vertices of \(G\). We have
\[\sum_{w\in N(v)\setminus\{u\}}\hat{x}_{wv}=\sum_{w\in N(v)\setminus\{u\}}x_{wv}\geq x_{uv}>\hat{x}_{uv},\]
so (C3) is satisfied for \(uv\). We now demonstrate (C3) also holds for the remaining \(u_{i}v\)'s. If \(d(v)\geq 4\), because \(x_{uv}\geq x_{u_{i}v}=\hat{x}_{u_{i}v}\) by our choice of \(uv\), we have
\[\sum_{w\in N(v)\setminus\{u_{i}\}}\hat{x}_{wv}\underset{(C1)}{\geq}\hat{x}_{uv}+\underbrace{d(v)-2}_{\geq 2}\geq(x_{uv}-2)+2=x_{uv}\geq\hat{x}_{u_{i}v},\]
as desired. Otherwise, \(d(v)=3\). Without loss of generality, we want to show that
\[x_{u_{1}v}\leq\hat{x}_{uv}+\hat{x}_{u_{2}v}=x_{uv}+x_{u_{2}v}-2.\]
Because \(x_{uv}\geq x_{u_{1}v}\) (by choice of \(uv\)) and \(x_{u_{2}v}\geq 1\) (from (C1)), this inequality holds in all cases except when \(x_{u_{1}v}=x_{uv}\) and \(x_{u_{2}v}=1\). However, in this particular scenario, the sum of counts surrounding \(v\) amounts to \(2x_{uv}+1\), which contradicts (C2).
If either endpoint of \(uv\) has degree 2, then we instead consider the maximal path \(w_{1},\ldots,w_{\ell}\) including \(uv\) such that all intermediate vertices have degree 2: \(d(w_{2})=\ldots=d(w_{\ell-1})=2\). Thus \(d(w_{1}),d(w_{\ell})\geq 3\) (as we are in the case \(\Delta\geq 3\)) and \(uv=w_{i}w_{i+1}\) for some \(i\). Because \(\{x_{e}\}\) is a valid threading, we must have \(x_{w_{1}w_{2}}=\cdots=x_{w_{\ell-1}w_{\ell}}=x_{uv}\geq\Delta\). Now we modify the threading \(\{x_{e}\}\) by removing two threads from each \(x_{w_{i}w_{i+1}}\) to obtain \(\{\hat{x}_{e}\}\). Constraints (C1)-(C4) remain satisfied at the degree-2 vertices \(w_{2},\ldots,w_{\ell-1}\). Finally, we can apply the proof above to show that the constraints remain satisfied at the end vertices \(w_{1}\) and \(w_{\ell}\) of degree at least 3.
## 4 Polynomial-Time Algorithm via Perfect Matching
In this section, we present our main result: a polynomial-time algorithm for computing an optimal threading of an input graph \(G\). Our approach involves reducing Optimal Threading to the problem of min-weight perfect matching, defined as follows.
A _matching_ in a graph is a set of edges without common vertices. A _perfect matching_ is a matching that covers all vertices of the graph, i.e., a matching of cardinality \(\frac{n}{2}\). If the graph has edge weights, the _weight_ of a matching is the sum of the weights of its edges, and a _min-weight perfect matching_ is a perfect matching of minimum possible weight.
We begin by constructing a graph that possesses a perfect matching if and only if \(G\) has a _perfect_ threading (Definition 3.2). This construction gives a reduction from determining the existence of a perfect threading to the perfect matching problem. Next, we extend this construction to ensure that a perfect matching always exists. In this extended construction, a perfect matching of weight \(W\) corresponds to a threading of length \(W+m\), giving a reduction from Optimal Threading to finding a min-weight perfect matching.
### 4.1 Determining Existence of a Perfect Threading
By Lemma 3.3, a threading \(\{x_{uv}\}\) of a graph \(G\) is a perfect threading if and only if it satisfies inequality (C4) with equality:
* **(C*4)**: \(\sum_{u\in N(v)}x_{uv}=2(d(v)-1)\) for all \(v\in V\).
In fact, most of the other constraints become redundant in this case:
**Lemma 4.1**.: \(\{x_{uv}\}\) _is a perfect threading if and only if it satisfies_ (C1) and (C*4)_._
Proof.: If \(\{x_{uv}\}\) satisfies (C*4), then it satisfies constraint (C2), because \(2(d(v)-1)\equiv 0\pmod{2}\). (C*4) can be rewritten as \(x_{uv}+\sum_{w\in N(v)\setminus\{u\}}x_{wv}=2(d(v)-1)\), and by (C1), \(\sum_{w\in N(v)\setminus\{u\}}x_{wv}\geq d(v)-1\), so \(x_{uv}\leq d(v)-1\leq\sum_{w\in N(v)\setminus\{u\}}x_{wv}\) and (C3) also holds.
Consider a vertex \(v\) and its neighbors \(u_{1},\ldots,u_{d(v)}\). We can think of constraint (C*4) as allocating \(2(d(v)-1)\) units among \(x_{u_{1}v},\ldots,x_{u_{d(v)}v}\). First, we must allocate one unit to each \(x_{u_{i}v}\) in order to satisfy (C1). This leaves \(d(v)-2\) units to distribute among the edges.
We show how to simulate this distribution problem by constructing a graph \(H\) that has a perfect matching if and only if, for every vertex \(v\), we are able to distribute \(d(v)-2\) units among its neighboring \(x_{u_{i}v}\). Thus \(H\) has a perfect matching if and only if \(G\) has a perfect threading.
Given a graph \(G\), define the graph \(H\) as follows; refer to Figure 5. For each edge \(uv\in E(G)\), create a perfect matching of \(d_{uv}:=\min\{d(u),d(v)\}-2\) disjoint edges \((\overline{uv}_{i},u\overline{v}_{i})\), among \(2\,d_{uv}\) created vertices \(\overline{uv}_{1},u\overline{v}_{1},\ldots,\overline{uv}_{d_{uv}},u\overline{ v}_{d_{uv}}\).1 For each vertex \(v\), create \(d(v)-2\) vertices labeled \(v_{1},\ldots,v_{d(v)-2}\). For every edge \(uv\) incident to \(v\), add an edge between vertices \(v_{i}\) and \(u\overline{v}_{j}\) for all \(1\leq i\leq d(v)-2\) and \(1\leq j\leq d_{uv}\) (forming a biclique). Note that any vertex of degree \(2\) disappears in this construction, because of the \(-2\) in each creation count.
Footnote 1: In the same way that \(uv\) and \(vu\) denote the same edge, we treat labels \(u\overline{v}\) and \(\overline{v}u\) as the same.
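A sketch of the construction (ours, with ad-hoc tuple names rather than the paper's \(\overline{uv}_{i}\) notation; networkx's `max_weight_matching` with `maxcardinality=True` serves as the perfect-matching test):

```python
import networkx as nx

def build_H(G):
    """Sketch of the gadget graph H of Section 4.1: H has a perfect
    matching iff G has a perfect threading (Theorem 4.2)."""
    H = nx.Graph()
    for u, v in G.edges():
        e = frozenset({u, v})
        for i in range(min(G.degree(u), G.degree(v)) - 2):
            # copy i of edge uv: a matched pair of endpoint vertices
            H.add_edge(('bar', u, e, i), ('bar', v, e, i))
    for v in G.nodes():
        for j in range(G.degree(v) - 2):
            for u in G.neighbors(v):
                e = frozenset({u, v})
                for i in range(min(G.degree(u), G.degree(v)) - 2):
                    # biclique: v_j adjacent to every copy endpoint at v's side
                    H.add_edge(('vtx', v, j), ('bar', v, e, i))
    return H

# K4 is cubic, so a perfect threading exists iff K4 has a perfect
# matching (cf. Section 5.1); the gadget graph confirms this:
H = build_H(nx.complete_graph(4))
M = nx.max_weight_matching(H, maxcardinality=True)
print(2 * len(M) == H.number_of_nodes())   # True: K4 has a perfect threading
```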
**Theorem 4.2**.: \(G\) _has a perfect threading if and only if \(H\) has a perfect matching._
To prove Theorem 4.2, we will show how to translate between a perfect threading of \(G\) and a perfect matching of \(H\). Given a matching \(M\subseteq E(H)\) of \(H\), define a possible threading solution \(\varphi(M)=\{x_{uv}\}\) by taking \(x_{uv}\) to be \(1\) plus the number of edges \((\overline{uv}_{i},u\overline{v}_{i})\) that are _not_ included in \(M\): \(x_{uv}:=1+\big{|}\{(\overline{uv}_{i},u\overline{v}_{i}):1\leq i\leq d_{uv}\} \setminus M\big{|}\).
**Claim 4.3**.: _If \(M\) is a perfect matching in \(H\), then \(\varphi(M)\) is a perfect threading of \(G\)._
Proof.: By Lemma 4.1, it suffices to prove that \(\varphi(M)\) satisfies (C1) and (C*4). The \(1+\) in the definition of \(\varphi(M)\) satisfies (C1). For every vertex \(v\in V\), the vertices \(v_{1},\ldots,v_{d(v)-2}\) are all matched to vertices of the form \(u\overline{v}_{i}\); for each such matching pair, the edge \((u\overline{v}_{i},\overline{uv}_{i})\notin M\). Conversely, for any vertex \(u\overline{v}_{i}\) that is not matched to any \(v_{j}\), the edge \((u\overline{v}_{i},\overline{uv}_{i})\) must be part of the matching. Hence, for each vertex \(v\), the number of edges of the form \((u\overline{v}_{i},\overline{uv}_{i})\) that are not included in \(M\) is exactly \(d(v)-2\). The sum \(\sum_{u\in N(v)}x_{uv}\) includes this count and \(d(v)\) additional \(1\)s, so equals \((d(v)-2)+d(v)=2(d(v)-1)\), satisfying (C*4).
**Claim 4.4**.: _For any perfect threading \(\{x_{uv}\}\) of \(G\), there exists a perfect matching \(M\) of \(H\) such that \(\varphi(M)=\{x_{uv}\}\)._
Proof.: Given a perfect threading \(\{x_{uv}\}\) of \(G\), we construct a perfect matching of \(H\) as follows. First, for every \(uv\in E(G)\), we match the edges \((\overline{uv}_{1},u\overline{v}_{1}),\ldots,(\overline{uv}_{d_{uv}-x_{uv}+1},u\overline{v}_{d_{uv}-x_{uv}+1})\). We show that index \(d_{uv}-x_{uv}+1\) is always nonnegative; when it is zero, we match no such edges. By constraint (C*4), \(x_{uv}=2(d(v)-1)-\sum_{w\in N(v)\setminus\{u\}}x_{wv}\). By constraint (C1), each term in the sum is at least \(1\), so \(x_{uv}\leq d(v)-1\). Thus \(x_{uv}\leq d_{uv}+1\), i.e., \(d_{uv}-x_{uv}+1\geq 0\).
Figure 5: Construction of \(H\) and \(\hat{H}\) from \(G\), each with a matching shown in bold and the corresponding threading labeled with counts.
With our matching so far, the number of unmatched vertices of the form \(u\overline{v}_{i}\) at each vertex \(v\) is \(\sum_{u\in N(v)}(x_{uv}-1)\). By (C*4), this count is exactly \(2(d(v)-1)-d(v)=d(v)-2\). Thus we can match each of these unmatched vertices to a unique vertex \(v_{j}\) to complete our perfect matching.
Claims 4.3 and 4.4 complete the proof of Theorem 4.2.
#### 4.1.1 Running-Time Analysis
First, let us calculate the sizes of \(V(H)\) and \(E(H)\). Recall that \(H\) has \(d(v)-2\) vertices corresponding to every vertex \(v\in V(G)\), and up to \(2(\min\{d(u),d(v)\}-2)\leq 2\Delta\) vertices corresponding to every edge \(uv\in E(G)\). Therefore, the maximum number of vertices in \(H\) is
\[\sum_{v\in V}(d(v)-2)+2\sum_{uv\in E}\Delta\leq 2m-2n+2m\Delta=O(m\Delta).\]
Now recall that \(H\) has \(\min\{d(u),d(v)\}-2\leq\Delta\) edges for every \(uv\) and at most \(\Delta^{3}\) edges for every \(v\). Thus, the total number of edges in \(H\) is upper-bounded by
\[2\sum_{uv\in E}\Delta+\sum_{v\in V}\Delta^{3}\leq 2m\cdot\Delta+n\Delta^{3}=O(n \Delta^{3}).\]
We conclude that \(H\) can be constructed in \(O(n\Delta^{3}+m\Delta)\) time.
Micali and Vazirani [14] gave an algorithm that computes the maximum matching of a general graph in \(O(\sqrt{n}m)\) time, thereby enabling us to verify the existence of a perfect matching. It follows that we can determine a perfect matching of \(H\) in time
\[O(\sqrt{|V(H)|}\cdot|E(H)|)=O(\sqrt{m\Delta}\cdot n\Delta^{3})=O(n\sqrt{m} \cdot\Delta^{3.5}).\]
This running time exceeds the construction time of \(H\), and so it is the final running time of our algorithm.
Note that we can improve the bound on the size of \(H\) by considering the _arboricity_ of \(G\). The arboricity of a graph \(\alpha(G)\) is defined as the minimum number of edge-disjoint spanning forests into which \(G\) can be decomposed [10]. This parameter is closely related to the degeneracy of the graph and is often smaller than \(\Delta\). Chiba and Nishizeki [10] show that \(\sum_{uv\in E}\min\{d(u),d(v)\}\leq 2m\alpha(G)\), which would give us a tighter bound on the size of \(V(H)\).
In summary, we can find a perfect threading of \(G\), if one exists, by determining a perfect matching in \(H\) in \(O(n\sqrt{m}\cdot\Delta^{3.5})\) time.
### 4.2 Finding an Optimal Threading
Now we examine the general scenario where a perfect threading may not exist, i.e., (C4) may hold with a strict inequality for some vertex. The graph \(H\) constructed in Section 4.1 permits exactly \(2(d(v)-1)\) visits to vertex \(v\). Our goal is to allow more visits to \(v\) while satisfying constraints (C2) and (C3).
In a general threading, \(x_{uv}\leq\min\{d(u),d(v)\}-1\) (as argued in Claim 4.4) is not necessarily true. However, Lemma 3.5 gives us a weaker upper bound, \(x_{uv}\leq\Delta-1\), for any optimal threading. We therefore modify the construction from Section 4.1 in two ways. First, we generate \(\Delta-2\) copies of every edge, regardless of the degree of its endpoints. Second, for every pair of edges \(uv\) and \(wv\) meeting at vertex \(v\), we introduce an edge between \(u\overline{v}_{i}\) and \(w\overline{v}_{j}\) for all \(1\leq i,j\leq\Delta-2\). Intuitively, these edges represent threads passing through \(v\), going from \(uv\) to \(wv\), after having met the lower bound of \(2(d(v)-1)\) visits.
More formally, we define a weighted graph \(\hat{H}\) from \(G\) as follows; refer to Figure 5. For each edge \(uv\in E(G)\), create a weight-0 perfect matching of \(\Delta-2\) disjoint weight-0 edges \((\overline{u}v_{i},u\overline{v}_{i})\), among \(2(\Delta-2)\) created vertices \(\overline{u}v_{1},u\overline{v}_{1},\ldots,\overline{u}v_{\Delta-2},u\overline{v}_{\Delta-2}\); these edges are black in Figure 5. For every vertex \(v\), create \(d(v)-2\) vertices \(v_{1},\ldots,v_{d(v)-2}\), and add a weight-\(\frac{1}{2}\) edge \((v_{i},u\overline{v}_{j})\) for every \(u\in N(v)\), \(1\leq i\leq d(v)-2\), and \(1\leq j\leq\Delta-2\); these edges are blue in Figure 5. Finally, for each pair of edges \(uv\) and \(wv\) incident to \(v\), create a weight-1 edge \((u\overline{v}_{i},w\overline{v}_{j})\) for every \(1\leq i,j\leq\Delta-2\); these edges are green in Figure 5.
**Theorem 4.5**.: \(G\) _has a threading of length \(W+m\) with \(\max_{uv\in E(G)}x_{uv}\leq\Delta-1\) if and only if \(\hat{H}\) has a perfect matching of weight \(W\)._
To prove Theorem 4.5, we again show how to translate between a threading of \(G\) and a perfect matching of \(\hat{H}\). Given a matching \(M\subseteq E(\hat{H})\) of \(\hat{H}\), define a possible threading solution \(\psi(M)=\{x_{uv}\}\) by taking \(x_{uv}\) to be 1 plus the number of copies of \(uv\) not matched in \(M\): \(x_{uv}:=1+\big{|}\{(\overline{u}v_{i},u\overline{v}_{i}):1\leq i\leq\Delta-2 \}\setminus M\big{|}\).
**Claim 4.6**.: _If \(M\) is a perfect matching in \(\hat{H}\) of weight \(W\), then \(\psi(M)=\{x_{uv}\}\) is a threading of \(G\) of length \(W+m\) with \(\max_{uv\in E(G)}x_{uv}\leq\Delta-1\)._
Proof.: By definition of \(\psi(M)\), every \(x_{uv}\) satisfies \(1\leq x_{uv}\leq\Delta-1\). Thus, \(\{x_{uv}\}\) satisfies (C1) and \(\max_{uv\in E(G)}x_{uv}\leq\Delta-1\).
Let \(a_{v}(uv)\) denote the number of vertices \(u\overline{v}_{i}\) (for \(1\leq i\leq\Delta-2\)) matched with some vertex \(v_{j}\), i.e., the number of blue edges incident to a vertex \(u\overline{v}_{i}\) that appear in \(M\). Let \(b_{v}(uv)\) denote the number of vertices \(u\overline{v}_{i}\) (for \(1\leq i\leq\Delta-2\)) matched with some vertex \(w\overline{v}_{j}\), i.e., the number of green edges incident to a vertex \(u\overline{v}_{i}\) that appear in \(M\). Any other vertex \(u\overline{v}_{i}\) (not incident to either a blue or green edge in \(M\)) must be matched to its corresponding vertex \(\overline{u}v_{i}\), which does not contribute to \(x_{uv}\). Hence, \(x_{uv}=1+a_{v}(uv)+b_{v}(uv)\).
Next we prove that \(\{x_{uv}\}\) satisfies constraint (C4). For every vertex \(v\), we have \(\sum_{u\in N(v)}a_{v}(uv)=d(v)-2\), which implies \(\sum_{u\in N(v)}(x_{uv}-1)\geq d(v)-2\), which is equivalent to (C4).
Next consider (C2). Any edge \((u\overline{v}_{i},w\overline{v}_{j})\) present in \(M\) adds 1 to both \(b_{v}(uv)\) and \(b_{v}(wv)\), thereby ensuring \(\sum_{u\in N(v)}b_{v}(uv)\equiv 0\pmod{2}\). Consequently,
\[\sum_{u\in N(v)}x_{uv}\equiv\sum_{u\in N(v)}(a_{v}(uv)+1)=2(d(v)-1)\equiv 0 \pmod{2}.\]
Finally, consider (C3). Given that \(a_{v}(uv)\leq d(v)-2\), we infer \(\sum_{w\in N(v)\setminus\{u\}}a_{v}(wv)+d(v)-1\geq a_{v}(uv)+1\). Additionally, for each vertex contributing to \(b_{v}(uv)\), its matched vertex contributes to some \(b_{v}(wv)\), so \(\sum_{w\in N(v)\setminus\{u\}}b_{v}(wv)\geq b_{v}(uv)\). Hence, we have
\[\sum_{w\in N(v)\setminus\{u\}}x_{wv}=\sum_{w\in N(v)\setminus\{u\}}(a_{v}(wv)+b_{v}(wv)+1)\geq(a_{v}(uv)+1)+b_{v}(uv)=x_{uv}.\]
We conclude that \(\{x_{uv}\}\) is a threading of \(G\).
Lastly, we compute its length. The weight of \(M\) is determined by the number of blue and green edges it contains, because the edges \((\overline{u}v_{i},u\overline{v}_{i})\) have zero weight. Each of its blue edges of the form
\((v_{i},u\overline{v}_{j})\) has weight \(\frac{1}{2}\) and is accounted for once in \(a_{v}(uv)\), for a total weight of \(a_{v}(uv)/2\). Each of its green edges of the form \((u\overline{v}_{i},w\overline{v}_{j})\) has weight \(1\) and is counted twice -- once in \(b_{v}(uv)\) and once more in \(b_{v}(wv)\) -- for a total weight of \(b_{v}(uv)/2\). Hence, the weight \(W\) of the matching \(M\) is given by
\[W=\sum_{v\in V}\sum_{u\in N(v)}\left(\frac{a_{v}(uv)}{2}+\frac{b_{v}(uv)}{2} \right)=2\cdot\sum_{uv\in E}\frac{x_{uv}-1}{2}=\sum_{uv\in E}x_{uv}-m.\]
Therefore \(\{x_{uv}\}\) is a threading of \(G\) of length \(W+m\).
**Claim 4.7**.: _For every threading \(\{x_{uv}\}\) of \(G\) such that \(\max_{uv\in E(G)}x_{uv}\leq\Delta-1\), \(\hat{H}\) has a perfect matching \(M\) such that \(\psi(M)=\{x_{uv}\}\)._
Proof.: Let \(\{x_{uv}\}\) be a threading of \(G\) satisfying \(x_{uv}\leq\Delta-1\) for every edge \(uv\in E\). Recall Lemma 2.4, where we demonstrate the construction of a junction graph \(J(v)\) for vertex \(v\).
For every vertex \(v\in V\), we know by (C2) and (C4) that \(\sum_{u\in N(v)}x_{uv}=2(d(v)-1)+2k\) for some integer \(k\). Note that \(J(v)\) has \(d(v)\) vertices and \(d(v)-1+k\) edges. Because \(J(v)\) is connected, we can thus select \(k\) edges from \(J(v)\) such that removing them will leave behind a tree. Denote these edges by \((u^{1},w^{1}),\ldots,(u^{k},w^{k})\) where \(u^{1},\ldots,u^{k},w^{1},\ldots,w^{k}\in N(v)\). For each edge \((u^{\ell},w^{\ell})\), match a green edge of the form \((u^{\ell}\overline{v}_{i},w^{\ell}\overline{v}_{j})\). For every edge \(uv\) connected to \(v\), denote by \(b_{v}(uv)\) the number of vertices of the form \(u\overline{v}_{i}\) currently matched, i.e., the number of times \(u\) appears as an endpoint among the \(k\) edges selected from \(J(v)\).
Because the edges remaining in \(J(v)\) after removing \((u^{1},w^{1}),\ldots,(u^{k},w^{k})\) form a tree, every neighbor of \(v\) must have at least one incident edge in \(J(v)\) that is _not_ selected. Because the degree of \(t_{uv}\) in \(J(v)\) is \(x_{uv}\), the number of matched vertices must satisfy \(b_{v}(uv)\leq x_{uv}-1\).2
Footnote 2: Here \(t_{uv}\) is vertex representing the tube \(uv\). See the notation in Section 2.1.
For each \(u\in N(v)\), let \(a_{v}(uv)=x_{uv}-b_{v}(uv)-1\). It is clear from our above observation that \(a_{v}(uv)\geq 0\). Given \(\sum_{u\in N(v)}b_{v}(uv)=2k\), we have \(\sum_{u\in N(v)}a_{v}(uv)=d(v)-2\). It follows that we can match \(a_{v}(uv)\) vertices in \(u\overline{v}_{1},\ldots,u\overline{v}_{\Delta-2}\) to an equal number of vertices in \(v_{1},\ldots,v_{d(v)-2}\) using blue edges. After executing this procedure, all vertices of the form \(v_{1},\ldots,v_{d(v)-2}\) will have been matched. Furthermore, the number of matched vertices of the form \(u\overline{v}_{i}\) is exactly \(a_{v}(uv)+b_{v}(uv)=x_{uv}-1\). We repeat this procedure for all vertices.
Now, for every edge \(uv\), there are two sets of unmatched vertices, each of size \(\Delta-2-(x_{uv}-1)=\Delta-x_{uv}-1\), of the form \(u\overline{v}_{i}\) and \(\overline{u}v_{j}\), respectively. By rearranging the existing matches, we can ensure these vertices are exactly \(u\overline{v}_{1},\ldots,u\overline{v}_{\Delta-x_{uv}-1},\overline{u}v_{1},\ldots,\overline{u}v_{\Delta-x_{uv}-1}\). Then we can proceed to match every pair \((u\overline{v}_{i},\overline{u}v_{i})\), for \(i\leq\Delta-x_{uv}-1\), using a black edge.
The above process results in a perfect matching \(M\) from the threading \(\{x_{uv}\}\). The number of edges of the form \((u\overline{v}_{i},\overline{u}v_{i})\) included in the matching is precisely \(\Delta-x_{uv}-1\). Hence, \(\psi(M)=\{x_{uv}\}\).
The above two claims complete the proof of Theorem 4.5. Lemma 3.5 establishes that an optimal threading visits an edge no more than \(\Delta-1\) times, and so \(\hat{H}\) must have a perfect matching. Furthermore, if \(M\) is the min-weight perfect matching of \(\hat{H}\), then \(\psi(M)\) is the optimal threading of \(G\). We can therefore find the optimal threading of \(G\) by finding the min-weight perfect matching of \(\hat{H}\) and applying the reduction of Claim 4.6.
Note that the solution presented in this section can be readily adapted to address a constrained variant of Optimal Threading, where each edge is allowed to be traversed only a limited number
of times, by imposing limits on the number of vertex and edge copies created during the construction of \(\hat{H}\). This scenario arises, for example, when dealing with tubes of restricted diameter.
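An end-to-end sketch of the reduction for unit edge lengths (ours; node names are ad-hoc; `min_weight_matching` in recent networkx returns a minimum-weight matching among maximum-cardinality ones, which is a min-weight perfect matching here since \(\hat{H}\) always has one):

```python
import networkx as nx

def optimal_threading_length(G):
    """Sketch of the Section 4.2 reduction (unit edge lengths): build the
    weighted gadget graph H-hat, take a min-weight perfect matching of
    weight W, and return the optimal threading length W + m (Theorem 4.5)."""
    D = max(d for _, d in G.degree())
    H = nx.Graph()
    for u, v in G.edges():
        e = frozenset({u, v})
        for i in range(D - 2):                      # black copy edges
            H.add_edge(('bar', u, e, i), ('bar', v, e, i), weight=0)
    for v in G.nodes():
        nbrs = list(G.neighbors(v))
        for j in range(G.degree(v) - 2):            # blue biclique edges
            for u in nbrs:
                e = frozenset({u, v})
                for i in range(D - 2):
                    H.add_edge(('vtx', v, j), ('bar', v, e, i), weight=0.5)
        for a in range(len(nbrs)):                  # green pass-through edges
            for b in range(a + 1, len(nbrs)):
                ea = frozenset({nbrs[a], v})
                eb = frozenset({nbrs[b], v})
                for i in range(D - 2):
                    for k in range(D - 2):
                        H.add_edge(('bar', v, ea, i), ('bar', v, eb, k),
                                   weight=1)
    M = nx.min_weight_matching(H)       # min-weight perfect matching
    W = sum(H[a][b]['weight'] for a, b in M)
    return W + G.number_of_edges()

print(optimal_threading_length(nx.complete_graph(4)))   # 8.0 = 2m - n for K4
```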
#### 4.2.1 Running-Time Analysis
First, let us analyze the size of \(\hat{H}\): the graph contains \(\Delta-2\) vertices for each vertex \(v\in V(G)\) and \(2(\Delta-2)\) vertices for each edge \(uv\in E(G)\). Hence, the total number of vertices in \(\hat{H}\) is \(O(m\Delta)\). In terms of edges, \(\hat{H}\) includes \(\Delta-2\) edges for each edge \(uv\in E(G)\) and no more than \(\Delta^{4}\) edges for each vertex \(v\in V(G)\). Therefore, the total edge count in \(\hat{H}\) is \(O(n\Delta^{4})\). As a result, the construction of \(\hat{H}\) requires \(O(m\Delta+n\Delta^{4})\) time.
Next, we use the algorithm of Galil, Micali, and Gabow [1] to find a minimum-weight perfect matching of \(\hat{H}\). This algorithm has time complexity \(O(nm\log n)\), and so on \(\hat{H}\) it runs in time
\[O(|V(\hat{H})||E(\hat{H})|\log(|V(\hat{H})|))=O(m\Delta\cdot n\Delta^{4}\cdot\log(m\Delta))=O(nm\cdot\Delta^{5}\log n).\]
As this term dominates the time for constructing \(\hat{H}\), we conclude that our algorithm for Optimal Threading runs in time \(O(nm\cdot\Delta^{5}\log n)\).
#### 4.2.2 Extension to Weighted Graphs
In this section, we adapt our Optimal Threading algorithm to weighted graphs that represent structures whose edges have varying lengths. Specifically, we introduce a weight function \(\ell:E\to\mathbb{R}^{+}\), where \(\ell(e)\) represents the length of tube \(e\). The goal of Optimal Threading is now to minimize the _total length_ of a threading \(T\), defined as \(\sum_{e\in T}\ell(e)\). This problem is equivalent to the weighted version of Optimal Local Threading where we seek to minimize \(\sum_{e\in E}\ell(e)\,x_{e}\) subject to constraints (C1)-(C4).
Our Optimal Threading algorithm hinges upon Lemma 3.5. Fortunately, this result holds for weighted graphs. We demonstrated that, if any threading \(\{x_{e}\}\) has \(x_{e}\geq\Delta\) for some \(e\in E\), then we can construct a strictly shorter threading \(\{x^{\prime}_{e}\}\) that remains consistent with constraints (C1)-(C4). Specifically, \(x^{\prime}_{e}\leq x_{e}\) for all \(e\in E\) and \(x^{\prime}_{e}<x_{e}\) for at least one \(e\in E\), and so \(\sum_{e\in E}\ell(e)\,x^{\prime}_{e}<\sum_{e\in E}\ell(e)\,x_{e}\) for any weight function \(\ell:E\to\mathbb{R}^{+}\). Hence, an optimal threading never traverses an edge more than \(\Delta-1\) times as desired.
To adapt our Optimal Threading algorithm for the weighted scenario, we construct a graph similar to \(\hat{H}\) in Section 4.2, but with modified edge weights: a blue edge \((v_{i},u\bar{v}_{j})\) now has weight \(\frac{1}{2}\ell(uv)\) instead of weight \(\frac{1}{2}\), and a green edge \((u\bar{v}_{i},w\bar{v}_{j})\) has weight \(\frac{1}{2}\big(\ell(uv)+\ell(wv)\big)\) rather than weight \(1\). The black edges continue to have zero weight. Denote this new graph by \(\tilde{H}\).
By a similar proof to that of Theorem 4.5, we obtain a reduction from weighted Optimal Threading to minimum-weight perfect matching:
**Theorem 4.8**.: \(G\) _has a threading of length \(W+\sum_{e\in E(G)}\ell(e)\) with \(\max_{e\in E(G)}x_{e}\leq\Delta-1\) if and only if \(\tilde{H}\) has a perfect matching of weight \(W\)._
As before, an edge \(uv\) traversed by a threading corresponds to an edge \((u\bar{v}_{i},\bar{u}v_{i})\) that is _not_ part of the perfect matching of \(\tilde{H}\). Both endpoints of this edge must then be matched with either a green or blue edge, and each such matched endpoint contributes \(\frac{\ell(uv)}{2}\) to the matching's total weight. Thus, a perfect matching in \(\tilde{H}\) of weight \(W\) corresponds to a threading of \(G\) of length \(W+\sum_{e\in E}\ell(e)\).
## 5 Special Cases
Here we focus on two scenarios: Optimal Threading on cubic graphs and Double Threading, where each edge can be traversed at most twice.
### Cubic Graphs
If graph \(G\) is cubic, then by Lemma 3.5, an optimal threading of \(G\) visits each edge at most twice. Furthermore, in a perfect threading of \(G\), if it exists, exactly one edge incident to each vertex is double-threaded due to constraint (C\({}^{*}\)4). Hence, it follows that \(G\) has a perfect threading if and only if \(G\) has a perfect matching. A perfect matching of \(G\) gives the set of edges to be double-threaded in a perfect threading. Every bridgeless cubic graph has a perfect matching [10]--it can be computed in \(O(n\log^{4}n)\) time [1]. In fact, if all bridges of a connected cubic graph \(G\) lie on a single path of \(G\), then \(G\) has a perfect matching [1].
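As a concrete illustration, the double-threaded edge set of a perfect threading can be extracted from any perfect-matching routine. The following is a minimal sketch, assuming a simple cubic input graph and networkx; for simplicity it uses the general-purpose blossom matcher rather than the specialized \(O(n\log^{4}n)\) algorithm cited above, and the function name is ours.

```python
import networkx as nx

def perfect_threading_doubled_edges(G):
    """Edges to double-thread in a perfect threading of a cubic graph G,
    or None if G has no perfect matching (hence no perfect threading)."""
    M = nx.max_weight_matching(G, maxcardinality=True)
    if 2 * len(M) != G.number_of_nodes():
        return None
    return M

# The 3-cube is bridgeless and cubic, so a perfect matching exists.
Q3 = nx.hypercube_graph(3)
print(perfect_threading_doubled_edges(Q3))
```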
### The Double Threading Problem
In Double Threading, the goal is to minimize the number of double-threaded edges or, equivalently, to maximize the number of edges visited only once. A solution to Double Threading on a cubic graph also solves Optimal Threading on the same graph. This is due to the observation that either zero or two single-threaded edges are incident to each vertex in a solution to Double Threading, exactly as in an optimal threading of a cubic graph. By the same observation, a solution to Double Threading matches the upper bound given in Lemma 3.4 for general graphs. We further note that Double Threading may be reduced to the task of finding vertex-disjoint cycles of maximum collective length, which we solve below in Algorithm 2.
1. Construct a weighted graph \(G^{\prime}\) from \(G\) (Figure 6):
   (a) For each vertex \(v\in V\), create a complete bipartite graph \(G_{v}=K_{d(v),d(v)}\) with zero-weight edges. Let \(D_{v}^{-}\) and \(D_{v}^{+}\) denote the two disjoint vertex sets of this graph.
   (b) For each edge \(uv\in E\), add a unit-weight edge between a vertex of \(D_{u}^{+}\) and a vertex of \(D_{v}^{+}\) such that each vertex of \(D_{u}^{+}\) and \(D_{v}^{+}\) has exactly one such edge incident to it.
   (c) For each subgraph \(G_{v}\), add a zero-weight edge between any two vertices of \(D_{v}^{-}\).
2. Compute a maximum weight perfect matching \(M\) in \(G^{\prime}\).
3. Return edge set \(S\subseteq E\) of \(G\) corresponding to the weighted edges of \(M\).
**Algorithm 2** Maximum Length Vertex-Disjoint Cycles
We sketch the intuition behind why matchings \(M\) correspond one-to-one to vertex-disjoint cycles in \(G\). Observe two cases for each \(u\): (i) If \(M\) contains the edge of 1(c), then \(d(u)-2\) vertices in \(D_{u}^{-}\) match with vertices in \(D_{u}^{+}\), leaving two vertices in \(D_{u}^{+}\) to match with their neighbors in adjacent subgraphs; (ii) otherwise, all vertices in \(D_{u}^{+}\) are saturated via connections to \(D_{u}^{-}\). That is, each vertex \(u\) is in exactly one cycle (i) or none at all (ii), as the sketch below makes concrete.
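The following is a minimal, hypothetical sketch of Algorithm 2 in Python, assuming a simple input graph and networkx's blossom-based matching; the node labels, the slot bookkeeping, and the function name are ours and merely stand in for the wiring shown in Figure 6.

```python
import networkx as nx

def max_length_disjoint_cycles(G):
    """Edges of G lying on a maximum total-length collection of
    vertex-disjoint cycles, via max-weight perfect matching in G'."""
    Gp = nx.Graph()
    for v in G.nodes:
        d = G.degree[v]
        minus = [('-', v, i) for i in range(d)]
        plus = [('+', v, i) for i in range(d)]
        # Step 1(a): zero-weight complete bipartite gadget K_{d(v),d(v)}.
        for a in minus:
            for b in plus:
                Gp.add_edge(a, b, weight=0)
        # Step 1(c): a single zero-weight edge inside D_v^-.
        if d >= 2:
            Gp.add_edge(minus[0], minus[1], weight=0)
    # Step 1(b): one unit-weight edge per edge uv, on fresh D^+ slots.
    slot = {v: 0 for v in G.nodes}
    for u, v in G.edges:
        Gp.add_edge(('+', u, slot[u]), ('+', v, slot[v]), weight=1)
        slot[u] += 1
        slot[v] += 1
    # Step 2: maximum-cardinality matching of maximum weight (perfect in G').
    M = nx.max_weight_matching(Gp, maxcardinality=True)
    # Step 3: matched unit-weight edges correspond to cycle edges of G.
    return [(a[1], b[1]) for a, b in M if a[0] == '+' and b[0] == '+']

# Two triangles joined by a bridge: both triangles are returned.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)])
print(sorted(max_length_disjoint_cycles(G)))
```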
Running-Time Analysis: We begin our analysis of the running time of Algorithm 2 by first bounding the size of \(G^{\prime}\). Each subgraph \(G_{v}\) has \(2d(v)\) vertices and \(d(v)^{2}+1\) edges, and these subgraphs are connected via \(m\) edges. Because \(\sum_{v\in V}d(v)=2m\) and \(\sum_{v\in V}d(v)^{2}\leq m(2m/(n-1)+n-2)\)[1], we conclude that \(V(G^{\prime})=O(m)\) and \(E(G^{\prime})=O(nm)\).
The problems of finding a max-weight perfect matching and a min-weight perfect matching are symmetric: we can multiply edge weights by \(-1\) to switch between the two problems. It follows that we can apply the min-weight perfect matching algorithm proposed by Galil, Micali, and Gabow [1] in Step 2 of our algorithm. This procedure runs in \(O(|V(G^{\prime})||E(G^{\prime})|\log|V(G^{\prime})|)=O(nm^{2}\log m)\) time, which dominates the \(O(nm)\) construction time of \(G^{\prime}\) in the first step. Hence, the overall running time of Algorithm 2 is \(O(nm^{2}\log m)\).
## 6 Future Work
Potential avenues for future work include developing tighter upper and lower bounds based on properties of the input graph and devising a more efficient solution to the general problem.
Practical challenges associated with the design of reconfigurable structures (Figure 1) inspire further intriguing problems. For instance, friction plays a central role in the deployability of such structures -- it determines the force required to draw the string through the system. According to the Capstan equation, friction increases exponentially with the sum of the absolute values of turning angles in the threading route. Therefore, a logical next step is to investigate a variant of Optimal Threading where the focus is on minimizing this frictional cost instead of the threading length.
## Acknowledgements
We thank Anders Aamand, Kiril Bangachev, Justin Chen, and Surya Mathialagan for insightful discussions. We also thank anonymous reviewers for their helpful comments. This research was supported in part by the NSF Graduate Research Fellowship and the MIT Stata Family Presidential Fellowship.
Figure 6: Illustration of constructing \(G^{\prime}\) from \(G\). |
2308.00018 | Entanglement and chaos near critical point in strongly coupled gauge
theory | We perform a holographic study of the high and low temperature behaviours of
logarithmic negativity (LN) and entanglement wedge cross section (EWCS) in a
large $N$ strongly coupled thermal field theory with critical point having a
well defined gravity dual known as 1RC black hole. The critical point is
defined via $\xi \to 2$ limit where, $\xi$ is dimensionless parameter
proportional to the charge of the 1RC black hole. We show that the logarithmic
negativity in low and high thermal limits enhances with increasing $\xi$. We
analytically compute the EWCS in low and high thermal limits and find an
agreement with the previously reported numerical results. We holographically
explore the correlation between two identical copies of thermal field theory
with critical point forming a thermofield double state (TFD) by computing the
thermo mutual information (TMI). TMI shows an increasing behaviour with respect
to the width of the boundary region. Further, we analyze the impact of an early
perturbation on the field theory by analyzing a shock wave perturbation that
grows exponentially in the dual eternal 1RC black hole and then estimate the
degradation of TMI. However, the rate of such disruption of TMI slows down as the
value of critical parameter $\xi$ takes higher values. | Sanjay Pant, Debanjan Karan | 2023-07-31T17:55:54Z | http://arxiv.org/abs/2308.00018v3 | # More on Entanglement and Chaos near Critical Point in Strongly Coupled Gauge Theory
###### Abstract
We perform a holographic study of the high and low temperature behaviours of logarithmic negativity (LN) and entanglement wedge cross section (EWCS) in a large \(N\) strongly coupled thermal field theory with a critical point having a well defined gravity dual known as the 1RC black hole. The critical point is defined via the \(\xi\to 2\) limit, where \(\xi\) is a dimensionless parameter proportional to the charge of the 1RC black hole. We show that the logarithmic negativity in the low and high thermal limits is enhanced with increasing \(\xi\). We analytically compute the EWCS in the low and high thermal limits and find agreement with the previously reported numerical results. We holographically explore the correlation between two identical copies of a thermal field theory with a critical point forming a thermofield double state (TFD) by computing the thermo mutual information (TMI). TMI shows an increasing behaviour with respect to the width of the boundary region. Further, we analyze the impact of an early perturbation on the field theory by analyzing a shock wave perturbation that grows exponentially in the dual eternal 1RC black hole and then estimate the degradation of TMI. However, the rate of such disruption of TMI slows down as the critical parameter \(\xi\) takes higher values.
ArXiv ePrint: 2308.00018
###### Contents

* 1 Introduction
* 2 Background
* 3 Holographic Entanglement Entropy (HEE)
* 4 Holographic Logarithmic Negativity for two adjacent subsystems
  * 4.1 Holographic Logarithmic Negativity for two adjacent subsystems at low temperature
  * 4.2 Holographic Logarithmic Negativity for two adjacent subsystems at high temperature
* 5 Holographic Logarithmic Negativity for two disjoint subsystems
  * 5.1 Holographic Logarithmic Negativity for two disjoint subsystems at low temperature
  * 5.2 Holographic Logarithmic Negativity for two disjoint subsystems at high temperature
* 6 Holographic Logarithmic Negativity for bipartite systems
  * 6.1 Holographic Logarithmic Negativity for bipartite systems at low temperature
  * 6.2 Holographic Logarithmic Negativity for Bipartite Systems at High Temperature
* 7 Entanglement Wedge Cross Section (EWCS)
  * 7.1 Entanglement Wedge Cross Section at low temperature
  * 7.2 Entanglement Wedge Cross Section at High Temperature
* 8 Holographic Mutual Information
  * 8.1 Holographic Thermo Mutual Information (HTMI)
  * 8.2 Holographic Thermo Mutual Information with shockwave
* 9 Summary and Discussions
* A Area of the Extremal Surface for Bipartite Systems
* B Approximate EWCS at low temperature limit in terms of boundary parameters
## 1 Introduction
In quantum information theory, the entanglement entropy (EE) of a bipartite system is synonymous with the von Neumann entropy constructed from the reduced density matrix of one of the subsystems. The Hilbert space of a bipartite system made out of two subsystems \(A\) and \(B\) is described as \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\). The EE of subsystem \(A\) (with complement \(B\)) is defined as
\[\mathcal{S}_{A}=-\text{Tr}(\rho_{A}\log\rho_{A}), \tag{1.1}\]
where \(\rho_{A}=\text{Tr}_{B}(\rho_{AB})\) is the reduced density matrix of \(A\), obtained by taking the partial trace of the total density matrix \(\rho_{AB}\) over the degrees of freedom of \(B\)[1]. However, EE is not a reliable measure for mixed states as it cannot differentiate between classical and quantum correlations. One of the well-celebrated measures for mixed-state entanglement is the mutual information (MI), which measures the total correlations and characterizes the amount of entanglement between two subsystems. The MI between \(A\) and \(B\) is defined as [2]
\[I(A:B)=\mathcal{S}_{A}+\mathcal{S}_{B}-\mathcal{S}_{A\cup B} \tag{1.2}\]
where \(\mathcal{S}_{A}\), \(\mathcal{S}_{B}\), and \(\mathcal{S}_{A\cup B}\) are the von Neumann entropies of subsystems \(A\), \(B\) and \(A\cup B\). For a pure state \(\mathcal{S}_{A\cup B}=0\) and \(\mathcal{S}_{A}=\mathcal{S}_{B}\), so MI reduces to \(I(A:B)=2\mathcal{S}_{A}=2\mathcal{S}_{B}\). A positive value of MI, \(I(A:B)>0\), indicates the presence of correlations between \(A\) and \(B\). However, \(I(A:B)=0\) implies that the subsystems may or may not be entangled, and in that case additional entanglement measures or criteria are required. Apart from MI, other measures such as the entanglement of purification (EoP) and the logarithmic negativity (LN) are widely used to diagnose the entanglement of a mixed state [3; 4].
In a general context, we can transform a mixed state \(\rho_{AB}\), residing within the Hilbert space \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\), into a pure state \(\ket{\psi}\) within an expanded Hilbert space \(\mathcal{H}_{A}\otimes\mathcal{H}_{B}\otimes\mathcal{H}_{A^{\prime}}\otimes\mathcal{H}_{B^{\prime}}\). It is important to note that there exist infinitely many such purifications \(\ket{\psi}\), all satisfying the condition \(\rho_{AB}=\text{Tr}_{A^{\prime}B^{\prime}}\ket{\psi}\bra{\psi}\). For a given bipartite mixed state \(\rho_{AB}\), the EoP, denoted \(E_{p}(A:B)\), is defined as the minimum of the EE among all feasible purifications
\[E_{p}(A:B)=\min_{\rho_{AB}=\text{Tr}_{A^{\prime}B^{\prime}}\ket{\Psi}\bra{\Psi}}\{\mathcal{S}(\rho_{AA^{\prime}})\} \tag{1.3}\]
Further, Vidal and Werner proposed a quantity termed the logarithmic negativity (LN) as a measure of the upper bound on the distillable entanglement in a mixed state [5]. Unlike the MI, LN captures only the quantum correlations and is defined as
\[\mathcal{E}=\log||\rho_{AB}^{T}|| \tag{1.4}\]
where \(||\rho_{AB}^{T}||\) is the trace norm and \(\rho_{AB}^{T}\) is the partial transpose of \(\rho_{AB}\) with respect to \(B\). The trace norm is directly related to the entanglement negativity via \(N=\frac{||\rho_{AB}^{T}||-1}{2}\)[5].
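As a small standalone illustration of these definitions (independent of the holographic analysis below), the following sketch evaluates the EE, MI, and LN of a two-qubit Bell state numerically; the helper names are ours, and natural logarithms are used throughout, as in equations (1.1)-(1.4).

```python
import numpy as np

def von_neumann(rho):
    """S(rho) = -Tr[rho log rho], cf. eq. (1.1)."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-np.sum(ev * np.log(ev)))

def partial_transpose_B(rho):
    """Partial transpose of a two-qubit density matrix over subsystem B."""
    r = rho.reshape(2, 2, 2, 2)               # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # Bell state
rho = np.outer(psi, psi)
r4 = rho.reshape(2, 2, 2, 2)
rho_A = np.einsum('abcb->ac', r4)             # trace out B
rho_B = np.einsum('abad->bd', r4)             # trace out A

I_AB = von_neumann(rho_A) + von_neumann(rho_B) - von_neumann(rho)  # eq. (1.2)
trace_norm = np.sum(np.abs(np.linalg.eigvalsh(partial_transpose_B(rho))))
E = np.log(trace_norm)                        # eq. (1.4)
print(von_neumann(rho_A), I_AB, E)            # log 2, 2 log 2, log 2
```

For this pure state the output illustrates \(I(A:B)=2\mathcal{S}_{A}\), while the logarithmic negativity equals \(\log 2\).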
LN has been computed in CFT\({}_{2}\) by employing a version of the usual replica technique involving a specific four-point function of the twist fields [6; 7; 8; 9; 10; 11; 12; 13]. An analytic form of MI in \(CFT_{2}\) is achieved by using the operator product expansion of twist fields [14]. However,
a field theoretic analysis of EoP hardly exists due to the difficulties in the implementation of the minimization procedure, except for some numerical results obtained in free lattice field theory [15]. The direct study of entanglement measures for strongly coupled field theories is still an open question. Nevertheless, one can study strongly coupled systems by exploiting the strong/weak nature of holographic dualities.
A concrete example of such a holographic duality is the AdS/CFT correspondence, which suggests that the information of a conformal field theory (CFT) living on the boundary of Anti-de Sitter (AdS) space is encoded in the bulk gravitational theory of AdS [16; 17]. Although a general proof of the conjecture is yet to be achieved, it passes numerous consistency tests in diverse fields. The Ryu-Takayanagi formula is a crucial example in favour of the AdS/CFT correspondence, and it provides a holographic prescription for computing the entanglement entropy in the boundary CFT, known as holographic entanglement entropy (HEE) [18; 19]. It states that the entanglement entropy of a certain region \(A\) in the CFT is given by the area of a minimal surface (called the Ryu-Takayanagi surface) in the bulk AdS spacetime that is homologous to the boundary region.
\[\mathcal{S}_{A}=\frac{\mathcal{A}(\gamma_{A})}{4G_{N}^{d+1}} \tag{1.5}\]
where \(\gamma_{A}\) is a co-dimension two surface with area \(\mathcal{A}(\gamma_{A})\) such that \(\partial\gamma_{A}=\partial A\), and \(G_{N}^{d+1}\) is the \((d+1)\)-dimensional Newton's constant. Later, Hubeny, Rangamani, and Takayanagi (HRT) extended this idea to general states, including those with arbitrary time dependence [20]. The study of entanglement entropy in the context of AdS/CFT has provided valuable insights into quantum phase transitions and critical phenomena in both the boundary CFT and the bulk gravity theory [21; 22; 23]. Quite naturally, constructing a holographic prescription for computing the entanglement structure of a mixed state is crucial. In the context of \(AdS_{3}/CFT_{2}\), the authors of [26] propose a holographic conjecture to compute the LN of such boundary CFTs that exactly reproduces the CFT\({}_{2}\) results of [8] in the large central charge limit. See [27; 28] for further generalizations of this proposal.
Viable holographic prescriptions for EoP are presented in [29]. One can use the notion of purification and construct a TFD state whose holographic dual is a two-sided eternal black hole [30]. In [31], the author shows that the entanglement in a TFD state can be destroyed via the insertion of an early-time operator. The degradation of entanglement is considered a signature of quantum chaos. Entanglement and quantum chaos are two distinct concepts, but they are interconnected in various ways, especially when considering a system described by a mixed density matrix. In a chaotic system, the entanglement between two causally disconnected parts of a TFD state can be disrupted by an early perturbation which grows exponentially in time. For a strongly coupled field theory, shockwave analysis and pole skipping are the most widely used holographic methods [31; 32; 33; 34; 35; 36; 37].
Four-dimensional, finite temperature \(\mathcal{N}=4\) super Yang-Mills theory charged under a \(U(1)\) subgroup of its \(SU(4)\) R-symmetry, with a chemical potential, holographically corresponds to the five-dimensional 1RC black hole background [38; 39]. The low and high temperature limits of HEE and HMI near the critical point are explored in the 1RC black hole background [40]. The author shows that at and near the critical point the leading behavior
of mutual information yields a set of critical exponents. Moreover, in [41], a numerical investigation of the EWCS holographically reveals that the EoP in the dual field theory at finite temperature (\(T\)) and chemical potential (\(\mu\)) behaves as a monotonic function of \(\frac{\mu}{T}\), whereas the EoP behaves drastically differently in the presence of a critical point. The investigation of the holographic butterfly effect is carried out within the background of a 1RC black hole; in this context, the dynamical exponent is determined through an expansion of the butterfly velocity in the vicinity of the critical point, as described in [43]. See [44; 45; 46; 47; 48] for more holographic applications in this background.
This work aims to improve the understanding of classical and quantum correlations near the critical point of the four-dimensional, finite temperature \(\mathcal{N}=4\) super Yang-Mills theory by performing holographic computations of a few relevant quantities, such as LN, EoP and TMI, in the dual five-dimensional 1RC black hole background. In our analysis we find that, for adjacent configurations at low temperature, the LN decreases as the parameter \(\xi\) increases, whereas at high temperature it increases with \(\xi\). For disjoint subsystems, LN increases with \(\xi\) at low temperatures and vanishes at high temperatures. In the bipartite case, LN increases with \(\xi\) at low temperatures and decreases at high temperatures. In all cases, LN remains finite in the critical limit \(\xi\to 2\). EoP also increases with respect to the parameter \(\xi\) and remains finite in the critical limit. We also show that the TMI between two entangled subsystems forming a TFD state increases with their individual sizes. At a fixed size of the subsystem, TMI rises with increasing \(\xi\). In order to expand our investigation into the chaotic dynamics of strongly coupled field theories featuring a critical point, we introduce an early-time, time-dependent perturbation. This perturbation, when realized within the holographic framework, takes the form of an exponentially growing energy pulse, ultimately manifesting as a shock wave. We explicitly disrupt the holographic TMI with a shockwave, and our results indicate that as the parameter \(\xi\) takes higher values, the chaotic behavior of the system is reduced.
This paper is organized as follows. In section 2 we discuss the holographic dual of the strongly coupled field theory with a critical point. In section 3 we review the HEE; sections 4, 5 and 6 are devoted to the HLN for two subsystems in different configurations. In section 7 we give the analytic form of the EWCS in the low and high thermal limits, and in section 8 we give the detailed computation of the mutual information between two subsystems in a TFD state, known as TMI. Finally, in section 9 we summarize the results.
## 2 Background
As discussed in the introduction, we proceed with a five-dimensional geometry which is the holographic dual of a four-dimensional strongly coupled field theory with a critical point. In the existing literature, this is usually known as the 1RC black hole background [38; 44; 45; 46; 39]. Consider the following five-dimensional Einstein-Maxwell-Dilaton action
\[\mathcal{S}_{\rm EMD}=\frac{1}{16\pi G_{N}^{(5)}}\int d^{5}x\sqrt{-g}\left[\mathcal{R}-\frac{f(\phi)}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi-V(\phi)\right], \tag{2.1}\]
where \(A_{\mu}\) is the gauge field with field strength \(F_{\mu\nu}\), and \(\phi\) is a scalar field. We denote the dilaton potential as \(V(\phi)\), and the coupling between the gauge field and the dilaton is characterized by the coupling function \(f(\phi)\). The functions \(f(\phi)\) and \(V(\phi)\) have the following form
\[f(\phi)=e^{-\sqrt{\frac{4}{3}}\phi},\quad V(\phi)=-\frac{1}{R^{2}}\left(8e^{\frac{\phi}{\sqrt{6}}}+4e^{-\sqrt{\frac{2}{3}}\phi}\right) \tag{2.2}\]
where \(R\) is the \(AdS\) radius. The solution to the equations of motion of the EMD action in equation (2.1) corresponds to the 1RCBH background described by
\[ds^{2}=e^{2A(z)}\left(-h(z)dt^{2}+d\vec{x}_{(3)}^{2}\right)+\frac{e^{2B(z)}}{h(z)}\frac{R^{4}}{z^{4}}dz^{2} \tag{2.3}\]
where,
\[A(z) =\ln\left(\frac{R}{z}\right)+\frac{1}{6}\ln\left(1+\frac{Q^{2}z^{2}}{R^{4}}\right)\] \[B(z) =-\ln\left(\frac{R}{z}\right)-\frac{1}{3}\ln\left(1+\frac{Q^{2}z^{2}}{R^{4}}\right)\] \[h(z) =1-\frac{M^{2}z^{4}}{R^{6}\left(1+\frac{Q^{2}z^{2}}{R^{4}}\right)} \tag{2.4}\] \[\phi(z) =-\sqrt{\frac{2}{3}}\ln\left(1+\frac{Q^{2}z^{2}}{R^{4}}\right)\] \[\Phi(z) =\frac{MQ{z_{h}}^{2}}{R^{4}\left(1+\frac{Q^{2}{z_{h}}^{2}}{R^{4}}\right)}-\frac{MQz^{2}}{R^{4}\left(1+\frac{Q^{2}z^{2}}{R^{4}}\right)}\]
\(\Phi(z)\) is the electric potential that corresponds to the temporal component of the gauge field. In this coordinate system, the boundary is situated at \(z=0\). Note that the electric potential \(\Phi(z)\) is chosen in such a way that it is regular on the boundary [49; 50] and vanishes on the horizon. The parameters \(M\) and \(Q\) are related to the mass and charge of the black hole, respectively. One can obtain the following expression for the blackening factor \(h(z)\) using the horizon equation, i.e. \(h(z_{h})=0\):
\[h(z)=1-\left(\frac{z}{z_{h}}\right)^{4}\left(\frac{1+\left(\frac{Qz_{h}}{R^{2}}\right)^{2}}{1+\left(\frac{Qz}{R^{2}}\right)^{2}}\right)=1-\left(\frac{z}{z_{h}}\right)^{4}\left(\frac{1+\xi}{1+\xi(\frac{z}{z_{h}})^{2}}\right) \tag{2.5}\]
where \(\xi\equiv Q^{2}z_{h}^{2}/R^{4}\). The Hawking temperature is given by
\[T=\frac{1}{2\pi z_{h}}\left(\frac{2+\left(\frac{Qz_{h}}{R^{2}}\right)^{2}}{\sqrt{1+\left(\frac{Qz_{h}}{R^{2}}\right)^{2}}}\right) \tag{2.6}\]
and the chemical potential is,
\[\mu=\frac{1}{R}\lim_{z\to 0}\Phi(z)=\frac{Q}{R^{2}\sqrt{1+\left(\frac{Qz_{h}}{R^{2}}\right)^{2}}} \tag{2.7}\]
For convenience, we rewrite the temperature in terms of the dimensionless quantity \(\xi\) and \(\hat{T}\) as
\[T=\hat{T}\left(\frac{1+\frac{\xi}{2}}{\sqrt{1+\xi}}\right),\quad\hat{T}\equiv\frac{1}{\pi z_{h}} \tag{2.8}\]
It is shown in [40] that the 1RCBH background is thermodynamically stable for \(\xi\in[0,2]\), with \(\xi\to 2\) being the critical point.
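As a quick numerical check of the background thermodynamics, the following sketch (in units \(z_{h}=R=1\), numpy assumed) evaluates equations (2.6) and (2.7) and verifies that \(\mu/T=2\pi\sqrt{\xi}/(2+\xi)\) increases monotonically with \(\xi\), reaching \(\pi/\sqrt{2}\) at the critical point \(\xi=2\).

```python
import numpy as np

z_h, R = 1.0, 1.0                       # units: z_h = R = 1
xi = np.linspace(1e-4, 2.0, 2001)       # thermodynamically stable range
Q = np.sqrt(xi) * R**2 / z_h            # from xi = Q^2 z_h^2 / R^4

T = (2.0 + xi) / (2.0 * np.pi * z_h * np.sqrt(1.0 + xi))   # eq. (2.6)
mu = Q / (R**2 * np.sqrt(1.0 + xi))                        # eq. (2.7)

ratio = mu / T                          # = 2*pi*sqrt(xi)/(2 + xi)
print(np.all(np.diff(ratio) > 0))       # True: monotonic on [0, 2)
print(ratio[-1], np.pi / np.sqrt(2))    # -> pi/sqrt(2) at xi = 2
```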
## 3 Holographic Entanglement Entropy (HEE)
In this section, we provide a concise overview of the HEE calculation as presented in [40]. We leverage the outcomes of HEE computations conducted under various temperature conditions. Subsequently, we employ the method outlined in [42] to compute the HLN for the 1RCBH background. To elaborate, we focus on a boundary subsystem characterized as a rectangular strip denoted as \(A\), with a width \(l\) along the \(x\) direction and extending to a length \(L\) in all the transverse directions \(x^{j}\). The profile of the corresponding bulk surface is described by expressing the coordinate \(x\) in terms of the bulk coordinate \(z\). This rectangular strip can be precisely defined as follows:
\[x^{1}\equiv x\in\Big{[}-\frac{l}{2},\frac{l}{2}\Big{]},\quad x^{j}\in\Big{[}-\frac{L}{2},\frac{L}{2}\Big{]},\quad j=2,3 \tag{3.1}\]
where \(L\) is taken to be very large. Determining the HEE of subsystem \(A\) requires us to calculate the smallest surface area of the co-dimension two hyper-surface denoted as \(\gamma_{A}\). The area functional of \(\gamma_{A}\) is as follows:
\[\mathcal{A}(\gamma_{A})=\int d^{3}x\ \sqrt{\det(g_{mn})} \tag{3.2}\]
where \(g_{mn}\) is the induced metric on \(\gamma_{A}\). The area can be written in the following form
\[\mathcal{A}=2L^{2}\int dz\ e^{3A(z)}\sqrt{x^{\prime}(z)^{2}+\frac{R^{4}}{z^{4}h(z)}e^{2(B(z)-A(z))}} \tag{3.3}\]
One can find the conserved quantity corresponding to \(x\) using the Lagrangian in the area functional and obtain the following equation by imposing \(\frac{1}{x^{\prime}(z_{t})}=0\) for \(z\to z_{t}\):
\[x^{\prime}(z)=\frac{R^{2}}{z^{2}}\frac{e^{3A(z_{t})}e^{B(z)-A(z)}}{\sqrt{h(z)}\sqrt{e^{6A(z)}-e^{6A(z_{t})}}} \tag{3.4}\]
where \(z_{t}\) is the turning point of the surface \(\gamma_{A}\). Using \(x^{\prime}(z)\), the area functional (3.3) now becomes
\[\mathcal{A}=2L^{2}R^{2}\int_{0}^{z_{t}}dz\ \frac{e^{B(z)+2A(z)}}{z^{2}\sqrt{h(z)}}\sqrt{\frac{e^{6A(z)}}{e^{6A(z)}-e^{6A(z_{t})}}} \tag{3.5}\]
Finally, the holographic entanglement entropy (HEE) is
\[\mathcal{S}=\frac{L^{2}R^{2}}{2G_{N}^{5}}\int_{0}^{z_{t}}dz\ \frac{e^{B(z)+2A(z)}}{z^{2}\sqrt{h(z)}}\sqrt{\frac{e^{6A(z)}}{e^{6A(z)}-e^{6A(z_{t})}}} \tag{3.6}\]
From (3.4), the boundary parameter \(l\) and the bulk parameter \(z_{t}\) are related via
\[\frac{l}{2}=\int_{0}^{z_{t}}dz\ \frac{R^{2}}{z^{2}}\frac{e^{3A(z_{t})}e^{B(z)-A(z)} }{\sqrt{h(z)}\sqrt{e^{6A(z)}-e^{6A(z_{t})}}} \tag{3.7}\]
To express the HEE in terms of the boundary parameter, we have to rewrite \(z_{t}\) in (3.6) in terms of \(l\). Finding a solution for the integral (3.7) and expressing \(z_{t}\) in relation to \(l\) poses a significant challenge. Nevertheless, in scenarios where the temperature is either low or high, accomplishing this task becomes feasible. Equations (2.4) and (3.5) give the following expression
\[\mathcal{A}=2L^{2}R^{3}\int_{0}^{z_{t}}dz\ \frac{z_{t}{}^{3}}{z^{6}}\sqrt{\frac{1+\xi\big{(}\frac{z}{z_{h}}\big{)}^{2}}{1+\xi\big{(}\frac{z_{t}}{z_{h}}\big{)}^{2}}}\Bigg{[}1-\left(\frac{z}{z_{h}}\right)^{4}\left(\frac{1+\xi}{1+\xi\big{(}\frac{z}{z_{h}}\big{)}^{2}}\right)\Bigg{]}^{-\frac{1}{2}}\Bigg{[}\Big{(}\frac{z_{t}}{z}\Big{)}^{6}\left(\frac{1+\xi\big{(}\frac{z}{z_{h}}\big{)}^{2}}{1+\xi\big{(}\frac{z_{t}}{z_{h}}\big{)}^{2}}\right)-1\Bigg{]}^{-\frac{1}{2}} \tag{3.8}\]
In a similar way, equation (3.7) can be expressed as
\[\frac{l}{2}=\int_{0}^{z_{t}}dz\left[1+\xi\bigg{(}\frac{z}{z_{h}}\bigg{)}^{2} \right]^{-\frac{1}{2}}\Bigg{[}1-\left(\frac{z}{z_{h}}\right)^{4}\left(\frac{1 +\xi}{1+\xi\big{(}\frac{z}{z_{h}}\big{)}^{2}}\right)\Bigg{]}^{-\frac{1}{2}} \Bigg{[}\Big{(}\frac{z_{t}}{z}\Big{)}^{6}\left(\frac{1+\xi\big{(}\frac{z}{z_{ h}}\big{)}^{2}}{1+\xi\big{(}\frac{z_{t}}{z_{h}}\big{)}^{2}}\right)-1\Bigg{]}^{- \frac{1}{2}} \tag{3.9}\]
It is now possible to analytically solve the above two integrals by considering several binomial and trinomial expansions. We are going to employ the following series expansion formulae to write the integrands of the above two equations:
\[(x+y)^{-n}=\sum_{k=0}^{\infty}{(-1)^{k}\frac{\Gamma(n+k)}{\Gamma( k+1)\Gamma(n)}x^{-n-k}y^{k}};\ \ \text{given}\ |y|<|x|\] \[(x+y+z)^{-n}=\sum_{k=0}^{\infty}{\sum_{j=0}^{k}\frac{\Gamma(n+k)} {\Gamma(k+1)\Gamma(n)}\frac{(-1)^{k}\Gamma(k+1)}{\Gamma(j+1)\Gamma(k-j+1)}x^{ -n-k}y^{k-j}z^{j}},\ \ \text{given}\ |y+z|<|x| \tag{3.10}\]
Figure 1: Turning point \(z_{t}\) of the RT surface with respect to the width \(l\).
Using equation (3.10) in (3.8), we can write the following form of the area integral
\[\mathcal{A}=\frac{2L^{2}R^{3}}{\pi}\sum_{k=0}^{\infty}\sum_{n=0}^{k}\sum_{m=0}^{\infty}\sum_{j=0}^{\infty}\frac{(-1)^{k+n}\Gamma(k+\frac{1}{2})\Gamma(j+m+\frac{1}{2})}{\Gamma(n+1)\Gamma(k-n+1)\Gamma(j+1)\Gamma(m+1)}\xi^{k-n+m}(1+\xi)^{n}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2m}\] \[\times\Bigg{[}1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\Bigg{]}^{-m-\frac{1}{2}}\int_{0}^{z_{t}}dz\ \left[1+\xi\bigg{(}\frac{z}{z_{h}}\bigg{)}^{2}\right]\Bigg{[}1-\bigg{(}\frac{z}{z_{t}}\bigg{)}^{2}\Bigg{]}\,z^{-3}\bigg{(}\frac{z}{z_{t}}\bigg{)}^{6j}\bigg{(}\frac{z}{z_{h}}\bigg{)}^{2(k+n)} \tag{3.11}\]
As is commonly expected, the area of the extremal surface diverges due to its behavior near the boundary. Upon closer examination, it becomes evident that when the condition \(k+n+3j>1\) is met, the final integral (and consequently the area) remains finite. Consequently, we must isolate and sum the terms corresponding to (\(k=n=j=0\)) and (\(k=1,n=j=0\)) over the variable \(m\) to determine the portion of the area containing the divergent component. By carrying out this procedure, one can derive the following result.
\[\mathcal{A}_{0}\equiv L^{2}R^{3}\Bigg{\{}\frac{1}{\epsilon^{2}}+\frac{3\xi}{2{z_{h}}^{2}}-\frac{1}{{z_{t}}^{2}}\Bigg{[}1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\Bigg{]}^{\frac{3}{2}}\Bigg{\}} \tag{3.12}\]
Here \(z=\epsilon\) with \(\epsilon\to 0\) denotes the cutoff surface within the bulk geometry, which is intricately tied to the ultraviolet (UV) regularization of the field theory. It becomes evident that the divergent term in equation (3.12) shows behavior akin to an area law, a characteristic also shared by the associated holographic entanglement entropy. In the context of a \(d\)-dimensional boundary field theory, this outcome is entirely anticipated: the leading divergence in the UV limit \(\epsilon\to 0\) adheres to an area law. To simplify calculations, we will subsequently focus on the finite component of the area, achieved by subtracting the \(1/\epsilon^{2}\) term. This can be expressed in the following manner:
\[\begin{split}\mathcal{A}_{\text{finite}}&=\frac{L^{2}R^{3}}{{z_{t}}^{2}}\Bigg{\{}\frac{3\xi}{2}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}-\Bigg{[}1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\Bigg{]}^{\frac{3}{2}}+\frac{1+\xi}{3\xi}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\left[\Bigg{(}1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\Bigg{)}^{\frac{3}{2}}-1\right]\Bigg{\}}\\ &\quad+\frac{L^{2}R^{3}}{{z_{t}}^{2}}\Bigg{\{}\sum_{k=2}^{\infty}\sum_{n=0}^{k}\sum_{m=0}^{\infty}\Lambda_{knm}\frac{\Gamma(m+\frac{1}{2})\Gamma(k+n-1)}{\Gamma(k+n+m+1)}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2(k+n+m)}\Bigg{[}(m+1)+(k+n-1)\left(1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\right)\Bigg{]}\Bigg{\}}\\ &\quad+\frac{L^{2}R^{3}}{{z_{t}}^{2}}\Bigg{\{}\sum_{k=0}^{\infty}\sum_{n=0}^{k}\sum_{m=0}^{\infty}\sum_{j=1}^{\infty}\Lambda_{knm}\frac{\Gamma(m+j+\frac{1}{2})\Gamma(k+n+3j-1)}{\Gamma(j+1)\Gamma(k+n+m+3j+1)}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2(k+n+m)}\Bigg{[}(m+1)+(k+n+3j-1)\left(1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\right)\Bigg{]}\Bigg{\}}\end{split} \tag{3.13}\]
where \(\Lambda_{knm}\) is given by the following relation
\[\Lambda_{knm}\equiv\frac{(-1)^{k+n}\Gamma(k+\frac{1}{2})}{\pi\Gamma(n+1)\Gamma(k-n+1)}\xi^{k-n+m}(1+\xi)^{n}\Bigg{[}1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\Bigg{]}^{-m-\frac{1}{2}} \tag{3.14}\]
Hence, by adding the UV-divergence-dependent term of (3.12) to (3.13), we can derive the total area of the extremal surface associated with a rectangular strip of width \(l\) on the boundary. Similarly, by following this procedure, we can determine the subsystem's width as a function of the turning point. Consequently, through the utilization of multinomial expansions and the solution of the integral (3.9), we can establish the following relationship.
\[\frac{l}{2}=z_{t}\sum_{k=0}^{\infty}\sum_{n=0}^{k}\sum_{m=0}^{\infty}\sum_{j=0}^{\infty}G_{knmj}F_{knmj}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2(k+n+m)} \tag{3.15}\]
where the constants \(G_{knmj}\) and \(F_{knmj}\) are defined by the following relations
\[\begin{split} G_{knmj}&\equiv\frac{\Gamma(k+\frac{1}{2})\Gamma(j+m+\frac{1}{2})\Gamma(2+3j+k+n)}{2\pi\Gamma(n+1)\Gamma(k-n+1)\Gamma(j+1)\Gamma(3+3j+k+n+m)}\\ F_{knmj}&\equiv(-1)^{k+n}\xi^{k-n+m}(1+\xi)^{n}\Bigg{[}1+\xi\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2}\Bigg{]}^{-m}\end{split} \tag{3.16}\]
Note that in order to utilize the multinomial expansions for the negative exponents, it has been verified that the subsequent relationships hold true across the entire interval of \(\xi\in[0,2]\) and for values of \(z_{t}\) spanning from the boundary to the horizon.
\[\frac{\xi\Big{(}\frac{z_{t}}{z_{h}}\Big{)}^{2}}{1+\xi\Big{(}\frac{z_{t}}{z_{h}}\Big{)}^{2}}\left(1-\frac{z^{2}}{z_{t}{}^{2}}\right)<1,\qquad\text{and}\qquad\xi\bigg{(}\frac{z}{z_{h}}\bigg{)}^{2}-(1+\xi)\bigg{(}\frac{z}{z_{h}}\bigg{)}^{4}<1 \tag{3.17}\]
We now possess the analytical representation for the area of the extremal surface associated with the subsystem of width \(l\). Furthermore, we have elucidated the connection between this width and the turning point of the RT surface, as described in equations (3.13) and (3.15). In the following subsections, we make this explicit by considering the low and high-temperature limits.
By examining equation (3.13), we observe that the extremal surface area is determined by two dimensionless parameters: \(\xi\) and the ratio \(z_{t}/z_{h}\). In the subsequent two subsections, our focus will be on exploring the holographic entanglement negativity concerning the \(z_{t}/z_{h}\) ratio, a parameter that leads to two distinct thermal limits. It is worth noting that \(z_{h}\) is inversely related to the black hole temperature. However, for the analysis of critical behavior, we will reserve our investigation regarding the parameter \(\xi\), which, as previously mentioned in section 2, governs the critical behavior. Considering the ratio between the location of the extremal surface and the horizon position, denoted as \(z_{t}/z_{h}\), one can anticipate two distinct scenarios for the area calculation in this context, specifically when \(z_{t}/z_{h}\ll 1\) and when \(z_{t}/z_{h}\sim 1\). It is important to note that the former
scenario implies that the extremal surface is situated close to the boundary at \(z=0\), while the latter scenario indicates that it approaches but does not cross the horizon. From the perspective of field theory, we can directly translate these scenarios into two equivalent thermal limits based on the subsystem width \(l\): \(\hat{T}l\ll 1\) and \(\hat{T}l\gg 1\), respectively, where \(\hat{T}\) is defined in equation (2.8). Consequently, one can associate the case of \(z_{t}/z_{h}\ll 1\) with the low-temperature limit, corresponding to the ground state of the CFT, and the case of \(z_{t}/z_{h}\sim 1\) with the high-temperature limit, where the entanglement of thermal excitations becomes significant.
Before concluding this section, we would like to explore a few key aspects of the turning point \(z_{t}\), shown in fig.1. These aspects are closely tied to some well-established properties of the RT (Ryu-Takayanagi) surface. When the parameter \(l\) approaches zero, it becomes challenging to distinguish between the turning points of the RT surfaces associated with different values of \(\xi\). However, beyond a certain threshold value of \(l\), it becomes possible to differentiate between the turning points corresponding to various \(\xi\) values.
It is important to note that, for a fixed \(\xi\), the turning point initially emerges from the origin and gradually increases as \(l\) increases. This behavior indicates that as the width of the boundary region expands, the RT surfaces extend deeper into the bulk of the system. As \(l\) reaches higher values, the value of \(z_{t}\) saturates, signifying that the RT surface, associated with a boundary region of width \(l\), becomes nearly parallel to the horizon when \(l\) becomes significantly large. Consequently, by examining the plot's characteristics, one can draw the aforementioned conclusions, which align with our understanding of the nature of the RT surface.
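The relation between the width \(l\) and the turning point \(z_{t}\) can also be obtained by direct numerical quadrature of (3.9), which is how curves like those of fig.1 may be reproduced. Below is a minimal sketch (units \(z_{h}=1\), scipy assumed); the substitution \(z=z_{t}(1-s^{2})\) is our choice for taming the integrable square-root singularity at the turning point.

```python
import numpy as np
from scipy.integrate import quad

def width(zt, xi, zh=1.0):
    """Boundary width l(z_t) from eq. (3.9)."""
    def integrand(s):
        z = zt * (1.0 - s * s)           # z -> z_t as s -> 0
        a = 1.0 + xi * (z / zh) ** 2
        at = 1.0 + xi * (zt / zh) ** 2
        h = 1.0 - (z / zh) ** 4 * (1.0 + xi) / a      # eq. (2.5)
        bracket = (zt / z) ** 6 * (a / at) - 1.0
        return 2.0 * zt * s / np.sqrt(a * h * bracket)
    return 2.0 * quad(integrand, 0.0, 1.0)[0]

# l grows with z_t and blows up as z_t approaches the horizon, as in fig.1.
for xi in (0.0, 1.0, 2.0):
    print(xi, [round(width(zt, xi), 3) for zt in (0.3, 0.7, 0.95, 0.999)])
```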
## 4 Holographic Logarithmic Negativity for two adjacent subsystems
In this section, we utilize the holographic framework outlined in references [24; 25; 26; 27] to analyze the 1RC black hole background and determine the holographic entanglement negativity. This calculation involves the summation of the areas of certain extremal surfaces, located in the bulk and associated with the relevant subsystems. As per the conjecture, the holographic entanglement negativity can be expressed in the following manner.
\[\mathcal{E}=\frac{3}{16G_{N}^{5}}\left(\mathcal{A}_{1}+\mathcal{A}_{2}-\mathcal{A}_{12}\right) \tag{4.1}\]
In this context, \(\mathcal{A}_{i}\) represents the area of a co-dimension two extremal surface that is anchored to subsystem \(A_{i}\) (refer to fig.2). It is worth noting that \(\mathcal{A}_{12}\) specifically denotes the area of the extremal surface anchored to the combined subsystem \(A_{1}\cup A_{2}\). As previously discussed in Section 3, we have already presented the formula for calculating the extremal surface's area related to a subsystem with a specific width. In the following subsections, we will apply these formulas to compute the HLN in both low and high-temperature scenarios.
### Holographic Logarithmic Negativity for two adjacent subsystems at low temperature
In this section, we delve into the low-temperature regime of the area functional, along with the width parameter \(l\). Additionally, we calculate the holographic logarithmic negativity
(HLN) in the low-temperature limit, considering two neighboring subsystems with widths \(l_{1}\) and \(l_{2}\). To validate our findings, we demonstrate their correspondence with those of the AdS-Schwarzschild black hole in the \(\xi\to 0\) limit, as discussed in [51].
Before proceeding with the low-temperature limit, it is essential to acknowledge that when dealing with an infinite series, concerns about divergence arise, depending on its growth behavior. However, in the low-temperature regime where \(z_{t}/z_{h}\ll 1\), it becomes evident that both infinite series in equations (3.13) and (3.15) converge. Consequently, when expanding equation (3.15) to the order of \(z_{t}/z_{h}\), we derive the following relationship.
\[l=z_{t}\Bigg{\{}a_{1}-\frac{a_{1}\xi}{6}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{2} +\bigg{[}\frac{a_{2}(1+\xi)}{2}+\frac{a_{3}\xi^{2}}{24}\bigg{]}\left(\frac{z_ {t}}{z_{h}}\right)^{4}+\mathcal{O}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^{6} \Bigg{\}} \tag{4.2}\]
where the constants \(a_{1}\), \(a_{2}\) and \(a_{3}\) are,
\[\begin{split} a_{1}&\equiv\sum_{j=0}^{\infty}\frac{\Gamma\left(j+\frac{1}{2}\right)}{\sqrt{\pi}\Gamma(j+1)(2+3j)}=\frac{3\sqrt{\pi}\Gamma\left(\frac{5}{3}\right)}{\Gamma\left(\frac{1}{6}\right)}\\ a_{2}&\equiv\sum_{j=0}^{\infty}\frac{\Gamma\left(j+\frac{1}{2}\right)}{\sqrt{\pi}\Gamma(j+1)(4+3j)}=\frac{\sqrt{\pi}\Gamma\left(\frac{7}{3}\right)}{4\ \Gamma\left(\frac{11}{6}\right)}\\ a_{3}&\equiv\sum_{j=0}^{\infty}\frac{\Gamma\left(j+\frac{1}{2}\right)(4-j)}{\sqrt{\pi}\Gamma(j+1)(2+3j)(4+3j)}\\ &=\frac{3}{\sqrt{\pi}}\left[\Gamma\left(\frac{5}{6}\right)\Gamma\left(\frac{5}{3}\right)-\frac{3}{5}\Gamma\left(\frac{7}{6}\right)\Gamma\left(\frac{7}{3}\right)\right]-\frac{1}{70}\ {}_{3}F_{2}\left(\frac{3}{2},\frac{5}{3},\frac{7}{3};\frac{8}{3},\frac{10}{3};1\right)\end{split} \tag{4.3}\]
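The series and closed forms quoted in (4.3) can be cross-checked numerically; for instance, for \(a_{1}\) (a minimal sketch assuming mpmath):

```python
import mpmath as mp

a1_series = mp.nsum(lambda j: mp.gamma(j + mp.mpf(1)/2)
                    / (mp.sqrt(mp.pi) * mp.gamma(j + 1) * (2 + 3*j)),
                    [0, mp.inf])
a1_closed = 3 * mp.sqrt(mp.pi) * mp.gamma(mp.mpf(5)/3) / mp.gamma(mp.mpf(1)/6)
print(a1_series, a1_closed)   # both ~ 0.8624
```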
Figure 2: Schematic diagram of the extremal surfaces, involving turning points, corresponding to two adjacent boundary subsystems \(A\) and \(B\) having widths \(l_{1}\) and \(l_{2}\) respectively. Here \(z=0\) denotes the boundary whereas \(z=z_{h}\) denotes the horizon.

Now, inverting the relation between \(l\) and \(z_{t}\), we obtain
\[z_{t}=\frac{l}{a_{1}}\Bigg{\{}1+\frac{\xi}{6a_{1}^{2}}\bigg{(}\frac{l}{z_{h}} \bigg{)}^{2}+\frac{1}{24a_{1}^{4}}\left[\frac{\xi^{2}}{6}\left(1-\frac{a_{3}}{2a _{2}}\right)-\frac{a_{2}}{a_{1}}(1+\xi)\right]\left(\frac{l}{z_{h}}\right)^{4}+ \mathcal{O}\bigg{(}\frac{l}{z_{h}}\bigg{)}^{6}\Bigg{\}} \tag{4.4}\]
Similarly, one can expand the infinite series for the area functional and can get the following equation
\[\begin{split}\mathcal{A}_{\text{finite}}^{\text{low}}& =\frac{L^{2}R^{3}}{z_{t}^{2}}\left[\frac{1+\xi}{2}\bigg{(}\frac{z_ {t}}{z_{h}}\bigg{)}^{4}-1\right]+\frac{L^{2}R^{3}}{z_{t}^{2}}\sum_{j=1}^{\infty }\frac{\Gamma\left(j+\frac{1}{2}\right)}{\sqrt{\pi}\Gamma(j+1)\Gamma(3j-1)}\\ &\times\left[1+\frac{\xi}{3}\bigg{(}\frac{z_{t}}{z_{h}}\bigg{)}^ {2}+\left(\frac{(-4\xi^{2}+9\xi+9)j-3(\xi+1)}{18j+6}\right)\left(\frac{z_{t}}{ z_{h}}\right)^{4}\right]\end{split} \tag{4.5}\]
Now performing the sum and substituting the expression of the turning point we get,
\[\begin{split}\mathcal{A}_{\text{finite}}^{\text{low}}& =\frac{R^{3}L^{2}}{l^{2}}\Bigg{\{}a_{1}^{2}(w_{1}-1)+\frac{\xi}{3} \bigg{(}\frac{l}{z_{h}}\bigg{)}^{2}+\frac{1}{2a_{1}^{2}}\Bigg{[}(1+\xi)\left(1 -w_{3}+3w_{2}+\frac{2(w_{1}-1)a_{2}}{a_{1}}\right)\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\frac{\xi^{2}}{6} \left((w_{1}-1)(\frac{a_{3}}{a_{1}}-1)-8w_{2}\right)\Bigg{]}\bigg{(}\frac{l}{ z_{h}}\bigg{)}^{4}\Bigg{\}}\end{split} \tag{4.6}\]
where the numerical constants \(w_{1}\), \(w_{2}\), and \(w_{3}\) are,
\[\begin{split} w_{1}&\equiv\frac{1}{\sqrt{\pi}}\sum _{j=1}^{\infty}\frac{\Gamma\left(j+\frac{1}{2}\right)}{\Gamma(j+1)(3j-1)}= \frac{1}{2^{2/3}}\ _{2}F_{1}\left(\frac{1}{3},\frac{2}{3};\frac{5}{3};-1\right)\\ w_{2}&\equiv\frac{1}{\sqrt{\pi}}\sum_{j=1}^{\infty }\frac{j\Gamma\left(j+\frac{1}{2}\right)}{\Gamma(j+1)(3j-1)(3j+1)}=\frac{1}{16} \ _{3}F_{2}\left(\frac{2}{3},\frac{4}{3},\frac{3}{2};\frac{5}{3},\frac{7}{3};1 \right)\\ w_{3}&\equiv\frac{1}{\sqrt{\pi}}\sum_{j=1}^{\infty }\frac{\Gamma\left(j+\frac{1}{2}\right)}{\Gamma(j+1)(3j-1)(3j+1)}=\frac{3}{16} \ _{3}F_{2}\left(\frac{2}{3},\frac{4}{3},\frac{3}{2};\frac{5}{3},\frac{7}{3};1 \right)-\frac{1}{2^{1/3}}\ _{2}F_{1}\left(\frac{4}{3},\frac{5}{3};\frac{7}{3};-1\right)\end{split} \tag{4.7}\]
Note that in the limit where \(\xi\to 0\), we get \(z_{h}=1/\pi T\), and the subleading terms become \(2^{\text{nd}}\) and \(4^{\text{th}}\) order in \(Tl\). To express this relation in a more simplified way, we define
\[\begin{split}& c\equiv a_{1}^{2}(w_{1}-1)\\ & f(\xi)\equiv(1+\xi)\frac{\left(1-w_{3}+3w_{2}+\frac{2(w_{1}-1)a _{2}}{a_{1}^{2}}\right)}{a_{1}^{2}}+\frac{\xi^{2}}{6}\frac{\left((w_{1}-1)( \frac{a_{3}}{a_{1}}-1)-8w_{2}\right)}{a_{1}^{2}}\end{split} \tag{4.8}\]
Therefore, using the above definitions, we finally get the area functional of a boundary subsystem taken to be a rectangular strip of width \(l\):
\[\mathcal{A}_{\text{finite}}^{\text{low}}=R^{3}\bigg{(}\frac{L}{l}\bigg{)}^{2} \Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}l\Big{)}^{2}+\frac{1}{2}f(\xi)\Big{(} \pi\hat{T}l\Big{)}^{4}\Bigg{\}} \tag{4.9}\]
In the context of two adjoining subsystems, we designate these subsystems as \(A_{1}\) and \(A_{2}\), each defined by the width of their respective rectangular strips, denoted as \(l_{1}\) and
\(l_{2}\). Utilizing equation (4.1), we derive the subsequent expression for the HLN within the low-temperature regime for the scenario involving two adjacent subsystems.
\[\mathcal{E}_{low}=\frac{3R^{3}}{16G_{N}^{5}}\Bigg{[}c\Bigg{\{}\bigg{(}\frac{L}{l_{1}}\bigg{)}^{2}+\bigg{(}\frac{L}{l_{2}}\bigg{)}^{2}-\bigg{(}\frac{L}{l_{1}+l_{2}}\bigg{)}^{2}\Bigg{\}}+\frac{\xi}{3}L^{2}\pi^{2}\hat{T}^{2}-f(\xi)\left(\pi^{4}L^{2}\hat{T}^{4}\right)l_{1}l_{2}\Bigg{]} \tag{4.10}\]
This serves as a reminder that the HLN expression above is derived by considering the finite portion of the extremal areas of the adjacent subsystems. Consequently, the UV-divergence component is not evident in this expression. Nevertheless, if we work with the complete area expression, the UV divergence term will also appear in the negativity expression. One can now compare this HLN expression with the one in [51] for the AdS\({}_{d+1}\) Schwarzschild black hole. Please note that this comparison is valid in the limit of \(Q\to 0\), which can be achieved by setting \(\xi\to 0\). In the provided expression, the first three terms within the curly braces are inversely proportional to the squares of the lengths of the relevant boundary regions. The third term, which depends on \(f(\xi)\), accounts for the product of the widths of the two subregions and includes the \(\hat{T}^{4}\) dependence. The second term naturally disappears in the \(\xi\to 0\) limit, while the HLN remains finite in the critical \(\xi\to 2\) limit.
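For concreteness, the constants entering the low-temperature result can be evaluated numerically. The sketch below (mpmath assumed; function names are ours) sums the series of (4.3) and (4.7) directly, takes \(f(\xi)\) as printed in (4.8), and returns the bracket of (4.10), i.e. the HLN up to the overall factor \(3R^{3}/16G_{N}^{5}\).

```python
import mpmath as mp

sqpi = mp.sqrt(mp.pi)
g = mp.gamma

# Constants of eqs. (4.3) and (4.7), summed directly
a1 = mp.nsum(lambda j: g(j + 0.5) / (sqpi * g(j + 1) * (2 + 3*j)), [0, mp.inf])
a2 = mp.nsum(lambda j: g(j + 0.5) / (sqpi * g(j + 1) * (4 + 3*j)), [0, mp.inf])
a3 = mp.nsum(lambda j: g(j + 0.5) * (4 - j)
             / (sqpi * g(j + 1) * (2 + 3*j) * (4 + 3*j)), [0, mp.inf])
w1 = mp.nsum(lambda j: g(j + 0.5) / (sqpi * g(j + 1) * (3*j - 1)), [1, mp.inf])
w2 = mp.nsum(lambda j: j * g(j + 0.5)
             / (sqpi * g(j + 1) * (3*j - 1) * (3*j + 1)), [1, mp.inf])
w3 = mp.nsum(lambda j: g(j + 0.5)
             / (sqpi * g(j + 1) * (3*j - 1) * (3*j + 1)), [1, mp.inf])

c = a1**2 * (w1 - 1)                       # eq. (4.8)

def f(xi):
    """f(xi) as printed in eq. (4.8)."""
    return ((1 + xi) * (1 - w3 + 3*w2 + 2*(w1 - 1)*a2/a1**2) / a1**2
            + (xi**2 / 6) * ((w1 - 1)*(a3/a1 - 1) - 8*w2) / a1**2)

def E_low_bracket(l1, l2, xi, That, L=1.0):
    """Bracket of eq. (4.10); multiply by 3 R^3/(16 G_N^5) for the HLN."""
    return (c * (L**2/l1**2 + L**2/l2**2 - L**2/(l1 + l2)**2)
            + (xi/3) * L**2 * mp.pi**2 * That**2
            - f(xi) * mp.pi**4 * L**2 * That**4 * l1 * l2)

print(E_low_bracket(1.0, 1.0, 2.0, 0.1))   # low-T regime: T_hat * l << 1
```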
### Holographic Logarithmic Negativity for two adjacent subsystems at high temperature
As mentioned in the previous section, there is always a concern regarding the divergence of infinite series. Fortunately, various methods for summation or regularization are available to address the issue of divergence in a given series. We observe that in the high-temperature limit where \(z_{t}\) approaches \(z_{h}\), the infinite series in equation (3.13) does not exhibit convergence. Nevertheless, it is possible to regularize the series by reordering its terms in a manner that allows us to recover the component proportional to \(l\). In the limit where \(z_{t}\) tends toward \(z_{h}\), the expression for the area of the RT-surface takes the following form.
\[\mathcal{A}_{\text{finite}}^{\text{high}}=R^{3}\bigg{(}\frac{L}{z_{h}} \bigg{)}^{2}\Bigg{\{}\sqrt{1+\xi}\left(\frac{l}{z_{h}}\right)+(S_{1}+S_{2}+S_ {3})\Bigg{\}} \tag{4.11}\]
where \(S_{1}\), \(S_{2}\) and \(S_{3}\) dependent on \(\xi\) and given by
\[S_{1}\equiv\frac{3\xi}{2}-\frac{1}{3}-\frac{11}{5\xi}-\frac{244 }{105\xi^{2}}-\frac{32}{35\xi^{3}}-\frac{16}{35\xi^{4}}+\sqrt{\xi+1}\left(- \frac{64\xi}{105}-\frac{124}{105}+\frac{26}{21\xi}+\frac{214}{105\xi^{2}}+ \frac{24}{35\xi^{3}}+\frac{16}{35\xi^{4}}\right)\] \[S_{2}\equiv\sum_{k=2}^{\infty}\sum_{n=0}^{k}\sum_{m=0}^{\infty} \frac{\Gamma\left(k+\frac{1}{2}\right)\Gamma\left(m+\frac{1}{2}\right)\Gamma( k+n+2)(-1)^{k+n}\xi^{k-n+m}(1+\xi)^{n-m-\frac{1}{2}}}{\pi\Gamma(n+1)\Gamma(k-n+1) \Gamma(k+n+m+3)}\] \[\times\Bigg{\{}\frac{m+1}{k+n-1}\left[1+\frac{m+1}{k+n}\left(2+ \frac{m}{k+n+1}\right)\right]+\frac{(1+\xi)(m+1)}{k+n}\left(2+\frac{m}{k+n+1} \right)\Bigg{\}} \tag{4.12}\]
\[S_{3} \equiv\sum_{k=2}^{\infty}\sum_{n=0}^{k}\sum_{m=0}^{\infty}\sum_{j=1} ^{\infty}\frac{\Gamma\left(k+\frac{1}{2}\right)\Gamma\left(j+m+\frac{1}{2} \right)\Gamma(k+n+3j+2)}{\pi\Gamma(n+1)\Gamma(j+1)\Gamma(k-n+1)\Gamma(k+n+m+3j+3)} \tag{4.13}\] \[\times(-1)^{k+n}\xi^{k-n+m}(1+\xi)^{n-m-\frac{1}{2}}\times\left\{ \frac{m+1}{k+n+3j-1}\left[1+\frac{m+1}{k+n+3j}\left(2+\frac{m}{k+n+3j+1}\right) \right]\right.\] \[+\left.\frac{(1+\xi)(m+1)}{k+n+3j}\left(2+\frac{m}{k+n+3j+1} \right)\frac{}{}\right\}\]
Finally, in terms of temperature \(\hat{T}\), we can write
\[\mathcal{A}_{\text{finite}}^{\text{high}}=R^{3}\!\left(\frac{L}{l}\right)^{2 }\!\left\{\sqrt{1+\xi}\!\left(\pi\hat{T}l\right)^{3}+S_{4}\!\left(\pi\hat{T}l \right)^{2}\right\}\ \ \text{where}\ \ S_{4}=S_{1}+S_{2}+S_{3} \tag{4.14}\]
Therefore, using the formula for the HLN as given by equation (4.1), we find the HLN in the high-temperature regime for two adjacent subsystems \(A_{1}\) and \(A_{2}\):
\[\mathcal{E}_{high}=\frac{3R^{3}}{16G_{N}^{5}}\Bigg{\{}S_{4}L^{2}\!\left(\pi \hat{T}\right)^{2}\Bigg{\}} \tag{4.15}\]
Similarly, as previously noted, the negativity expression above is derived by considering the finite part of the extremal areas of the adjacent subsystems. Consequently, the UV-divergence component does not manifest in this expression. To establish a comparison, one can now contrast this entanglement negativity expression with the result obtained in [51] for the AdS\({}_{d+1}\) Schwarzschild black hole in the limit as \(\xi\to 0\). To observe an exact match in the high-temperature scenario, it is necessary to expand the exponential terms in [51] to linear order in \(l\), resulting in a dependence solely on \(T^{d-2}\).
Before concluding this section, it's worth noting that in equation (4.14), the initial term dependent on temperature scales with the volume of the rectangular strip, represented as \(L^{2}l\), while the subsequent term is related to the area. Consequently, the first term characterizes thermal entropy, while the second term represents the entanglement entropy between the strip region and its complement. However, in the case of negativity, the volume-dependent component is absent. This indicates that at high temperatures, entanglement entropy and thermal entropy become equivalent and exhibit a temperature dependence of \(\hat{T}^{2}\), as deduced from the area calculation in [40].
## 5 Holographic Logarithmic Negativity for two disjoint subsystems
In this section, we will determine the HLN for two distinct subsystems within the background of a 1RC black hole. Similar to our analysis of neighboring subsystems, we will establish a connection between our findings and the general results outlined in [42] for a \((d+1)\)-dimensional AdS Schwarzschild black hole. Specifically, we focus on two non-overlapping intervals, denoted as \(A_{1}\) and \(A_{2}\), each with widths \(l_{1}\) and \(l_{2}\) respectively, as illustrated in fig.3. These intervals collectively constitute the mixed-state subsystem \(A\), with a gap separating them corresponding to a subsystem \(A_{m}\subset B\) of width \(l_{m}\), where
\(B=A^{c}\) represents the remainder of the system. To clarify, we define the three intervals as follows
\[A_{1} : x^{1}\equiv x\in\left[-\frac{l_{1}}{2},\frac{l_{1}}{2}\right],\qquad x ^{(j)}\in\left[-\frac{L}{2},\frac{L}{2}\right]\ \ \text{ where }j=2,3 \tag{5.1}\] \[A_{2} : x^{2}\equiv x\in\left[-\frac{l_{2}}{2},\frac{l_{2}}{2}\right], \qquad x^{(j)}\in\left[-\frac{L}{2},\frac{L}{2}\right]\ \ \text{ where }j=2,3\] (5.2) \[A_{m} : x^{m}\equiv x\in\left[-\frac{l_{m}}{2},\frac{l_{m}}{2}\right], \quad x^{(j)}\in\left[-\frac{L}{2},\frac{L}{2}\right]\ \ \text{ where }j=2,3 \tag{5.3}\]
where the transverse extent \(L\) is taken to be very large, \(L\to\infty\).
Now following the conjecture as given in [42, 52] one can write the entanglement negativity corresponding to the disjoint intervals as
\[\mathcal{E}=\frac{3}{16G_{N}^{5}}\Big{(}\mathcal{A}_{A_{1}\cup A_{m}}+ \mathcal{A}_{A_{m}\cup A_{2}}-\mathcal{A}_{A_{1}\cup A_{m}\cup A_{2}}- \mathcal{A}_{A_{m}}\Big{)} \tag{5.4}\]
where \(\mathcal{A}_{A_{1}\cup A_{m}}\) and \(\mathcal{A}_{A_{m}\cup A_{2}}\) are the areas of the extremal surfaces anchored with respect to the regions \(A_{1}\cup A_{m}\) and \(A_{2}\cup A_{m}\) respectively. \(\mathcal{A}_{A_{1}\cup A_{m}\cup A_{2}}\) is the area of the extremal surface anchored with respect to the region \(A_{1}\cup A_{m}\cup A_{2}\), and \(\mathcal{A}_{A_{m}}\) carries a similar meaning. Note that the three surfaces corresponding to the intervals \(A_{1}\), \(A_{2}\) and \(A_{m}\) have turning points labeled \(z_{t_{1}}\), \(z_{t_{2}}\) and \(z_{t_{m}}\). By utilizing the areas of the RT surfaces in their respective regions, we can calculate the HLN. We will examine the low and high-temperature limits separately in the following two subsections.
### Holographic Logarithmic Negativity for two disjoint subsystems at low temperature
In our prior analysis, we derived the expression for the extremal surface area corresponding to a region of width \(l\) in the low-temperature limit. Consequently, we shall reformulate the area expression as follows
\[\mathcal{A}_{\text{finite}}^{\text{low}}=R^{3}\bigg{(}\frac{L}{l}\bigg{)}^{2 }\Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}l\Big{)}^{2}+\frac{1}{2}f(\xi)\Big{(} \pi\hat{T}l\Big{)}^{4}\Bigg{\}} \tag{5.5}\]
To calculate the HLN, we must utilize this relation to formulate the expressions for the areas of the extremal surfaces associated with all the intervals specified in equation (5.4). By performing this procedure, we derive the following relationships
\[\mathcal{A}_{A_{1}\cup A_{2}\cup A_{m}}=R^{3}\bigg{(}\frac{L}{l_{ 1}+l_{2}+l_{m}}\bigg{)}^{2}\Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}(l_{1}+l_{2 }+l_{m})\Big{)}^{2}+\frac{1}{2}f(\xi)\Big{(}\pi\hat{T}(l_{1}+l_{2}+l_{m})\Big{)} ^{4}\Bigg{\}}\] \[\mathcal{A}_{A_{1}\cup A_{m}}=R^{3}\bigg{(}\frac{L}{l_{1}+l_{m}} \bigg{)}^{2}\Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}(l_{1}+l_{m})\Big{)}^{2}+ \frac{1}{2}f(\xi)\Big{(}\pi\hat{T}(l_{1}+l_{m})\Big{)}^{4}\Bigg{\}}\] \[\mathcal{A}_{A_{2}\cup A_{m}}=R^{3}\bigg{(}\frac{L}{l_{2}+l_{m}} \bigg{)}^{2}\Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}(l_{2}+l_{m})\Big{)}^{2}+ \frac{1}{2}f(\xi)\Big{(}\pi\hat{T}(l_{2}+l_{m})\Big{)}^{4}\Bigg{\}}\] \[\mathcal{A}_{A_{m}}=R^{3}\bigg{(}\frac{L}{l_{m}}\bigg{)}^{2} \Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}l_{m}\Big{)}^{2}+\frac{1}{2}f(\xi) \Big{(}\pi\hat{T}l_{m}\Big{)}^{4}\Bigg{\}} \tag{5.6}\]
Using the above equation in (5.4) one would obtain the HLN at low temperatures for two disjoint subsystems
\[\begin{split}\mathcal{E}_{low}&=\frac{3R^{3}}{16G_{N}^{5}}\Bigg{[}c\Bigg{\{}\bigg{(}\frac{L}{l_{1}+l_{m}}\bigg{)}^{2}+\bigg{(}\frac{L}{l_{2}+l_{m}}\bigg{)}^{2}-\bigg{(}\frac{L}{l_{1}+l_{2}+l_{m}}\bigg{)}^{2}-\bigg{(}\frac{L}{l_{m}}\bigg{)}^{2}\Bigg{\}}\\ &\qquad+\frac{1}{2}f(\xi)\left(\pi^{4}L^{2}\hat{T}^{4}\right)\Bigg{\{}(l_{1}+l_{m})^{2}+(l_{2}+l_{m})^{2}-(l_{1}+l_{2}+l_{m})^{2}-l_{m}^{2}\Bigg{\}}\Bigg{]}\end{split} \tag{5.7}\]
Note that we are dealing exclusively with the finite portion of the area. This is the reason why, in previous instances, the HLN did not incorporate the UV divergence term, denoted as \(L^{2}/\epsilon^{2}\). However, in the scenario of disjoint intervals, a closer examination reveals that even if we consider the entire area expression, including the divergent portion, the HLN remains independent of the cutoff. This stands in contrast to the situation encountered in the case of mixed-state configurations involving adjacent intervals. The first term on the right-hand side of the equation above originates from the contribution of the AdS\({}_{5}\) vacuum and remains unaffected by temperature changes. The remaining terms represent finite-temperature corrections to the HLN at low temperatures, which closely resemble the conditions observed in the mixed-state scenario of adjacent intervals.
A similar outcome has been previously documented in [52]. Additionally, one can naturally anticipate that as the limit \(l_{m}\to\epsilon\) is approached, the entanglement negativity (HLN) for separate subsystems will replicate the outcome expressed in equation (4.10) for adjacent subsystems. When taking the limit \(l_{m}\to\epsilon\) in equation (5.7), it becomes possible to recreate both the first component (which is independent of temperature) and the third component (dependent on \(\hat{T}^{4}l_{1}l_{2}\)). Furthermore, a term reliant on the cutoff
Figure 3: Schematic diagram of the extremal surfaces at low effective temperature, involving the turning points, corresponding to the subregions \(A_{1}\) and \(A_{2}\) separated by an interval \(A_{m}\).
emerges, expressed as \(\frac{2}{d-2}\big{(}\frac{L}{\epsilon}\big{)}^{d-2}\). Intriguingly, this term would have been present in the HLN expression at low temperatures if the cutoff-dependent part of the areas of the RT surfaces for the various subregions had been considered. Hence, we can deduce that as \(l_{m}\) approaches \(\epsilon\), the entanglement negativity for disjoint subsystems converges to that of adjacent subsystems.
### Holographic Logarithmic Negativity for two disjoint subsystems at high temperature
In our previous analysis, we obtained the expression of the area of the extremal surface corresponding to a region with a width of \(l\) at a high-temperature limit. Therefore we rewrite the expression of the area as follows
\[\mathcal{A}_{\text{finite}}^{\text{high}}=R^{3}\bigg{(}\frac{L}{l}\bigg{)}^{2 }\Bigg{\{}\sqrt{1+\xi}\Big{(}\pi\hat{T}l\Big{)}^{3}+S_{4}\Big{(}\pi\hat{T}l \Big{)}^{2}\Bigg{\}} \tag{5.8}\]
Now to compute the HLN we employ this relation to write down the expressions of the area of the extremal surfaces of all the intervals required in equation (5.4). By doing so we obtain the following relations
\[\mathcal{A}_{A_{1}\cup A_{2}\cup A_{m}}=R^{3}\bigg{(}\frac{L}{l_{1}+l_{2}+l_{m}}\bigg{)}^{2}\Bigg{\{}\sqrt{1+\xi}\Big{(}\pi\hat{T}(l_{1}+l_{2}+l_{m})\Big{)}^{3}+S_{4}\Big{(}\pi\hat{T}(l_{1}+l_{2}+l_{m})\Big{)}^{2}\Bigg{\}}\] \[\mathcal{A}_{A_{1}\cup A_{m}}=R^{3}\bigg{(}\frac{L}{l_{1}+l_{m}}\bigg{)}^{2}\Bigg{\{}\sqrt{1+\xi}\Big{(}\pi\hat{T}(l_{1}+l_{m})\Big{)}^{3}+S_{4}\Big{(}\pi\hat{T}(l_{1}+l_{m})\Big{)}^{2}\Bigg{\}}\] \[\mathcal{A}_{A_{2}\cup A_{m}}=R^{3}\bigg{(}\frac{L}{l_{2}+l_{m}}\bigg{)}^{2}\Bigg{\{}\sqrt{1+\xi}\Big{(}\pi\hat{T}(l_{2}+l_{m})\Big{)}^{3}+S_{4}\Big{(}\pi\hat{T}(l_{2}+l_{m})\Big{)}^{2}\Bigg{\}}\] \[\mathcal{A}_{A_{m}}=R^{3}\bigg{(}\frac{L}{l_{m}}\bigg{)}^{2}\Bigg{\{}\sqrt{1+\xi}\Big{(}\pi\hat{T}l_{m}\Big{)}^{3}+S_{4}\Big{(}\pi\hat{T}l_{m}\Big{)}^{2}\Bigg{\}} \tag{5.9}\]
Using the above equation in (5.4) we obtain the expression for HLN at high temperature,
\[\mathcal{E}_{high}=\frac{3R^{3}L^{2}}{16G_{N}^{5}}\Bigg{\{}\sqrt{1+ \xi}(\pi T)^{3}(l_{1}+l_{m})+S_{4}(\pi T)^{2}+\sqrt{1+\xi}(\pi T)^{3}(l_{2}+l _{m})+S_{4}(\pi T)^{2}\] \[-\sqrt{1+\xi}(\pi T)^{3}(l_{1}+l_{2}+l_{m})-S_{4}(\pi T)^{2}- \sqrt{1+\xi}(\pi T)^{3}l_{m}-S_{4}(\pi T)^{2}\Bigg{\}} \tag{5.10}\]
By simplifying the expression above, we readily demonstrate that the HLN evaluates to zero. Consequently, when dealing with two separate subsystems under high-temperature conditions, the HLN becomes null. This outcome aligns with expectations since entanglement negativity exclusively quantifies quantum correlations, whereas high temperatures primarily entail thermal entropy. To validate this outcome, it would be beneficial to examine the HLN expression for disjoint subsystems at elevated temperatures as provided
in the reference [52]. This reference offers the following expression for the same within a generic background of a \((d+1)\)-dimensional AdS Schwarzschild black hole.
\[\mathcal{E}=\frac{3}{16G_{N}^{5}}\bigg{(}\frac{4\pi}{d}\bigg{)}^{d- 1}\frac{C_{1}}{4\pi}\sqrt{2d(d-1)}L^{d-2}T^{d-2}\Bigg{\{} -e^{-\sqrt{\frac{d-1}{2d}}4\pi T(l_{1}+l_{m})}-e^{-\sqrt{\frac{d-1}{2d}}4\pi T (l_{2}+l_{m})}\] \[+e^{-\sqrt{\frac{d-1}{2d}}4\pi T(l_{1}+l_{2}+l_{m})}+e^{-\sqrt{ \frac{d-1}{2d}}4\pi Tl_{m}}\Bigg{\}} \tag{5.11}\]
If one expands the exponential terms on the right-hand side of the above equation, the following expression can be obtained
\[\mathcal{E}=\frac{3}{16G_{N}^{5}}\bigg{(}\frac{4\pi}{d}\bigg{)}^{ d-1}\frac{C_{1}}{4\pi}\sqrt{2d(d-1)}L^{d-2}T^{d-2}\Bigg{\{}-1+\sqrt{\frac{d-1}{2d }}4\pi T(l_{1}+l_{m})-1 \tag{5.12}\] \[+\sqrt{\frac{d-1}{2d}}4\pi T(l_{2}+l_{m})+1-\sqrt{\frac{d-1}{2d }}4\pi T(l_{1}+l_{2}+l_{m})+1-\sqrt{\frac{d-1}{2d}}4\pi Tl_{m}\Bigg{\}}\]
From the equation presented above, it becomes clear that the HLN expression evaluates to zero, mirroring the result we derived in this subsection. Justification for expanding the exponential terms to linear order in \(l\) can be found in the behavior of the extremal area at high temperatures, as it exclusively encompasses linear order dependence in \(l\). Consequently, we can confidently affirm that our high-temperature entanglement negativity result for disjoint subsystems aligns with the findings in [52].
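Both cancellations can be checked mechanically: any summand that is affine in the interval widths drops out of the combination \(\mathcal{A}_{A_{1}\cup A_{m}}+\mathcal{A}_{A_{2}\cup A_{m}}-\mathcal{A}_{A_{1}\cup A_{2}\cup A_{m}}-\mathcal{A}_{A_{m}}\). A minimal symbolic check (our own snippet, not part of the original analysis; symbol names are ours):

```python
import sympy as sp

l1, l2, lm, T, xi, S4 = sp.symbols('l1 l2 lm T xi S4', positive=True)
cube = sp.sqrt(1 + xi)*(sp.pi*T)**3   # coefficient of the linear-in-width terms
quad = S4*(sp.pi*T)**2                # width-independent S4 terms

# bracketed combination of eq. (5.10)
bracket = (cube*(l1 + lm) + quad + cube*(l2 + lm) + quad
           - cube*(l1 + l2 + lm) - quad - cube*lm - quad)
print(sp.simplify(bracket))  # -> 0
```

The same affine structure is what makes the linearized expansion of (5.11) in equation (5.12) vanish as well.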
Figure 4: Schematic diagram of the extremal surfaces at high effective temperature, involving the turning points, corresponding to the subregions \(A_{1}\) and \(A_{2}\) separated by an interval \(A_{m}\).
## 6 Holographic Logarithmic Negativity for bipartite systems
In this section, we will calculate the HLN for a bipartite configuration. Similar to the preceding sections, we will determine the entanglement negativity for both low and high-temperature regimes. To validate our findings, we will establish their consistency by comparing them to previously obtained results for the general \((d+1)\)-dimensional AdS-Schwarzschild black hole, as documented in [25].
We provide a brief overview of the setup for the bipartite system. To gain a clear understanding of this setup, it is essential to begin by partitioning the boundary CFT into two subsystems, denoted as \(A\) and its complement \(A^{c}\). Furthermore, we will consider two additional subsystems, namely \(B_{1}\) and \(B_{2}\), which are situated adjacent to \(A\) and positioned on either side of it, such that \(B=B_{1}\cup B_{2}\). As in the preceding sections, we will use the notation \(\mathcal{A}_{\gamma}\) to represent the area of co-dimension 2 static minimal surfaces within the bulk geometry, anchored on these subsystems. Consequently, in a general context, the HLN for the bipartite system formed by the union of \(A\) and \(A^{c}\) is expressed as follows
\[\mathcal{E}=\lim_{B\to A^{c}}\biggl[\frac{3}{16G_{N}^{(d+1)}}\Bigl(2\mathcal{A}_{A}+\mathcal{A}_{B_{1}}+\mathcal{A}_{B_{2}}-\mathcal{A}_{A\cup B_{1}}-\mathcal{A}_{A\cup B_{2}}\Bigr)\biggr] \tag{6.1}\]
In the above equation, \(G_{N}^{(d+1)}\) represents the Newton constant in a \((d+1)\)-dimensional context. It is important to note that we can interpret the bipartite limit, denoted as \(B\to A^{c}\), by extending the subsystems \(B_{1}\) and \(B_{2}\) to the extent that \(B\) effectively becomes equal to the complement of \(A\), denoted as \(A^{c}\). To define the subsystems in question, namely \(A\), \(B_{1}\), and \(B_{2}\), precisely, we describe them in the context of the 4-dimensional boundary CFT as follows:
\[\begin{split} A&:\quad x^{1}\equiv x\in\biggl[-\frac{l}{2},\frac{l}{2}\biggr],\qquad\ x^{(j)}\in\biggl[-\frac{L_{2}}{2},\frac{L_{2}}{2}\biggr]\ \ \text{where}\ j=2,3\\ B_{1}&:\quad x^{1}\equiv x\in\biggl[-L,-\frac{l}{2}\biggr],\quad\ \ x^{(j)}\in\biggl[-\frac{L_{2}}{2},\frac{L_{2}}{2}\biggr]\ \ \text{where}\ j=2,3\\ B_{2}&:\quad x^{1}\equiv x\in\biggl[\frac{l}{2},L\biggr],\qquad\quad x^{(j)}\in\biggl[-\frac{L_{2}}{2},\frac{L_{2}}{2}\biggr]\ \ \text{where}\ j=2,3\end{split} \tag{6.2}\]
### Holographic Logarithmic Negativity for bipartite systems at low temperature
In this section, we compute the HLN for the bipartite state in the low-temperature regime. This regime corresponds to the temperature limit \(\hat{T}l\ll 1\), which in the bulk corresponds to the case where the horizon is at a large distance from the turning point \(z_{t_{2}}\) of the extremal surface anchored on the subsystem \(A\). At the low-temperature limit, the perturbative solution of the infinite series for \(\frac{l}{2}\) is already known, as discussed in section 4. Using it, one obtains the relation between the turning point of the RT surface anchored on the subsystem \(A\) and the width of the subsystem as
\[z_{t_{2}}=\frac{l}{a_{1}}\Bigg{\{}1+\frac{\xi}{6a_{1}^{2}}\bigg{(}\frac{l}{z_{ h}}\bigg{)}^{2}+\frac{1}{24a_{1}^{4}}\left[\frac{\xi^{2}}{6}\left(1-\frac{a_{3}}{2 a_{2}}\right)-\frac{a_{2}}{a_{1}}(1+\xi)\right]\left(\frac{l}{z_{h}}\right)^{4}+ \mathcal{O}\bigg{(}\frac{l}{z_{h}}\bigg{)}^{6}\Bigg{\}} \tag{6.4}\]
Using the above relation we obtained the area of the extremal surface corresponding to the subsystem \(A\) at a low-temperature regime as follows 1
Footnote 1: As in the previous cases, note that the equation below does not contain the UV cutoff-dependent part, as we have considered only the finite part of the area \(\mathcal{A}_{A}\).
\[\mathcal{A}_{A}=R^{3}\bigg{(}\frac{L}{l}\bigg{)}^{2}\Bigg{\{}c+\frac{\xi}{3} \Big{(}\pi\hat{T}l\Big{)}^{2}+\frac{1}{2}f(\xi)\Big{(}\pi\hat{T}l\Big{)}^{4} \Bigg{\}} \tag{6.5}\]
The subsystems \(B_{1}\) and \(A\cup B_{1}\) in the boundary with lengths \((L-l/2)\) and \((L+l/2)\) along the \(x^{1}\) direction are very large in the limit \(B\to A^{c}\) which corresponds to the limit \(L\to\infty\). Therefore, the extremal surfaces described by the areas \(\mathcal{A}_{B_{1}}\) and \(\mathcal{A}_{A\cup B_{1}}\) will extend deep into the bulk approaching the black hole horizon even at low temperatures i.e., \((z_{t_{1}}\sim z_{h})\) and \((z_{t_{3}}\sim z_{h})\). Hence, for computing the expressions for the areas \(\mathcal{A}_{B_{1}}\) and \(\mathcal{A}_{A\cup B_{1}}\) we employ the method developed in [53] for the case when the RT surfaces approach the black
Figure 5: Schematic diagram of the extremal surfaces corresponding to the bipartite subsystem at low effective temperature.
hole horizon. Following the procedure, we can write the turning points of the extremal surfaces with areas \(\mathcal{A}_{B_{1}}\) and \(\mathcal{A}_{A\cup B_{1}}\) as follows for a \((d+1)\)-dimensional AdS Schwarzschild black hole 2
Footnote 2: Although we write the \((d+1)\)-dimensional result, we will use \(d=4\) in the final expressions.
\[z_{t_{1}} =z_{h}(1+\epsilon_{1})=z_{h}\Bigg[1+k_{2}e^{-\sqrt{\frac{d(d-1)}{2}}\frac{1}{z_{h}}\left(L-\frac{l}{2}\right)}\Bigg] \tag{6.6}\] \[z_{t_{3}} =z_{h}(1+\epsilon_{3})=z_{h}\Bigg[1+k_{2}e^{-\sqrt{\frac{d(d-1)}{2}}\frac{1}{z_{h}}\left(L+\frac{l}{2}\right)}\Bigg]\]
where \(k_{2}\) has the following form
\[k_{2}=\frac{1}{d}e^{\sqrt{\frac{d(d-1)}{2}}c_{1}} \tag{6.7}\] \[c_{1}=\frac{2\sqrt{\pi}\Gamma\left(\frac{d}{2(d-1)}\right)}{ \Gamma\left(\frac{1}{d-1}\right)}+\sum_{n=1}^{\infty}\left\{\frac{2}{(1+nd)} \frac{\Gamma\left(n+\frac{1}{2}\right)}{\Gamma(n+1)}\frac{\Gamma\left(\frac{d (n+1)}{2(d-1)}\right)}{\Gamma\left(\frac{nd+1}{2(d-1)}\right)}-\frac{\sqrt{2}} {\sqrt{d(d-1)}n}\right\} \tag{6.8}\]
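For concreteness, the constants \(c_{1}\) and \(k_{2}\) defined in (6.7)–(6.8) can be evaluated numerically. A minimal sketch (our own script, not from the paper) using Python's mpmath; the infinite sum converges since its summand decays as \(n^{-2}\):

```python
import mpmath as mp

d = 4

def tail(n):
    # n-th summand of the series in eq. (6.8)
    a = 2/(1 + n*d) * mp.gamma(n + mp.mpf(1)/2)/mp.gamma(n + 1)
    a *= mp.gamma(mp.mpf(d)*(n + 1)/(2*(d - 1)))/mp.gamma(mp.mpf(n*d + 1)/(2*(d - 1)))
    return a - mp.sqrt(2)/(mp.sqrt(d*(d - 1))*n)

# closed-form piece of c1 plus the resummed tail, eq. (6.8)
c1 = 2*mp.sqrt(mp.pi)*mp.gamma(mp.mpf(d)/(2*(d - 1)))/mp.gamma(mp.mpf(1)/(d - 1))
c1 += mp.nsum(tail, [1, mp.inf])

# k2 from eq. (6.7)
k2 = mp.exp(mp.sqrt(d*(d - 1)/2)*c1)/d
print(c1, k2)
```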
We can now obtain the areas of the extremal surfaces \(\mathcal{A}_{B_{1}}\) and \(\mathcal{A}_{A\cup B_{1}}\) by substituting (6.6) in (3.13). We then expand the result in \(\epsilon_{1}\) and \(\epsilon_{3}\), respectively, and keep the terms linear in them. Therefore we obtain the following expressions
\[\mathcal{A}_{B_{1}} =\frac{L^{2}R^{3}}{z_{h}^{2}}\Big{\{}\alpha(\xi)+\gamma(\xi)+\mu( \xi)\Big{\}}+\frac{L^{2}R^{3}}{z_{h}}\left(L-\frac{l}{2}\right)\Big{\{}\beta( \xi)+\delta(\xi)+\nu(\xi)\Big{\}} \tag{6.9}\] \[\mathcal{A}_{A\cup B_{1}} =\frac{L^{2}R^{3}}{z_{h}^{2}}\Big{\{}\alpha(\xi)+\gamma(\xi)+\mu( \xi)\Big{\}}+\frac{L^{2}R^{3}}{z_{h}}\left(L+\frac{l}{2}\right)\Big{\{}\beta( \xi)+\delta(\xi)+\nu(\xi)\Big{\}}\]
where all the \(\xi\)-dependent functions have been defined in Appendix A. Finally, using equation (6.9) in (6.1), we obtain the HLN for the bipartite system at the low-temperature limit
\[\mathcal{E}_{low}=\frac{3}{8G_{N}^{5}}\Bigg{[}R^{3}\bigg{(}\frac{L}{l}\bigg{)} ^{2}\Bigg{\{}c+\frac{\xi}{3}\Big{(}\pi\hat{T}l\Big{)}^{2}+\frac{1}{2}f(\xi) \Big{(}\pi\hat{T}l\Big{)}^{4}\Bigg{\}}-R^{3}L^{2}l\hat{T}g(\xi)\Bigg{]} \tag{6.10}\]
where the function \(g(\xi)\) can be written as \(g(\xi)=\pi(\beta(\xi)+\delta(\xi)+\nu(\xi))\).
Note that in the equation presented earlier, the last term is directly proportional to \(L^{2}l\), representing the three-dimensional volume of subsystem A. With this correspondence, one can infer that the final term is proportional to the thermal entropy associated with subsystem A. To further scrutinize our findings, we can examine the behavior as \(\xi\) approaches zero. It is expected that in the limit where \(Q\) tends to zero (which can also be achieved by setting \(\xi\) to zero), the aforementioned result will coincide with the AdS Schwarzschild black hole result found in [25]. It is worth emphasizing that the initial term enclosed within the curly brackets, with the appropriate scaling factor, precisely reproduces the entanglement entropy of subsystem A at low temperatures.
Now, as the limit \(\xi\to 0\) is taken, we can examine the behavior of the function \(g(\xi)\), revealing that \(g(\xi)\) scales as \(1/\xi\). Consequently, in terms of temperature, we can express this
as \(g(\xi)\propto T^{2}\). By combining the aforementioned arguments, it becomes apparent that the final term, which is proportional to the volume \(V=L^{2}l\), exhibits an explicit temperature dependence of order \(T^{3}\). Hence, one can interpret this last term in the preceding equation as the thermal entropy (in \((d+1)\)-dimensional AdS Schwarzschild geometry, thermal entropy is proportional to \(VT^{d-1}\)) of the system A. Consequently, we can now reformulate equation (6.10) in the following manner
\[\mathcal{E}_{low}=\frac{3}{2}\Big{\{}S_{A}-\mathcal{C}S_{A}^{\rm Th}\Big{\}}, \ \ \text{where $\mathcal{C}$ is a constant} \tag{6.11}\]
Surprisingly, the equation presented above reveals that the HLN effectively quantifies distillable entanglement by eliminating the thermal contribution in low-temperature conditions. This characteristic stands as a universal trait of entanglement negativity within finite-temperature mixed states.
### Holographic Logarithmic Negativity for Bipartite Systems at High Temperature
In the high-temperature regime, the turning point \(z_{t_{2}}\) of the extremal surface, representing the area \(\mathcal{A}_{A}\), converges close to the black hole horizon. This convergence is characterized by the condition \(z_{t_{2}}\sim z_{h}\), as illustrated in fig. 6. Consequently, we can employ the same methodology to calculate the area of the extremal surface corresponding to subsystem \(A\) as we did for \(\mathcal{A}_{B_{1}}\) and \(\mathcal{A}_{A\cup B_{1}}\) in the preceding section.
It's worth noting that, as previously explained, these surfaces consistently explore the vicinity of the black hole horizon, both at low and high temperatures. This behavior is a consequence of the limit \(B\to A^{c}\), or equivalently, \(L\to\infty\). Therefore, we can utilize equation (6.9) to compute the HLN in the high-temperature regime. We can write the following expression for the turning point corresponding to the extremal surface of the subsystem \(A\)
Figure 6: Schematic diagram of the extremal surfaces corresponding to the bipartite subsystem at high effective temperature.
\[z_{t_{2}}=z_{h}(1+\epsilon_{2})=z_{h}\Bigg[1+k_{2}e^{-\sqrt{\frac{d(d-1)}{2}}\frac{l}{z_{h}}}\Bigg] \tag{6.12}\]
where \(k_{2}\) is,
\[k_{2}=\frac{1}{d}e^{\sqrt{\frac{d(d-1)}{2}}c_{1}} \tag{6.13}\]
\[c_{1}=\frac{2\sqrt{\pi}\Gamma\left(\frac{d}{2(d-1)}\right)}{\Gamma\left(\frac{1}{d-1}\right)}+\sum_{n=1}^{\infty}\Bigg\{\frac{2}{(1+nd)}\frac{\Gamma\left(n+\frac{1}{2}\right)}{\Gamma(n+1)}\frac{\Gamma\left(\frac{d(n+1)}{2(d-1)}\right)}{\Gamma\left(\frac{nd+1}{2(d-1)}\right)}-\frac{\sqrt{2}}{\sqrt{d(d-1)}\,n}\Bigg\} \tag{6.14}\]
Using the above equations we can write,
\[\mathcal{A}_{A}=\frac{L^{2}R^{3}}{z_{h}^{2}}\Big{\{}\alpha(\xi)+\gamma(\xi)+ \mu(\xi)\Big{\}}+\frac{L^{2}R^{3}}{z_{h}}l\Big{\{}\beta(\xi)+\delta(\xi)+\nu (\xi)\Big{\}} \tag{6.15}\]
Ultimately, by incorporating equations (6.15) and (6.9) into (6.1), we arrive at the following outcome for the HLN in the bipartite scenario in the high-temperature limit
\[\mathcal{E}_{high}=\frac{3}{8G_{N}^{5}}\Bigg{[}\frac{L^{2}R^{3}}{z_{h}^{2}} \Big{\{}\alpha(\xi)+\gamma(\xi)+\mu(\xi)\Big{\}}+\frac{L^{2}R^{3}}{z_{h}}l \Big{\{}\beta(\xi)+\delta(\xi)+\nu(\xi)\Big{\}}-L^{2}R^{3}l\hat{T}g(\xi)\Bigg{]} \tag{6.16}\]
Note that, as previously demonstrated for the low-temperature scenario, we can similarly reformulate the equation above for the high-temperature regime. This can be achieved by applying the same analysis in the limit as \(\xi\) approaches zero, resulting in a more concise expression.
\[\mathcal{E}_{high}=\frac{3}{2}\Big{\{}S_{A}-\mathcal{C}S_{A}^{\rm Th}\Big{\}},\ \ \text{where $\mathcal{C}$ is a constant} \tag{6.17}\]
Much like in the case of low temperatures, in the high-temperature regime, the HLN also facilitates the extraction of distillable quantum entanglement. This extraction process involves eliminating the thermal contribution, a universal characteristic of the entanglement negativity observed in finite-temperature mixed states of a holographic CFT.
## 7 Entanglement Wedge Cross Section (EWCS)
In this section, we compute the analytic form of the EWCS and perform limiting analysis for low and high-temperature regimes. To delineate the concept of the entanglement wedge, we take into account two subsystems, labeled as \(A\) and \(B\), situated on the boundary. The Ryu-Takayanagi (RT) surface, represented as \(\gamma_{AB}\), characterizes the region encompassing \(A\cup B\). Subsequently, the entanglement wedge is defined as the volume corresponding to the boundary \(A\cup B\cup\gamma_{AB}\). The Entanglement Wedge Cross Section (EWCS) is established through the extremal area surface \(\Gamma_{W}\), which bifurcates the regions \(A\) and \(B\), as illustrated in fig. 7. In a numerical analysis conducted in [41], the EWCS has been computed.
Nevertheless, the analytic expression for the EWCS in the background of 1RCBH has not yet been documented. In this context, we establish the boundary subsystems \(A\) and
\(B\), each with a length of \(l\) and separated by a distance \(D\).
\[\begin{array}{llll}A:&x^{1}\equiv x\in\left[-l-\frac{D}{2},-\frac{D}{2}\right],&x^{(j)}\in\left[-\frac{L_{2}}{2},\frac{L_{2}}{2}\right]&\text{where }j=2,3\\ B:&x^{1}\equiv x\in\left[\frac{D}{2},l+\frac{D}{2}\right],&x^{(j)}\in\left[-\frac{L_{2}}{2},\frac{L_{2}}{2}\right]&\text{where }j=2,3\end{array} \tag{7.1}\]
In the given setup, the surface with the minimum area, denoted as \(\Sigma_{\text{min}}\), which separates the subsystems \(A\) and \(B\), is precisely the vertical surface positioned at \(x=0\). The metric on the Cauchy surface is described as follows:
\[ds^{2}_{\Sigma_{min}}=e^{2A(z)}d\vec{x}_{2}^{2}+\frac{e^{2B(z)}}{h(z)}\frac{R^{4}}{z^{4}}dz^{2} \tag{7.2}\]
The EWCS is then computed by [59]
\[E_{W}=\frac{L^{2}}{4G_{N}^{5}}\int_{z_{t}(D)}^{z_{t}(2l+D)}dz\sqrt{g_{mn}} \tag{7.3}\]
The induced metric \(g_{mn}\) is defined by equation (7.2). The turning points of the extremal surfaces we have examined are denoted as \(z_{t}(2l+D)\) and \(z_{t}(D)\). Consequently, when considering equations (7.2) and (7.3), we obtain:
\[E_{W}=\frac{L^{2}R^{2}}{4G_{N}^{5}}\int_{z_{t}(D)}^{z_{t}(2l+D)}dz\frac{e^{2A(z)+B(z)}}{z^{2}\sqrt{h(z)}} \tag{7.4}\]
By employing (4) and the definition of the dimensionless parameter \(\xi\), we can express
Figure 7: Schematic diagram of the extremal surfaces corresponding to two disjoint subsystems of equal length \(l\) and separated by a distance \(D\). The surface \(\Gamma_{W}\), marked in red, is the entanglement wedge cross section.
the integral above as follows, 3
Footnote 3: Note that in deriving equation (7.5), we have applied the multinomial expansion, similar to our earlier approach in section 4.
\[E_{W}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\int_{z_{t}(D)}^{z_{t}(2l+D)}dz \sum_{k=0}^{\infty}\sum_{j=0}^{k}\sum_{i=0}^{\infty}\frac{(-1)^{k+j}}{2}\frac{ \Gamma(k+\frac{1}{2})\xi^{i+j+k}(1+\xi)^{j}}{\Gamma(i+1)\Gamma(\frac{3}{2}-i) \Gamma(j+1)\Gamma(k-j+1)}\frac{z^{2i+2j+2k-3}}{z_{h}{}^{2i+2j+2k}}\] \[=\frac{L^{2}R^{3}}{4G_{N}^{5}}\sum_{k=0}^{\infty}\sum_{j=0}^{k} \sum_{i=0}^{\infty}\frac{(-1)^{k+j}}{2}\frac{\Gamma(k+\frac{1}{2})\xi^{i+j+k}( 1+\xi)^{j}}{\Gamma(i+1)\Gamma(\frac{3}{2}-i)\Gamma(j+1)\Gamma(k-j+1)}\frac{1}{ (2i+2j+2k-2)}\] \[\times\Bigg{\{}\frac{z_{t}(2l+D)^{2i+2j+2k-2}}{z_{h}^{2i+2j+2k}} -\frac{z_{t}(D)^{2i+2j+2k-2}}{z_{h}^{2i+2j+2k}}\Bigg{\}} \tag{7.5}\]
In the following two subsections, we will examine the behavior of the EWCS under varying temperature conditions, employing suitable approximations as previously demonstrated. As indicated by the expression above, it becomes evident that when \(D\) significantly exceeds \(l\), the EWCS completely disappears.
### Entanglement Wedge Cross Section at low temperature
As it is generally challenging to invert the relationship between \(z_{t}\) and the width \(l\) and to formulate a universal expression for the EWCS based on boundary parameters, we resort to specific thermal limits for this purpose. Therefore, we examine the EWCS in both the low and high-temperature limits. In the low-temperature limit, where \(z_{t}(D)\ll z_{h}\) and \(z_{t}(2l+D)\ll z_{h}\), we can derive the following expressions for the turning points using equation (4.2).
\[z_{t}(D)=\frac{D}{a_{1}}\Bigg\{1+\frac{\xi}{6a_{1}^{2}}\bigg(\frac{D}{z_{h}}\bigg)^{2}+\frac{1}{24a_{1}^{4}}\left[\frac{\xi^{2}}{6}\left(1-\frac{a_{3}}{2a_{2}}\right)-\frac{a_{2}}{a_{1}}(1+\xi)\right]\left(\frac{D}{z_{h}}\right)^{4}+\mathcal{O}\bigg(\frac{D}{z_{h}}\bigg)^{6}\Bigg\} \tag{7.6}\] \[z_{t}(2l+D)=\frac{2l+D}{a_{1}}\Bigg\{1+\frac{\xi}{6a_{1}^{2}}\bigg(\frac{2l+D}{z_{h}}\bigg)^{2}+\frac{1}{24a_{1}^{4}}\left[\frac{\xi^{2}}{6}\left(1-\frac{a_{3}}{2a_{2}}\right)-\frac{a_{2}}{a_{1}}(1+\xi)\right]\left(\frac{2l+D}{z_{h}}\right)^{4}\] (7.7) \[+\mathcal{O}\bigg(\frac{2l+D}{z_{h}}\bigg)^{6}\Bigg\}\]
By substituting equations (7.6) and (7.7) into equation (7.5), one can derive the expression for EWCS in the low-temperature regime. It's important to note that simplifying the EWCS at low temperatures can be achieved by applying a binomial expansion to both turning points, considering terms up to the first order. This expansion is feasible because when examining the coefficients of \(\mathcal{O}(1/z_{h}^{2})\), \(\mathcal{O}(1/z_{h}^{4})\), etc., within the parentheses in equations (7.6) and (7.7), it becomes evident that these terms are smaller than 1 in the low-temperature limit. Further simplifications in the resulting equation (see Appendix B) can be achieved by truncating the series at the lowest order, which corresponds to setting \(i=j=k=0\). It is also reasonable to assert that at low temperatures, both \(D\) and
\(l\) are small, allowing us to neglect higher exponents associated with these length scales. Consequently, we can express the simplified version of the EWCS at low temperatures as follows.
\[E_{W}^{\rm low}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\Bigg[\frac{a_{1}^{2}}{2}\Bigg\{\frac{1}{D^{2}}-\frac{1}{(2l+D)^{2}}\Bigg\}+\frac{2}{a_{1}^{2}}\Bigg\{\frac{\xi^{2}}{6}\left(1-\frac{a_{3}}{a_{2}}\right)-\frac{a_{2}}{a_{1}}(1+\xi)\Bigg\}l(l+D)\frac{1}{z_{h}^{4}}+\mathcal{O}\bigg(\frac{1}{z_{h}}\bigg)^{6}\Bigg] \tag{7.8}\]
In terms of temperature the above expression becomes
\[E_{W}^{\rm low}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\Bigg[\frac{a_{1}^{2}}{2}\Bigg\{\frac{1}{D^{2}}-\frac{1}{(2l+D)^{2}}\Bigg\}+\frac{2}{a_{1}^{2}}\Bigg\{\frac{\xi^{2}}{6}\left(1-\frac{a_{3}}{a_{2}}\right)-\frac{a_{2}}{a_{1}}(1+\xi)\Bigg\}l(l+D)(\pi\hat{T})^{4}+\mathcal{O}\left(\hat{T}^{6}\right)\Bigg] \tag{7.9}\]
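As a consistency check on the truncation used above, the \(i=j=k=0\) term of the series (7.5), evaluated with the leading-order turning points \(z_{t}\approx\mathrm{width}/a_{1}\), should reproduce the temperature-independent bracket of (7.8). A short symbolic verification (our own snippet; symbol names are ours):

```python
import sympy as sp

xi, a1, D, l = sp.symbols('xi a1 D l', positive=True)

def prefactor(i, j, k):
    # coefficient of the (i, j, k) summand in eq. (7.5)
    return ((-1)**(k + j)/sp.S(2)*sp.gamma(k + sp.S(1)/2)*xi**(i + j + k)
            *(1 + xi)**j/(sp.gamma(i + 1)*sp.gamma(sp.S(3)/2 - i)
            *sp.gamma(j + 1)*sp.gamma(k - j + 1)*(2*i + 2*j + 2*k - 2)))

# i = j = k = 0 term with z_t ~ width / a1; z_h drops out since the exponent is -2
lead = prefactor(0, 0, 0)*((a1/(2*l + D))**2 - (a1/D)**2)
target = a1**2/2*(1/D**2 - 1/(2*l + D)**2)
print(sp.simplify(lead - target))  # -> 0
```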
Now, we will examine the outcome we have obtained for the EWCS at low temperature in the background of a 1RCBH. As anticipated, the initial term enclosed in curly braces, which is the temperature-independent component, implies that the EWCS will rise as the separation between the subsystems diminishes, and in the limit where \(D\) approaches zero it becomes unbounded. We can further verify the validity of the previously obtained result by cross-referencing it with the mutual information calculation in [40]. By leveraging the connection between the EWCS and mutual information as discussed in [60], we can observe that, at low temperatures, the EWCS exhibits behavior identical to that of mutual information. This alignment serves as strong confirmation of the accuracy of our findings. In the critical limit, denoted as \(\xi\to 2\), it is noteworthy that the EWCS remains finite, mirroring the behavior reported for mutual information in [40].
### Entanglement Wedge Cross Section at High Temperature
We can now examine the EWCS under high-temperature conditions. To achieve this, there are two viable options regarding the boundary parameters \(l\) and \(D\). If we opt for a scenario where \(D\) is chosen to be very large but finite, both the turning points corresponding to the extremal surfaces \(\gamma_{D}\) and \(\gamma_{2l+D}\) will move deeper into the bulk. Consequently, one can, in principle, employ the near-horizon expansion for the turning points \(z_{t}(D)\) and \(z_{t}(2l+D)\). However, this approach yields a trivial outcome as, in the limit where \(D\) tends to infinity, the EWCS at high temperatures naturally diminishes.
Alternatively, one can argue that to impose the high-temperature limit, we can take the limit of \(l\) approaching infinity while keeping \(D\) fixed at a small value. In this scenario, a non-zero, significantly large value for the EWCS is expected, and this can be obtained by focusing on the near-horizon expansion for the extremal surface \(\gamma_{2l+D}\) exclusively. For the upcoming calculations, we will be working within the former limit. Utilizing the techniques employed in the previous sections, we commence with the following expression for the turning point.
\[z_{t}(D)=z_{h}(1+\epsilon)=z_{h}\left(1+k_{2}e^{-\sqrt{\frac{d(d-1)}{2}}\frac{D}{z_{h}}}\right) \tag{7.10}\]
where,
\[k_{2}=\frac{1}{d}e^{\sqrt{\frac{d(d-1)}{2}}c_{1}} \tag{7.11}\]
\[c_{1}=\frac{2\sqrt{\pi}\Gamma\left(\frac{d}{2(d-1)}\right)}{\Gamma\left(\frac{1}{d-1}\right)}+\sum_{n=1}^{\infty}\left\{\frac{2}{(1+nd)}\frac{\Gamma\left(n+\frac{1}{2}\right)}{\Gamma(n+1)}\frac{\Gamma\left(\frac{d(n+1)}{2(d-1)}\right)}{\Gamma\left(\frac{nd+1}{2(d-1)}\right)}-\frac{\sqrt{2}}{\sqrt{d(d-1)}\,n}\right\} \tag{7.12}\]
Note that we are working with \(d=4\). Now we use (7.10) in the last line of (7.5) to obtain
\[E_{W}^{\rm high}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\sum_{k=0}^{\infty}\sum_{j=0}^{k}\sum_{i=0}^{\infty}\frac{(-1)^{k+j}}{2}\frac{\Gamma(k+\frac{1}{2})\xi^{i+j+k}(1+\xi)^{j}}{\Gamma(i+1)\Gamma(\frac{3}{2}-i)\Gamma(j+1)\Gamma(k-j+1)}\frac{1}{(2i+2j+2k-2)}\] \[\times\Bigg\{\frac{\left(1+k_{2}e^{-\sqrt{6}\frac{2l+D}{z_{h}}}\right)^{2i+2j+2k-2}}{z_{h}^{2}}-\frac{\left(1+k_{2}e^{-\sqrt{6}\frac{D}{z_{h}}}\right)^{2i+2j+2k-2}}{z_{h}^{2}}\Bigg\} \tag{7.13}\]
Taking the binomial expansion in the above equation up to order \(\epsilon\) and suppressing the higher order terms one obtains the following expression for EWCS at high temperature
\[E_{W}^{\rm high}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\sum_{k=0}^{\infty}\sum_{j=0}^{k}\sum_{i=0}^{\infty}\frac{(-1)^{k+j}}{2}\frac{\Gamma(k+\frac{1}{2})\xi^{i+j+k}(1+\xi)^{j}}{\Gamma(i+1)\Gamma(\frac{3}{2}-i)\Gamma(j+1)\Gamma(k-j+1)}\frac{k_{2}}{z_{h}^{2}}\Bigg(e^{-\sqrt{6}\frac{2l+D}{z_{h}}}-e^{-\sqrt{6}\frac{D}{z_{h}}}\Bigg) \tag{7.14}\]
In terms of temperature, we can rewrite the above expression in the following form
\[E_{W}^{\rm high}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\sum_{k=0}^{\infty}\sum_{j=0}^{k}\sum_{i=0}^{\infty}\frac{(-1)^{k+j}}{2}\frac{\Gamma(k+\frac{1}{2})\xi^{i+j+k}(1+\xi)^{j}}{\Gamma(i+1)\Gamma(\frac{3}{2}-i)\Gamma(j+1)\Gamma(k-j+1)}(\pi\hat{T})^{2}\Bigg(e^{-\sqrt{6}\pi\hat{T}(2l+D)}-e^{-\sqrt{6}\pi\hat{T}D}\Bigg) \tag{7.15}\]
Similar to the previous section, we will now refer to the calculation of mutual information at high temperatures as presented in [40]. In equation (7.15), we can apply the high-temperature limit by considering \(D\) to be large but finite. Consequently, as we take the limit \(D\to\infty\), the EWCS diminishes, as does the mutual information, as indicated in [40]. This leads us to conclude that the expression we have derived for the EWCS at elevated temperatures is consistent. It is worth noting, as mentioned earlier, that working with the non-trivial limit in the boundary parameters could be an intriguing exercise, demonstrating that this limit corresponds to a substantial EWCS value. However, we defer this exploration to future research.
## 8 Holographic Mutual Information
The holographic dual of a thermofield double (TFD) state is essential for studying information scrambling in a strongly coupled field theory in the context of AdS/CFT correspondence. Entangled states, defined in a bipartite Hilbert space comprised of the individual Hilbert spaces of two identical and non-interacting copies of strongly coupled field theories, can serve as examples of TFD states. Two entangled AdS black holes are holographically dual to such a TFD state. The outer region on both sides of the two-sided black hole in the Penrose diagram fig.8 is made up of the right (R) and left (L) wedges, two causally disconnected regions of spacetime. It is possible to have a non-local correlation
between two boundary theories residing separately in the asymptotic boundaries of the R and L regions. An appropriate generalization of mutual information (MI), known as thermo mutual information (TMI), was first introduced in [56], along with a generalization of holographic mutual information (HMI), known as holographic thermo mutual information (HTMI); these provide a practical measure of such correlation and were later studied holographically in [34; 37; 57]. The definition of TMI parallels that of MI, but the entangling regions lie on the causally disconnected boundaries. An early time-dependent perturbation that grows exponentially can destroy these correlations; holographically, this perturbation is known as the shockwave, created by a small amount of energy in the asymptotic past. In the following sections, we will study the HTMI without and with the shockwave.
### Holographic Thermo Mutual Information (HTMI)
To determine the HTMI, we adopt the methodology presented in [34; 37; 57], considering two identical strip-like subsystems denoted as \(A\) and \(B\), each with a width of \(l\). Subsystem \(A\) is positioned along the left (\(L\)) asymptotic boundary, while subsystem \(B\) is situated along the right (\(R\)) asymptotic boundary of the eternal black hole at time \(t=0\). In accordance with the RT proposal, \(\mathcal{S}(A)\) and \(\mathcal{S}(B)\) are directly linked to the minimal surface areas of \(\gamma_{A}\) and \(\gamma_{B}\) within the bulk, which correspond to the entangling regions \(A\) and \(B\), respectively. We define the embedding for \(\gamma_{i}\,(i=A,B)\) as \((t=0,z,-l/2\leq x(z)\leq l/2,-L/2\leq x^{j}\leq L/2,\ j=2....d-1)\). The extremal surface corresponding to \(\gamma_{A\cup B}\) can be either \(\gamma_{A}\cup\gamma_{B}\) or \(\gamma_{1}\cup\gamma_{2}=\gamma_{\rm wormhole}\), where for \(\gamma_{1}\) and \(\gamma_{2}\) the appropriate embeddings are \((t=0,z,x=-l/2,-L/2\leq x^{j}\leq L/2)\) and \((t=0,z,x=l/2,-L/2\leq x^{j}\leq L/2)\), respectively. The surfaces \(\gamma_{1}\) and \(\gamma_{2}\) connect the two asymptotic boundaries through the bifurcation point of the eternal black hole, denoted by the dotted line in Fig.8. TMI becomes zero when \(\mathcal{A}(\gamma_{A}\cup\gamma_{B})\leq\mathcal{A}(\gamma_{\rm wormhole})\), and \(I(A,B)\) is positive in the opposite situation. To find the area of the wormhole surface we need to follow the Hubeny-Rangamani-Takayanagi (HRT) prescription [20]. The induced metric components for the RT and HRT surfaces are given by,
\[G_{\rm in}^{A}=G_{\rm in}^{B}=\left(\frac{R_{\rm AdS}^{2}}{z^{2}}\right)^{d-1 }\left(\frac{1}{f(z)}+x^{\prime 2}\right),\qquad G_{\rm in}^{\rm wormhole}= \left(\frac{R_{\rm AdS}^{2}}{z^{2}}\right)^{d-1}\frac{1}{f(z)}, \tag{8.1}\]
Figure 8: Penrose diagram of the eternal blackhole. At t = 0, the spatial extremal surface connecting two asymptotic boundaries of an eternal black hole is denoted by the dashed line passing through the bifurcation point.
where \(x^{\prime}=0\) for \(\gamma_{1}\) and \(\gamma_{2}\). Now, HTMI is
\[I(A,B) =\frac{1}{4G_{N}^{5}}\left(\int_{-L/2}^{L/2}dx^{i}\right)\left[2\int_{0}^{z_{t}}dz\left(\sqrt{G_{in}^{A}}+\sqrt{G_{in}^{B}}\right)-4\int_{0}^{z_{h}}dz\sqrt{G_{in}^{A\cup B}}\right] \tag{8.2}\] \[=\frac{L^{2}R^{2}}{G_{N}^{5}}\left[\int_{0}^{z_{t}}\frac{e^{B(z)+2A(z)}dz}{z^{2}\sqrt{f(z)}}\sqrt{\left(\frac{e^{6A(z_{t})}}{e^{6A(z)}-e^{6A(z_{t})}}+1\right)}-\int_{0}^{z_{h}}\frac{e^{3A(z)}dz}{z^{2}\sqrt{f(z)}e^{A(z)-B(z)}}\right]\]
Due to the symmetric layout of the extremal surfaces, equation (8.2) incorporates coefficients of both 2 and 4. The parameter \(z_{t}\) denotes the turning point of the RT surfaces associated with regions \(A\) and \(B\). The relationship between the HTMI and the width \(l\) of the entangling region can be determined through the relation between \(l\) and \(z_{t}\).
\[\frac{l}{2}=\int_{0}^{z_{t}}\frac{dz}{\sqrt{\left(\frac{Q^{2}z^{2}}{R^{4}}+1\right)\left(\frac{z_{t}^{6}\left(\frac{Q^{2}z_{t}^{2}}{R^{4}}+1\right)}{z^{6}\left(\frac{Q^{2}z^{2}}{R^{4}}+1\right)}-1\right)\left(1-\frac{z^{4}\left(\frac{Q^{2}z^{2}}{R^{4}}+1\right)}{z_{h}^{4}\left(\frac{Q^{2}z_{h}^{2}}{R^{4}}+1\right)}\right)}} \tag{8.3}\]
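Assuming our reconstruction of the subscripts in (8.3), the relation between \(l\) and \(z_{t}\) can also be evaluated numerically; the integrand has an integrable inverse-square-root singularity at \(z=z_{t}\), which tanh-sinh quadrature handles. A sketch with illustrative parameter values (our choices, not the paper's):

```python
import mpmath as mp

# Illustrative parameter values (our choice, not taken from the paper)
R, zh, Q = 1.0, 1.0, 0.5

def q(z):
    # shorthand for 1 + Q^2 z^2 / R^4
    return 1 + Q**2 * z**2 / R**4

def width(zt):
    # l = 2 * (right-hand side of eq. (8.3))
    def integrand(z):
        ratio = zt**6 * q(zt) / (z**6 * q(z)) - 1
        blackening = 1 - z**4 * q(z) / (zh**4 * q(zh))
        return 1 / mp.sqrt(q(z) * ratio * blackening)
    return 2 * mp.quad(integrand, [0, zt])

for zt in [0.3, 0.6, 0.9]:
    print(zt, width(zt))
```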
Fig.9 illustrates that, as the width \(l\) is increased, the HTMI also increases. However, below a specific value of \(l\) (the critical width \(l_{c}\)), the TMI vanishes, i.e. it is zero for any \(l\leq l_{c}\). This critical value of the width decreases as we raise the value of \(\xi\). The TMI exhibits characteristics analogous to those reported in [34; 37]. In [37], it was noted that as the backreaction parameter \(b\) increases, the critical width decreases, a trend also observed in [34] with the anisotropic parameter \(a\). This occurs because, when \(l\leq l_{c}\), the HRT surface connecting \(A\) and \(B\) accumulates a greater surface area than the combined areas of the individual RT surfaces associated with \(A\) and \(B\). Consequently, the surface chosen for \(A\cup B\) will be the one whose area equals the sum of the areas of the RT surfaces corresponding to \(A\) and \(B\).
It is evident that as the parameter \(\xi\) increases, the critical width \(l_{c}\) decreases. As \(\xi\) approaches the critical point of the theory, which is \(\xi\to 2\), all the values of the critical width \(l_{c}\) for non-zero \(\xi\) converge towards the value associated with \(\xi=2\), while the critical width corresponding to \(\xi=0\) is significantly distant. The closer \(\xi\) gets to the critical value of 2, the smaller the separation between the critical widths becomes.
Figure 9: HTMI with respect to width \(l\) for \(T=1,R=1\) with different values of \(\xi\)
### Holographic Thermo Mutual Information with shockwave
In this section, we examine the time-dependent behavior of the HTMI following the application of a shockwave. The profile of the shockwave is \(\alpha\approx e^{\frac{2\pi}{\beta}t}\). The impact of the shock wave on the geometry can be accounted for by adjusting the Kruskal coordinate from \(V\) to \(\hat{V}=V+\Theta(U)\alpha\), while leaving all other coordinates unchanged and denoting them with a hat, as previously demonstrated in [34; 37; 57]. The function \(\Theta(U)\) ensures that the shockwave's influence is confined to the left region of the Penrose diagram shown in fig. 8, which is modified as depicted in fig.10.
Entanglement entropy exhibits UV divergences, while mutual information remains unaffected by these divergences, as discussed in the preceding section. The introduction of a shock wave can introduce new divergences into the system. We investigate an early asymptotic pulse of energy generated at the boundary, which acts as a minor inward disturbance entering the left (L) boundary of the eternal black hole. This pulse experiences blue-shifting and evolves into a shockwave as it progresses through time, eventually reaching the horizon at a late time corresponding to the boundary time \(t=0\). In light of this, it proves advantageous to define the HTMI in the presence of the shockwave as,
\[I(A,B;\alpha)=I(A,B;\alpha=0)-\mathcal{S}^{\rm reg}_{A\cup B}(\alpha), \tag{8.4}\]
\(I(A,B;\alpha=0)\) has been previously calculated in equation (8.2), and in order to mitigate the \(\alpha\)-independent UV divergences, we introduce a regularized variant of the HEE, \(\mathcal{S}^{\rm reg}_{A\cup B}(\alpha)=\mathcal{S}_{A\cup B}(\alpha)-\mathcal{S}_{A\cup B}(\alpha=0)\). In order to compute \(\mathcal{S}^{\rm reg}_{A\cup B}(\alpha)\) we choose a set of time-dependent embeddings defined as \(\{t,z(t),x=-l/2,-L/2\leq x^{j}\leq L/2\}\) and \(\{t,z(t),x=l/2,-L/2\leq x^{j}\leq L/2\}\). The area functional corresponding to either of these time-dependent embeddings is given as,
\[\mathcal{A}=L^{2}\int dt\biggl[-e^{6A(z)}h(z)+\dot{z}^{2}\frac{R^{4}e^{2B(z)+4A(z)}}{z^{4}h(z)}\biggr]^{\frac{1}{2}},\ \ \mathcal{L}=L^{2}\biggl[-e^{6A(z)}h(z)+\dot{z}^{2}\frac{R^{4}e^{2B(z)+4A(z)}}{z^{4}h(z)}\biggr]^{\frac{1}{2}} \tag{8.5}\]
Note that the Lagrangian density \(\mathcal{L}\) lacks explicit time dependence, which leads to a conserved quantity; imposing the boundary condition \(\dot{z}=0|_{z=z_{0}}\), it reads \(\mathcal{P}=-L^{2}e^{3A(z_{0})}\sqrt{-h(z_{0})}\). Now, we can write,
\[\dot{z}^{2}=\frac{z^{4}h(z)^{2}e^{2A(z)-2B(z)}}{R^{4}}\biggl[1+\frac{L^{4}h(z)e^{6A(z)}}{\mathcal{P}^{2}}\biggr] \tag{8.6}\]
Substituting equation (8.6) into (8.5), the area functional becomes,
\[\mathcal{A}=L^{2}R^{2}\int_{0}^{z_{0}}dz\ \frac{e^{5A(z)+B(z)}}{z^{2}\sqrt{\mathcal{P}^{2}+L^{4}h(z)e^{6A(z)}}} \tag{8.7}\]
Also, by integrating equation (8.6), we get,
\[t(z)=\pm\int dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)\sqrt{1+\frac{L^{4}h(z)e^{6A(z)}}{\mathcal{P}^{2}}}} \tag{8.8}\]
Using the conserved momentum \({\cal P}\) in equations (8.7) and (8.8) we get,
\[{\cal A}=L^{2}R^{2}\int_{0}^{z_{0}}dz\,\frac{e^{5A(z)+B(z)}}{z^{2}\sqrt{h(z)e^{6A (z)}-h(z_{0})e^{6A(z_{0})}}},\,\,t(z)=\pm\int dz\,\frac{R^{2}e^{B(z)-A(z)}}{z^{2} h(z)\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}. \tag{8.9}\]
To determine the wormhole's area, denoted as \({\cal A}(\gamma_{\rm w})\), we partition the integration variable \(z\) from equation (8.9) into three distinct regions. The initial region (I) initiates at the left boundary and spans from there into the bulk, ultimately reaching the horizon. Subsequently, the second region (II) commences at the horizon and concludes at \(z=z_{0}\). Lastly, the third region (III) commences at \(z=z_{0}\), proceeds towards the right (as depicted in fig.10), and extends until it encounters the horizon. The direction of \(t\) is contingent on the rate of change in \(z\) along the extremal surface.
Taking into account the three regions labeled as I, II, and III, the specific expression
Figure 10: Penrose diagram after the shock wave. Red line shows the wormhole surface with turning point \(z_{0}\) connecting \(A\) and \(B\).
for \({\cal A}(\gamma_{\rm w})\) takes the following form:
\[\begin{split}{\cal A}(\gamma_{\rm w})=4L^{2}R^{2}&\bigg[\int_{0}^{z_{h}}dz\bigg(\frac{e^{5A(z)+B(z)}}{z^{2}\sqrt{h(z)e^{6A(z)}-h(z_{0})e^{6A(z_{0})}}}-\frac{e^{2A(z)+B(z)}}{z^{2}\sqrt{h(z)}}\bigg)\\ &+2\int_{z_{h}}^{z_{0}}dz\frac{e^{5A(z)+B(z)}}{z^{2}\sqrt{h(z)e^{6A(z)}-h(z_{0})e^{6A(z_{0})}}}\bigg]\end{split} \tag{8.10}\]
The regularized HEE is expressed as \(S^{\rm reg}_{A\cup B}(z_{0})=\frac{{\cal A}(\gamma_{\rm w})}{4G_{N}^{5}}\). This regularized entropy is depicted in Fig.11 as a function of the dimensionless parameter \(z_{0}/z_{h}\). It is evident from the plot that as the ratio \(z_{0}/z_{h}\) increases from unity, the value of \(S^{\rm reg}\) starts from zero at \(z_{0}=z_{h}\) and progressively rises. For various nonzero values of \(\xi\), \(S^{\rm reg}\) changes at different rates; for smaller \(\xi\) values, the regularized entropy increases more significantly than it does for the maximum permissible value \(\xi=2\).
By employing equations (8.4) and (8.10), we can express the HTMI as a function of \(z_{0}\). Nevertheless, to examine how the HTMI changes with the shock wave parameter \(\alpha\), we must establish the connection between \(z_{0}\) and \(\alpha\). In Fig.10, Region (I) is defined as the area between the boundary point \((\hat{U},\hat{V})=(1,-1)\) and the point on the horizon \((\hat{U},\hat{V})=(\hat{U}_{1},0)\). Region (II) spans from \((\hat{U},\hat{V})=(\hat{U}_{1},0)\) to the turning point \(z_{0}\) at \((\hat{U},\hat{V})=(\hat{U}_{2},\hat{V}_{2})\), while Region (III) extends from \((\hat{U},\hat{V})=(\hat{U}_{2},\hat{V}_{2})\) to \((\hat{U},\hat{V})=(0,\alpha/2)\).
Using the definition of Kruskal coordinates, it is possible to express the variation of \(\hat{U}=\pm e^{\frac{2\pi}{\beta}(z_{*}-t)}\) and \(\hat{V}=\pm e^{\frac{2\pi}{\beta}(z_{*}+t)}\) as,
\[\begin{split}\Delta\log\hat{U}^{2}&=\log\hat{U}_{1}^{2}-\log\hat{U}_{0}^{2}=\frac{4\pi}{\beta}(\Delta z_{*}-\Delta t)\\ \Delta\log\hat{V}^{2}&=\log\hat{V}_{2}^{2}-\log\hat{V}_{1}^{2}=\frac{4\pi}{\beta}(\Delta z_{*}+\Delta t)\end{split} \tag{8.11}\]
\[\begin{split}\log\hat{U}&=\frac{2\pi}{\beta}\int dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Bigg(\frac{1}{\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}-1\Bigg)\\ \log\hat{V}&=\frac{2\pi}{\beta}\int dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Bigg(\frac{1}{\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}+1\Bigg)\end{split} \tag{8.12}\]
where \(z_{*}\) is defined as follows,
\[z_{*}=-\int dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)} \tag{8.13}\]
Note that in region (I), when \(\dot{z}<0\), it leads to an overall negative sign in the expression for \(t\). Conversely, in region (II), the negative numerical value of \(h(z)\) corresponds to \(\dot{z}>0\), and hence we introduce a negative sign. Now, let's consider the variation of \(\hat{U}\) from the boundary to the horizon.
\[\hat{U}_{1}^{2}=\exp\Biggl\{\frac{4\pi}{\beta}\int_{0}^{z_{h}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Biggl(\frac{1}{\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}-1\Biggr)\Biggr\} \tag{8.14}\]
\[\frac{\hat{U}_{2}^{2}}{\hat{U}_{1}^{2}}=\exp\Biggl\{\frac{4\pi}{\beta}\int_{z_{h}}^{z_{0}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Biggl(\frac{1}{\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}-1\Biggr)\Biggr\} \tag{8.15}\]
To find \(\hat{U}_{2}\) consider a reference point at \(\bar{z}\) where \(z_{*}\) is zero.
\[\hat{V}_{2}\hat{U}_{2}=\exp\Biggl\{\frac{4\pi}{\beta}z_{*}\Biggr\}=\exp\Biggl\{-\frac{4\pi}{\beta}\int_{\bar{z}}^{z_{0}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Biggr\} \tag{8.16}\]
In region (III), where \(\dot{z}>0\), yet \(h(z)\) remains in the negative numerical range, we introduce an overall negative sign to the variable \(t\). Consequently, the expression for the coordinate \(\Delta\hat{V}\) in region (III) adopts the following form:
\[\frac{\alpha^{2}}{4\hat{V}_{2}^{2}}=\exp\Biggl\{\frac{4\pi}{\beta}\int_{0}^{z_{h}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Biggl(\frac{1}{\sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}-1\Biggr)\Biggr\} \tag{8.17}\]
From equations (8.16) and (8.17) we can write the relation between \(\alpha\) and \(z_{0}\) as,
\[\alpha(z_{0})=2\exp\{(\eta_{\rm I}+\eta_{\rm II}+\eta_{\rm III})\} \tag{8.18}\]
where
\[\eta_{\rm I} =\frac{4\pi}{\beta}\int_{\bar{z}}^{z_{0}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}, \ \ \ \ \eta_{\rm II}=\frac{2\pi}{\beta}\int_{0}^{z_{h}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Biggl{(}\frac{1}{ \sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}-1\Biggr{)}\] \[\eta_{\rm III} =\frac{4\pi}{\beta}\int_{z_{h}}^{z_{0}}dz\ \frac{R^{2}e^{B(z)-A(z)}}{z^{2}h(z)}\Biggl{(}\frac{1}{ \sqrt{1-\frac{h(z)}{h(z_{0})}e^{6A(z)-6A(z_{0})}}}-1\Biggr{)}\]
By utilizing equation (8.18), it becomes possible to generate a graphical representation of the shock wave parameter with respect to the dimensionless quantity \(z_{0}/z_{h}\), as depicted in fig.12. It is noteworthy that, in accordance with expectations, the shockwave parameter escalates as \(z_{0}\) increases, and the pace of this elevation is contingent on the \(\xi\)
parameter. In instances where \(\xi\) assumes larger values, the rate at which \(\alpha\) increases is comparatively slower in contrast to situations with smaller \(\xi\) values. Finally, Fig. 13 illustrates how the HTMI changes with the shockwave parameter \(\alpha\) for various nonzero \(\xi\) values. For distinct \(\xi\) values, the HTMI starts declining from a specific initial value, each with its own rate, ultimately reaching zero at a critical point, denoted as \(\alpha=\alpha_{c}\). The HTMI only exists for \(\alpha\) values less than or equal to \(\alpha_{c}\). This critical value of \(\alpha\) increases as the \(\xi\) parameter grows. It is worth noting that, as previously mentioned, \(\xi=2\) represents the theory's critical point, and it is evident that the HTMI remains finite at this critical point, echoing a similar observation made for one-sided MI in [40].
## 9 Summary and Discussions
In this work, we study various measures of the entanglement structure of mixed states and the properties of chaos in the four-dimensional \(\mathcal{N}=4\) super Yang-Mills theory at finite temperature \(T\), charged under a \(U(1)\) subgroup of its \(SU(4)\) R-symmetry, which possesses a critical point. We use the HLN, the EWCS, and the HTMI to probe the entanglement structure near the critical point. We also study the disruption of the HTMI due to the shockwave perturbation and finally interpret our results in terms of boundary theory parameters.
We study the effect of the parameter \(\xi\) (related to the charge of the black hole) on the HLN in the low and high temperature limits. In this analysis, we observe that the RT surface dual to the boundary region \(A\) receives a modification due to the presence of \(\xi\). Moreover, for a fixed width \(l\) of the boundary region \(A\), the RT surface goes deeper into the bulk for larger values of \(\xi\). For computing the HLN at low and high temperature, we consider adjacent, disjoint and bipartite configurations of subsystems in the boundary. It is straightforward to see that \(\xi\to 0\) (\(Q\to 0\)) correctly reproduces the results obtained for the AdS\({}_{d+1}\) Schwarzschild black hole background. The HLN exhibits a decreasing trend for adjacent configurations as the parameter \(\xi\) increases at low temperatures and an increasing trend as temperature approaches the high-temperature limit. For disjoint subsystems the HLN shows an increasing behavior with \(\xi\) at low temperature and vanishes at high temperature. In the bipartite case the HLN shows an increasing behavior with \(\xi\) at low temperature and a decreasing behavior at high temperature. In the field theory, the
Figure 13: TMI vs shock parameter \(\alpha\) for different \(\xi\) and \(T=1,R=1\).
growth of the HLN can be understood as indicative of increasing quantum entanglement between the two subsystems. As the critical limit (\(\xi\to 2\)) is approached, the HLN remains finite in all cases. A similar finding was previously documented for HEE and HMI in the study by Ebrahim et al. [40].
We give analytic expressions for the EWCS for the 1RC black hole in the low and high temperature limits that are consistent with the numerical results obtained in [41]. We observe that, at low temperatures, the EWCS receives a correction attributed to the parameter \(\xi\) and consequently exhibits growth with respect to \(\xi\). It is worth noting that mutual information is intricately connected to the EWCS, as described in [60]. Our result for the EWCS also agrees with the numerical analysis of HMI reported in [40]. For the disjoint case, in the low-temperature regime, both the HLN and the EWCS exhibit a similar dependence on the boundary parameters (the characteristic lengths of the different regions) as well as on the temperature. In the high-temperature limit, these quantities vanish, as stated in [40].
Moreover, we notice that the entanglement between two subsystems of a TFD state measured by the TMI increases with the size of the subsystems. This is expected, as the larger the Hilbert spaces of the individual subsystems, the more correlation they can support. If we fix the size of the subsystems, the TMI increases as the \(\xi\) parameter approaches higher values. Based on our analysis, we have noted that the two separate subsystems do not manifest correlations until a specific size, denoted as the critical width \(l_{c}\), is reached, beyond which total correlations start to emerge. As we already mentioned, the entanglement in the TFD state can be destroyed by the insertion of an operator that evolves in time. We have demonstrated the explicit disruption of holographic TMI in the presence of a shockwave. Our findings suggest that the parameter \(\xi\) attempts to mitigate this disruption, indicating that the presence of the \(\xi\) parameter tends to stabilize the system and reduce its chaotic behavior. For substantial values of \(\xi\), holographic TMI exhibits a slower rate of decay. In simpler terms, when \(\xi\) is large, it takes more time for the TMI to completely vanish. This is in contrast to the findings in a recent study [37], where it was noted that TMI diminishes more rapidly with increased backreaction.
**Acknowledgement**
We express our gratitude to Shankhadeep Chakrabortty for valuable comments on the draft. SP acknowledges the support of Senior Research Fellowship from the Ministry of Human Resource and Development, Government of India. SP expresses gratitude to Dongmin Gang and Seoul National University for their generous support and warm hospitality during a part of this work. DK wishes to extend appreciation to Shankhadeep Chakrabortty and IIT Ropar for their support and warm hospitality during the final stages of this project.
## Appendix A Area of the Extremal Surface for Bipartite Systems
We provide a concise overview of the near-horizon expansion technique employed to estimate the areas of the extremal surfaces, \(\mathcal{A}_{B_{1}}\) and \(\mathcal{A}_{A\cup B_{1}}\), in the bipartite limit \(L\to\infty\). It is convenient to start with equation (3.13), rewriting it in the following form
\[\mathcal{A}=\mathcal{A}^{(1)}+\mathcal{A}^{(2)}+\mathcal{A}^{(3)}\] (A.1)
where we define the quantities \(\mathcal{A}^{(1)},\mathcal{A}^{(2)},\mathcal{A}^{(3)}\) in the following way
\[\mathcal{A}^{(1)}=\frac{L^{2}R^{3}}{{z_{t}}^{2}}\Bigg\{\frac{3\xi}{2}\bigg(\frac{z_{t}}{z_{h}}\bigg)^{2}-\Bigg[1+\xi\bigg(\frac{z_{t}}{z_{h}}\bigg)^{2}\Bigg]^{\frac{3}{2}}+\frac{1+\xi}{3\xi}\bigg(\frac{z_{t}}{z_{h}}\bigg)^{2}\left[\left(1+\xi\bigg(\frac{z_{t}}{z_{h}}\bigg)^{2}\right)^{\frac{3}{2}}-1\right]\Bigg\} \tag{A.2}\] \[\mathcal{A}^{(2)}=\frac{L^{2}R^{3}}{{z_{t}}^{2}}\Bigg\{\sum_{n=0}^{2}\Lambda_{2n0}\frac{\sqrt{\pi}\Gamma(n+1)}{\Gamma(n+3)}\bigg(\frac{z_{t}}{z_{h}}\bigg)^{4+2n}\times\left[1+(n+1)\left(1+\xi\bigg(\frac{z_{t}}{z_{h}}\bigg)^{2}\right)\right]\Bigg\} \tag{A.3}\] \[\mathcal{A}^{(3)}=\frac{L^{2}R^{3}}{{z_{t}}^{2}}\Bigg\{\sum_{j=1}^{\infty}\Lambda_{000}\frac{\Gamma(j+\frac{1}{2})\Gamma(3j-1)}{\Gamma(j+1)\Gamma(3j+1)}\times\left[1+(3j-1)\left(1+\xi\bigg(\frac{z_{t}}{z_{h}}\bigg)^{2}\right)\right]\Bigg\} \tag{A.4}\]
Note that we have truncated the series when writing \(\mathcal{A}^{(2)}\) and \(\mathcal{A}^{(3)}\) in order to obtain the lowest-order contribution, which is our focus for the near-horizon expansion. Consequently, the higher-order contributions will become superfluous for our analysis. We then employ equation (6.6) within the context of equations (A.2), (A.3), and (A.4) to derive the expression for the extremal surfaces in the bipartite limit, as presented in equation (A.8). In this context, we introduce the following functions of \(\xi\).
\[\alpha(\xi)=\bigg(\frac{3\xi}{2}-\frac{2}{3}(1+\xi)+\frac{k_{2}}{3\xi}(1-2\xi)(\xi-2)\bigg),\ \ \beta(\xi)=k_{2}\sqrt{6}\frac{(2\xi-1)(\xi-2)}{3\xi} \tag{A.5}\]
\[\gamma(\xi)=\Bigg(u_{1}(\xi)-v_{1}(\xi)+x_{1}(\xi)+k_{2}\bigg(u_{2}(\xi)-v_{2}(\xi)+x_{2}(\xi)\bigg)\Bigg),\ \ \delta(\xi)=\Bigg(-u_{2}(\xi)+v_{2}(\xi)-x_{2}(\xi)\Bigg) \tag{A.6}\] \[u_{1}(\xi)=\frac{3\xi^{2}}{16}(\xi^{2}+3\xi+2)\qquad\qquad\qquad u_{2}(\xi)=\frac{3\xi^{2}}{16}(3\xi^{2}+6\xi+4)\] \[v_{1}(\xi)=\frac{3\xi(1+\xi)}{24}(2\xi^{2}+5\xi+3)\qquad v_{2}(\xi)=\frac{3\xi(1+\xi)}{24}(10\xi^{2}+21\xi+12)\] \[x_{1}(\xi)=\frac{3(1+\xi)^{2}}{96}(\xi^{2}+7\xi+4)\qquad x_{2}(\xi)=\frac{3(1+\xi)^{2}}{96}(21\xi^{2}+44\xi+24)\]
\[\mu(\xi)=\sum_{j=1}^{\infty}\frac{3}{\sqrt{\pi}}\frac{\Gamma\left(j+\frac{1}{2}\right)\Gamma(3j-1)}{\Gamma(j+1)\Gamma(3j+1)}(1+\xi-(\xi+2)k_{2}) \tag{A.7}\] \[\nu(\xi)=\sum_{j=1}^{\infty}\frac{3}{\sqrt{\pi}}\frac{\Gamma\left(j+\frac{1}{2}\right)\Gamma(3j-1)j}{\Gamma(j+1)\Gamma(3j+1)}(\xi+2)k_{2}\sqrt{6}\]
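The infinite sums entering \(\mu(\xi)\) and \(\nu(\xi)\) are pure numbers once the \(\xi\)- and \(k_{2}\)-dependent prefactors are stripped off. Using \(\Gamma(3j-1)/\Gamma(3j+1)=1/\big(3j(3j-1)\big)\), they can be evaluated as follows (our own numerical check; the code is a sketch, not part of the paper):

```python
import mpmath as mp

# Gamma(3j-1)/Gamma(3j+1) = 1/(3j(3j-1)) simplifies the summands of (A.7)
term = lambda j: mp.gamma(j + mp.mpf(1)/2) / (mp.gamma(j + 1) * 3*j * (3*j - 1))

sigma0 = 3/mp.sqrt(mp.pi) * mp.nsum(lambda j: term(j), [1, mp.inf])
sigma1 = 3/mp.sqrt(mp.pi) * mp.nsum(lambda j: j*term(j), [1, mp.inf])
print(sigma0, sigma1)
# mu(xi) = sigma0 * (1 + xi - (xi + 2)*k2)
# nu(xi) = sigma1 * (xi + 2) * k2 * sqrt(6)
```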
Using the above functions we obtain the following extremal areas
\[\mathcal{A}_{B_{1}}=\frac{L^{2}R^{3}}{z_{h}^{2}}\Big\{\alpha(\xi)+\gamma(\xi)+\mu(\xi)\Big\}+\frac{L^{2}R^{3}}{z_{h}}\left(L-\frac{l}{2}\right)\Big\{\beta(\xi)+\delta(\xi)+\nu(\xi)\Big\} \tag{A.8}\] \[\mathcal{A}_{A\cup B_{1}}=\frac{L^{2}R^{3}}{z_{h}^{2}}\Big\{\alpha(\xi)+\gamma(\xi)+\mu(\xi)\Big\}+\frac{L^{2}R^{3}}{z_{h}}\left(L+\frac{l}{2}\right)\Big\{\beta(\xi)+\delta(\xi)+\nu(\xi)\Big\}\]
By applying the equations mentioned earlier to the entanglement negativity formula associated with the bipartite state in the low-temperature regime, we derive equation (6.10).
It is worth noting that equation (6.10) is obtained through the relation between \(\hat{T}\) and \(z_{h}\). Additionally, we introduce the following function for a more concise expression of the HLN.
\[g(\xi)=\pi\big{(}\beta(\xi)+\delta(\xi)+\nu(\xi)\big{)}\] (A.9)
We employ the defined function \(g(\xi)\) to examine the scenario as we approach the limit where \(\xi\) tends towards zero. In this limit, it becomes evident from the equations mentioned earlier that (A.9) simplifies to the following expression.
\[g(\xi)\Bigg{|}_{\xi\to 0}=\pi\Bigg{(}\frac{2}{3}k_{2}\sqrt{6}\frac{1}{\xi}- \frac{3}{4}+2k_{2}\sqrt{6}\sum_{j=1}^{\infty}\frac{3}{\sqrt{\pi}}\frac{\Gamma \left(j+\frac{1}{2}\right)\Gamma(3j-1)j}{\Gamma(j+1)\Gamma(3j+1)}\Bigg{)}\] (A.10)
Hence, if one's primary concern is the temperature dependency of the function \(g(\xi)\) as \(\xi\) approaches zero, an approximate approach involves focusing on the initial term within the parentheses in the previous equation. Consequently, it becomes evident that the function \(g(\xi)\) exhibits a proportionality to \(T^{2}\).
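The \(1/\xi\) pole quoted in (A.10) can be verified directly from the definitions of \(\beta(\xi)\) and \(\delta(\xi)\) above, since \(\nu(\xi)\) stays finite as \(\xi\to 0\) and does not contribute to the pole. A short symbolic check (our own snippet):

```python
import sympy as sp

xi, k2 = sp.symbols('xi k2', positive=True)

# beta and delta from eqs. (A.5)-(A.6)
beta = k2*sp.sqrt(6)*(2*xi - 1)*(xi - 2)/(3*xi)
u2 = sp.Rational(3, 16)*xi**2*(3*xi**2 + 6*xi + 4)
v2 = sp.Rational(3, 24)*xi*(1 + xi)*(10*xi**2 + 21*xi + 12)
x2 = sp.Rational(3, 96)*(1 + xi)**2*(21*xi**2 + 44*xi + 24)
delta = -u2 + v2 - x2

# residue of the 1/xi pole; nu(xi) is regular at xi = 0
print(sp.limit(xi*(beta + delta), xi, 0))   # -> 2*sqrt(6)*k2/3
```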
Before concluding this appendix, it's worth noting that equation (6.11) introduces a constant \(\mathcal{C}\). The precise value of this constant can be determined by considering the coefficient of the initial term in equation (A.10).
## Appendix B Approximate EWCS in the low temperature limit in terms of boundary parameters
In this context, we derive the expression for the EWCS under the conditions of low temperature. By inserting the equations for the turning points into the overarching expression for the EWCS, as provided in equation (7.5), we derive the ensuing series.
\[E_{W}^{low}=\frac{L^{2}R^{3}}{4G_{N}^{5}}\sum_{k=0}^{\infty}\sum_ {j=0}^{k}\sum_{i=0}^{\infty}\frac{(-1)^{k+j}}{2}\frac{\Gamma(k+\frac{1}{2}) \xi^{i+j+k}(1+\xi)^{j}}{\Gamma(i+1)\Gamma(\frac{3}{2}-i)\Gamma(j+1)\Gamma(k-j+ 1)}\frac{z_{h}^{2-2i-2j-2k}}{(2i+2j+2k-2)}\] \[\times\Bigg{[}\Bigg{\{}\bigg{(}\frac{2l+D}{a_{1}}\bigg{)}^{2i+2j+ 2k-2}-\bigg{(}\frac{D}{a_{1}}\bigg{)}^{2i+2j+2k-2}\Bigg{\}}+(2i+2j+2k-2)\frac{ \xi}{6z_{h}^{2}}\Bigg{\{}\bigg{(}\frac{2l+D}{a_{1}}\bigg{)}^{2i+2j+2k}\] \[-\bigg{(}\frac{D}{a_{1}}\bigg{)}^{2i+2j+2k}\Bigg{\}}+\frac{2i+2j+ 2k-2}{2z_{h}^{4}}\left(\frac{\xi^{2}}{6}\left(1-\frac{a_{3}}{2a_{2}}\right) \right)\Bigg{\{}\bigg{(}\frac{2l+D}{a_{1}}\bigg{)}^{2i+2j+2k+2}-\bigg{(}\frac{ D}{a_{1}}\bigg{)}^{2i+2j+2k+2}\Bigg{\}}\] \[+\mathcal{O}\bigg{(}\frac{1}{z_{h}}\bigg{)}^{6}\Bigg{]}\]
As stated in Section 7, we have the option to truncate the series for increased simplification by setting \(i=j=k=0\). Consequently, this procedure yields Equation (7.8) with ease.
|
2309.12735 | Optimal Dynamic Fees for Blockchain Resources | We develop a general and practical framework to address the problem of the
optimal design of dynamic fee mechanisms for multiple blockchain resources. Our
framework allows to compute policies that optimally trade-off between adjusting
resource prices to handle persistent demand shifts versus being robust to local
noise in the observed block demand. In the general case with more than one
resource, our optimal policies correctly handle cross-effects (complementarity
and substitutability) in resource demands. We also show how these cross-effects
can be used to inform resource design, i.e. combining resources into bundles
that have low demand-side cross-effects can yield simpler and more efficient
price-update rules. Our framework is also practical, we demonstrate how it can
be used to refine or inform the design of heuristic fee update rules such as
EIP-1559 or EIP-4844 with two case studies. We then estimate a uni-dimensional
version of our model using real market data from the Ethereum blockchain and
empirically compare the performance of our optimal policies to EIP-1559. | Davide Crapis, Ciamac C. Moallemi, Shouqiao Wang | 2023-09-22T09:34:33Z | http://arxiv.org/abs/2309.12735v1 | # Optimal Dynamic Fees for Blockchain Resources
###### Abstract
We develop a general and practical framework to address the problem of the optimal design of dynamic fee mechanisms for multiple blockchain resources. Our framework allows to compute policies that optimally trade-off between adjusting resource prices to handle persistent demand shifts versus being robust to local noise in the observed block demand. In the general case with more than one resource, our optimal policies correctly handle cross-effects (complementarity and substitutability) in resource demands. We also show how these cross-effects can be used to inform resource design, _i.e._ combining resources into bundles that have low demand-side cross-effects can yield simpler and more efficient price-update rules. Our framework is also practical, we demonstrate how it can be used to refine or inform the design of heuristic fee update rules such as EIP-1559 or EIP-4844 with two case studies. We then estimate a uni-dimensional version of our model using real market data from the Ethereum blockchain and empirically compare the performance of our optimal policies to EIP-1559.
Footnote 1: We note that some of our results do offer some insights on the resource design problem that we will briefly discuss.
## 1 Introduction
Users of public permissionless blockchains can modify the shared state of the network through _transactions_ that are executed by a set of nodes with limited computational resources. To allocate resources among competing transactions, most blockchains use _transaction fees_. Initial transaction fee mechanisms in the Bitcoin and Ethereum blockchains relied on users bidding for transaction inclusion as the main way of pricing congestion. Moreover, all computational resources were bundled into a unique virtual resource ("gas") with fixed relative prices hardcoded in the protocol. Current R&D efforts are focused on improving transaction fee markets along two directions: (1) setting a minimum _dynamic base fee_ (henceforth also called _price_) that is adjusted by the protocol as a function of user demand and (2) _unbundling resources_ so that different resources can be individually priced and their relative prices can also efficiently adjust with demand.
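As a concrete reference point for such dynamic base fees, the EIP-1559 update rule can be written in a few lines. The sketch below (Python, with illustrative numbers) encodes the standard rule in which the base fee moves by at most 1/8 (12.5%) per block toward the target utilization:

```python
def eip1559_update(base_fee: float, gas_used: float, target: float,
                   max_change_denominator: int = 8) -> float:
    """One-block base-fee update as specified by EIP-1559."""
    # Fractional deviation of observed demand from the sustainable target.
    deviation = (gas_used - target) / target
    # The base fee moves by at most 1/8 (12.5%) per block.
    return base_fee * (1 + deviation / max_change_denominator)

# Example: a completely full block (2x the target) raises the fee by 12.5%.
assert abs(eip1559_update(100.0, 30_000_000, 15_000_000) - 112.5) < 1e-9
```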
In this paper, we propose a new framework for choosing a resource pricing policy that makes significant progress across both directions. We consider the practical problem of a blockchain protocol that has to jointly update the prices of multiple resources at every block. We assume that the type of resources being metered and priced, as well as the block limits and sustainable targets for each resource, are pre-determined. These higher level decisions are the outcome of a design process that has interesting political, economic, and engineering considerations but are outside the current scope of our framework1.
Footnote 2: Layer 2s, depending on their architecture, can perhaps implement price policies that require more computation and are closer to the optimal ones.
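To make the multi-resource setting concrete, the sketch below shows one hypothetical linear-feedback form in which a gain matrix K lets each price respond to excess demand for every resource. This is only an illustration of how cross-effects can enter a price-update rule, not the optimal policy derived in this paper, and the matrix and numbers are invented:

```python
import numpy as np

def joint_price_update(prices, usage, targets, K):
    """prices, usage, targets: length-n arrays; K: n-by-n gain matrix."""
    excess = (usage - targets) / targets   # relative excess demand per resource
    return prices * np.exp(K @ excess)     # multiplicative update, stays positive

K = np.array([[0.125, 0.03],               # off-diagonal terms encode
              [0.02,  0.125]])             # demand-side cross-effects
prices = np.array([10.0, 4.0])
usage = np.array([1.8e6, 0.9e6])
targets = np.array([1.0e6, 1.0e6])
print(joint_price_update(prices, usage, targets, K))
```

With a diagonal K this reduces to independent per-resource rules, which is the sense in which bundling resources with low cross-effects yields simpler update rules.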
Our framework is both general and practical. Our main results characterize theoretically optimal policies in a realistic setting with multiple resources and time-varying demand. Our results can be used in two ways: (i) the policies can be _directly_ implemented as we demonstrate, or (ii) insights from our main results can be used to construct and refine heuristics that approximate optimal policies. The latter point is particularly important in the blockchain environment, where, especially at Layer 1, the price computation itself is significantly resource constrained2. We designed our framework with the following properties in mind: |
2309.04631 | Open and reusable deep learning for pathology with WSInfer and QuPath | The field of digital pathology has seen a proliferation of deep learning
models in recent years. Despite substantial progress, it remains rare for other
researchers and pathologists to be able to access models published in the
literature and apply them to their own images. This is due to difficulties in
both sharing and running models. To address these concerns, we introduce
WSInfer: a new, open-source software ecosystem designed to make deep learning
for pathology more streamlined and accessible. WSInfer comprises three main
elements: 1) a Python package and command line tool to efficiently apply
patch-based deep learning inference to whole slide images; 2) a QuPath
extension that provides an alternative inference engine through user-friendly
and interactive software, and 3) a model zoo, which enables pathology models
and metadata to be easily shared in a standardized form. Together, these
contributions aim to encourage wider reuse, exploration, and interrogation of
deep learning models for research purposes, by putting them into the hands of
pathologists and eliminating a need for coding experience when accessed through
QuPath. The WSInfer source code is hosted on GitHub and documentation is
available at https://wsinfer.readthedocs.io. | Jakub R. Kaczmarzyk, Alan O'Callaghan, Fiona Inglis, Tahsin Kurc, Rajarsi Gupta, Erich Bremer, Peter Bankhead, Joel H. Saltz | 2023-09-08T22:47:23Z | http://arxiv.org/abs/2309.04631v1 | # Open and reusable deep learning for pathology with WSInfer and QuPath
###### Abstract
The field of digital pathology has seen a proliferation of deep learning models in recent years. Despite substantial progress, it remains rare for other researchers and pathologists to be able to access models published in the literature and apply them to their own images. This is due to difficulties in both sharing and running models. To address these concerns, we introduce WSInfer: a new, open-source software ecosystem designed to make deep learning for pathology more streamlined and accessible. WSInfer comprises three main elements: 1) a Python package and command line tool to efficiently apply patch-based deep learning inference to whole slide images; 2) a QuPath extension that provides an alternative inference engine through user-friendly and interactive software, and 3) a model zoo, which enables pathology models and metadata to be easily shared in a standardized form. Together, these contributions aim to encourage wider reuse, exploration, and interrogation of deep learning models for research purposes, by putting them into the hands of pathologists and eliminating a need for coding experience when accessed through QuPath. The WSInfer source code is hosted on GitHub and documentation is available at [https://wsinfer.readthedocs.io](https://wsinfer.readthedocs.io).
## Introduction
Pathology is the bedrock of cancer diagnosis and traditionally relies on the examination of physical slides containing human tissue specimens using high-power microscopy. In recent years, the field has been moving towards digital pathology, whereby glass slides are scanned as high-resolution images, known as whole slide images (WSIs). Each individual WSI is typically very large, often over 40 gigabytes uncompressed. The widespread adoption of digital pathology therefore poses considerable challenges for data storage and visualization, but also unlocks the potential to apply computational methods for diagnostics and prognostics.
It is difficult to overstate the transformative effect deep learning has had on digital pathology research. Many studies have suggested the potential for deep learning-based AI methods to revolutionize different aspects of pathology practice, such as by reducing the pathologist's workload or by augmenting visual assessment with the ability to identify subtle, sub-visual features of clinical importance [1, 2, 3]. However, the multitude of algorithms published in the literature belies a dearth of implementations that are actually usable within the research community. In most cases, it is simply not possible for other research groups to validate the use of published methods
on their own images and cohorts. One reason for this is that required data is not available: a recent survey of 161 peer-reviewed studies using deep learning for pathology found that while 1 in 4 shared code, only 1 in 8 shared trained model weights [4, 5]. Furthermore, in the minority of cases where code and models are available, they are typically not in a form amenable to pathologists without coding experience to use and explore. The result is that reported findings cannot properly be reproduced and interrogated by the wider community, and the key domain experts -- pathologists -- often find themselves to be particularly excluded. Tackling problems such as model generalization and overcoming batch effects urgently requires an increase in openness, replicability, and reusability.
In the present paper, we respond to the call to "make deep learning algorithms in computational pathology more reproducible and reusable" [4] by introducing WSInfer (Whole Slide Inference): a new collection of software tools designed to streamline the sharing and reuse of trained deep learning models in digital pathology (Figure 1).
We have focused on the generic task of patch classification, which is widely used across a broad range of pathology applications. Because WSIs are so big, they are typically broken into manageable patches to make analysis practicable. Trained patch-based deep neural networks are typically applied across a WSI to classify patches into different tissue components (e.g. tumor, stroma, fat, necrosis) or make predictions directly related to patient outcome. While relatively coarse-grained in comparison to an analysis based on segmenting individual structures, patch classification algorithms have advantages both in terms of computational efficiency and being a closer match for a pathologist's visual assessment -- since this is often based upon evaluating patterns and textures, rather than discrete quantifiable entities. The output of patch classification is typically a spatial classification map, which can often be integrated across the WSI to create a single output representing a diagnosis, prediction, or'score' for that slide.
### Description
WSInfer comprises three main components: (1) the WSInfer inference runtime, (2) the QuPath WSInfer extension, and (3) the WSInfer Model Zoo. Together these provide tools designed to meet the needs of a diverse range of users, including pathologists, computational researchers, and data scientists.
### Inference Runtime
The WSInfer inference runtime deploys trained patch classification deep learning models on whole slide images and is available as a command line tool and Python package. The inference runtime requires three inputs from the user: a directory of whole slide images, a trained patch classification model, and a directory in which to write results. One may use a model from the Zoo or provide a locally trained model along with a configuration JSON file that includes essential information for model use (i.e., size and physical spacing of patches, processing steps, names of output classes). The configuration file is validated against a schema to aid users in creating this file. If using a model from the Zoo, the model and configuration JSON file are downloaded automatically from the Hugging Face Hub. Each whole slide image undergoes a series of processing steps that were motivated by [6]. First, patches are extracted from tissue regions at a uniform size and physical spacing, and each patch is processed as specified in the configuration JSON file (e.g., resized,
normalized). An important optimization in this stage is the lazy loading of patches directly from the whole slide image. Compared to saving patches as image files, lazy loading requires less storage and performs fewer reads and writes to the filesystem. WSInfer offers a choice of slide reading backends between OpenSlide [7] and TiffSlide [8]. Next, the patches are run through the forward pass of the deep learning model. Patches are loaded in parallel using the PyTorch DataLoader object. The runtime saves model outputs in comma-separated values (CSV) files with descriptive column names and GeoJSON files, a common format for spatial data. These output files can be used for downstream analyses or visualized using other software, including QuPath. The runtime can be installed with pip or as a Docker or Apptainer container.
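A minimal sketch of this strategy is shown below; it is not the actual WSInfer source, and tissue masking, spacing adjustment, and output writing are omitted, but it illustrates lazy region reads with OpenSlide feeding a TorchScript model through a PyTorch DataLoader (patch coordinates are assumed precomputed):

```python
import numpy as np
import openslide
import torch
from torch.utils.data import Dataset, DataLoader

class LazyPatchDataset(Dataset):
    """Reads patches on demand from the WSI instead of saving image files."""
    def __init__(self, slide_path, coords, patch_size=350):
        self.slide = openslide.OpenSlide(slide_path)
        self.coords = coords                # (x, y) pairs at level-0 resolution
        self.patch_size = patch_size

    def __len__(self):
        return len(self.coords)

    def __getitem__(self, i):
        x, y = self.coords[i]
        region = self.slide.read_region((x, y), 0, (self.patch_size,) * 2)
        rgb = np.asarray(region.convert("RGB"), dtype=np.float32) / 255.0
        return torch.from_numpy(rgb).permute(2, 0, 1)   # CHW tensor

def run_inference(slide_path, coords, model_path, batch_size=32):
    model = torch.jit.load(model_path).eval()           # TorchScript weights
    loader = DataLoader(LazyPatchDataset(slide_path, coords), batch_size=batch_size)
    outputs = []
    with torch.no_grad():
        for batch in loader:
            outputs.append(torch.softmax(model(batch), dim=1))
    return torch.cat(outputs)                           # per-patch class probabilities
```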
We measured the running time of WSInfer in two environments: 1) a RedHat Linux environment with an enterprise-grade GPU (Quadro RTX 8000) and 2) a Windows Subsystem for Linux environment (Windows 11 and Debian 12) with a consumer GPU (RTX 2080 Ti). In both cases, we used the breast tumor classification model "breast-tumor-resnet34.tcga-brca" from the WSInfer Model Zoo (described below) and WSIs from The Cancer Genome Atlas. The model uses 350x350-pixel patches at 0.25 micrometers per pixel. In the RedHat Linux environment, analysis of 1,061 slides took 6 hours and 46 minutes, or _23 seconds per WSI_. The distribution of the number of patches across WSIs was right-skewed (min=884, max=82,012, median=22,656, mean=23,492, std. dev.=13,922). In the second environment, we deployed the same model to 30 WSIs, a subset of the 1,061 used above. The running time was 14 minutes and 17 seconds total, or _29 seconds per WSI_, and the distribution of patch counts was skewed similarly to the first example (min=6,575, max=52,323, median=23,502, mean=26,667, std. dev.=13,466).
### QuPath Extension
QuPath is a popular open-source software platform for bioimage analysis [9]. QuPath's support for visualizing, annotating, and analyzing whole slide images has led to the software being widely adopted within the digital pathology community: to date, it has been downloaded over 400,000 times and cited in over 2,400 studies. We therefore developed the QuPath WSInfer Extension as an alternative inference engine to make patch-based classification widely accessible within a familiar, intuitive, and interactive user interface.
The QuPath WSInfer Extension introduces patch-based deep learning support to QuPath for the first time, building upon the software's existing features to provide an end-to-end analysis solution. Users are guided through the steps of selecting a deep learning model and one or more regions of interest for inference. The extension will then proceed to download the model if required, generate tile objects, and run inference (powered by Deep Java Library and PyTorch) at the appropriate resolution and patch size, appending the model output to the tiles. The user can then visualize the tile classifications and view interactive maps of predicted class probabilities. Furthermore, the tiles can be reused to run inference using additional models, making it possible to integrate information across models. Finally, because the user has access to all QuPath's other features (e.g. for tile merging, cell segmentation, data export), WSInfer can be integrated into sophisticated QuPath analysis pipelines, which are run either interactively or through automated scripts.
### Model Zoo
We have curated a collection of trained pathology models for broad, unencumbered reuse and have hosted this Zoo on Hugging Face Hub. Each model repository contains a model card [10],
pretrained weights in TorchScript format, and a configuration JSON file. The model card is a markdown file with human-readable metadata including the purpose of the model, its architecture, description of training data, how to apply it to new data, intended uses, and relevant citations. TorchScript is a serialization format that contains weights and a graph of the forward pass of the model, and it allows the use of the model without a Python dependency. The WSInfer QuPath extension, for instance, uses TorchScript models in a Java ecosystem. To add a model to the zoo, one creates a new model repository on Hugging Face Hub and uploads a model card, TorchScript file of the model, and configuration JSON file. One may optionally upload other files as well. Crucially, the user owns the model repository and can license and manage the contents independently. The registry of models in the zoo is maintained as a JSON file in a dedicated public repository on Hugging Face Hub. After publishing a model on Hugging Face Hub, one may submit a pull request to this repository adding the model location to the registry.
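For intuition, preparing a zoo submission might look like the sketch below; the configuration keys are illustrative stand-ins rather than the actual WSInfer schema:

```python
import json
import torch
import torchvision

# Serialize a patch classifier to TorchScript (weights plus forward graph),
# usable from Python or, via Deep Java Library, from the QuPath extension.
model = torchvision.models.resnet34(num_classes=2).eval()
example_input = torch.rand(1, 3, 350, 350)
torch.jit.trace(model, example_input).save("model.pt")

# Write the accompanying configuration file (key names are hypothetical).
config = {
    "patch_size_pixels": 350,
    "spacing_um_px": 0.25,
    "transform": ["resize", "normalize"],
    "class_names": ["non-tumor", "tumor"],
}
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```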
We have also developed a client utility to enhance interoperability of the zoo with other software. The client is available as a Python package or command-line tool and primarily lists and downloads models from the zoo. The client can also validate Model Zoo repositories and model configuration JSON files, functionalities we hope will ease the use of WSInfer.
## Discussion
WSInfer provides an open-source, cross-platform, and cross-language ecosystem to make deep learning methods uniquely accessible and intuitive for a wide range of digital pathology stakeholders. The core inference runtime is developed in Python, making it readily accessible for data scientists and deep learning specialists working in digital pathology -- for whom Python is typically the programming language of choice. However, by also providing a Java implementation through the widely adopted QuPath software, we aim to greatly broaden access.
The WSInfer Python runtime is preferable for batch processing large numbers of slides, for example in a large-scale study. The results can be exported in a QuPath-compatible format for visualization. Direct use of the QuPath extension, however, means that it is also possible for a QuPath user to interactively select regions of interest and obtain results for any slide immediately, without leaving the software. We anticipate that making the application of models more streamlined in this way will encourage more pathologists to try the methods on new data. This should, in turn, make it easier to identify strengths and weaknesses, and thereby accelerate the critical feedback loop necessary to develop robust and generalizable algorithms.
Several tools exist for deploying trained models on whole slide images, including TIA Toolbox [11], MONAI [12], SlideFlow [13], and PHARAOH [14]. WSInfer complements these by specifically targeting highly optimized, user-friendly support for patch-based WSI inference methods. We expect that these tools may be used together and are keen to promote interoperability. To this end, the WSInfer Model Zoo implements a minimal model configuration specification that accompanies each trained model, with the intention that it may be used by other software beyond the direct WSInfer ecosystem. We host several trained patch classification models in the zoo, including two models from TIA Toolbox, and intend to incorporate more models in future work.
It is important to note that WSInfer itself supports a variety of patch classification models, but is agnostic to a user's choice of model. It is intended for research use only, and we make no claims
regarding the suitability of the models for specific applications. Hence, users assume the responsibility of verifying the suitability of any model for their purposes. Indeed, it is our expectation that promising digital pathology methods will often be found not to perform well on new images; generalization across cohorts, scanners, and laboratories is a hard problem. However, we believe that an important first step to addressing this must be to enable existing models to be properly scrutinized by the research community, to identify what does and does not work. We hope that WSInfer may prove useful in this regard.
## Acknowledgements
The development of the WSInfer infrastructure by the Stony Brook authors was supported by the Stony Brook Provost ProFund 2022 award and through the generosity of Bob Beals and Betsy Barton. JRK was also supported by the National Institutes of Health grant T32GM008444 (NIGMS) and by the Medical Scientist Training Program at Stony Brook University. The QuPath WSInfer extension was developed by the Edinburgh authors and was made possible in part by grant number 2021-237595 from the Chan Zuckerberg Initiative DAF, an advised fund of Silicon Valley Community Foundation. This research was funded in part by the Wellcome Trust 223750/Z/21/Z. The results shown here are in whole or part based upon data generated by the TCGA Research Network: [https://www.cancer.gov/tcga](https://www.cancer.gov/tcga). For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript version arising from this submission.
2309.12053 | AceGPT, Localizing Large Language Models in Arabic | This paper is devoted to the development of a localized Large Language Model
(LLM) specifically for Arabic, a language imbued with unique cultural
characteristics inadequately addressed by current mainstream models.
Significant concerns emerge when addressing cultural sensitivity and local
values. To address this, the paper proposes a comprehensive solution that
includes further pre-training with Arabic texts, Supervised Fine-Tuning (SFT)
utilizing native Arabic instructions, and GPT-4 responses in Arabic, alongside
Reinforcement Learning with AI Feedback (RLAIF) employing a reward model
attuned to local culture and values. The goal is to cultivate culturally
cognizant and value-aligned Arabic LLMs capable of accommodating the diverse,
application-specific needs of Arabic-speaking communities.
Comprehensive evaluations reveal that the resulting model, dubbed `AceGPT',
sets the state-of-the-art standard for open Arabic LLMs across various
benchmarks. Codes, data, and models are in
https://github.com/FreedomIntelligence/AceGPT. | Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun, Hao Cheng, Dingjie Song, Zhihong Chen, Abdulmohsen Alharthi, Bang An, Juncai He, Ziche Liu, Zhiyi Zhang, Junying Chen, Jianquan Li, Benyou Wang, Lian Zhang, Ruoyu Sun, Xiang Wan, Haizhou Li, Jinchao Xu | 2023-09-21T13:20:13Z | http://arxiv.org/abs/2309.12053v5 | # AceGPT, Localizing Large Language Models
###### Abstract
This paper is devoted to the development of a localized Large Language Model (LLM) specifically for Arabic, a language imbued with unique cultural characteristics inadequately addressed by current mainstream models. Significant concerns emerge when addressing cultural sensitivity and local values. To address this, the paper proposes a comprehensive solution that includes further pre-training with Arabic texts, Supervised Fine-Tuning (SFT) utilizing native Arabic instructions, and GPT-4 responses in Arabic, alongside Reinforcement Learning with AI Feedback (RLAIF) employing a reward model attuned to local culture and values. The goal is to cultivate culturally cognizant and value-aligned Arabic LLMs capable of accommodating the diverse, application-specific needs of Arabic-speaking communities. Comprehensive evaluations reveal that the resulting model, dubbed 'AceGPT,' sets the state-of-the-art standard for open Arabic LLMs across various benchmarks, including the instruction-following benchmark (i.e., Arabic Vicuna-80 and Arabic AlpacaEval), knowledge benchmark (i.e., Arabic MMLU and EXAMs), and the newly introduced Arabic Cultural and Value Alignment benchmark. Notably, AceGPT outperforms Turbo in the popular Vicuna-80 benchmark when evaluated with GPT-4, despite the benchmark's limited scale.
## 1 Introduction
LLMs like Turbo and GPT-4 have been shaping the current landscape of natural language understanding and generation (Bubeck et al. (2023)). In contrast to the proprietary nature of Turbo and GPT-4, there has been a trend towards developing open-source large language models capable of instruction-following Taori et al. (2023) and fluent conversations (Chiang et al. (2023)), a phenomenon termed the 'Democratization of ChatGPT' (Conover et al. (2023); Touvron et al. (2023)). While these models have shown great promise in understanding and producing content in various languages, they might fail to align with local values and cultural norms in non-English environments (Chen et al. (2023a)); we call this the 'localization issue'. This issue can lead to significant problems in practical usage scenarios, especially in regions such as the Arabic world, where the culture and values diverge significantly from Western norms. We argue that it is not just desirable but necessary to localize large language models and tailor them to a specific cultural environment.
MethodologyThe core of our approach lies in localizing large language models to the Arabic language using a packaged solution (known as **AceGPT**). Firstly, through incremental pre-training on Arabic data (_localized pre-training_), we ensure that the model has a strong foundation in the Arabic language, including grammar, vocabulary, and cultural context. Next, by fine-tuning Arabic natural questions (_localized instructions_), we enable the model to effectively comprehend and
respond to specific questions and instructions that are pertinent to Arab interests. Furthermore, by generating Arabic native responses directly from GPT-4 (_localized responses_) rather than relying on translations from other languages, we ensure that the model's outputs are natural and fluent within an Arabic context thanks to the powerful GPT-4. Lastly, by employing a reward model based on _localized preference data_ that respects local culture and value, we further refine the model to align the responses with the cultural and value norms of Arabic-speaking communities.
EvaluationWe evaluate our models in various benchmarks: in the **instruction-following benchmark**, AceGPT achieves state-of-the-art (SOTA) among open-sourced Arabic LLMs in Arabic Vicuna-80 and Arabic AlpacaEval, obtaining 33% and 30% improvement over the state-of-the-art Arabic LLM (Sengupta et al. (2023)). 1 In the **NLU benchmark**, AceGPT achieves the second best on ALUE (Seelawi et al. (2021)) in terms of average scores for all tasks. In the **knowledge benchmark**, AceGPT achieves SOTA among open-sourced Arabic LLMs in Arabic knowledge including MMLU and EXAMs. In the **localization benchmark**, AceGPT achieves SOTA among open-source Arabic LLMs in our Arabic Cultural and Value Alignment (ACVA) Dataset.
ContributionsThe contributions of the paper are three-fold: **i)** we propose a first-tier Arabic LLM; as of its release date, it achieves SOTA performance among open Arabic LLMs on many benchmarks, including Arabic Vicuna-80, Arabic AlpacaEval, Arabic MMLU, EXAMs, and ACVA. **ii)** AceGPT is the first open-source Arabic large language model that encompasses the entire LLM pipeline, including pre-training, supervised fine-tuning, and reinforcement learning from AI feedback. We release AceGPT and the reward model. **iii)** We observe and measure the localization issue in large language models quantitatively and introduce a new benchmarking dataset, ACVA, for localization testing.
Footnote 1: Jais (Sengupta et al. (2023)) is a concurrent work released two weeks ahead of ours.
## 2 Recipe of AceGPT
### Motivation: the Localization Issue
Given the availability of many high-quality instruction datasets in widely spoken languages such as English, existing strategies for non-English LLMs often rely on instructions translated from English. Examples include Chinese-alpaca-GPT4 (Peng et al. (2023)), Phoenix (Chen et al. (2023b)), and Jais (Sengupta et al. (2023)). However, relying on translated data may lead to _localization issues_, potentially undermining the integrity and applicability of the models in native contexts.
To address these localization issues, we formulate 20 questions (see Table 15) to elicit responses containing named entities--both personal and locational--and summarize the prevalence of Arabic named entities in preliminary experiments. Quantitative results in Table 1 uncover a significant deficiency in localization: Jais-13B and Turbo incorporate only 12.00% and 26.67% Arabic names, respectively, out of all the names in their responses. A specific example is shown in Table 2: the Arabic open-source LLM Jais's output shows a conspicuous tilt towards English-centric materials, yielding terms predominantly associated with Christianity, which potentially neglects significant parallels within Arabic literary traditions. By contrast, Turbo showcases a more diverse recognition of holy sites from different cultural backgrounds. Details and more examples of case studies are in Appendix A.2.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Types of entity & Jais-13B & Turbo & GPT-4 & AceGPT (ours) \\ \hline Person & 12.00\% (3/25) & 26.67\% (12/45) & 39.29\% (22/56) & 50.00\% (31/62) \\ \hline Location & 18.75\% (3/16) & 27.08\% (13/48) & 21.62\% (16/74) & 28.95\% (11/38) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Proportion of Arabic Entities in Responses to 20 Sample Arabic Questions
### Methodology of AceGPT
To address localization, we propose a comprehensive solution comprising three strategies to ensure the model's effective understanding and generation of content in Arabic, with cultural awareness and value alignment: **(I) localized pre-training**, where we further pre-train the LLM with Arabic data; **(II) localized instructions**, where we adopt Arabic natural questions in the wild, with responses generated natively in Arabic by GPT-4 rather than translated from other languages; and **(III) localized feedback**, where we further tame the LLM with reinforcement learning using a reward model that respects local culture and values thanks to the localized preference data.
The resultant model is termed "AceGPT". The model pre-trained on LLaMA2 (Touvron et al. (2023)) is named "AceGPT-_base_". To equip it with conversational ability, we introduce "AceGPT-_chat_", utilizing supervised fine-tuning and reinforcement learning from AI feedback. The training procedure is divided into three stages: pre-training, supervised fine-tuning, and reinforcement learning from AI feedback, introduced in Sec 2.2.1, Sec 2.2.2, and Sec 2.2.3, respectively.
#### 2.2.1 Localized Pre-training
To adapt the English-focused LLaMA2 (Touvron et al. (2023)) model to Arabic, we further train it with a substantial corpus of Arabic text.
**Data** The dataset comprises Arabic and English sub-datasets. The Arabic sub-dataset is derived from the open-source ArabicText 2022 collection2 and is refined from sources such as Arabic Wikipedia, CC100, and OSCAR3. The English sub-dataset is obtained from Slim Pajama (Soboleva et al. (2023)) to avoid forgetting knowledge learned from English text. Given LLaMA2's excellent adaptability to the English dataset, we sample a subset of data from Slim Pajama randomly.
Footnote 2: [https://data.baai.ac.cn/details/ArabicText-2022](https://data.baai.ac.cn/details/ArabicText-2022) provided by BAAI
Due to limited computing resources, we only train the _LLaMA2-7B_ with 30B data (19.2B tokens in Arabic and 10.8B in English) and _LLaMA2-13B_ with 10B data (6B tokens in Arabic and 4B in English), prioritizing a larger quantity of Arabic than English data. We utilize the original vocabulary of LLaMA2, which contains all 28 Arabic letters; we did not expand the vocabulary, as existing work does, in order to save training costs.
Table 2: Example responses of Jais-13B-chat and Turbo to the Arabic question (What are the holy books, saints, and holy places?). (Response text not recoverable from the extraction.)
#### 2.2.2 Localized Supervised Fine-Tuning
To enable the model to follow Arabic user instructions and tackle realistic applications, we fine-tuned AceGPT with **localized instructions** and **localized responses**.
**Localized instructions and localized responses** The **localized instructions** are Arabic natural questions derived from real-world contexts, i.e. the online question-answering platform Quora3, which helps the model capture what Arabs care about in the wild. We can see from Table 3 that common entities in popular open-source datasets such as Alpaca are mostly Western (e.g. "John", "Apple", and "New York"), deviating from Arabs' actual interests (e.g. "Mohammed", "Muslim Brotherhood", and "Egypt"), which can be addressed by Quora. The main idea of **localized responses** is to leverage the fact that GPT-4 produces culture- and value-relevant responses in the context of the question's language, which means responses to questions in English differ from those in Arabic. See an example in Table 4: GPT-4 produces culture-dependent responses based on the queried language. Therefore, when incorporating open-source instruction-tuning data, we ask GPT-4 to re-generate responses in Arabic (rather than translate) to produce localized responses.
Footnote 3: [https://quora.com/](https://quora.com/)
**Data** In addition to Arabic Quora questions, we also incorporate some open-source instruction-tuning datasets to improve the overall performance. Specifically, we incorporate **Alpaca**Taori et al. (2023); Peng et al. (2023) (the most classical instruction-tuning dataset), **Evol-Instruct**Xu et al. (2023) (a complex instruction dataset), **Code-Alpaca**Chaudhary (2023) (a code-specific instruction dataset) 4, and **ShareGPT**5 (a popular user-GPT dialogue dataset). For these open-source datasets, except ShareGPT, an Arabic version is created by translating the English questions into Arabic and re-generating the responses using GPT-4. We keep the original ShareGPT data because re-generating its responses would destroy the original conversations.
\begin{table}
\begin{tabular}{l|l|l} \hline \hline
**Question in English**: & **Question in Arabic**: \\ Is it forbidden for a woman to confess her love to a man? & **GPT-4**: (In Islam, it is not necessarily considered haram for a woman to express her feelings towards a man, but it must be done decently and respectfully and in accordance with Islamic teachings. Islam encourages modesty and chastity, and therefore both men and women must display decency and good morals in their dealings. If a woman wants to express her feelings, she can do so directly or through an intermediary, such as her family...) \\ \hline \hline \end{tabular}
\end{table}
Table 4: GPT-4 answers culture-relevant questions differently across languages. Questions here are the same in semantics but differ in languages. The Arabic response is translated into English (right).
\begin{table}
\begin{tabular}{l|l|l|l} \hline \hline Dataset & Top-5 Person & Top-5 Organization & Top-5 GPE \\ \hline Alpaca & John, John Smith, Alice, Mary, Harry Potter & Apple, Amazon, Google, Microsoft, ABC & United States, India, New York, France, China \\ \hline Evol-Instruct & John, John Smith, Harry Potter, Alice, Bob & Apple, Amazon, quantum, Google, Microsoft & United States, New York, Los Angeles, San Francisco, Japan \\ \hline ShareGPT & Di Maria, Messi, Beckhaus, Eco, Clara & Tribunal, Google, Council, Bing, Supreme Court & United States, Argentina, France, New York, Hong Kong \\ \hline Quora & Prophet, Mohammed, Adam, Hijri, Ali & European Union, Google Muslim Brotherhood, Soviet Union, United Nations & Egypt, Turkey, Saudi Arabia, Morocco, America \\ \hline \hline \end{tabular}
\end{table}
Table 3: Top 5 names of individuals, organizations, and geopolitical entities (GPE) by frequency.
#### 2.2.3 Reinforcement Learning from AI feedback
To further align AceGPT with values and cultures, we utilize reinforcement learning from AI feedback with a reward model trained with **localized preference data**. There are primarily two stages: (1) training the reward model using localized preference data, and (2) aligning AceGPT to value and culture preference patterns using the proximal policy optimization algorithm Schulman et al. (2017).
**Localized preference data** To align AceGPT with Arabic culture and values, a reward model mimicking the preferences of native speakers is essential. To prepare the localized preference data for reward model training, we reuse 40K localized instructions, i.e. Quora questions, in the SFT stage and sample paired outputs from our fine-tuned 7B model. Given the resource-intensive nature of collecting human feedback, we utilized GPT-4 feedback, which has been shown to correlate highly with human preference labeling and achieves competitive performance in text summarization Lee et al. (2023). However, due to observed position bias in GPT-4 Zhang et al. (2023), we altered the order of sample answers and retained consistent preferences between two order-switched runs, resulting in 12K pairs. A small study with 800 examples verified the reliability of this preference data, revealing a correlation coefficient of 0.84 between GPT-4 and human evaluations. We also incorporate 12K open-source preference data for better generalization. See Appendix C for details.
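The order-switching consistency filter described above can be sketched as follows, where `judge` stands in for a hypothetical GPT-4 preference call that returns "A" or "B" for the answer shown first or second:

```python
def filter_consistent_pairs(questions, answer_pairs, judge):
    """Keep only pairs whose preference is invariant to answer order."""
    kept = []
    for q, (a, b) in zip(questions, answer_pairs):
        first = judge(q, a, b)    # preference when shown in order (a, b)
        second = judge(q, b, a)   # same pair, order swapped
        # Consistent iff the same underlying answer wins in both runs:
        # ("A", "B") means a won twice; ("B", "A") means b won twice.
        if (first, second) == ("A", "B"):
            kept.append((q, a, b))            # (question, chosen, rejected)
        elif (first, second) == ("B", "A"):
            kept.append((q, b, a))
    return kept
```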
**Reward model** The reward model operates within a 'binary' framework, determining preferences with an additional linear head post the final hidden states. The loss function is expressed as:
\[\mathcal{L}(\theta)=-\frac{1}{\|D\|}\mathbb{E}_{(x,y_{c},y_{r})\sim D}\left[ \log(\sigma(r_{\theta}(x,y_{c})-r_{\theta}(x,y_{r})))\right]. \tag{1}\]
Here, \(x\) is the input, \(y_{c}\) is the chosen model output, \(y_{r}\) is the rejected model output of the pair, and \(r_{\theta}\) is the reward model with the parameter \(\theta\).
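In code, Equation (1) is a negative log-sigmoid of the reward margin; a minimal PyTorch sketch, assuming scalar rewards per response, is:

```python
import torch
import torch.nn.functional as F

def reward_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise loss of Eq. (1); logsigmoid is the stable form of log(sigmoid(.))."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Example with a batch of three preference pairs.
loss = reward_loss(torch.tensor([1.2, 0.3, 2.0]), torch.tensor([0.7, 0.9, 1.1]))
```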
**Proximal policy optimization** We crawl another 30K Quora questions, distinct from the Quora-40K set, for PPO training data. Proximal Policy Optimization (PPO) is an on-policy policy gradient method for reinforcement learning Schulman et al. (2017). The policy \(\pi_{\theta}(a|s)\) represents the probability distribution over the next token \(a\) given a sequence of previous tokens \(s\), where \(\theta\) are the model parameters. The primary objective is to maximize the preference signal from the reward model that corresponds to the desired output behaviour. The objective is
\[\mathcal{L}(\theta)=\mathbb{E}_{t}\left[\min\left(\frac{\pi_{\theta}(a_{t}|s_{ t})}{\pi_{\theta_{\text{old}}}(a_{t}|s_{t})}A_{t},\text{clip}\left(\frac{\pi_{ \theta}(a_{t}|s_{t})}{\pi_{\theta_{\text{old}}}(a_{t}|s_{t})},1-\epsilon,1+ \epsilon\right)A_{t}\right)\right]. \tag{2}\]
Here, \(\theta\) is the current model parameter while \(\theta_{\text{old}}\) is the model parameter used for experience sampling. \(A_{t}\) is the advantage function that measures the relative value of generating \(a_{t}\) as the next token conditioned on the sequence \(s_{1}\cdots s_{t}\), and \(\epsilon\) is a hyperparameter for stability.
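A direct PyTorch transcription of the clipped surrogate in Equation (2) is sketched below, with per-token log-probabilities and advantage estimates assumed to be precomputed:

```python
import torch

def ppo_clip_objective(logp, logp_old, adv, eps=0.2):
    """Clipped surrogate of Eq. (2); this quantity is maximized."""
    ratio = torch.exp(logp - logp_old)            # pi_theta / pi_theta_old
    unclipped = ratio * adv
    clipped = torch.clamp(ratio, 1 - eps, 1 + eps) * adv
    return torch.min(unclipped, clipped).mean()
```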
## 3 Evaluation
### Evaluation protocol
Evaluation of language models is multifaceted and typically involves multiple metrics and benchmarks to assess various aspects of model performance. We use both automated and manual evaluation methods, assessing dimensions including instruction-following ability, knowledge, Natural
\begin{table}
\begin{tabular}{l|l c c} \hline \hline Data & \multicolumn{2}{c}{Source} & \multirow{2}{*}{Numbers} \\ & questions & & \\ \hline
**Quora-Arabic-40K** & collected from Quora & GPT-4 & 43,050 \\ \hline Alpaca Peng et al. (2023) & self-instruct Taori et al. (2023) & & 49,969 \\ Alpaca-Chinese Peng et al. (2023) & Turbo translated Peng et al. (2023) & GPT-4 & 49,969 \\
**Alpaca-Arabic** & GPT-4 translated from Taori et al. (2023) & & 49,969 \\ \hline
**Code-Alpaca-Arabic** & GPT-4 translated from Chaudhary (2023) & GPT-4 & 20,022 \\ \hline
**Evol-Instruct-Arabic** & GPT-4 translated from Xu et al. (2023) & GPT-4 & 69,997 \\ \hline ShareGPT & humans & ChatGPT & 80,179 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Instruction Tuning Datasets; Datasets Constructed in This Work Are Highlighted in **bold**.
Language Understanding (NLU), and Arabic Cultural and Value Alignment (ACVA), see Table 6. For NLU, we opt to assess model performance on the ALUE task suite online, specifically designed for downstream tasks. Details can be found in Appendix F.2.
Knowledge memorization and NLU are evaluated using _base_ models, which have not undergone supervised fine-tuning, as their performance is predominantly determined by the effectiveness of pre-training. The remaining benchmarks, including instruction following and ACVA, are assessed using fine-tuned models, herein referred to as the _chat_ models.
**Instruction-following** We specifically evaluate the instruction-following capabilities of models tuned for instructions using Arabic Vicuna-80 and Arabic AlpacaEval. In accordance with Chiang et al. (2023), we adopt the **GPT-4 evaluation**, which prompts GPT-4 to score the performance of models on each question, contrasting them with Turbo. The details can be found in Appendix E.2. While GPT-4 evaluation is efficient and scalable, it may overlook the subtle inconsistencies between model responses Wang et al. (2023) and human interactions in real-world scenarios. Therefore, we further conduct **human evaluation** on Arabic Vicuna-80 and Arabic AlpacaEval to evaluate the performance of AccGPT from the perspective of human rather than GPT-4 preferences. To ensure cultural relevance in manual evaluations, we engaged a diverse group of educated, native Arabic speakers. Each model's response was assessed independently by three assessors. We present more details in Table 18 and the designed UI for evaluation in Figure 2.
**Vicuna-80**Chiang et al. (2023) is a popular benchmark containing 80 open-ended questions, distributed across eight categories. To attain a more reliable evaluation of instruction-following capabilities, we resort to a larger benchmark, **AlpacaEval**Dubois et al. (2023). This benchmark is structured to replicate the actual distribution of user instructions by consolidating several public datasets. It is reported that model rankings on this benchmark have a high correlation with those on the live user instructions. **Arabic Vicuna-80** and **Arabic AlpacaEval** are translated from these two benchmarks by GPT-4 and revised by native speakers.
**Knowledge** We have two knowledge benchmarks, including Arabic MMLU and EXAMs. **MMLU**Hendrycks et al. (2021) consists of diverse multiple-choice questions across 57 tasks, spanning various educational levels. We employed Turbo to translate this dataset from English to Arabic. Additionally, Arabic questions from the **EXAMs**Hardalov et al. (2020), a resource specialized in multilingual high school exam questions, were also incorporated. Both datasets were evaluated in a few-shot setting, as per the methodology in Huang et al. (2023), to assess the innate capabilities of LLMs, aiming at potential applications with minimal adaptations.
**Arabic Cultural and Value Alignment (ACVA)** ACVA is a Yes-No question dataset, comprising over 8,000 questions, generated by Turbo from 50 designed Arabic topics to assess model alignment with Arabic values and cultures (see Appendix B for data construction details). A subset, revised by Arabic speakers for question quality and answer accuracy, forms the 2,486-question 'Clean set'. The correlation between 'All set' and 'Clean set' evaluations is given in Sec 3.2. Given our focus on localized solutions, we evaluate our final models (post-SFT and RLAIF) on this benchmark in a zero-shot setting; performance is reported as the F1 score.
**Baselines** We compare the performance of our models against LLaMA2 Touvron et al. (2023), Bloomz Muennighoff et al. (2022), Phoenix Chen et al. (2023a;b), and Jais Sengupta et al. (2023). LLaMA2-chat models are excluded as they consistently respond in English when queried in Arabic. See details in Sec. E.1.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline Benchmark & Evaluation Aspects & Type of Evaluation & Dataset Size & Types of examples \\ \hline Arabic Vicuna-80 & Instruction following & Human \& Automated & 80 \\ Arabic AlpacaEval & Instruction following & Human \& Automated & 805 \\ \hline Arabic MMLU & & & & \\ EXAMs & Knowledge Ability & Automated & 14k & Multiple-choice Questions \\ \hline ALUE(see Appendix F.2) & Language Understanding & Automated & 18k & Classification \& Regression \\ \hline ACVA-all & Arabic Cultural and & Automated & 9k & Yes/no binary Questions \\ ACVA-clean & Value Alignment & & 2.4k & \\ \hline \hline \end{tabular}
\end{table}
Table 6: Evaluation Benchmarks.
### Experiment results
**Instruction-Following benchmark** We present each model's performance ratio against Turbo, scored by GPT-4, in Table 7. The results show that AceGPT models are superior on both Arabic Vicuna-80 and Arabic AlpacaEval. Notably, AceGPT-7B-chat surpasses Jais-13B by about 20 percentage points despite its smaller model size. Moreover, AceGPT-13B-chat attains a 100.88% performance ratio relative to Turbo on Arabic Vicuna-80.
**Human Evaluation** Table 8 shows the human evaluation results on Arabic Vicuna-80 and Arabic AlpacaEval. We calculated the percentages of wins, ties, and losses of the results from three Arabic speakers. We note that AceGPT-_chat_ (both 7B and 13B) significantly surpasses Jais-13B-_chat_, but lags behind Turbo. Moreover, the AceGPT-13B-_chat_ is significantly better than the AceGPT-7B-_chat_, indicating the importance of model size.
**Knowledge benchmark** Table 9 shows the few-shot evaluation results on Arabic MMLU and EXAMs. We can see that AceGPT-13B-base attains the best performance (37.26% in Arabic MMLU and 36.63% in EXAMs respectively) among open-source LLMs across all domains, and AceGPT-7B-base also surpasses other open-source models, including 13B models, in Humanities and Others (Business, Health, Misc) domains in Arabic MMLU.
**Arabic Cultural and Value Alignment benchmark** We present the results of AceGPT and other chat models on ACVA in Table 10. The Pearson correlation of accuracy on 'All set' and 'Clean set' is 0.9863, indicating a high reliability of ACVA all-set evaluation. Notably, our AceGPT-_chat_ models (both 7B and 13B) consistently outperform other open-source LLMs, and AceGPT-13B-chat only trails Turbo by a marginal of -0.87%.
## 4 Analysis
### On Pre-training
**Localization of Pre-training** AceGPT-base uses LLaMA2 as the backbone, the only difference being that it is further pre-trained with local Arabic texts. We compare AceGPT-base to LLaMA2 on ACVA in the few-shot setting to demonstrate the benefits of localized pre-training on Arabic culture and
\begin{table}
\begin{tabular}{l l} \hline \hline Comparison & Arabic Vicuna-80 & Arabic AlpacaEval \\ \hline Phoenix Chen et al. (2023a) & 71.92\% \(\pm\) 0.2\% & 65.62\% \(\pm\) 0.3\% \\ Phoenix-multiple-langs Chen et al. (2023b) & 71.67\% \(\pm\) 0.7\% & 65.36\% \(\pm\) 0.1\% \\ Jais-13B-_chat_Sengupta et al. (2023) & 75.40\% \(\pm\) 1.6\% & 74.95\% \(\pm\) 0.2\% \\ \hline
**AceGPT-7B-_chat_** & 94.82\% \(\pm\) 0.2\% & 93.81\% \(\pm\) 0.1\% \\
**AceGPT-13B-_chat_** & **100.88**\% \(\pm\) 0.4\% & **97.95**\% \(\pm\) 0.1\% \\ \hline \hline \end{tabular}
\end{table}
Table 7: Average performance ratio of Turbo and the standard variation over three runs in **Arabic Vicuna-80** and **Arabic AlpacaEval**. The best performance is in **bold** and the second is underlined.
\begin{table}
\begin{tabular}{l|l l l|l|l} \hline \hline Dataset & Comparison & win & tie & lose & win or tie \\ \hline \multirow{4}{*}{Arabic Vicuna-80} & **AceGPT-7B-chat** vs. Jais-13B-chat** & 82.5\% & 6.7\% & 10.8\% & 89.2\% \\ & AceGPT-7B-_chat_ vs. **Turbo** & 27.5\% & 32.9\% & 39.6\% & 60.4\% \\ \cline{2-6} & **AceGPT-13B-_chat_** vs. **Turbo** & 82.9\% & 6.7\% & 10.4\% & 89.6\% \\ & AceGPT-13B-_chat_ vs. **Turbo** & 16.3\% & 57.1\% & 26.6\% & 73.4\% \\ \hline \multirow{4}{*}{Arabic AlpacaEval} & **AceGPT-7B-chat** vs. Jais-13B-_chat_ & 53.0\% & 36.5\% & 10.5\% & 89.5\% \\ & AceGPT-7B-_chat_ vs. **Turbo** & 20.2\% & 46.5\% & 33.3\% & 66.7\% \\ \cline{1-1} \cline{2-6} & **AceGPT-13B-_chat_** vs. **Turbo** & 49.4\% & 42.8\% & 7.8\% & 92.2\% \\ \cline{1-1} & AceGPT-13B-_chat_** vs. **Turbo** & 25.2\% & 44.5\% & 30.3\% & 69.7\% \\ \hline \hline \end{tabular}
\end{table}
Table 8: Human evaluations on Vicuna-80 and AlpacaEval. The winners are in **bold**.
\begin{table}
\begin{tabular}{l l} \hline \hline Size & Model & F1 on ACVA \\ \hline \multirow{2}{*}{7B} & LLaMA2 & 51.44\% \\ & AceGPT-base & 68.28\% \\ \hline \multirow{2}{*}{13B} & LLaMA2 & 65.67\% \\ & AceGPT-base & **76.23**\% \\ \hline \hline \end{tabular}
\end{table}
Table 11: Ablation of Pre-training.
values. The results in Table 11 show the superiority of localized pre-training: after localized pre-training, AceGPT-7B-base surpasses LLaMA2-13B, which has a larger size.
### On Supervised Fine-tuning
Here we mainly evaluate the effectiveness of open-source instructions on overall performance and of localized instructions on localization. We sampled 40k examples from each dataset. The results are shown in Table 12. It can be observed that Evol-Instruct contributes most to overall performance on the instruction-following benchmark, while Quora is most beneficial for Arabic culture and values. Note that incorporating ShareGPT substantially harms performance on ACVA; this may be because ShareGPT is largely aligned with Western culture and values.
### On RLAIF
#### 4.3.1 Reward model
To evaluate the sensitivity of the reward model to the overall performance, we measure the correlations between reward scoring and GPT-4 scoring (described in section 3.1) on Arabic Vicuna-80. Following the pairwise comparison setting in GPT-4 scoring, we also calculate the performance ratio for reward scores normalized to [0, 10] (matching the GPT-4 scoring scale) on model-chatbot pairs. The Pearson and Spearman correlations are 0.57 and 0.61 respectively, and the results are shown in Figure 1(a). We conclude that the reward model shows a positive correlation with GPT-4 evaluation on Arabic Vicuna, which indicates it can offer an effective signal on overall performance.
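The two correlation figures can be computed from paired performance ratios with SciPy; the numbers below are placeholders, not our data:

```python
from scipy.stats import pearsonr, spearmanr

reward_ratio = [0.91, 0.85, 1.02, 0.78, 0.95]   # hypothetical paired scores
gpt4_ratio = [0.88, 0.90, 0.99, 0.80, 0.97]
r, _ = pearsonr(reward_ratio, gpt4_ratio)       # linear correlation
rho, _ = spearmanr(reward_ratio, gpt4_ratio)    # rank correlation
```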
\begin{table}
\begin{tabular}{l c c} \hline \hline Model & All set & Clean set \\ \hline Phoenix Chen et al. (2023a) & 41.86\% & 43.80\% \\ Phoenix–multiple-langs Chen et al. (2023b) & 59.78\% & 59.15\% \\ Jais-13B-_chat_ & 61.44\% & 66.83\% \\ \hline
**AceGPT-7B-_chat_** & 69.60\% & 70.08\% \\
**AceGPT-13B-_chat_** & 74.70\% & 76.48\% \\ \hline Turbo & **75.57\%** & **79.03\%** \\ \hline \hline \end{tabular}
\end{table}
Table 10: Average F1 on **ACVA** in the zero-shot setting. The best performance is in **bold** and the second is underlined.
\begin{table}
\begin{tabular}{l|c c c c|c|c} \hline \hline & \multicolumn{6}{c}{Arabic MMLU} \\ Model & Average & STEM & Humanities & Social Sciences & Others & EXAMs \\ \hline Bloomz & 30.95 & 32.32 & 26.71 & 35.85 & 28.95 & 33.89 \\ LLaMA2-7B & 28.81 & 28.48 & 26.68 & 29.88 & 30.18 & 23.48 \\ LLaMA2-13B & 31.25 & 31.06 & 27.11 & 35.5 & 31.35 & 25.45 \\ Jais-13B-_base_ & 30.01 & 27.85 & 25.42 & 39.7 & 27.06 & 35.67 \\ \hline AceGPT-7B-_base_ & 30.36 & 26.63 & 28.17 & 35.15 & 31.5 & 31.96 \\ AceGPT-13B-_base_ & 37.26 & 35.16 & 30.3 & 47.34 & 36.25 & 36.63 \\ \hline Turbo & **46.07** & **44.17** & **35.33** & **61.26** & **43.52** & **45.63** \\ \hline \hline \end{tabular}
\end{table}
Table 9: Accuracy on **Arabic MMLU** and **EXAMs**. The best is **bold** and the second is underlined.
\begin{table}
\begin{tabular}{l c c} \hline \hline Comparison & Arabic Vicuna-80 & Arabic AlpacaEval & ACVA \\ \hline Alpaca-Arabic & 87.15\% \(\pm\) 0.5\% & 82.97\% \(\pm\) 0.4\% & 50.52\% \\ + ShareGPT & 88.01\% \(\pm\) 0.03\% & 84.89\% \(\pm\) 0.3\% & 38.64\% \\ + Evol-Instruct & **90.39\%**\(\pm\) 0.4\% & **86.87**\% \(\pm\) 0.1\% & 61.72\% \\ + Quora & 89.74\% \(\pm\) 0.8\% & 85.71\% \(\pm\) 0.03\% & **65.53**\% \\ \hline \hline \end{tabular}
\end{table}
Table 12: Effects of different datasets on Arabic Vicuna-80, Arabic AlpacaEval and ACVA.
**Localization of Reward model** We then evaluate the Arabic cultural sensitivity of the reward model on the ACVA benchmark. Prompting with "Give me a fact about Arab culture, values, and laws" in Arabic, we calculate the reward scores of prompt-statement pairs for all statements from ACVA. The distribution of reward scores for yes/no statements is shown in Figure 1(b). Reward scores for "yes" statements are higher overall than those for "no" statements, which suggests that our reward model is culturally sensitive.
#### 4.3.2 Ablation
**RLAIF improves instruction-following.** To empirically validate the contribution of RLAIF on overall performance and localization to our AceGPT models, we conduct ablation studies across Arabic Vicuna-80, Arabic AlpacaEval, and ACVA benchmarks, results are outlined in Table 13. _Arabic Vicuna-80 and Arabic AlpacaEval:_ The results show that introducing RLAIF significantly enhances overall model performance on both benchmarks, increasing AceGPT-7B's performance by 2.81% and 2.46%, and AceGPT-13B's by 5.74% and 4.90% on Arabic Vicuna-80 and Arabic AlpacaEval, respectively. By examining the "win or tie" metric, the 7B model shows an enhancement of 3.7% through RLAIF, while the 13B model shows a significant boost of 16.2%. This narrows the gap with Turbo. These enhancements across datasets underscore RLAIF's efficacy.
**RLAIF improves localization** RLAIF yields performance gains of 27.12% and 0.68% on ACVA for AceGPT-7B and AceGPT-13B respectively, despite not being explicitly trained for it. This suggests that RLAIF enhances alignment with Arabic culture and values. Notably, the improvement from RLAIF on the 7B model is much larger than on the 13B model, partially because the 7B model is weaker and therefore has more room for improvement, while the 13B model may be closer to saturation. Another reason could be that the preference-data responses used in RLAIF are generated from AceGPT-7B, so the learned reward model fits AceGPT-7B better than AceGPT-13B.
## 5 Conclusion
AceGPT addresses the "localization issue" in large language models by specifically catering to the distinct linguistic and cultural contexts of Arabic environments, leveraging incremental pre-training, instruction tuning, and reinforcement learning. It excels in multiple domains, including instruction
Figure 1: (a) Correlations between the reward model and GPT-4 and (b) reward distribution.
\begin{table}
\begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c}{Automatic evaluation} & \multicolumn{3}{c}{Human Evaluation (vs. Turbo)} \\ \hline Comparison & Arabic Vicuna-80 & Arabic AlpacaEval & ACVA & win & tie & loss & win or tie \\ \hline AceGPT-7B-_chat_ (w/o RLAIF) & 92.01\(\pm\) 1.3\% & 91.35\% \(\pm\) 0.08\% & 42.48\% & 27.5\% & 29.2\% & 43.3\% & 56.7\% \\ AceGPT-7B-_chat_ & **94.82**\% \(\pm\) 0.2\% & **93.81**\(\pm\) 0.1\% & **69.60**\% & 27.5\% & 32.9\% & 39.6\% & 60.4\% \\ \hline AceGPT-13B-_chat_ (w/o RLAIF) & 95.14\% \(\pm\) 1.0\% & 93.05\% \(\pm\) 0.2\% & 74.18\% & 19.6\% & 37.5\% & 42.9\% & 57.1\% \\ AceGPT-13B-_chat_ & **100.88**\% \(\pm\) 0.4\% & **97.95**\% \(\pm\) 0.1\% & **74.70\%** & 16.3\% & 57.1\% & 26.7\% & 73.3\% \\ \hline \hline \end{tabular}
\end{table}
Table 13: Experiments with/without RLAIF on Arabic Vicuna-80, Arabic AlpacaEval and ACVA.
following and natural language understanding, setting a new standard among Arabic large language models. We contribute high-quality datasets and evaluation resources, highlighting the need for localizing large language models and introducing AceGPT as a pioneering solution for Arabic linguistic and cultural adaptation.
## Limitation
In our AceGPT model, we identified several notable limitations. Firstly, its vocabulary, inherited from LLaMA2, covers only the individual Arabic letters and was not further expanded, which reduces the efficiency of Arabic text encoding. Secondly, during the pre-training phase, due to constrained machine resources, the number of tokens allocated to the model was relatively limited, suggesting that the model's potential in handling Arabic content has not been fully realized. Regarding evaluation, we do not conduct reasoning, misinformation, or bias testing. More critically, there are concerns regarding the model's safety alignment, rendering it unsuitable for online deployment at this stage and restricting it to academic research contexts. Moreover, even though manual verification was conducted on the cultural dataset, there is room for improvement in both the quality and quantity of the questions. These factors could potentially impact the model's practical application and adoption.
## Acknowledgement
A concurrent work Jais Sengupta et al. (2023) was released a few weeks ahead of ours. We thank their efforts to open-source such a great model that is trained from scratch.
We thank Prof. Zhi-Quan Luo and Dr. Ping Lee for their support. We extend our sincere appreciation to the dedicated KAUST graduate students whose contributions were integral to the success of our Arabic evaluations, including Lamees Alzahrani, Abdullah Amr Bawazir, Nouf Khalil Alenizi, Shatha Abdullah Alowdah, Rudaynah Maimani, Feras Khalid Alwutayd, Abdulrahman, Arwa Fallatah, Noura Alhijri, Reem Alquwayzani, and Majid Almarhoumi. We thank them for their invaluable support in this research.
## Author Contributions
Author contributions are shown as follows:
|
2307.16799 | Toward Privacy in Quantum Program Execution On Untrusted Quantum Cloud
Computing Machines for Business-sensitive Quantum Needs | Quantum computing is an emerging paradigm that has shown great promise in
accelerating large-scale scientific, optimization, and machine-learning
workloads. With most quantum computing solutions being offered over the cloud,
it has become imperative to protect confidential and proprietary quantum code
from being accessed by untrusted and/or adversarial agents. In response to this
challenge, we propose SPYCE, which is the first known solution to obfuscate
quantum code and output to prevent the leaking of any confidential information
over the cloud. SPYCE implements a lightweight, scalable, and effective
solution based on the unique principles of quantum computing to achieve this
task. | Tirthak Patel, Daniel Silver, Aditya Ranjan, Harshitta Gandhi, William Cutler, Devesh Tiwari | 2023-07-31T16:07:37Z | http://arxiv.org/abs/2307.16799v1 | Toward Privacy in Quantum Program Execution On Untrusted Quantum Cloud Computing Machines for Business-sensitive Quantum Needs
###### Abstract.
Quantum computing is an emerging paradigm that has shown great promise in accelerating large-scale scientific, optimization, and machine-learning workloads. With most quantum computing solutions being offered over the cloud, it has become imperative to protect confidential and proprietary quantum code from being accessed by untrusted and/or adversarial agents. In response to this challenge, we propose SPYCE, which is the first known solution to obfuscate quantum code and output to prevent the leaking of any confidential information over the cloud. SPYCE implements a lightweight, scalable, and effective solution based on the unique principles of quantum computing to achieve this task.
## 1 Introduction to SPYCE
Quantum computing is an emerging technology that has the potential to accelerate and make possible the execution of many large-scale scientific, optimization, and machine-learning tasks [(7; 27)]. As quantum computing technology advances, multiple cloud-based quantum computing platforms are being used to develop and execute classically-infeasible mission-critical tasks by government agencies and industry partners [(14; 15; 29)]. In many cases, the solutions to these tasks are business sensitive and should be protected (e.g., the solution to a classically-infeasible problem relevant to a defense program). Currently, due to the nascent stage of quantum cloud computing, the cloud computing providers have full access to the end users' mission-sensitive programs and the output of such programs [(26; 30)].
Recognizing the importance of security and privacy for quantum program execution, there has been some related work on it, although not solving the same problem as this work (protecting the output of quantum programs). In particular, encrypting quantum information over networks [(39; 4; 36)] and securing quantum programs from third-party quantum compilers [(31; 34)] have received attention.
Unfortunately, all of these works assume that the cloud hardware provider is an uncompromised entity and does not have intentional or unintentional snoopers on the quantum cloud platform that can analyze the program outputs. Even if the code is protected from the compiler and over the network [(39; 4; 31; 34; 36)], currently, it has to be decrypted before it can be run on the hardware so that the correct output can be obtained, which is open to snooping from the cloud provider. Even if the cloud provider is uncompromised, organizations may not want to disclose their tasks, proprietary code, and program solutions to the cloud provider. Protecting this information from the cloud provider is a non-trivial challenge as _the user essentially wants the hardware provider to run the "wrong" code and observe the "wrong" output, but be able to recover the "correct" quantum output from the "wrong" output on the user's end. We propose SPYCE to achieve just this_.
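To make the recovery idea concrete, consider the simplest conceivable form of output obfuscation: flipping a secret subset of output bits so that the provider only ever logs a scrambled bitstring, which the user un-scrambles classically. The Python sketch below is purely illustrative of this "wrong output, correct recovery" principle under our own simplified assumptions; it is not SPYCE's actual construction.

```python
import numpy as np

rng = np.random.default_rng(42)
n_qubits = 4

# Secret mask known only to the user: conceptually, appending an X gate to
# qubit i right before measurement flips bit i of every observed bitstring.
mask = rng.integers(0, 2, size=n_qubits)

def provider_view(true_bits, mask):
    """What the untrusted cloud observes: the bitwise-flipped ('wrong') output."""
    return true_bits ^ mask

def user_recovery(observed_bits, mask):
    """XOR with the same secret mask recovers the 'correct' output client-side."""
    return observed_bits ^ mask

true_output = np.array([1, 0, 1, 1])       # hypothetical correct program output
leaked = provider_view(true_output, mask)  # what a snooper could log
recovered = user_recovery(leaked, mask)
assert np.array_equal(recovered, true_output)
print("provider sees:", leaked, "| user recovers:", recovered)
```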
In the near future, it is anticipated that only a few entities in the world may have access to powerful quantum computers, and these quantum computers will be used to solve previously-unsolved large-scale optimization problems, possibly without an explicit trust model between the cloud service provider and the customer. Therefore, the solutions to such large-scale optimization problems will be considered sensitive and will need to be protected. SPYCE takes the first few steps toward preparing us for that future - by developing a novel method that intelligently obfuscates the program output and quantum circuit structure of the original quantum program provided by the user/customer.
Before we introduce the contributions of SPYCE, we first provide a primer on relevant quantum computing concepts.
**Qubits and Quantum States.** The fundamental unit of quantum computing is the _qubit_, which is capable of representing a _superposition_ (linear combination) of two orthogonal basis states. This is represented as \(|\Psi\rangle=\alpha\,|0\rangle+\beta\,|1\rangle\), where \(\alpha\) and \(\beta\) are the complex amplitudes of the constituent basis states. Upon measurement, this superposition collapses such that the probability of measuring the state \(|0\rangle\) is \(\|\alpha\|^{2}\) and \(\|\beta\|^{2}\) for measuring the \(|1\rangle\) state, where \(\|\alpha\|^{2}+\|\beta\|^{2}=1\).
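The Born-rule bookkeeping above is easy to verify numerically. The following minimal Python sketch (our illustration, not from the paper) stores a single-qubit state as a complex amplitude vector and simulates repeated measurements:

```python
import numpy as np

# |psi> = alpha|0> + beta|1>, stored as a complex amplitude vector.
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
state = np.array([alpha, beta])

# Born rule: measurement probabilities are the squared moduli of the amplitudes.
probs = np.abs(state) ** 2
assert np.isclose(probs.sum(), 1.0)  # normalization: ||alpha||^2 + ||beta||^2 = 1

# Simulate repeated measurements: each shot collapses the state to |0> or |1>.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10_000, p=probs)
print(f"P(|0>) ~ {np.mean(outcomes == 0):.3f}, P(|1>) ~ {np.mean(outcomes == 1):.3f}")
```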
Figure 1. Example circuit representation of a quantum algorithm. The horizontal lines represent qubits with gates being applied to them in order from left to right. |
2309.16235 | Language models in molecular discovery | The success of language models, especially transformer-based architectures,
has trickled into other domains giving rise to "scientific language models"
that operate on small molecules, proteins or polymers. In chemistry, language
models contribute to accelerating the molecule discovery cycle as evidenced by
promising recent findings in early-stage drug discovery. Here, we review the
role of language models in molecular discovery, underlining their strength in
de novo drug design, property prediction and reaction chemistry. We highlight
valuable open-source software assets thus lowering the entry barrier to the
field of scientific language modeling. Last, we sketch a vision for future
molecular design that combines a chatbot interface with access to computational
chemistry tools. Our contribution serves as a valuable resource for
researchers, chemists, and AI enthusiasts interested in understanding how
language models can and will be used to accelerate chemical discovery. | Nikita Janakarajan, Tim Erdmann, Sarath Swaminathan, Teodoro Laino, Jannis Born | 2023-09-28T08:19:54Z | http://arxiv.org/abs/2309.16235v1 | # Language models in molecular discovery
###### Abstract
The success of language models, especially transformer-based architectures, has trickled into other domains giving rise to "scientific language models" that operate on small molecules, proteins or polymers. In chemistry, language models contribute to accelerating the molecule discovery cycle as evidenced by promising recent findings in early-stage drug discovery. Here, we review the role of language models in molecular discovery, underlining their strength in de novo drug design, property prediction and reaction chemistry. We highlight valuable open-source software assets thus lowering the entry barrier to the field of scientific language modeling. Last, we sketch a vision for future molecular design that combines a chatbot interface with access to computational chemistry tools. Our contribution serves as a valuable resource for researchers, chemists, and AI enthusiasts interested in understanding how language models can and will be used to accelerate chemical discovery.
## 1 Introduction
Despite technological advances constantly reshaping our understanding of biochemical processes, the chemical industry persistently faces escalating development timelines and costs of up to 10 years and 3 billion dollars per new market release [102]. The intricacy of the problem is typically attested by an exorbitant attrition rate in _in vitro_ screenings [77], the sheer size of the chemical space [68] and the frequency of serendipity [40].
Language models (LMs) emerged recently and demonstrated an astonishing ability to understand and generate human-like text [65]. Machine learning (ML) in general and LMs in particular hold the potential to profoundly accelerate the molecular discovery cycle (see Figure 1). In this chapter, we explore applications of LMs to chemical design tasks. Although LMs were originally developed for natural language, they have shown compelling results in scientific discovery settings when applied to "scientific languages", e.g., in protein folding [55] or _de novo_ design of small molecules [105], peptides [23] or polymers [66]. But what exactly is a language model? By definition, it is any ML model that consumes a sequence of text chunks (so-called tokens) and is capable of reasoning about the content of the sequence. Since each token is essentially a vector [62], a LM is a pseudo-discrete time series model. Most typically, LMs learn probability distributions over sequences of words, thus also facilitating the generation of new text given some input, for example in a language translation task. While all LMs rely on neural networks, contemporary models almost exclusively leverage the Transformer architecture [93]. Now, all of this begs the question - what is the need for LMs in molecular discovery?
First, when applied to serializations of chemical entities (e.g., SMILES [98]), LMs can learn highly structured representations, often even tailored for desired functional properties [36]. This allows smooth and property-driven exploration of protein or molecular spaces originally deemed discrete. Another attractive feature of scientific LMs is their ability to seamlessly bridge natural and scientific languages. This can give rise to ChatGPT-style chatbot interfaces that allow chemists to formulate their design objectives through natural language and to iteratively refine their result with an interactive agent, thus potentially accomplishing complex chemical tasks more rapidly. Here, we present an overview of the role of LMs toward accelerated molecular discovery. We commence with the conventional scientific discovery method and then discuss how molecular generative models can be coupled with molecular property prediction models. Seeking practical usability, we then present the reader with selected software tools and libraries for scientific language modeling. We close with a vision for future molecule design that integrates natural language models into the discovery process through chatbots.
## 2 Accelerated molecular discovery
Molecule discovery, intricately linked to optimizing diverse properties in a vast space, challenges conventional scientific methods. In chemistry's Design-Make-Test-Analyze (DMTA) cycle, synthesis costs and time constraints create a bottleneck that hampers hypothesis refinement (cf. Figure 1(a)). Traditional approaches are largely driven by medicinal chemists who design "molecule hypotheses" which are biased, ad-hoc and non-exhaustive. This hinders progress in addressing global issues, driving the crucial necessity for an accelerated process of molecule discovery. Thus, a key challenge lies in improving the speed and quality of evaluating such "molecule hypotheses" grounded on laboratory work.
Deep generative models have recently emerged as a promising tool to expedite the hypothesis/design phase in molecular discovery. However, even the most advanced molecular generative models require an efficient method for large-scale virtual screening to test their hypotheses. The _accelerated molecular discovery_ cycle adds a validation loop to DMTA, rapidly evaluating numerous hypotheses inexpensively (cf. Figure 1(b)). This loop enhances the design-phase generative model, ensuring only promising hypotheses advance to the synthesis and physical experimentation stages.
### Molecule Representation
Data representation is critical as it determines which information is available for the model. As illustrated in Figure 2, various molecular representations exist. Due to the popularity of chemical language models (CLMs), this section focuses on text representations of molecules. A more focused discussion on CLMs was published by Grisoni [38].
Figure 1: A comparison of molecular discovery workflows: (a) classic approach, where each hypothesis (a.k.a. molecule) requires a new experimental cycle. (b) _Accelerated_ molecular discovery cycle with machine-generated hypotheses and assisted validation, enabling simultaneous generation and testing of numerous molecules.
**Simplified Molecular Input Line-Entry System (SMILES).** SMILES [98] is a string representation made up of specific characters for atoms, bonds, branches, aromaticity, rings and stereochemistry in molecular structures. The character-level representation enables easy tokenization, making SMILES an ideal input for LMs. SMILES are non-unique, so each molecule can be written as multiple SMILES strings. Hence, SMILES are either canonicalized or, alternatively, their multiplicity is used as a data augmentation strategy [8], which has shown performance improvements in molecular property prediction [8, 51, 88] and molecular generation [92, 3]. In generative modeling, a common issue is the invalidity of SMILES strings due to an uneven number of ring opening/closure symbols or bond valence violations. SMILES strings can undergo further processing, such as kekulization or stereoinformation removal, but employing canonicalized SMILES remains the most prevalent approach.
**Tokenization** is the process of splitting a string into vectorizable units. These units are typically a single character, n-gram characters or words. Instead of splitting at the character level, SMILES are typically tokenized at the atom level with regular expressions [79] or by additionally including positional and connectivity information, thereby acknowledging that the same atom can have different encodings based on its location in the molecular structure [91]. SMILES may also be tokenized at the substructure level, as demonstrated by SMILES Pair Encoding (SMILES-PE) [52]. This method, inspired by byte-pair encoding, iteratively counts and merges frequently occurring SMILES token pairs until a given condition is met. Tokenization enables the creation of a vocabulary for SMILES representations.
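To illustrate atom-level tokenization, the short Python sketch below uses a regular expression adapted from the pattern commonly used in this literature (cf. [79]); the exact pattern shown here is our simplified assumption, not the canonical one.

```python
import re

# Atom-level SMILES tokenizer: bracket atoms, two-letter elements (Br, Cl),
# single-letter atoms, bonds, branches, ring-closure digits, etc.
SMILES_TOKEN_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p"
    r"|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize_smiles(smiles):
    tokens = SMILES_TOKEN_PATTERN.findall(smiles)
    # Sanity check: the tokens must reassemble into the original string.
    assert "".join(tokens) == smiles, f"untokenizable characters in {smiles!r}"
    return tokens

print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
# ['C', 'C', '(', '=', 'O', ')', 'O', 'c', '1', 'c', ..., 'O']
```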
**Vocabularies** are dictionaries mapping tokens to vectors, thus serving as a gateway to LMs. For LMs to learn from SMILES, tokens must be vectorized. The simplest option is one-hot encoding, where each row in the binary matrix corresponds to a SMILES position and each column signifies a token. However, this discrete method results in sparse, large matrices; an alluring alternative is therefore to learn a continuous embedding for each token during training. This facilitates the learning of semantic relationships between tokens and enhances performance. Since learning good embeddings requires a lot of data, models pre-trained on natural language corpora are a strong option to learn scientific language embeddings through fine-tuning [22].
**Self Referencing Embedded Strings (SELFIES).** SELFIES [49] were introduced as an alternative to SMILES to counter the problem of generating invalid molecules. Unlike SMILES, SELFIES are generated using derivation rules to enforce valence-bond validity. They store branch length and ring size to avoid open branches and rings. These supplementary attributes ensure a valid representation during molecule generation. While this strategy guarantees 100% validity, it could produce strings that are too short to be a useful molecule.
Figure 2: An illustration of popular ways of representing a chemical molecule as input to a ML model. The representations may be (a) String-based, such as SMILES, SELFIES, or InChI which use characters to represent different aspects of a molecule, (b) Structure-based, such as Graphs or MolFiles that encode connectivity and atomic position, and (c) Feature-based, such as Morgan Fingerprints, which encode local substructures as bits.
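A minimal round-trip with the open-source selfies Python package (assumed installed via pip install selfies; function names follow its public API) illustrates the validity guarantee described above:

```python
import selfies as sf

smiles = "c1ccccc1"            # benzene
encoded = sf.encoder(smiles)   # SELFIES string, e.g. a sequence of [C], [=C], [Ring1], ... tokens
decoded = sf.decoder(encoded)  # back to a (kekulized) SMILES string
print(encoded, "->", decoded)

# The derivation rules guarantee that any well-formed SELFIES token sequence
# decodes to a syntactically valid molecule -- even an arbitrary one:
arbitrary = "[C][O][C][Ring1][Branch1]"
print(sf.decoder(arbitrary))   # still a valid molecule
```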
**International Chemical Identifier (InChI).** Introduced by the IUPAC, InChIs [41] are strings encoding structural information, including the charge of the molecule, in a hierarchical manner. The strings can get long and complex for larger molecules. To counter this, a hash called 'InChIKey' was developed to help with search and retrieval. InChIs are less commonly used in LMs [39].
### Generative Modelling
Generative modeling involves learning the data's underlying distribution with the intent of generating new samples, a technique pivotal in accelerating de novo drug discovery. A generative model may be conditional or unconditional. A conditional generative model utilizes provided data attributes or labels to generate new samples with desired properties, whereas an unconditional model solely provides a way to sample molecules similar to the training data [36]. The DMTA cycle particularly benefits from the conditional generation approach as it facilitates goal-oriented hypothesis design [9]. This section describes a few influential conditional generation models that act on chemical language to generate molecules satisfying user-defined conditions.
#### 2.2.1 Recurrent Neural Network (RNN)
The sequential nature of RNNs makes them suitable models for processing chemical languages. Proposed in the 90s, RNNs were the first flavor of CLMs [8, 79, 85]. Their hidden states are continuously updated as new tokens are passed to the network. During the generation process, tokens are produced auto-regressively. RNNs find use in generating molecule libraries [85] which are extensively used in drug development processes like screening. External scoring functions drive the generation of molecules with desired properties. RNNs are also adept at learning complex distributions [31] and generating a higher proportion of unique and valid SMILES [69], even though their inability to count occurrences of ring opening/closing symbols poses a challenge [46, 70].
Figure 3: An illustration of conditional molecule generation using LMs. The process initiates with the collection and processing of multi-modal data, which is then compressed into a fixed-size latent representation. These representations are subsequently passed to a molecular generative model. The generated molecules then undergo in-silico property prediction, which is linked back to the generative model through a feedback loop during training. The in-silico models direct the generative model to produce property- or task-driven molecules using a reward function. In the inference stage, candidate molecules generated by the optimized model continue through the workflow for lab synthesis and subsequent experimental validation to determine their efficacy for the desired task.
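The autoregressive decoding loop of such RNN-based CLMs can be sketched in a few lines of PyTorch. The tiny vocabulary and untrained weights below are placeholders of our own; a real model would be trained with next-token prediction on a SMILES corpus.

```python
import torch
import torch.nn as nn

vocab = ["<pad>", "<bos>", "<eos>", "C", "c", "O", "N", "(", ")", "=", "1"]
stoi = {t: i for i, t in enumerate(vocab)}

class SmilesRNN(nn.Module):
    """Character-level generator: embed -> GRU -> next-token logits."""
    def __init__(self, vocab_size, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, h=None):
        out, h = self.gru(self.embed(tokens), h)
        return self.head(out), h

@torch.no_grad()
def sample(model, max_len=40):
    token = torch.tensor([[stoi["<bos>"]]])
    h, generated = None, []
    for _ in range(max_len):
        logits, h = model(token, h)  # the hidden state carries the context
        token = torch.multinomial(logits[:, -1].softmax(dim=-1), num_samples=1)
        if token.item() == stoi["<eos>"]:
            break
        generated.append(vocab[token.item()])
    return "".join(generated)

model = SmilesRNN(len(vocab))
print(sample(model))  # gibberish until trained, but shows the decoding loop
```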
#### 2.2.2 Variational Autoencoder (VAE)
VAEs learn latent distribution parameters of molecules, thus enabling the generation of new molecules by sampling from this distribution. Their unique ability lies in learning a smooth, latent space that facilitates interpolation of samples, even for notoriously discrete entities like molecules [36]. To make it suitable for chemical language models (CLMs), any network compatible with string inputs can function as a VAE's encoder and decoder. Initial works primarily focused on single-modality applications, assessing latent space quality via downstream tasks [36]. This approach remains prevalent and can be used to generate, e.g., catalysts with an RNN-based VAE [78]. Here, a latent space is learned and assessed by predicting the catalyst binding energy. Lim et al. [53] takes it a step further by concatenating a condition vector to the input and the latent embedding generated by the recurrent network-based VAE's encoder. This approach enables the generation of molecules specifically tailored to the given conditions. The scope of VAEs expanded progressively into multi-modal settings for conditional molecule generation, as visualized in Figure 3 and exemplified by Born et al. [11, 12, 13]. These works on task-driven molecule generation incorporate contextual information like gene expression [13] or protein targets [11, 12] or even both [45]. VAEs learn embeddings of context information and primer drugs, which are merged before decoding to produce molecules. A reinforcement-learning-based approach directs the model to produce molecules with desired properties using rewards.
#### 2.2.3 Transformer
The self-attention attribute of Transformers [93] has propelled these models to the forefront of NLP. Transformers have an encoder module that relies on this self-attention to learn embeddings of the input and the context associated with this input. The decoder module predicts tokens using the context learnt by the encoder and previously generated tokens through attention. For generative modeling, decoder-only transformers like the Generative Pre-Training Transformer (GPT) [72] have become the dominant approach. This success was translated to the scientific language domain. One of the first models to use the GPT architecture for conditional molecule generation is MolGPT [4]. SMILES tokens concatenated with a condition vector that summarizes the desired properties and scaffolds are passed as input to this model, which is then trained on the next-token prediction task to generate molecules. GPT-like models coupled with RL can also be used to optimize molecular properties like pIC50 [61]. In this two-stage approach, embeddings are first learnt from SMILES strings, and the embedding space is then optimized such that the model samples molecules with the desired properties. Going beyond just using GPT-like architectures for molecule generation, Regression Transformer [10] is a seminal work that formulates conditional sequence modeling as a regression problem. This gives rise to a natural multitask model that concurrently performs property prediction and conditional molecular generation. This is achieved by concatenating conventional molecular tokens with property tokens and employing a training scheme that alternates which parts of the sequence are masked.
All these works are a testament to the generative capabilities of Transformer-based models. The superior quality of learned embeddings, coupled with the architecture's ability to handle parallel processing and scalability, makes it a top choice for the task of conditional molecule generation, with promising applications in drug discovery and other areas of molecular design [66].
### Property Prediction
Whether a discovery is novel or not, property prediction is a key step in validating molecules for a given use case. The success of a molecule depends on a myriad of factors, including how it interacts with its environment. The MoleculeNet datasets [103] are a commonly used benchmark for property prediction. They are curated from public datasets and comprise over 700,000 compounds tested on various properties. Born et al. [15] use a multiscale convolutional attention model to predict toxicity from SMILES. The model has three kernel sizes for the convolutional network and uses a Bahdanau attention mechanism [5]. The model shows superior overall performance on various MoleculeNet tasks compared to all other SMILES-based models. A recent trend is to use transformer-encoders to learn embeddings for molecules and then apply a multilayer perceptron (MLP) on the embeddings for property prediction. MolBERT [29] and ChemBERTa [20] are two such examples.
These transformer-based models use a BERT backbone to learn molecular embeddings from SMILES and predict properties. Similarly, Molformer [75] uses a transformer-encoder with linear attention and relative positional encoding to learn compressed molecular representations which are then fine-tuned on chemical property prediction benchmarks. To equip transformers with better inductive biases to handle molecules, adaptations of the attention mechanism were proposed. The molecule attention transformer (MAT) incorporates inter-atomic distances and graph structure into the attention mechanism [58]. An improvement over this model is the _relative_-MAT which fuses the distance embedding, bond embedding and neighbourhood embedding and achieves competitive performances on a range of property prediction tasks [59].
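The encoder-plus-MLP recipe can be sketched with the transformers library; the checkpoint name below is our assumption (any SMILES-pretrained BERT-style encoder can be substituted), and the regression head is untrained until fine-tuned on labels such as those in MoleculeNet.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

CHECKPOINT = "seyonec/ChemBERTa-zinc-base-v1"  # assumed public SMILES encoder
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
encoder = AutoModel.from_pretrained(CHECKPOINT)

class PropertyHead(nn.Module):
    """MLP regressor applied to the first-token (CLS-style) embedding."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, emb):
        return self.mlp(emb).squeeze(-1)

head = PropertyHead(encoder.config.hidden_size)
batch = tokenizer(["CC(=O)Oc1ccccc1C(=O)O", "c1ccccc1"], padding=True, return_tensors="pt")
with torch.no_grad():
    emb = encoder(**batch).last_hidden_state[:, 0]  # one embedding per molecule
print(head(emb))  # untrained outputs; fine-tune end-to-end for real predictions
```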
## 3 Software tools for scientific language modeling
The paradigm shift towards open-sourcing software has exerted a profound influence in chemistry. Commonly listed implications of open-sourcing in the context of drug discovery include catalyzation of methodological development, fostering of collaboration and ease of scientific reproducibility [35]. In this section we present several software assets (e.g., Python packages or cloud-based web apps) that are key to enable molecular discovery.
### Natural language models
The success story of the Transformer [93] as the most widely adopted neural network architecture goes hand in hand with the rise of the transformers library [101], developed since 2019 by HuggingFace. Initially intended for NLP applications, Transformers were adopted interdisciplinarily, e.g., in computer vision [25], reinforcement learning [19], protein folding [47] and, of course, chemistry [84]. _HuggingFace_ provides the largest public hub of language models and it offers implementations of all recent models as well as a diverse collection of pretrained models available for fine-tuning or inference. While most of their models focus on NLP, selected models are designed for life science applications, in particular molecular property prediction (e.g., _ChemBerta_ [20]), molecular captioning (e.g., _MolT5_ [26]), text-based molecular generation (e.g., _MolT5_ [26]) but also unsupervised protein language models (e.g., _ProtBert_, _ProtAlbert_, _ProtXLNet_ or _ProtT5_ [27]). Moreover, some available models like _Multimodal Text and Chemistry T5_ [22] are prompt-based multitaskers that, besides the above mentioned tasks, also perform additional tasks such as forward/backward reaction prediction.
### GT4SD - Generative modeling toolkits
Python libraries like GT4SD (the Generative Toolkit for Scientific Discovery [57]), TdC (Therapeutics Data Commons [43]) or deepchem [73] were developed primarily for molecular discovery applications, but especially GT4SD offers ample support of language models (LMs). GT4SD is designed to enable researchers and developers to use, train, fine-tune and distribute state-of-the-art generative models for sciences with a focus on the design of organic materials. It is compatible and inter-operable with many existing libraries and, beyond transformers, it also gives access to diffusion models (diffusers [96]) or graph generative models (TorchDrug [106]). Next to established molecular generation benchmarks like Moses [69] and GuacaMol [16] that include VAEs, generative adversarial networks (GANs), genetic algorithms, and many evaluation metrics for molecular design, GT4SD also supports very contemporary models like the _Regression Transformer_ for concurrent sequence regression and property-driven molecular design [10], _GFlowNets_ for highly diverse candidate generation [6] or _MoLeR_ for motif-constrained molecule generation [60]. GT4SD ships with a harmonized interface and a set of command line tools that access a registry of generative models to run or train any model with a few lines of code. Trained models can be shared to a cloud-hosted model hub and the library is built to facilitate consumption by containerization or distributed computing systems. To date, it includes \(\sim 50\) property prediction endpoints for small molecules, proteins and crystals and overall hosts \(\sim 30\) pre-trained algorithms for material design, 20 free webapps [2] and many Jupyter/Colab notebooks.
### RXN for Chemistry: Reaction and synthesis language models
Once a molecule has been selected for experimental validation, a tangible synthesis route has to be identified. Since the most important tasks in chemical reaction modeling can be framed as sequence conversion problems, the methodology developed for natural language translation can be seamlessly translated to chemistry [84]. In this analogy, atoms are characters, molecules are words, reactions are sentences and precursors are translated into a product or vice versa.
The most mature and flexible library for reaction modeling with LMs is the package rxn4chemistry [32]. It wraps the API of the _IBM RXN for Chemistry_ platform, a freely accessible web application that gives access to a rich set of language models for different tasks in reaction chemistry. The flagship architecture has been the _Molecular Transformer_ (MT), an autoregressive encoder-decoder model, originally applied to predict outcomes of chemical reactions in organic chemistry [80]. Notably, the MT uses a purely data-driven, template-free approach that, unlike many graph-based models, can directly represent stereochemistry and thus also exhibits excellent performance on regio- and stereoselective reactions [67]. The MT was applied to single-step retrosynthesis [90] and became the linchpin of a multi-step retrosynthesis model with a hypergraph exploration strategy [81]. This approach was later generalized to enzymatic reactions with a tokenization scheme based on enzyme classes, which facilitated biocatalyzed synthesis planning and paved the road towards more sustainable and green chemistry [71]. Derivatives of the MT helped to enhance diversity in single-step retrosynthesis [90], and a prompt-based disconnection scheme proposed by Thakkar et al. [89] significantly improved controllability by allowing the user to mark a disconnection side in the reactant. Interestingly, an encoder-only derivative of the MT (that replaced the autoregressive decoder with a classification head and leveraged BERT-style [24] self-supervised pretraining on reactions) excelled in predicting reaction classes [83]. The hidden representations of such a model were found to encode reaction types, thus allowing one to map reaction atlases and to perform reaction similarity search. This gave rise to the rxnfp package for chemical reaction fingerprinting. Strikingly, masked language modeling also led later to the discovery that the learned attention weights of the Transformer are "secretly" performing atom mapping between products and reactants [82]. The epiphany that CLMs accomplish atom mapping without supervision or human labeling bridged the gap between rule-based and data-driven approaches in reaction modeling, making this once tedious experimental task more efficient.
In the quest for automation in organic chemistry, once the precursors for a molecule's synthesis route are identified, the subsequent crucial phase involves seeking an actionable, stepwise synthesis protocol that is ideally amenable to autonomous execution on a robotic platform, such as _IBM RoboRXN_. In two seminal works, Vaucher et al. demonstrated that encoder-decoder Transformers can extract chemical synthesis actions, first from experimental procedures described in patents [94], and later predict them directly from the reaction SMILES [95]. Notably, all the aforementioned models are available via the _IBM RXN for Chemistry_ platform, which even allows controlling and monitoring the robotic platform directly from the web interface. For the daunting task of multistep retrosynthesis planning, _RXN_ also includes non-transformer based models like _AiZynthFinder_ [34], a Monte Carlo Tree Search approach built on top of an RNN. Most of the _RXN_ models can also be executed via the rxn4chemistry Python package.
### Specialized libraries
**Molecular property prediction.** HuggingMolecules is a library solely devoted to aggregating, standardizing and distributing molecular property prediction LMs [33]. It contains many encoder-only CLMs, some with geometrical and structure-aware inductive biases (e.g., the MAT [58] or its successor, the R-MAT [59]) while others are pure BERT-based models trained on SMILES (e.g., _MolBERT_ [29] or _ChemBERTa_ [20]).
**Data processing.** RDKit [50] is a library for manipulating molecules in Python. For narrower applications like ML data preparation, several tools exist. First, rxn-chemutils is a library with chemistry-related utilities from RXN for Chemistry. It includes functionalities for standardizing SMILES (e.g., canonicalization or sanitization) as well as conversions to other representations (e.g., InChI). It harmonizes reaction SMILES and prepares them for consumption by CLMs, including SMILES augmentation (by traversing the molecular graph in a non-canonical order) and tokenization. Another library with a similar focus is pytoda [12, 13]. It does not support reaction SMILES but implements richer preprocessing utilities, allowing the chaining of \(>\)10 SMILES transformations (e.g., kekulization [15]). It supports different languages (e.g., SELFIES [49] or BigSMILES [54]) and tokenization schemes (e.g., SMILES-PE [52]). Similar functionalities are available for proteins, including different languages (IUPAC, UniRep or Blosum62) and protein sequence augmentation strategies [14]. For small molecules, proteins, and polymers, dedicated language classes facilitate the integration with LMs by storing vocabularies, performing online transformations and feeding to custom datasets. Datasets exist for predicting molecular properties, drug sensitivity, protein-ligand affinity or for self-supervision on small molecules, proteins or polymers.
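For illustration, the two preprocessing steps referenced above, canonicalization and augmentation via randomized atom ordering, can be reproduced directly with RDKit:

```python
from rdkit import Chem

# Canonicalization: many SMILES map to the same molecule; RDKit picks one form.
variants = ["OC(=O)c1ccccc1OC(C)=O", "CC(=O)Oc1ccccc1C(=O)O"]
canonical = {Chem.MolToSmiles(Chem.MolFromSmiles(s)) for s in variants}
print(canonical)  # a single canonical SMILES for aspirin

# Augmentation: emit randomized SMILES by traversing the molecular graph in a
# random (non-canonical) order -- the multiplicity-based strategy of [8].
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
augmented = {Chem.MolToSmiles(mol, canonical=False, doRandom=True) for _ in range(5)}
print(augmented)  # several distinct strings, all decoding to the same molecule
```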
### General purpose platforms
Several general-purpose platforms for molecular discovery have been launched recently, sometimes even preserving privacy through federated learning (i.e., decentralized, distributed training). For example, MELLODDY [42] is a collaborative effort aimed at cross-pharma federated learning of 2.6 billion confidential activity data points. Similarly, VirtualFlow [37] is an open-source platform facilitating large-scale virtual screening that was shown to identify potent KEAP1 inhibitors. With a focus on _de novo_ drug design, Chemistry42 [44] is a proprietary platform integrating AI with computational and medicinal chemistry techniques.
## 4 Future of molecular discovery
A few years ago, the idea of querying an AI model - like one would a search engine - to not only extract scientific knowledge but also perform computational analyses was an overly ambitious feat. Scientific thinking comes from the ability to reason, and AI models cannot reason like humans, yet. However, these models can **learn** from humans. Our propensity to document everything has enabled us to train Large Language Models (LLMs), like ChatGPT [64] and GitHub Copilot [1], to mimic human responses. When brought into the context of computational science, this could equip non-experts to confidently conduct computational analyses through well-designed prompts. With human-in-the-loop, a synergistic effect could be created where the scientist provides feedback to the model on its output, thus aiding in better model optimization (a strategy called reinforcement learning from human feedback (RLHF) that has been proven critical for ChatGPT [21]). These applications also reduce the barrier for individuals from non-scientific backgrounds to gain a more hands-on experience in conducting scientific analyses without having to go through formal training in computational analysis.
This section provides a sneak peek into what's next for molecular discovery. Riding the LLM wave, the future holds a place for chatbot-like interfaces that may take care of all things computational in molecular discovery. This includes, for example, generating and iteratively improving design ideas, synthesis planning, material purchasing, performing routine safety checks, and validating experiments.
### The rise of foundation models in chemistry
Conventionally, neural networks are trained for a single given task to achieve maximum performance. This essentially renders the models useless for other tasks, thus requiring a new model for every new task, even when the training domain is the same, which in turn imposes a constraint on the rate of our technological advancements. Over the last few years, this conventional approach has been challenged by Large Language Models (LLMs). It has been found that scaling up LLMs leads to astonishing performances in few-shot [17] and even zero-shot task generalization [76]. Referred to as "foundation models" [30, 63], these models, with typically billions of parameters, can perform multiple tasks despite being trained on one large dataset. Essentially, this multi-task learning is achieved by prompting LLMs with task instructions along with the actual query text which has been found to induce exceptional performance in natural language inference and sentence completion [76]. These findings have kicked off new research directions, such as prompt engineering [97] and in-context learning [17], in NLP.
The foundation model paradigm also finds an increasing adoption in chemistry. There is an increase in task-specific models integrating natural and chemical languages [26, 94, 95, 104]. Concurrently, multi-tasking in pure CLMs has also been advancing through models that combined tasks such as property prediction, reaction prediction and molecule generation either with small task-specific heads (e.g., T5Chem [56]) or via mask infilling (e.g., Regression Transformer [10]). Christofidellis et al. [22] were the first to bridge the gap and develop a fully prompt-based multi-task chemical and natural language model. Despite only 250M parameters, the _Multitask Text and Chemistry T5_ was shown to outperform ChatGPT [64] and Galactica [87] on a contrived discovery workflow for re-discovering a common herbicide (natural text \(\rightarrow\) new molecule \(\rightarrow\) synthesis route \(\rightarrow\) synthesis execution protocol).
### The coalescence of chatbots with chemistry tools
Given the aforementioned strong task generalization performance of LLMs, building chatbot interfaces around them was a natural next step, and thus, next to ChatGPT [64], many similar tools were launched. Such tools were found to perform well on simplistic chemistry tasks [18, 99], opening the potential to reshape how chemists interact with chemical data, enabling intuitive access to complex concepts and making valuable suggestions for diverse chemical tasks. Furthermore, AI models specifically developed by computer scientists for, e.g., drug discovery or material science can be made available through applications powered by LLMs, such as chatbots. This minimizes the access barrier for subject matter experts who would otherwise require the respective programming skills to utilize these AI models. The power of such chatbots is reached through the coalescence of LLMs and existing chemistry software tools like PubChem [48], RDKit [50] or GT4SD [57]. Together, such applications amplify the potential and value of these models through strongly enhanced usage. An example of how the interaction with such a tool could look is shown in Figure 4.
Figure 4: Screenshot of the LLM-powered chatbot application ChemChat. Embedding the capabilities of existing resources such as PubChem [48], RDKit [50] or GT4SD [57] enables the assistant to execute programming routines in the background and thus answer highly subject-matter specific user requests without the user needing programming skills.
In this example, a user provides a molecule (either as SMILES string or via a molecule sketcher) and asks to identify the molecule. The chatbot relies on prompt-engineering in order to inform the LLM about all its available tools. The user input is first sent to the LLM which recognizes that one of its supported tools, in this case PubChem, can answer the question. The chatbot then sends a request to the PubChem API and returns a concise description of the molecule. The user subsequently asks to compute the logP partition coefficient [100] and the quantitative estimate of drug-likeness (QED) [7]. Calculation of both properties is enabled through the GT4SD tool [57] allowing the chatbot to answer the request with certainty. This will trigger a programming routine to accurately format the API request for GT4SD, i.e., composing the SMILES string with the logP or QED endpoint. The computation is then performed asynchronously and a separate call to the post-processing routine formats the LLM-generated string reply and composes the response object for the frontend.
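Because RDKit is among the embedded resources, the two property calls in this exchange can also be mimicked stand-alone. The sketch below is our illustration using RDKit's Crippen logP and QED implementations; whether the platform's endpoints wrap exactly these is an assumption.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

# Theobromine, the molecule identified in the conversation above.
mol = Chem.MolFromSmiles("CN1C=NC2=C1C(=O)NC(=O)N2C")

logp = Descriptors.MolLogP(mol)  # Crippen logP partition coefficient [100]
qed = QED.qed(mol)               # quantitative estimate of drug-likeness [7]
print(f"logP = {logp:.2f}, QED = {qed:.2f}")
```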
This fusion of LLMs with existing tools gives rise to a chatbot assistant for material science and data visualization that can perform simple programming routines without requiring the user to know programming or have access to compute resources. A continuation of the conversation involving more complex user queries is shown in Figure 5.
Having identified the initial molecule as theobromine with a logP of -1.04, the user requests three similar molecules with a slightly increased logP of -0.5. Here, ChemChat identifies the Regression Transformer [10] as the available tool to perform substructure-constrained, property-driven molecule design. Once the routine has been executed and the three candidate SMILES are collected, the text result is post-processed to add more response data objects such as molecule visualizations, datasets or Vega Lite specs for interactive visualizations.
In conclusion, chatbots can facilitate the integration of essentially all major cheminformatics software in a truly harmonized and seamless manner. While LLMs are not intrinsically capable of performing complex routines, at least not with high precision and in a trustworthy manner, the synergy of their natural language abilities with existing chemistry tools has the potential to transform the way chemistry is performed.
Figure 5: Screenshot of the LLM-powered chatbot application ChemChat showing the continuation of the conversation involving generative tasks through GT4SD's Regression Transformer [10] as well as property [28] and similarity calculation [74, 86].
|
2309.16962 | Lifting the Fog of Uncertainties: Dynamic Resource Orchestration for the
Containerized Cloud | The advances in virtualization technologies have sparked a growing transition
from virtual machine (VM)-based to container-based infrastructure for cloud
computing. From the resource orchestration perspective, containers' lightweight
and highly configurable nature not only enables opportunities for more
optimized strategies, but also poses greater challenges due to additional
uncertainties and a larger configuration parameter search space. Towards this
end, we propose Drone, a resource orchestration framework that adaptively
configures resource parameters to improve application performance and reduce
operational cost in the presence of cloud uncertainties. Built on Contextual
Bandit techniques, Drone is able to achieve a balance between performance and
resource cost on public clouds, and optimize performance on private clouds
where a hard resource constraint is present. We show that our algorithms can
achieve sub-linear growth in cumulative regret, a theoretically sound
convergence guarantee, and our extensive experiments show that Drone achieves
an up to 45% performance improvement and a 20% resource footprint reduction
across batch processing jobs and microservice workloads. | Yuqiu Zhang, Tongkun Zhang, Gengrui Zhang, Hans-Arno Jacobsen | 2023-09-29T04:11:12Z | http://arxiv.org/abs/2309.16962v1 | # Lifting the Fog of Uncertainties: Dynamic Resource Orchestration for the Containerized Cloud
###### Abstract.
The advances in virtualization technologies have sparked a growing transition from virtual machine (VM)-based to container-based infrastructure for cloud computing. From the resource orchestration perspective, containers' lightweight and highly configurable nature not only enables opportunities for more optimized strategies, but also poses greater challenges due to additional uncertainties and a larger configuration parameter search space. Towards this end, we propose Drone, a resource orchestration framework that adaptively configures resource parameters to improve application performance and reduce operational cost in the presence of cloud uncertainties. Built on Contextual Bandit techniques, Drone is able to achieve a balance between performance and resource cost on public clouds, and optimize performance on private clouds where a hard resource constraint is present. We show that our algorithms can achieve sub-linear growth in _cumulative regret_, a theoretically sound convergence guarantee, and our extensive experiments show that Drone achieves an up to 45% performance improvement and a 20% resource footprint reduction across batch processing jobs and microservice workloads.
The goal of Drone is to progressively optimize a containerized application's resource configuration over its lifespan with minimal manual intervention and without the often-costly explicit workload profiling phase. At its core, Drone is built upon recent advances in Gaussian process-based contextual bandits (Wang et al., 2017). By encompassing time-variant cloud uncertainties as contextual parameters, Drone follows an iterative procedure to continuously refine resource configurations based on previous context-action pairs and collected performance metrics. Assuming a minimal structural relationship between application performance and resource configurations, the power of such a non-parametric model makes Drone versatile across a diverse range of cloud environments and adaptable to various application types and workloads. Specifically, we examine two settings within a shared cloud infrastructure: a) _public cloud_, where computational resources can be effectively considered unlimited and Drone demonstrates adeptness in striking an efficient balance between performance and cost, and b) _private cloud_, where there exists a stringent cap on computational resources and Drone proves capable of optimizing application performance within these resource constraints. Drone is also theoretically sound in both settings since it achieves a sublinear growth of cumulative regret, meaning that the algorithm converges fast with respect to its running time.
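To make the iterative procedure concrete, the following sketch (our illustration, not Drone's exact algorithm) implements a generic GP-UCB-style contextual bandit loop with scikit-learn: the context is an uncontrollable cloud-state variable, the action is a candidate resource configuration, and the reward is a synthetic stand-in for a measured, cost-adjusted performance metric.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
actions = np.linspace(0.25, 4.0, 16)    # candidate CPU allocations (cores)

def observe_reward(context, action):
    """Synthetic stand-in: latency improves with CPU, cost grows with it."""
    latency = 1.0 / action + 0.3 * context + rng.normal(0, 0.02)
    return -(latency + 0.1 * action)    # reward = negated latency-plus-cost

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-3, normalize_y=True)
X, y = [], []
for t in range(1, 31):
    context = rng.uniform(0, 1)         # uncontrollable, time-variant uncertainty
    candidates = np.column_stack([np.full_like(actions, context), actions])
    if X:
        mu, sigma = gp.predict(candidates, return_std=True)
        beta = 2.0 * np.log(t + 1)      # exploration weight grows slowly with t
        action = actions[np.argmax(mu + np.sqrt(beta) * sigma)]  # UCB rule
    else:
        action = rng.choice(actions)    # no observations yet: pure exploration
    X.append([context, action])
    y.append(observe_reward(context, action))
    gp.fit(np.array(X), np.array(y))    # refine the posterior after each round

print(f"after 30 rounds, last chosen action: {action:.2f} cores")
```

In a private-cloud setting, the same loop would additionally mask out candidate actions that violate the resource cap before applying the UCB rule.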
We evaluate Drone by deploying various applications on our cloud-hosted Kubernetes cluster using Drone as an integrable resource orchestrator. Our extensive experimental analysis, employing realistic workloads, demonstrates Drone's superior performance compared to alternative solutions in several respects. First, for recurring analytical jobs for which bandit-based approaches have been shown to be efficient (Kubernetes et al., 2017; Wang et al., 2017), Drone exhibits further improvement in performance by accounting for a broader spectrum of cloud uncertainties, coupled with its adherence to resource constraints in the private cloud environment. Second, for user-facing microservices where workload variability is more ad-hoc and no explicit profiling phase is available, Drone also achieves a 37% improvement on P90 latency compared to state-of-the-art alternatives, a result further amplified by our bespoke enhancements over the standard bandit optimization procedure, including a sliding window-based data sampler, empirically optimized starting point selection and latency-aware scheduling mechanisms. To the best of our knowledge, Drone is the first work to harness the potential of resource allocation in a containerized cloud using bandit-based approaches. It showcases superior adaptability across diverse settings in comparison to the preceding VM-based efforts. To sum up, we make the following contributions in this paper:
1. Through comprehensive experimental analysis, we validate the non-structural performance-resource relationship and the significant influence of uncontrollable time-variant environment variables (the cloud uncertainties) on application performance under multiple cloud scenarios.
2. Leveraging recent advances in bandit algorithms, we design Drone, a general-purpose online resource orchestration framework for container-based cloud systems. Drone progressively optimizes the performance-cost tradeoff in public cloud environments, while maintaining strict adherence to resource constraints in resource-limited private clouds. In both cases, Drone theoretically exhibits a fast convergence rate, guaranteeing its performance.
3. We implement Drone as a customized resource orchestrator on top of Kubernetes. Using realistic cloud workloads, we show through extensive experiments that Drone outperforms state-of-the-art alternatives in terms of application performance, cost efficiency and resource constraint compliance.
## 2. Background and Related Work
### Cloud Resource Orchestration
Intelligent resource orchestration on the cloud has long been an active research area which can be categorized as follows based on the underlying techniques adopted.
**Heuristic-based Approaches.** A simple yet practically effective resource orchestration choice is based on heuristics. They are usually intuitive and easy to implement and hence are widely adopted in industrial solutions (Kubernetes, 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). For example, the default container autoscalers in Kubernetes (Kubernetes, 2017) include _Horizontal Pod Autoscaler (HPA)_ and _Vertical Pod Autoscaler (VPA)_, both of which follow a rule-based scaling policy. Such policies enable cloud tenants to define thresholds for metrics of interest, according to which the system performs autoscaling. However, setting appropriate thresholds for such metrics is a non-trivial task. The optimal values are often application-specific and require expert knowledge from the developer or system administrator. Therefore, such heuristic approaches generalize poorly across cloud applications and often involve significant manual effort.
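To make the brittleness of threshold policies concrete, the following minimal Python sketch mimics HPA's documented proportional scaling rule; the target utilization of 0.6 and the function names are illustrative, not Kubernetes' actual implementation.

```
# HPA-style rule: desired = ceil(current * observed_metric / target_metric).
# The target threshold (0.6 here) is application-specific: too low wastes
# resources, too high risks SLO violations -- which is the core weakness
# of rule-based policies discussed above.
import math

def rule_based_replicas(current_replicas: int,
                        observed_cpu_util: float,
                        target_cpu_util: float = 0.6) -> int:
    desired = math.ceil(current_replicas * observed_cpu_util / target_cpu_util)
    return max(1, desired)

print(rule_based_replicas(4, 0.9))  # -> 6: scale out under high utilization
```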
**Model-based analytical approaches.** Another line of work establishes analytical models to encapsulate the relationship between performance objectives and resource orchestration decisions. The problem is thus often modelled as an optimization problem and certain assumptions are usually made on the problem structure (e.g., linearity and convexity) so that theoretical properties can be utilized to efficiently solve
the problem (Han et al., 2014; Chen et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019). Control theory and queuing theory are also common theoretical tools for designing resource management solutions (Han et al., 2014; Chen et al., 2015; Chen et al., 2016; Chen et al., 2018). Despite the favorable theoretical characteristics of such solutions, real-life cloud applications generally fail to satisfy the desired problem structure due to varying workload profiles and other cloud uncertainties (Han et al., 2014).
**Predictive approaches using machine learning (ML).** To mitigate the over-provisioning overhead and human effort of heuristic-based solutions, predictive approaches forecast future workload or system behavior from past statistics and adjust resource allocation in advance to meet future application needs. Such approaches usually employ well-established machine learning models, such as linear regression (Han et al., 2014; Chen et al., 2016), support vector machines (Zhu et al., 2016) and various types of neural networks (Han et al., 2014; Chen et al., 2015; Chen et al., 2016; Chen et al., 2018; Chen et al., 2019; Chen et al., 2019). Although effective under certain conditions, these ML-based approaches have intrinsic limitations. First, deploying such solutions generally requires an exclusive profiling/training phase, which can be costly and is often unavailable in realistic production systems. Second, such ML-based solutions perform best on general workloads or workloads with repeating patterns similar to their training data, but they adapt poorly to fluctuating workloads (Han et al., 2014). Moreover, the quantity and quality of training data significantly affect an ML model's performance; selecting representative training data is non-trivial and requires specialized domain knowledge, and costly retraining is often needed when workload shifts occur.
More recently, reinforcement learning (RL) has captured attention from the resource management community (Han et al., 2014; Chen et al., 2016; Chen et al., 2018; Chen et al., 2019), thanks to its ability to interact with the environment while optimizing its resource allocation actions. However, apart from the fact that RL frameworks also need to pretrain their agents and hence share similar limitations with the aforementioned ML models, they usually lack a convergence guarantee. Also, RL models assume that the actions taken in turn affect the environment (i.e., the states), while in real-life clouds many environment variables, such as workload uncertainty coming directly from end users, are independent of the actions taken.
### Bandit Algorithms
The limitations of existing work suggest that an ideal resource orchestration framework should optimize resource allocation decisions in an online manner with minimal model pre-training and human intervention. More importantly, it should work efficiently in today's complex containerized cloud, taking various cloud uncertainties into account and fitting in different cloud settings. To this end, we resort to the contextual bandit approach (Wang et al., 2019), a data-efficient non-parametric solution. The contextual bandit is an extension of the well-studied Multi-Armed Bandit (MAB) problem (Wang et al., 2019) that incorporates contextual information about uncontrollable environment variables, such as cloud uncertainties in the cloud computing context. The original MAB problem is a sequential optimization problem, where a player sequentially selects from a finite set of options with probabilistic rewards to maximize the total reward over time. Bayesian Optimization (BO) is a continuous variant of the MAB problem which aims to find the optimizer of a black-box function by incrementally building a model of the objective function. Although part of our control domain (e.g., fine-grained container resource scaling) can be considered continuous, which makes our problem essentially BO with a contextual extension, we stick to the term contextual bandits throughout this
| Framework | Application | Computing unit | Optimization objective | Acquisition function | Uncertainties | Resource constraints | Workload | Convergence guarantee |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Dremel (Zhu et al., 2016) | DB tuning | - | DB IOPS | UCB | ✗ | - | DB queries | ✗ |
| CGPTuner (Chen et al., 2016) | DB tuning | - | Performance improvement | GP-Hedge | Workload only | - | Recurring DB queries | ✗ |
| Cherrypick (Han et al., 2014) | VM config. selection | VM | Customized cost | EI | ✗ | ✗ | Recurring analytical jobs | ✗ |
| Accordia (Wang et al., 2019) | VM config. selection | VM | Customized cost | GP-UCB | ✗ | ✗ | Recurring analytical jobs | ✓ |
| RAMBO (Wang et al., 2019) | Resource orchestration | Container | Customized cost | SMSego | ✗ | ✗ | Microservices | ✗ |
| Drone | Resource orchestration | Container | Performance-cost tradeoff (public cloud); performance opt. (private cloud) | GP-UCB | ✓ | ✓ | General | ✓ |

Table 1. Computer systems studies using bandit algorithms.
paper to highlight the contextual nature and align with the theoretical literature.
**Bandit algorithms in computer systems research.** Due to the ability to model arbitrary performance functions, bandit algorithms have also been employed in computer system-related research, such as database parameter tuning (Kumar et al., 2017; Wang et al., 2018; Wang et al., 2019) and VM configuration selection (Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). Dremel and CGPTuner (Kumar et al., 2017; Wang et al., 2019) use bandit algorithms to fine-tune DBMS-specific parameters and the sole objective is to maximize database performance without constraints, while we focus on a lower level of resource orchestration and consider the performance-cost tradeoff. The closest works to ours are Cherrypick (Wang et al., 2018) and Accordia (Cherrypick and Accordia, 2019). Cherrypick is among the first works to apply bandit algorithms to systems research, aiming to pick the best VM configuration using Bayesian Optimization for big data analytical jobs. It uses Expected Improvement (EI) as its acquisition function, which lacks a convergence guarantee. Accordia studies the exact same problem, and advances one step further by employing the recent GP-UCB algorithm (Wang et al., 2019) with convergence guarantee. However, both Cherrypick and Accordia have inherent limitations which prevent them from being readily applicable to the current containerized cloud. First, both works study the VM configuration selection problem where only a finite set of options are available, while finer-grained, almost-continuous control is possible for containers, as mentioned in Section 1. Second, both Cherrypick and Accordia focus on _recurring_ analytical jobs, whose workload patterns are regular and predictable. Therefore, they are implicitly using the first few runs of the recurring job as the training phase and thus cannot generalize to workload variations. Last but not least, their performance objectives are solely dependent on the actions taken, and they assume infinite resources without considering the uncontrollable cloud uncertainties and resource-limited private clouds. Drone, on the other hand, is uncertainty-aware and generalizes to different cloud workloads and settings. We would also like to mention RAMBO (Wang et al., 2019), a BO-based resource allocation framework for microservices. Although RAMBO solves a similar problem to our work, technical details of implementation and design choices are not sufficiently provided in the paper. A detailed comparison between Drone and closely related works is summarized in Table 1.
## 3. Problem Analysis
In this section, we present experimental observations that motivate our work. To demonstrate the complex performance-cost relationship and the substantial impact of cloud uncertainties on application performance in a containerized cloud, we set up a cloud-hosted testbed consisting of 16 VMs (see Section 5 for detailed specifications) to run benchmarking jobs. All jobs are submitted as Kubernetes-managed containers unless otherwise specified. **Non-structural performance-cost relationship.** To study the relationship between application performance and allocated resources, we benchmark three representative analytical workloads running on the native Spark Operator on Kubernetes (Kubernetes, 2019): PageRank, Sort and Logistic Regression (LR). PageRank is a graph-processing algorithm, for which we use the Pokec social network graph data (Kubernetes, 2019) with 1.6M vertices and 30M edges. We use gensort (Kubernetes, 2019) to generate 150GB of randomly permuted 100-byte records for sorting. For LR, we use a 4-year span of roughly 400k stock price records from the Nifty 100 Index (Kubernetes, 2019) to train the model. Experiments are repeated five times and the results are shown in Figure 1(a). While allocating more RAM generally leads to better performance, beneficial theoretical attributes such as linearity and convexity are not manifested in this relationship. For example, LR, a memory-bound job, does not suffer from performance-gain saturation even when given substantially more RAM, displaying an over 2x performance improvement as the RAM allocation increases from 96GB to 192GB. More interestingly, the performance-cost relationship can even be non-monotonic, meaning more resources do not necessarily lead to performance improvement, as can be observed for PageRank. This is largely because PageRank is an iterative, network-intensive algorithm where data shuffling between non-co-located containers is needed in each operation. In this
Figure 1. Performance of representative Spark analytical workloads under different RAM allocations.
case network bandwidth is the major bottleneck instead of RAM.
We repeat the same experiments using identical configurations on a vanilla Spark cluster deployment without involving containers and report the results in Figure 1(b). Although the performance metrics and the performance-cost relationship patterns are similar to the containerized setting, an important finding is that the variance of performance measurements in the VM-based setting (indicated by the black confidence intervals on each bar) is much smaller. This stability is partly owing to more mature architectural support, but it also corroborates our insight that greater uncertainties and anomalies are introduced in a containerized cloud. In fact, we do observe more frequent Spark executor errors and restarts on Kubernetes.
**Impact of cloud uncertainties.** We also show that besides workload intensity, other uncontrollable cloud uncertainties can significantly impact application performance. To better model adverse situations in a shared cloud, we apply interference injection across experiments to create random resource contention (Spark and Flink, 2017), covering CPU utilization, RAM bandwidth, and network latency and bandwidth. Interference events arrive according to a Poisson process with an average rate of 0.5 per second, and the intensity of each interference is chosen uniformly and independently at random from [0, 50%] of the total capacity. We first study the performance of sorting varying sizes of data on the Kubernetes deployments of Spark and Flink (Spark and Flink, 2017). All experiments are conducted five times with the same resource configuration (36 CPU cores and 192GB of total RAM) and identical data for each size. The results are shown in Figure 2. We observe that the variance across multiple runs increases with data size, with a coefficient of variation of up to 23% for Spark and 27% for Flink, indicating that application performance can be quite variable due to cloud uncertainties other than workload, especially with the large data volumes common in the current "big data era". From the performance discrepancy between Spark and Flink, we can also see that performance is platform-dependent: even if we have found the optimal resource configuration for one specific workload, it is not readily transferable to another platform running the same workload, and additional configuration tuning may be required.
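For concreteness, the following Python sketch illustrates the interference-injection schedule described above. The arrival rate and intensity range come from the text; the resource list and the use of exponential inter-arrival sampling are our assumptions about the implementation.

```
# Poisson arrivals at rate 0.5/s; each event stresses one randomly chosen
# resource at an intensity drawn uniformly from [0, 50%] of capacity.
import random

RESOURCES = ["cpu", "ram_bandwidth", "net_latency", "net_bandwidth"]

def interference_schedule(horizon_s: float, rate: float = 0.5, seed: int = 0):
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate)  # exponential inter-arrival times
        if t > horizon_s:
            break
        events.append({
            "time_s": round(t, 2),
            "resource": rng.choice(RESOURCES),
            "intensity": rng.uniform(0.0, 0.5),  # fraction of total capacity
        })
    return events

for e in interference_schedule(10.0)[:3]:
    print(e)
```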
The impact of cloud uncertainties can be even more serious for microservice applications, due to their complicated call graphs and the resulting inter-container communication patterns (Spark and Flink, 2017; Flink, 2017). Towards this end, we deploy an example microservice application, Sockshop (Sockshop, 2017), consisting of 10+ stateless and database microservices, which simulates an online shopping web service. The architecture of Sockshop is shown in Figure 3. It is evident that the Order microservice can be a performance bottleneck due to its connections with several other microservices. With the same resource configuration and workload, we compare the end-to-end latency of two affinity rules and show the Cumulative Distribution Function (CDF) in Figure 4. We find that if we forcefully isolate Order from other microservices (by setting node-affinity rules for the corresponding pods in Kubernetes), P90 latency is 26% worse than in the case where we try to colocate Order with other microservices in a best-effort manner. This finding further verifies our claim that the impact of non-workload uncertainties can be significant, and that amount-irrelevant resource orchestration decisions can also be deciding factors for application performance.
## 4. Drone design
In this section, we present Drone, our dynamic resource orchestrator for the containerized cloud. Starting with a brief introduction of contextual bandits and why it is a promising choice for the problem context, we then detail our design and algorithms under both public and private cloud settings. Finally, the implementation and domain-specific optimizations
are discussed, which practically complement our algorithmic contribution.
### Overview of Contextual Bandits
As briefly discussed in Sec. 2.2, it is natural to deduce the mapping from contextual bandits to the cloud resource orchestration problem. The ultimate goal is to dynamically adjust resource allocation decisions to optimize an objective value (e.g., performance and/or cost) in the presence of time-variant cloud uncertainties. Formally speaking, we want to find the best resource configuration \(x^{*}\) from action space \(\mathcal{X}\) with uncertainty context \(\omega\in\Omega\) such that the objective function \(f\) is optimized:
\[x^{*}=\operatorname*{arg\,max}_{x\in\mathcal{X}}f(x,\omega) \tag{1}\]
From this formulation, we can see that \(f\) depends not only on the decision variable \(x\), but also on the context \(\omega\). The output of \(f\) can be any scalar value that is of most interest to the user. Common choices include application performance indicators (e.g., latency, throughput, response time), utility, and cost. Note that (1) is also often formulated as a minimization problem if \(f\) is a cost function or captures latency/response time, but the essence of the problem remains unchanged. The action \(x\) and context \(\omega\) are vectors with domain-specific dimensions, containing all possible resource orchestration decisions and contextual parameters, respectively. We discuss the concrete dimensions we consider in our problem context in Sec. 5.1. Since the objective function has no structural relationship with the resource orchestration actions, as we point out in Sec. 3, we can only obtain an objective value by querying the corresponding action. In this case, an exhaustive search for the optimal action is clearly intractable, especially when the action space \(\mathcal{X}\subseteq\mathbb{R}^{d}\) is a continuous domain and the dimension \(d\) is high.
Towards this end, the contextual bandit approach significantly reduces the search cost by intelligently guiding the next action to search for in an iterative optimization process. Specifically, in each time step \(t\), the optimization agent receives a context \(\omega_{t}\) from the environment. Based on the context, the agent then chooses an action \(x_{t}\) from the action space \(\mathcal{X}\), executes this action, and then receives a reward \(y_{t}=f(x_{t},\omega_{t})+\epsilon_{t}\) as a result of the action taken, where \(\epsilon_{t}\) is a Gaussian noise \(\epsilon_{t}\sim\mathcal{N}(0,\sigma^{2})\). The noise term well encapsulates the fact that in practice we can only observe a perturbed function value due to unavoidable measurement error. The optimization process then proceeds on to time step \(t+1\) with the reward-input pair \((y_{t},x_{t},\omega_{t})\) appended to the history information to further guide searching in the next iteration.
To evaluate the quality of the actions taken, we use _cumulative regret_\(R_{T}\) which measures the cumulative performance gap over the complete algorithm running span of \(T\) time steps, a common metric to assess an online sequential optimization algorithm (Zhou et al., 2017):
\[R_{T}=\sum_{t=1}^{T}\left(\max_{x^{*}\in\mathcal{X}}f\left(x^{*},\omega_{t} \right)-f\left(x_{t},\omega_{t}\right)\right) \tag{2}\]
A desired property for an efficient online algorithm is to have _sub-linear regret growth:_\(\lim_{T\to\infty}R_{T}/T\to 0\), meaning that we can quickly find (near-)optimal actions so that the performance gap converges to zero relatively fast. As we will show in the following sections, Drone achieves sub-linear regret growth in both public and private cloud settings.
### Public Cloud: Cost-aware Performance Optimization
We first propose our contextual bandit-based algorithm to jointly optimize application performance and resource cost in public cloud environments where computational resources are unlimited.
**Why can we assume infinite resources?** It seems natural to assume that computational resources are infinite on public clouds, as previous works (Zhou et al., 2017; Zhou et al., 2017) also instinctively did. While the assumption is plausible given the massive scale of major cloud providers, it may not be readily justifiable from the perspective of individual users or small businesses. For instance, if users are at the edge of their cloud budget, they may not be willing to acquire more resources even if doing so would bring better application performance. In fact, this assumption can be rationalized by cost-saving incentives provided by public clouds, such as Spot Instances (Beng et al., 2016) and Burstable Instances (Beng et al., 2016) on AWS. Spot instances are preemptive, low-priority VMs offered at a considerably lower price than on-demand instances. Burstable instances are VMs that have the ability to "burst" to a significantly higher resource configuration than their normal capacity to handle ephemeral and infrequent workload peaks, which is much cheaper than on-demand instances with the same full capacity. We profile the cost-saving effects of spot and burstable instances by issuing the same batch processing (Sort) and microservice workloads, with the regular instance m5.large as the baseline. Table 2 depicts the normalized cost savings of running the same workload across cloud incentive combinations. We observe an up to 7.19x cost saving by employing burstable spot instances and 6.1x cost savings with spot instances alone, both showing notable cost efficiency
over on-demand regular instances. Therefore, by judiciously adopting these cloud incentives, one can expect a significant cost reduction achieving the same application performance, meaning that under the same budget, a significantly larger resource configuration search space is available, and this in turn justifies the infinite resources assumption.
Another interesting finding is that spot prices can vary drastically with time in an unpredictable manner. Figure 5 shows spot prices of three instance types over a 1-month time span, which exhibit no regular patterns and vary across instance types to a great extent. This suggests that the spot price is an additional contextual dimension to be considered which can greatly impact the resource cost.
**Problem formulation.** Given the assumption of unlimited resources, the optimization objective on public clouds is to keep a balance between application performance and the monetary resource cost. Formally, our optimization problem can be formulated as maximizing a reward function \(f\):
\[\max_{\mathbf{x}_{t}} f(x_{t},\omega_{t})=\alpha p(x_{t},\omega_{t})-\beta c(x_{t}, \omega_{t}) \tag{3}\] \[s.t. \mathbf{x}_{t}\in\mathcal{X},\omega_{t}\in\Omega,\quad\forall t \tag{4}\]
where \(p(x_{t},\omega_{t})\) is the application performance indicator which can be measured at the end of each time step \(t\); \(c(x_{t},\omega_{t})\) is the resource cost associated with the resource orchestration decision \(x_{t}\) and cloud uncertainties enclosed in the context \(\omega_{t}\). \(\alpha\) and \(\beta\) are configurable weights that capture a user's preference between performance and cost.
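As a minimal illustration, the reward in (3) can be computed as below. The normalization scales are our own addition, since \(p\) and \(c\) typically live on different numeric ranges and the weights \(\alpha\) and \(\beta\) are only meaningful after some rescaling.

```
# Weighted performance-cost tradeoff from (3); perf_scale/cost_scale are
# assumed normalizers, not part of the formulation above.
def reward(perf: float, cost: float, alpha: float = 1.0, beta: float = 0.5,
           perf_scale: float = 1.0, cost_scale: float = 1.0) -> float:
    return alpha * (perf / perf_scale) - beta * (cost / cost_scale)
```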
**How to guide the search process?** Through a sequential optimization process, contextual bandit-based algorithms are able to learn more about the objective function \(f\) in every iteration, using newly observed data resulting from evaluating \(f\) at point \((x_{t},\omega_{t})\). Therefore, a key design choice of contextual bandit algorithms is how to choose the next point to evaluate so as to learn the most about the objective function. Towards this purpose, we first need to place a surrogate model on \(f\) so that it can be efficiently updated iteratively. In Drone, we choose the Gaussian Process (GP) (Drone et al., 2017), a common choice adopted by prior works (Han et al., 2015; Done et al., 2017; Done et al., 2017; Done et al., 2017). As a non-parametric model, a GP assumes the function is sampled from a Gaussian distribution, which adds a minimal smoothness assumption on the objective function such that function values evaluated at close inputs will also be close. Formally, let \(z\in\mathcal{X}\times\Omega\) be a joint action-context pair; a GP\((\mu,k)\) is fully specified by its mean function \(\mu(z)=\mathbb{E}[f(z)]\) and covariance or kernel function \(k(z,z^{\prime})=\mathbb{E}[(f(z)-\mu(z))(f(z^{\prime})-\mu(z^{\prime}))]\), which acts as the data-independent prior distribution. Now define \(y_{t}=f(z_{t})+\epsilon_{t}\) as a noisy sample of the true function value \(f(z_{t})\), and let \(\mathbf{y}_{T}=[y_{1},y_{2},\cdots,y_{T}]\) be the values evaluated at points \(Z_{T}=[z_{1},z_{2},\cdots,z_{T}]\) (the past data points). Given a new \(z^{*}\) at which we would like to infer the function value \(f^{*}\), we obtain a closed-form posterior distribution, which is also a GP, with the following mean and variance:
\[\mu_{T}\left(z^{*}\right)=\mathbf{k}_{T}\left(z^{*}\right)^{\top}\left(\mathbf{K}_{T}+\sigma^{2}\mathbf{I}\right)^{-1}\mathbf{y}_{T} \tag{5}\] \[\sigma_{T}^{2}\left(z^{*}\right)=k\left(z^{*},z^{*}\right)-\mathbf{k}_{T}\left(z^{*}\right)^{\top}\left(\mathbf{K}_{T}+\sigma^{2}\mathbf{I}\right)^{-1}\mathbf{k}_{T}\left(z^{*}\right) \tag{6}\]
where \(\mathbf{k}_{T}(z^{*})=[k(z_{1},z^{*}),k(z_{2},z^{*}),\cdots,k(z_{T},z^{*})]\) and \([\mathbf{K}_{T}]_{ij}=k\left(z_{i},z_{j}\right)\) is the kernel matrix. We choose the widely adopted Matern kernel with \(\nu=\frac{3}{2}\) following empirical practices. These analytical equations allow us to efficiently infer the function value at a new point based on previous observations and action-context pairs.
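A self-contained numpy sketch of the posterior update (5)-(6) with the Matern \(\nu=3/2\) kernel is given below; the hyperparameters (unit length scale, noise level) and the toy data are illustrative rather than the values used in Drone.

```
import numpy as np

def matern32(Z1, Z2, length_scale=1.0):
    # Matern nu=3/2: k(r) = (1 + sqrt(3) r / l) * exp(-sqrt(3) r / l)
    d = np.linalg.norm(Z1[:, None, :] - Z2[None, :, :], axis=-1)
    s = np.sqrt(3.0) * d / length_scale
    return (1.0 + s) * np.exp(-s)

def gp_posterior(Z, y, z_star, noise=0.1):
    """Posterior mean/variance at z_star given past observations (Z, y)."""
    K = matern32(Z, Z) + noise**2 * np.eye(len(Z))   # K_T + sigma^2 I
    k_star = matern32(Z, z_star)                      # k_T(z*)
    alpha = np.linalg.solve(K, y)
    mu = k_star.T @ alpha                             # eq. (5)
    v = np.linalg.solve(K, k_star)
    var = matern32(z_star, z_star) - k_star.T @ v     # eq. (6)
    return mu, np.diag(var)

Z = np.array([[1.0, 0.2], [2.0, 0.4], [3.0, 0.1]])    # past (action, context)
y = np.array([0.3, 0.8, 0.5])                         # noisy rewards
mu, var = gp_posterior(Z, y, np.array([[2.5, 0.3]]))
print(mu, var)
```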
Another key element of contextual bandits is how to suggest the next point to evaluate so as to learn the most about the objective function. This is achieved by choosing the point maximizing the _acquisition function_, a function that assesses the quality of an action point and is much cheaper to optimize than the original objective function. Among popular choices such as Probability of Improvement (PI), Expected Improvement (EI) and Thompson Sampling (TS) (Done et al., 2017; Done et al., 2017), we choose the Upper Confidence Bound (UCB) (Done et al., 2017), whose update rule is given as follows:
\[x_{t}=\operatorname*{arg\,max}_{x\in\mathcal{X}}\ \mu_{t-1}(x,\omega_{t})+ \sqrt{\zeta_{t}}\sigma_{t-1}(x,\omega_{t}) \tag{7}\]
An important rationale behind choosing UCB, as can be perceived from the equation, is that it efficiently balances _exploration_ of undiscovered resource configurations and _exploitation_ of configurations that have already been observed to be well-performing. The hyperparameter \(\zeta_{t}\) serves to balance the tradeoff: choosing a small \(\zeta_{t}\) indicates we value the
| | m5.large | Spot only | Spot + Burstable |
| --- | --- | --- | --- |
| Batch jobs | 1x | 6.10x | 7.19x |
| Microservices | 1x | 5.28x | 6.73x |
Table 2. Normalized cost savings from cloud incentives.
Figure 5. Spot instance prices from April 2023 for m5.16xlarge, c5.18xlarge and r5.16xlarge instance types on AWS.
mean term more and hence will more likely select an action close to one that previously led to better performance; choosing a large \(\zeta_{t}\), on the other hand, focuses more on the variance term so that under-explored actions with higher uncertainty are more likely to be selected. Moreover, in the GP setting, UCB is superior in terms of both computational efficiency (Yamaguchi et al., 2017; Zhang et al., 2018) and convergence rate (Zhu et al., 2019) compared to alternatives such as GP-TS.
Combining these design choices, we summarize our GP-UCB-based online resource orchestration algorithm in Algorithm 1.
```
Require: Performance-cost balance weights \(\alpha,\beta\); Action space \(\mathcal{X}\)
1:\(S_{0}\leftarrow\emptyset\); \(\triangleright\)\(S_{t}\) stores action-context pairs up to time \(t\)
2:\(\mathbf{y}_{0}\leftarrow\emptyset\); \(\triangleright\)\(\mathbf{y}_{t}\) stores noisy rewards up to time \(t\)
3:for\(t=1,2,\cdots\)do
4: Observe current context \(\omega_{t}\);
5: Select resource configuration \(x_{t}\) according to (7);
6: Observe noisy reward \(y_{t}=f(x_{t},\omega_{t})+\epsilon_{t}\);
7:\(S_{t}\gets S_{t-1}\cup(x_{t},\omega_{t})\);
8:\(\mathbf{y}_{t}\leftarrow\mathbf{y}_{t-1}\cup y_{t}\);
9: Update \(\mu_{t}\) and \(\sigma_{t}\) by posterior update rule (5)-(6);
10:endfor
```
**Algorithm 1** Contextual Bandits for Public Clouds
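A minimal Python sketch of Algorithm 1 is shown below, using scikit-learn's GP regressor with a Matern \(\nu=3/2\) kernel. The candidate grid, context sampler, synthetic objective, and the simplified \(\zeta_t\) schedule are stand-ins for the real cluster interaction, not Drone's actual implementation.

```
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
candidates = np.array([[c, m] for c in (2, 4, 8) for m in (8, 16, 32)],
                      dtype=float)                     # (cpu cores, RAM GB)

def objective(x, w):                                   # stand-in for f(x, w)
    return -abs(x[0] - 4) - 0.1 * abs(x[1] - 16) - w

S, Y = [], []                                          # history of (z_t, y_t)
gp = GaussianProcessRegressor(kernel=Matern(nu=1.5), alpha=0.1**2)
for t in range(1, 21):
    w = rng.uniform(0, 1)                              # observe context w_t
    if S:
        gp.fit(np.array(S), np.array(Y))               # posterior update (5)-(6)
        Zc = np.hstack([candidates, np.full((len(candidates), 1), w)])
        mu, sd = gp.predict(Zc, return_std=True)
        zeta = 2.0 * np.log(t + 1)                     # simplified zeta_t schedule
        x = candidates[np.argmax(mu + np.sqrt(zeta) * sd)]   # UCB rule (7)
    else:
        x = candidates[rng.integers(len(candidates))]  # cold start
    y = objective(x, w) + rng.normal(0, 0.1)           # noisy reward y_t
    S.append([*x, w]); Y.append(y)
print("best observed:", S[int(np.argmax(Y))][:2])
```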
**Regret analysis.** A desired property of a bandit algorithm is to have sub-linear cumulative regret growth. Our algorithm achieves this with high probability by setting appropriate hyperparameters, as shown in the following theorem:
Theorem 4.1: _Let \(\delta\in(0,1)\). For all \(T\geq 1\), the cumulative regret of Alg. 1 is upper bounded by \(O(\sqrt{T\gamma_{T}\zeta_{T}})\) with high probability. Precisely,_
\[Pr\{R_{T}\leq\sqrt{C_{1}T\gamma_{T}\zeta_{T}}+2\}\geq 1-\delta \tag{8}\]
_where \(C_{1}=\frac{8}{\log(1+\sigma^{-2})}\), \(\zeta_{t}=2B^{2}+300\gamma_{t}\log^{3}(\frac{t}{\delta})\)._
Here, \(\gamma_{T}\) is the maximum information gain, which is of order \(O(T^{l}\log T)\) with \(l<1\). \(B\geq||f||_{k}\) is an upper bound on the Reproducing Kernel Hilbert Space (RKHS) norm of \(f\), a common assumption in bandit algorithms. Due to space constraints, we refer the reader to (Han et al., 2017) for proofs of the theorems.
### Private cloud: Resource-constrained Performance Optimization
Due to security or data privacy concerns, organizations often resort to a private cloud solution instead of running their jobs on a public cloud. A private cloud is a self-hosted computing cluster that the organization has full control of. The organization is also able to fully unlock the power of computing nodes by customizing their hardware and software architectures tailored to its own needs, a degree of customization that is often restricted on public clouds. Compared to the pay-as-you-go model on public clouds, organizations pay the resource cost upfront when purchasing the hardware to build the private cloud. Hardware is then typically refreshed only every several years, when it becomes outdated or the business has significantly expanded. In this case, any resource orchestration decision must respect the private cloud's total resource limit, which is a hard constraint. The optimization objective under such scenarios is thus optimizing application performance subject to the hard resource constraints (Han et al., 2017; Zhang et al., 2018; Zhang et al., 2018). Formally, the resource orchestration optimization problem in the private cloud can be formulated as:
\[\max_{x_{t}}\ p(x_{t},\omega_{t}) \tag{9}\] \[s.t.\ \ x_{t}\in\mathcal{X}_{t}^{S},\ \omega_{t}\in\Omega,\quad\forall t \tag{10}\]
where \(\mathcal{X}_{t}^{S}\) is the _safe_ set from the action domain at time step \(t\) so that actions can only be selected from the safe set to comply with resource constraints. Specifically, denote \(P_{max}\) as the resource constraint and let \(P(x_{t},\omega_{t})\) be the total resource usage resulting from action \(x_{t}\) and context \(\omega_{t}\) at time step \(t\), we have
\[\mathcal{X}_{t}^{S}=\{x_{t}\in\mathcal{X}:P(x_{t},\omega_{t})\leq P_{\max}\} \tag{11}\]
Note that \(P(x_{t},\omega_{t})\) and \(P_{\max}\) contain multiple dimensions in practice. Each of the dimensions is a resource type (e.g., CPU, RAM, and network bandwidth) and has its own limit in a private cloud. For presentation brevity, here, we abstract them as an overall constraint, without loss of generality. Moreover, \(P(x_{t},\omega_{t})\) is also an unknown function since it depends on the contextual variables \(\omega_{t}\) as well. This is reasonable since resource contention is common in a shared cloud (a private cloud can also be shared within the organization across several development teams). As a result, the performance indicator function \(p(x_{t},\omega_{t})\) and the resource usage function \(P(x_{t},\omega_{t})\) need to be modelled separately. At each time step \(t\) throughout the optimization process, our algorithm needs to select an action \(x_{t}\) from the safe set \(\mathcal{X}_{t}^{S}\) so that the performance \(p(x_{t},\omega_{t})\) is optimized. Towards this end, we use two GPs to model the performance function and the resource function, respectively. Reusing the notation \(z\in\mathcal{X}^{S}\times\Omega\) as a joint safe action-context pair, at each time step \(t\), noisy values of both functions are observed as \(y_{t}=p(z_{t})+\epsilon_{t}\) and \(\phi_{t}=P(z_{t})+\epsilon_{t}\). We now present our solution in Algorithm 2.
The core idea of this algorithm is a two-phase process. In the first phase, starting from a guaranteed safe set, the algorithm is dedicated to exploration by randomly querying actions to gather more information to characterize the safe set. The second phase acts similarly to Alg. 1 which balances
exploration and exploitation by following the GP-UCB procedure to update the posterior GP model. However, on top of the standard GP-UCB algorithm, it leverages information from previous exploration to iteratively expand the safe set based on the lower confidence interval of the resource usage function \(P\) (Line 14). We show through the following theorem that Alg. 2 also achieves sub-linear cumulative regret growth:
Theorem 4.2: _Let \(\delta\in(0,1)\). For sufficiently large \(T\geq 1\), the cumulative regret of Alg. 2 is upper bounded by \(O(\sqrt{T\gamma_{T}\zeta_{T}})\) with high probability. Precisely,_
\[Pr\{R_{T}\leq BT^{\prime}+\sqrt{C_{1}T\gamma_{T}\zeta_{T}}\}\geq 1-\delta \tag{12}\]
_where the parameters \(C_{1},\gamma_{T}\) and \(\zeta_{T}\) take the same value as the previous theorem._
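The full listing of Algorithm 2 is not reproduced here, but the following sketch conveys the safe-set filtering at its core: separate GPs model performance \(p\) and resource usage \(P\), and an action is admitted only if its confidence bound on usage respects \(P_{\max}\). We conservatively use the upper confidence bound on usage here; the fallback rule, seed point, and hyperparameters are our simplifications.

```
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def select_safe_action(cands, ctx, gp_perf, gp_usage, p_max, beta=2.0):
    Zc = np.hstack([cands, np.full((len(cands), 1), ctx)])
    mu_p, sd_p = gp_perf.predict(Zc, return_std=True)
    mu_u, sd_u = gp_usage.predict(Zc, return_std=True)
    safe = mu_u + beta * sd_u <= p_max           # estimate of X_t^S as in (11)
    if not safe.any():                           # fall back to safest-looking action
        return cands[np.argmin(mu_u + beta * sd_u)]
    ucb = np.where(safe, mu_p + beta * sd_p, -np.inf)
    return cands[np.argmax(ucb)]                 # UCB over the safe set only

gp_p = GaussianProcessRegressor(kernel=Matern(nu=1.5), alpha=0.01)
gp_u = GaussianProcessRegressor(kernel=Matern(nu=1.5), alpha=0.01)
Z0 = np.array([[2.0, 8.0, 0.5]])                 # one known-safe seed point
gp_p.fit(Z0, np.array([0.5])); gp_u.fit(Z0, np.array([10.0]))
cands = np.array([[2.0, 8.0], [4.0, 16.0], [8.0, 32.0]])
print(select_safe_action(cands, 0.5, gp_p, gp_u, p_max=20.0))
```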
### Drone Implementation
We implemented a prototype of Drone as an integrable resource orchestrator on top of Kubernetes. The overall system architecture of Drone is depicted in Figure 6, which contains the following components:
**Monitoring Module.** The monitoring module is a key component of the Drone framework. It is responsible for periodically collecting both performance metrics and contextual information from the cloud environment. In Drone, we choose Prometheus (Zhou et al., 2017) for this purpose. Prometheus is a market-leading monitoring system shipping with a time series database and powerful querying capabilities through its specialized query language PromQL. It is able to collect system-level real-time metrics such as CPU, RAM and network bandwidth usage through node-exporter (Tran et al., 2017), along with other potential contextual variables. By exposing a metrics exporter, applications also enable Prometheus to collect their performance metrics, such as throughput and response time. The collected metrics are stored in the time series database, which can be efficiently queried upon request. The collected real-time contextual information, along with stored history performance data and action-context pairs, provides the input that guides the optimization process of Drone's algorithms.
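As an illustration, contextual metrics can be pulled from Prometheus' instant-query HTTP endpoint; the service address and the PromQL expression below are placeholders for a concrete deployment.

```
# Minimal sketch of querying Prometheus' /api/v1/query endpoint with PromQL.
# The metric assumes cAdvisor container metrics are being scraped.
import requests

PROM_URL = "http://prometheus.monitoring.svc:9090"   # assumed service address

def query_prometheus(promql: str):
    resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": promql})
    resp.raise_for_status()
    return resp.json()["data"]["result"]

# e.g., per-pod CPU usage over the last minute (query is illustrative)
cpu = query_prometheus(
    'sum(rate(container_cpu_usage_seconds_total{namespace="default"}[1m])) by (pod)'
)
```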
**Application Identifier.** The application identifier helps identify the type of the submitted application so that tailored resource orchestration decisions can be made for batch processing jobs and microservices, respectively. While users can explicitly specify the application type, as discussed in Sec. 4.5, the application identifier is also able to automatically detect the application type if it is evident in the deployment specification. For example, a Spark application has an exclusive kind: SparkApplication specification field which can be easily utilized by the application identifier.
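A toy version of this check might look as follows; the fallback label is hypothetical, not part of the Kubernetes or Spark Operator APIs.

```
# Inspect the submitted manifest for type hints such as kind: SparkApplication.
def identify_app(manifest: dict) -> str:
    if manifest.get("kind") == "SparkApplication":
        return "batch"
    labels = manifest.get("metadata", {}).get("labels", {})
    # heuristic fallback (our assumption): user-supplied label, else default
    return labels.get("drone/app-type", "microservice")

print(identify_app({"kind": "SparkApplication"}))   # -> batch
```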
**Objective and Resource Enforcer.** Depending on whether the environment is a public cloud or a private cloud, this module specifies the optimization objective for the optimization engine. Users can tune model parameters here based on their needs, such as performance-cost preference coefficients in the public cloud setting and resource limit in the private cloud setting. In a private cloud, if the user does not specify the desired resource limit, the enforcer will set the limit according to the cluster resource usage.
Figure 6. Drone Architecture.
**Optimization Engine.** As the core part of the framework, the optimization engine is responsible for carrying out the optimization process. Based on the cloud setting set by the application identifier and the enforcer, the optimization engine continuously receives performance and contextual metrics from the monitoring module and suggests a resource orchestration action in each decision period. The action is a combination of container rightsizing and scheduling. Actions are executed by directly interacting with the Kubernetes API server to minimize additional overhead.
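For concreteness, a rightsizing action could be executed through the official Kubernetes Python client roughly as follows; the deployment name, namespace, and the assumption that the container shares the deployment's name are placeholders.

```
# Sketch of applying a rightsizing action via the Kubernetes API server,
# using a strategic-merge patch on the target deployment.
from kubernetes import client, config

def apply_rightsizing(deployment: str, namespace: str, cpu: str, mem: str):
    config.load_kube_config()  # or load_incluster_config() inside the cluster
    body = {"spec": {"template": {"spec": {"containers": [{
        "name": deployment,   # assumes the container shares the deployment name
        "resources": {"requests": {"cpu": cpu, "memory": mem},
                      "limits":   {"cpu": cpu, "memory": mem}}}]}}}}
    client.AppsV1Api().patch_namespaced_deployment(deployment, namespace, body)

# apply_rightsizing("social-network-order", "default", "2", "4Gi")
```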
### Cloud-specific Optimizations
On top of the algorithmic efforts, we also make practical optimizations tailored to the cloud resource orchestration problem context to further improve Drone's usability and efficiency in practice.
**Encoding of actions and contexts.** Unlike CPU cores and RAM allocation/usage, which take numerical values and thus naturally fit in our contextual bandit-based framework, some action and contextual variables do not readily manifest a numerical representation, such as container scheduling decisions from the action space and possible traffic bottlenecks from the context space. We address this issue by scalarizing these variables with numerical encodings. For example, we encode the scheduling decisions as a sub-vector \(x=[x_{1},x_{2},\cdots,x_{m}]\) of the entire decision vector, where \(m\) is the number of computing nodes or VMs on which a container can be scheduled. The elements \(x_{i}\in\mathbb{N}\) represent the number of containers that should be scheduled to node \(i\). Note that having an individual entry for each single node may lead to dimension explosion when the cloud scale is large. However, in practice, we can further group nodes by physical distance into zones, within which nodes perceive almost no network latency when communicating with each other. The scheduling decisions are thus executed at the zone level, significantly reducing the dimensionality to the number of zones. This is particularly useful when the cloud is geographically distributed, where high latency can be incurred by inter-zone communication. For traffic between nodes, we can use an integer \(a\in[0,2^{m}-1]\) as a bitmask to encode the possible traffic contention: each of the \(m\) nodes maps to one bit, and there are \(2^{m}\) such subsets in total (which follows trivially from the binomial theorem). A sketch of both encodings is given below.
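The following minimal Python sketch shows both encodings; the zone names and link indices are illustrative.

```
# Per-zone scheduling sub-vector and a bitmask integer for traffic contention.
def encode_scheduling(pods_per_zone: dict, zones: list) -> list:
    # x_i = number of containers scheduled to zone i
    return [pods_per_zone.get(z, 0) for z in zones]

def encode_contention(congested: set, m: int) -> int:
    # one bit per node/zone: a in [0, 2^m - 1]
    a = 0
    for i in congested:
        a |= 1 << i
    return a

print(encode_scheduling({"zone-a": 3, "zone-c": 1}, ["zone-a", "zone-b", "zone-c"]))
print(encode_contention({0, 2}, m=3))   # -> 5 (bits 0 and 2 set)
```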
**Characterization of applications.** We consider two representative application profiles for Drone, namely batch processing jobs and long-running web services in the form of microservices. Also referred to as Best Effort (BE) and Latency Critical (LC) applications in the recent literature (K
this proves to be a good selection with a low error rate across workloads. As a safety measure, we also implement a failure recovery mechanism: if a job errors out with no metrics produced within a pre-defined timeout period, it is restarted with a higher resource configuration, set at the midpoint between the previous trial and the maximum resources available.
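The recovery rule amounts to a midpoint step toward the resource ceiling, as in this small sketch (field names are illustrative).

```
# On timeout with no metrics, restart at the midpoint between the failed
# trial's configuration and the maximum available resources.
def recovery_config(failed: dict, maximum: dict) -> dict:
    return {k: (failed[k] + maximum[k]) / 2 for k in failed}

print(recovery_config({"cpu": 4, "ram_gb": 16}, {"cpu": 8, "ram_gb": 30}))
# -> {'cpu': 6.0, 'ram_gb': 23.0}
```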
## 5. Experimental Evaluation
### Experimental Setup
**Testbed setup.** Our testbed cluster is hosted on Compute Canada (Candes et al., 2017), a shared national cloud platform. The cluster consists of 16 virtual machines with one control node and 15 worker nodes, a scale comparable to related work. The control node is equipped with 16 vCPU cores and 180GB of RAM, while the worker nodes have 8 vCPU cores and 30GB of RAM. Each node runs Ubuntu 20.04.5 LTS with Linux kernel v5.4. Nodes are interconnected by 10Gb Ethernet in the same data center. Kubernetes v1.25 is deployed as the container orchestration platform.
**Applications.** Representative applications for both batch processing jobs and microservices are deployed to evaluate Drone. For batch jobs, we benchmark three Spark applications that stress different computational resources, including (1) Spark-Pi, a pi-computation job with configurable number of iterations to control precision as a representative compute-intensive job, (2) PageRank as a jointly memory- and network-intensive job and (3) Logistic Regression to serve as a typical ML training task. For microservices, we use the _Social Network_ application containing 36 microservices from DeathStarBench (K
Results are shown in Figure 7. First, Figure 7(a) depicts the performance measurements for the same LR job running with different schemes in the public cloud setting. Starting from the same initial configuration, it can be seen that all three bandit algorithm-based approaches are able to improve application performance by learning the performance-input relationship function over time. On the other hand, as a completely reactive rule-based autoscaler, the default Kubernetes solution cannot benefit from history information, and hence only manages to maintain a low level of performance, which is slightly perturbed over time by environment uncertainties and measurement errors. Drone significantly outperforms Cherrypick and Accordia by adaptively learning from the contextual variables, while Cherrypick and Accordia are oblivious to such environment changes and can only leverage information from their resource decisions, i.e., the action space. The benefit of considering contextual information can further be observed from the post-convergence behaviour when \(T>10\). Both Cherrypick and Accordia sporadically experience performance oscillations, while Drone is able to stabilize after convergence. This is because Cherrypick and Accordia regard any performance feedback as the exclusive result of the actions taken. Therefore, whenever a performance discrepancy is observed, they adjust their resource allocations even though the discrepancy is primarily owing to changes in contextual cloud uncertainties. It is also worth noting that Drone converges slightly more slowly, at the 10th iteration, compared to the other two schemes, which converge around the 7th iteration, because Drone's search space is larger due to additional dimensions in the action space (e.g., the scheduling vector) and the new contextual dimensions. This is a common performance-dimension tradeoff which we briefly discuss in Sec. 6.
Figure 7(b) depicts the normalized resource cost saving compared to the Kubernetes native solution across all three representative batch workloads. While all three frameworks show cost-saving benefits thanks to their cost-aware problem formulation, Drone is the most cost-efficient, with over 20% cost savings across workloads, since it can more accurately search for the (near-)optimal resource configuration based on information from both performance feedback and environment contexts, without needing to over-allocate resources to maintain reasonable performance. Moreover, Drone makes its own scheduling decisions by incorporating the scheduling sub-vector into its action space. Thus, even if given the same total amount of resources, Drone also learns the best strategy for assigning the execution pods to computing nodes, which Cherrypick and Accordia cannot achieve. This effect is most evident when benchmarking PageRank, a network-intensive workload, where Drone achieves an average of 53% resource cost saving compared to the Kubernetes native solution, significantly higher than the 20% from Accordia and 17% from Cherrypick.
Similar benefits are also manifested in the resource-limited private cloud setting. We focus on Drone's impact on memory limit compliance since memory is a _non-negotiable_ resource type. Unlike CPU and network bandwidth where inadequate allocation would cause throttling (for CPU) or congestion (for network) but applications are still available, an application that requires more memory than allocated will incur an out-of-memory (OOM) error and the hosting pod will simply be killed and rescheduled if possible. OOM errors can significantly jeopardize application availability and degrade application performance. In our preliminary experiments in Sec. 3, a Spark job with insufficient memory allocation can experience a 20x longer elapsed time and even
| Framework | Spark-Pi Time (s) | Spark-Pi # Errors | LR Time (s) | LR # Errors | PageRank Time (s) | PageRank # Errors |
| --- | --- | --- | --- | --- | --- | --- |
| k8s | 53±2 | 0 | 328±17 | 1 | 1436±88 | 4 |
| Accordia | 46±1 | 0 | 303±26 | 17 | 1172±95 | 98 |
| Cherrypick | 43±1 | 0 | 298±24 | 13 | 1226±102 | 107 |
| Drone | 41±1 | 0 | 226±9 | 5 | 785±42 | 9 |
Table 3. Drone significantly reduces OOM errors by conforming with resource constraints.
Figure 7. Comparison between Drone and alternatives for batch processing jobs.
get halted in an intermediate stage and fail to make progress. We set the memory limit to 65% of the total memory capacity available in the cluster, run all three representative batch workloads, and record the memory utilization metric as shown in Figure 7(c). We can observe that only Drone manages to abide by the memory constraint in the long run, showing an approximately 16% lower memory profile than the baselines, aside from the first few exploration rounds, where Drone actively explores to identify the feasible safe action space. To see the benefit of resource limit compliance in action, we run memory-stressing tasks in parallel using stress-ng to simulate significant resource contention, occupying around 30% of total memory. Table 3 summarizes the performance and number of Spark executor errors in different settings. We can observe that the Kubernetes native solution suffers the fewest OOM errors by using memory utilization as one of its scaling rules. Therefore, it always respects the resource constraints and even suspends invoking executor pods when it detects memory is under stress, which in part contributes to its low performance. The memory constraint-oblivious solutions Cherrypick and Accordia, on the other hand, experience a large number of executor errors, especially for memory-intensive jobs such as LR and PageRank. In this case, Drone is able to fully utilize the algorithmic effectiveness of contextual bandits to optimize performance while complying with resource constraints, achieving up to 36% performance improvement and 10x fewer OOM errors compared to Cherrypick and Accordia.
### Drone for Microservices
We also evaluate the efficacy of Drone in orchestrating resources for microservice applications by performing end-to-end experiments. Driving the SocialNet microservice benchmark with a realistic workload trace as shown in Figure 8(a), we collected aggregated performance metrics over the entire application running span. Figure 8(b) shows the cumulative distribution of RAM allocation for Drone and the other three baselines. As hybrid autoscalers, both SHOWAR and Autopilot are able to reduce the memory footprint compared to Kubernetes HPA by combining vertical and horizontal autoscaling to mitigate over-allocation. However, Drone outperforms the alternatives by incorporating a much broader array of factors to more accurately model the performance-action relationship, instead of relying heavily on past resource usage information as SHOWAR and Autopilot do. Specifically, Drone is able to serve around 60% of user requests within 50GB of overall RAM allocation, which is 55% less than SHOWAR and 60% less than Autopilot, manifesting an outstanding resource-saving capability.
Figure 8(c) depicts the end-to-end latency distribution across frameworks. Autopilot exhibits performance similar to Kubernetes HPA since they share a similar reactive scaling strategy based on recent resource statistics. Specifically designed for microservices, SHOWAR performs better by identifying microservice correlations to create locality-oriented affinity rules, which makes it more likely to schedule closely related microservices to the same node and hence reduces latency. Drone, on the other hand, goes a step further by encoding more efficient scheduling opportunities into its decision vector, so it effectively performs both rightsizing (i.e., autoscaling that prioritizes vertical scaling) and scheduling. As an integrated resource orchestration solution, Drone lowers the P90 latency by 37% compared to SHOWAR and by 45% compared to Autopilot.
Having run the experiment in the private cloud setting, we also observe a similar effect as in the previous subsection. Table 4 records the total number of dropped user requests over the running span. Unlike in the batch processing case where the Kubernetes solution manages to maintain a low error rate by not invoking pods when memory is low, in this
| | k8s | Autopilot | SHOWAR | Drone |
| --- | --- | --- | --- | --- |
| # of dropped packets | \(4.8\times 10^{4}\) | \(3.4\times 10^{4}\) | \(1.4\times 10^{4}\) | 7809 |
Table 4. Number of dropped requests.
Figure 8. Comparison between Drone and alternatives for microservices.
user-facing microservice case, it experiences the largest number of packet drops due to poor resource allocation decisions. Again, Drone incurs the least number of packet drops thanks to its resource limit-aware algorithm which progressively learns about the safe set for resource orchestration decisions.
## 6. Discussion
**Application-level configuration tuning.** An application's performance can also greatly depend on its own configurable parameters. For example, Xin et al. (Xin et al., 2017) identify 38 performance-deciding parameters for SparkQL. While Drone is not readily applicable to application-level parameter tuning out of the box, the underlying contextual bandit foundation naturally transfers to that setting, as has recently been explored in the database research community (see Sec. 2.2). In fact, Drone operates at the lower level of hardware resource orchestration and can be used in parallel with other efficient application-level configuration tuning techniques to jointly optimize an application's performance.
**Tradeoff between precision and cost.** In theory, the function modelling capability of bandit algorithms can always benefit from more information. This is also true in our resource orchestration context. Incorporating more dimensions for tunable parameters from the action space, such as disk I/O and last level cache (LLC) bandwidth, or more contextual information, such as the graph structure of the running microservices, would help Drone more accurately characterize the complex coupling of performance, action and environment. However, it is well known that bandit algorithms (especially their continuous variant, Bayesian Optimization) tend to perform poorly in high dimensions (Zhao et al., 2018). Therefore, in practice, we need to selectively incorporate the "more important" dimensions with domain knowledge and employ several optimizations (see Sec. 4.5) to make the algorithms practically efficient. In fact, as different applications and workloads have divergent resource request profiles, it would be interesting to investigate how to dynamically pick the most critical dimensions based on application and workload properties. We leave this as future work for Drone.
**Overhead of Drone.** Drone is designed to embrace the latest cloud paradigms and technologies, working seamlessly with the Kubernetes ecosystem. It utilizes the Prometheus-based Kubernetes monitoring stack for metrics collection and modifies resource configurations by directly communicating with the Kubernetes API server and updating the cgroup configuration values for the pods of concern if possible. Thanks to the optimizations we employ, the computation time for each iteration in the online mode is on the order of seconds, well within the metrics updating interval. There is also no additional cost of potential container migration during scheduling since Drone follows the standard Kubernetes-native rolling-update procedure. Therefore, minimal overhead is incurred for using Drone.
**Limitations.** One major limitation of Drone is its insufficient capability to deal with "flash crowds", workloads that burst to a significantly higher level in a very short period of time (e.g., seconds). This situation inherently breaks the Gaussian process prior assumption on the function, and the intrinsic limitations of iterative algorithms prevent Drone from reacting quickly to such sudden changes. Fortunately, such cases are rare in reality, and cloud providers often prepare backup resources for over-allocation in addition to their routine resource allocation frameworks. Moreover, Drone has yet to achieve its full potential for microservices, since it is oblivious to the microservice dependency graph structure, which has been shown to be instrumental in microservice-oriented resource management (Han et al., 2015; Zhan et al., 2016; Zhan et al., 2017; Zhan et al., 2018; Zhan et al., 2019). Efficiently integrating dependency information into Drone without incurring significant overhead would be another promising direction to explore.
## 7. Conclusions
In this paper, we present Drone, a resource orchestration framework specifically designed for the containerized cloud. Based on recent advances in contextual bandit algorithms, Drone encapsulates various cloud uncertainties as contextual parameters to aid the search process for optimal resource orchestration decisions. The uncertainty-aware approach enables Drone to progressively balance the performance and resource cost tradeoff in a shared public cloud, and optimize performance while adhering to resource constraints in a resource-limited private cloud. Our empirical analysis shows that Drone achieves up to 45% performance improvement and 20% resource cost savings compared to state-of-the-art alternatives.
|
2308.16493 | Expanding Frozen Vision-Language Models without Retraining: Towards
Improved Robot Perception | Vision-language models (VLMs) have shown powerful capabilities in visual
question answering and reasoning tasks by combining visual representations with
the abstract skill set large language models (LLMs) learn during pretraining.
Vision, while the most popular modality to augment LLMs with, is only one
representation of a scene. In human-robot interaction scenarios, robot
perception requires accurate scene understanding by the robot. In this paper,
we define and demonstrate a method of aligning the embedding spaces of
different modalities (in this case, inertial measurement unit (IMU) data) to
the vision embedding space through a combination of supervised and contrastive
training, enabling the VLM to understand and reason about these additional
modalities without retraining. We opt to give the model IMU embeddings directly
over using a separate human activity recognition model that feeds directly into
the prompt to allow for any nonlinear interactions between the query, image,
and IMU signal that would be lost by mapping the IMU data to a discrete
activity label. Further, we demonstrate our methodology's efficacy through
experiments involving human activity recognition using IMU data and visual
inputs. Our results show that using multiple modalities as input improves the
VLM's scene understanding and enhances its overall performance in various
tasks, thus paving the way for more versatile and capable language models in
multi-modal contexts. | Riley Tavassoli, Mani Amani, Reza Akhavian | 2023-08-31T06:53:55Z | http://arxiv.org/abs/2308.16493v1 | # Expanding Frozen Vision-Language Models without Retraining: Towards Improved Robot Perception
###### Abstract
Vision-language models (VLMs) have shown powerful capabilities in visual question answering and reasoning tasks by combining visual representations with the abstract skill set large language models (LLMs) learn during pre-training. Vision, while the most popular modality to augment LLMs with, is only one representation of a scene. In human-robot interaction scenarios, robot perception requires accurate scene understanding by the robot. In this paper, we define and demonstrate a method of aligning the embedding spaces of different modalities (in this case, inertial measurement unit (IMU) data) to the vision embedding space through a combination of supervised and contrastive training, enabling the VLM to understand and reason about these additional modalities without retraining. We opt to give the model IMU embeddings directly over using a separate human activity recognition model that feeds directly into the prompt to allow for any nonlinear interactions between the query, image, and IMU signal that would be lost by
mapping the IMU data to a discrete activity label. Further, we demonstrate our methodology's efficacy through experiments involving human activity recognition using IMU data and visual inputs. Our results show that using multiple modalities as input improves the VLM's scene understanding and enhances its overall performance in various tasks, thus paving the way for more versatile and capable language models in multi-modal contexts.
keywords: Multi-modal visual language models, Robot perception, Contrastive Learning
## 1 Introduction
Multi-modal research in vision, audio and language has gained traction in recent years[1; 2], and now with current studies showing that Large language models (LLMs) have the capabilities of complex question answering and reasoning [3], there has been an influx of attention towards utilizing multi-modal LLMs. Recent research on vision-language models has further shown these reasoning capabilities can be extended to other modalities [4]. In this paper, we propose a method that extends frozen, pretrained visual-language models to understand inertial measurement unit (IMU) data while being extensible to any other modality. This method of extending pretrained models without retraining or finetuning reduces training costs dramatically in an era of deep learning where it has become infeasible to train most models from scratch for the majority of researchers and developers [5]. At these large sizes, models can learn abstract, generalizable reasoning skills that are difficult to replicate
in smaller models [6]. Specifically, language models present a new paradigm of foundation models that offer unlimited downstream use cases, with the limitation of text being the singular modality. Vision-language models (VLMs) have allowed for images to be interwoven with text, taking advantage of the skills the base LLM learned while being trained on text. Flamingo [7] proposed a novel VLM architecture where trainable layers were injected into the frozen LLM. These new trainable layers require far less training than the base LLM while allowing the raw image embeddings to be processed in a depth-wise manner alongside the accompanying text. This results in the frozen LLM being capable of understanding image embeddings that have been resampled to match the distribution the LLM expects. This allows for the LLM's in-context learning capabilities to be used on images, making the model versatile and removing the need for fine-tuning to a domain-specific dataset [8]. Instead of training new layers or modules for every additional modality to be incorporated, any modality can arbitrarily be aligned to share the embedding space of the vision encoder through contrastive training. Consequently, the layers that translate vision embeddings into representations the LLM understands also work on any other modality that has been aligned with the vision embedding space.
Most contrastive learning methods rely on large datasets, but with the methods we propose in this paper, even modalities with relatively few examples can sufficiently align their embedding space to the vision encoder. This idea also addresses a growing demand for larger generalist models to use any
modality to enable users to take advantage of the abstract representations it has learned. As such, our main contributions in this paper are as follows:
1. A methodology that allows for the extension of frozen, pretrained vision-language models to accommodate any number of modalities, broadening their applicability and versatility.
2. An understanding of how multi-modal inputs contribute to the development of increasingly nuanced scene representations, adding depth and context to machine understanding.
3. Validated evidence that the integration of various modalities improves scene understanding, a critical aspect of machine perception and cognition.
4. A demonstration of how relatively small datasets can be used for contrastive alignment.
Figure 1: Overview of the approach showing the concatenation of multiple modal representations of a scene with a query yielding better, more semantic responses
### Robot Perception and Human Activity Recognition (HAR)
The goal of this paper is to leverage VLMs for better scene understanding toward improved robot perception, especially in human-robot interaction (HRI) scenarios. In this regard, human activity recognition (HAR) using wearable devices can help robots better perceive their environment by gaining an understanding of the type of activity in which the human counterpart is engaged. Because very competent HAR models already exist, we choose to supply the IMU embeddings directly in the prompt to assess model performance on more granular aspects of scene understanding that are not readily extractable with pre-existing models. For practical and automated HRI applications, the HAR classification could also be retrieved from an auxiliary model and appended to the query.
## 2 Related Work
HRI has garnered interest in manufacturing in recent years due to its potential to improve production efficiency. However, robots do not have the innate contextual scene understanding capability that humans latently possess [9]. To remedy this issue, researchers have conducted extensive research into both robot perception [10] and robot action [11].
For robotic action, RT-2 [12] is a visual-language-action (VLA) model that leverages visual-language models (VLMs) such as PaLI-X [13] to generate action commands for a robot based on a query. The model changes the backbone VLM's generated tokens to include robotic actions that are used as
inputs to the low-level robotic controller. Principally, the paper shows the potential of adapting VLMs to VLAs by combining VLM pretraining with robotic data.
PaLM-E [14] is an embodied, multi-modal language model capable of executing complex robotic tasks via planning and scene understanding. However, it uses a training paradigm different from the one presented in this paper, whereby modality encoders are trained end-to-end with a frozen LLM as the head of the encoder, similar to [15]. Importantly, the authors highlight the ability of LLMs to reason on multi-modal inputs in a way similar to language. There are several other recent works, such as ImageBind [16], that integrate multiple modalities into a unified embedding space, showing how a linear combination of embeddings from multiple modalities yields a better representation. These developments highlight the capabilities of multi-modal learning and underscore the importance of exploring it further. Macaw-LLM [17] is a vision-audio-text LLM that introduces a new architecture containing an alignment module that aligns the embeddings of multiple modalities to a common embedding space. The benefit of what we design in this work is its ability to leverage pretrained models without the need for a new architecture or retraining of the base model or an alignment module. Works such as BLIP-2 [18] follow the same philosophy of feeding different modalities to language models in a format that they can understand and process through a specific pretraining regime. BLIP-2 combines off-the-shelf frozen image
encoders and frozen LLMs and proposes a pretraining strategy that bridges the gap between modalities. [18] show a novel architecture for a light-weight HAR model designed for processing videos and trained contrastively on associated activity captions. VLMs have been employed in the past for HAR. One such instance is VicTR, a model that utilizes a joint video and text encoder to create video-conditioned text tokens for improved HAR [19]. In another study, the authors developed a HAR algorithm utilizing a wide time-domain convolutional neural network and multi-environment sensor data for daily behavior recognition, while using contribution significance analysis to assess the contribution of each sensor to the detection of the activity [20].
PaLM-E's approach to integrating sensor signals as inputs in a multi-modal language model provided valuable insights into the potential capabilities of LLMs to reason on multi-modal inputs in a similar manner to language. However, they rely on a paradigm that requires the encoders to be trained end-to-end with a frozen LLM, limiting the flexibility of the system. ImageBind [16] integrates multiple modalities into a unified embedding space through contrastive alignment, bypassing the high training cost of using the LLM directly. Our work strives to develop a methodology that allows LLMs to accommodate an arbitrary number of modalities without needing a new architecture, an issue faced by works like Macaw-LLM [17].
## 3 Methodology
Figure 2 shows how raw inputs are processed through the VLM. An important distinction is that we linearly combine the image and IMU embeddings for a single example after having passed through the perceiver resampler but before they pass through the gated cross attention layers. This linear combination of encoded representations provides the VLM with a more holistic representation of the scene. The training method that aligns the IMU encoder to the pretrained visual encoder is outlined below.
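To make this combination step concrete, here is a minimal sketch; the tensor shapes follow the dimensions quoted later in the paper (64 resampled latents of size 1024), the 80%/20% weighting from Section 5 is used for concreteness, and all function and variable names are illustrative rather than taken from the Otter codebase.

```python
import torch

def combine_modalities(image_latents, imu_latents, w_image=0.8, w_imu=0.2):
    """Linearly combine resampled image and IMU latents.

    Both inputs are assumed to have already passed through the frozen
    perceiver resampler, so they share the shape (batch, num_latents, dim)
    and live in the same embedding space thanks to the contrastive alignment.
    """
    assert image_latents.shape == imu_latents.shape
    return w_image * image_latents + w_imu * imu_latents

# Example with the dimensions quoted in the paper: 64 latents of size 1024.
img = torch.randn(2, 64, 1024)
imu = torch.randn(2, 64, 1024)
combined = combine_modalities(img, imu)  # then fed to the gated cross-attention layers
print(combined.shape)  # torch.Size([2, 64, 1024])
```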
### Dataset
Figure 2: The architecture of Flamingo VLMs extended to handle image-IMU pairs of inputs

We use the MMAct dataset [21], which consists of 35 different human actions with varied durations of 2-10 seconds. Each sample is recorded on 4 different cameras at different points in the room. The room is also set up in 4 distinct ways with different obstacles or components. Example actions include talking on the phone, running, and pulling a cart. We concatenate all IMU signals, downsampling where necessary such that each signal is sampled at 50 Hz. Each signal provides 3 channels, and with 4 signals (two accelerometers at different parts of the body, a gyroscope, and a magnetometer), we attain a 12-channel IMU signal. We sample a window of length 256 from this data, padding with zeros when the signal is not long enough. We randomly sample a frame from the corresponding video. We use a batch size of 512 and train for a maximum of 100 epochs, early stopping once the validation loss is minimized. The total training set size is 6093 examples.
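A minimal NumPy sketch of this windowing procedure is given below; the per-sensor native sampling rates and the naive stride-based downsampling are illustrative assumptions, not the paper's exact preprocessing code.

```python
import numpy as np

def make_imu_window(signals, native_rates, target_rate=50, window=256):
    """Build one 12-channel IMU training window.

    `signals` is a list of four (T_i, 3) arrays (two accelerometers, a
    gyroscope, a magnetometer); `native_rates` gives each sensor's sampling
    rate in Hz. Stride-based decimation is used for downsampling here purely
    for illustration.
    """
    resampled = []
    for sig, rate in zip(signals, native_rates):
        step = max(int(round(rate / target_rate)), 1)
        resampled.append(sig[::step])                    # crude downsample to ~50 Hz
    length = min(s.shape[0] for s in resampled)
    stacked = np.concatenate([s[:length] for s in resampled], axis=1)  # (L, 12)
    out = np.zeros((window, stacked.shape[1]), dtype=np.float32)
    take = min(window, stacked.shape[0])
    out[:take] = stacked[:take]                          # zero-pad short signals
    return out

rates = [100, 50, 50, 50]                                # hypothetical native rates
sigs = [np.random.randn(int(r * 4), 3) for r in rates]   # ~4 s of data per sensor
print(make_imu_window(sigs, rates).shape)                # (256, 12)
```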
### Modality Encoders
In this work, we extend visual language models to understand IMU data encoded using a transformer-based encoder in combination with a 1-d convolution without retraining the visual language model. To train this encoder, we contrastively optimize temporally overlapping image-IMU pairs to have a large cosine similarity using a frozen, pretrained ViT-L/14 [22] as the visual encoder. An extension of CLIP for video, X-CLIP [23], has previously been explored by the authors for HAR [24]. The presented work seeks to show the capability of extending VLMs understanding to multiple modalities with no retraining. Therefore, we are constrained to the frozen visual encoder the VLM was trained with. This is because as we contrastively train our IMU
encoder to share the embedding space of the vision encoder, it is necessary that this shared embedding space towards which the IMU encoder optimizes is the same as the embedding space the VLM was trained to understand. Had we used a different vision encoder to contrastively train the IMU encoder, the pretrained VLM would not understand the IMU embeddings without retraining. Here, we are inspired by the work presented in ImageBind [16] to train arbitrary modality encoders to align their embeddings with a desired embedding space.
### Contrastive Pretraining
Contrastive learning, a subfield of unsupervised learning, works by learning a representation of its inputs such that similar inputs result in similar vectors and dissimilar inputs yield dissimilar vectors [25]. It has been successfully applied in a variety of machine learning tasks, ranging from image and speech recognition to natural language understanding, largely due to its effectiveness in learning rich, meaningful embeddings from unlabeled data. Multi-modal contrastive learning is still an active area of research [26; 27] where the loss functions with which we optimize over are just beginning to be explored. When contrastively training an encoder for a modality with a temporal dimension such as IMU data, the window size is directly correlated with information content which makes it an important hyperparameter to tune and optimize for good representation quality [28]. We utilize a symmetric cross entropy loss objective, also known as the infoNCE loss [29; 30], in
order to train our IMU encoder model. The loss maximizes the dot product of matching pairs in a batch and minimizes the dot product of negative pairs. This was most recently popularized in a multi-modal context with CLIP [22].
\[L_{\text{infoNCE}}=-\sum_{(i,j)\in P}\log\left(\frac{e^{\text{CoSim}(z_{i},z_{j})/ \tau}}{\sum_{k=1}^{N}e^{\text{CoSim}(z_{i},z_{k})/\tau}}\right) \tag{1}\]
For every pair (i,j) in set P, which represents positive pairs of data instances, we compute the cosine similarity CoSim between the respective representations \(z_{i}\) and \(z_{j}\). This similarity score is scaled by a temperature parameter \(\tau\) to control the sharpness of the distribution. The logarithm of this ratio is then computed, and the loss is the negative sum over all positive pairs. This formulation encourages the network to maximize similarity for positive pairs and minimize similarity for negative pairs, where positive pairs are defined as images and overlapping IMU windows, and negative pairs are images and non-overlapping IMU windows. We also add a supervised loss term to the loss function, mapping the embedded IMU representation to class logits with a linear head. This enforces a constraint on the embedding space that keeps embedded actions linearly separable. With the addition of this supervised loss term, we observed more specific, distinct outputs from the VLM when given IMU embeddings.
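A compact PyTorch sketch of this combined objective follows; the temperature value, the weight on the supervised term, and all names are assumptions, as the paper does not report them.

```python
import torch
import torch.nn.functional as F

def info_nce(z_img, z_imu, tau=0.07):
    """Symmetric infoNCE over a batch of paired embeddings (Eq. 1).

    z_img, z_imu: (B, D) pooled embeddings of temporally overlapping
    image-IMU pairs; row i of each tensor forms a positive pair, and all
    other rows in the batch act as negatives.
    """
    z_img = F.normalize(z_img, dim=-1)
    z_imu = F.normalize(z_imu, dim=-1)
    logits = z_img @ z_imu.t() / tau       # cosine similarities scaled by tau
    labels = torch.arange(z_img.size(0), device=z_img.device)
    # Symmetric: image-to-IMU and IMU-to-image directions.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def total_loss(z_img, z_imu, class_logits, labels, sup_weight=1.0):
    """Contrastive term plus the supervised term described above;
    `class_logits` come from a linear head on the IMU embedding."""
    return info_nce(z_img, z_imu) + sup_weight * F.cross_entropy(class_logits, labels)
```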
Rather than computing the infoNCE and supervised losses on the outputs from the encoders, we further process both encoded representations by passing them through the frozen, pretrained perceiver resampler module. This
outputs a predefined set of latent vectors that are resampled representations of the input. For our implementation, we map an encoded sequence length of 256 with dimension 1024 to a length of 64 with the perceiver resampler. We then average pool along the sequence dimension for both image and IMU embeddings to obtain 1-d vectors of size 1024 for each sample. It is with these representations we compute the infoNCE and supervised loss terms. In our empirical tests, this process of including the perceiver resampler module grounds the representation the IMU encoder learns more rigidly. We observed this in testing different iterations on an activity recognition sub-task where we prompt the VLM with only IMU embeddings to identify the action being performed. IMU encoders trained without the perceiver resampler exhibited far worse performance on this task, such that when combining the IMU embeddings with vision embeddings, worse performance could sometimes be observed. Our hypothesis for why we see better performance with this architecture is that the inclusion of the perceiver resampler strictly constrains the features learned by the IMU encoder to have a similar distribution to the features of the image encoder. When computing loss on the embeddings that are output from the encoders rather than the perceiver resampler, the loss is far noisier whereas the perceiver resampler processes embeddings of both modalities into a shared distribution. This contrastive and supervised learning objective enables the IMU encoder to learn a meaningful mapping from raw sensor data to a representative embedding space that aligns with the image encoder. Most unsupervised methods, contrastive learning included,
require large amounts of data. This paper explores how a relatively small training dataset of around 6,000 image-IMU pairs can be leveraged to align an IMU encoder with a vision encoder.
### Multi-Modal Large Language Model
We utilize VLMs as a high-level reasoning module to better understand a scene given various modal representations. We use an implementation of the Otter VLM [31], a continuation of Open Flamingo [32], the open-sourced version of the original DeepMind paper, Flamingo [7]. Otter is further trained on an instruction dataset to support multi-modal in-context instruction tuning which involves conditioning the language model on the corresponding media, such as an image, that corresponds to a caption or an instruction-response pair [31; 33]. This makes Otter deliver more guided outputs when we effectively design prompts.
VLMs such as Otter are autoregressive decoder-only models, with image embeddings represented in the tokenizer with a special token. The image embeddings, or any other modality's embeddings, are passed through a module introduced in the original Flamingo paper called the Perceiver Resampler, which takes as input the sequence output of a transformer encoder and outputs a fixed set of resampled tokens based on a set of learnable latent queries. This allows variably sized inputs to be mapped to the same number of tokens, and it allows the frozen, pretrained language model to resample the static image embeddings throughout the layers of the LLM.
Because Otter was trained on an instruction-tuning dataset, the model learns to structure its response to follow the preceding in-context examples which allows us to query the model's understanding of the scene. In this paper, we show that the addition of the IMU embeddings in the prompt allows the VLM to better reason about a scene and more wholly understand the activities of the humans present in the visual input.
## 4 Experiments
Below, we show the capabilities of the pretrained Otter model on semantic scene understanding tasks when provided vision data, IMU data, and a combination of both. We take advantage of conditional generation by prepending our query with two example question-response pairs to update the model's output distribution to be more in line with our expectations. Figure 3 shows model responses given different combinations of input modalities.
## 5 Results
We evaluated the effectiveness of our contrastive alignment process by mapping the embeddings of the IMU and image data for each of the 35 classes via t-distributed stochastic neighbor embedding (t-SNE), a technique used to visualize high-dimensional data in a way that shows underlying data distributions in a 2-dimensional representation that can easily be plotted [34]. Figure 4 shows the result of this visualization where each class is represented by a different color, and the clusters suggest distinctive patterns in the data.
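As a sketch of this visualization step, assuming pooled 1024-dimensional embeddings and illustrative t-SNE settings (the paper does not report the exact perplexity or initialization):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholders for the real encoder outputs: (N, 1024) pooled embeddings
# and (N,) activity labels in 0..34.
emb = np.random.randn(500, 1024).astype(np.float32)
y = np.random.randint(0, 35, size=500)

xy = TSNE(n_components=2, perplexity=30, init="pca",
          random_state=0).fit_transform(emb)
plt.scatter(xy[:, 0], xy[:, 1], c=y, cmap="tab20", s=8)
plt.title("t-SNE of encoder embeddings (one color per class)")
plt.show()
```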
The video encoder embeddings display clear clusters, suggesting that the model successfully extracted meaningful patterns from the data. However, these clusters do not align with the specific activities performed, as there are no clear color groupings, an outcome we anticipated: the image encoder was not specifically fine-tuned for HAR. The IMU encoder embeddings lack some of the structure present in the image embeddings, suggesting that the contrastive alignment of encoders did not fully align the two models to share the exact same embedding space; but the class distribution is far more organized, which allows the model to better understand a user's actions, as evidenced by the very clear color groups. Further, the two modalities fundamentally capture different characteristics of the scene, which is by design, but this does mean that the embedding space of the IMU encoder will naturally have a different structure even after alignment. For example, the IMU data more closely corresponds to what a person is doing: two different people doing the same activity have more similar IMU data than images of two people doing the same activity, due to the potential for different backgrounds, environments, or peripheral objects. Because the IMU data contains less total information than the associated images, there will be some breakdown in structure where the image embeddings more finely correspond to a given input.

Figure 3: In-context generation with only IMU, only images, and both modalities

Figure 4: t-SNE visualization of video and IMU encoder embeddings across 35 classes
We also test how linearly combining the embeddings from both modalities changes the shared embedding space when visualized with t-SNE. In our experiments, we see that the weights used in linearly combining the two modal embeddings interpolate between the structure of the video and IMU embedding spaces. For Figure 4, we weight the vision embeddings 80% and the IMU embeddings 20%. In practice, these values must be empirically tuned to maintain the structure of the desired embedding space while gaining some smaller amount of information from the new embedding space. ImageBind
[16] exploits the linear combination of vectors in multi-modal embeddings spaces for the task of video retrieval, obtaining better results when using audio in conjunction with vision data.
This emergent structure of grouped examples of the same activity, present in the IMU embedding space but not in the vision embedding space, indicates that the raw IMU distribution is more implicitly linked to the activity label. We view this as a feature that allows the two modalities to naturally complement one another, with the IMU data encoding the kinematic activity information that the vision encoder struggles to capture. This point can be seen in Table 1, which shows the linear probe performance of a supervised HAR model trained on video only, IMU only, and combined video-IMU data. The IMU embeddings naturally encode information about an individual's action with far less noise than is present in an image. When combining modalities, we concatenate the output embeddings of each encoder, mapping the combined vector to class logits. This shows that the contrastive alignment of modalities can provide novel information to a pretrained model that otherwise would not be present in the unimodal data. This hypothesis warrants further investigation in future studies.
## 6 Conclusion
In this paper we have proposed a methodology to extend pretrained vision language models to any modality by only training the new encoder. We have shown the ability of a contrastive and supervised objective to sufficiently
map an encoder to the pretrained vision encoder's embedding space, thereby taking advantage of the pretrained VLM. Further, we have shown how using multiple modalities leads to a more robust and general scene representation and highlighted its potential in collaborative robotics.
### Future Work
Future work can explore the effects of larger VLMs, or VLMs with different architectures, with multi-modal fine-tuning. Model size can limit the quality of the model's responses; however, larger models have longer inference times, which could be an issue in some implementations. We plan to apply multi-modal fine-tuning to models such as MiniGPT-4 [35], which uses a base Vicuna model [36] as the backbone LLM, to assess and compare their capabilities with the Otter model. MiniGPT-4 utilizes a Q-Former with a frozen pretrained vision encoder, in contrast to the gated cross-attention layers that the Flamingo model uses.
Another area we plan to explore is the assessment of the information quality of each modality. We hypothesize that modalities can carry varying levels of generalizable information regarding the activity, and we plan to study how to identify and account for these discrepancies. Previous work such as ImageBind uses multi-modal embedding space arithmetic to linearly combine embeddings of different modalities to yield a more information-dense embedding vector [16]. The outcome of the presented work will ultimately be used in the context of HRI for better robot perception. The authors are currently exploring HRI scenarios in the context of construction activities where visual data from robot cameras and IMU data from wearable sensors worn by construction workers are used to enhance robot perception, as seen in Figure 5.

| **Modality** | **Training Loss** | **Test Loss** | **Training Accuracy (%)** | **Test Accuracy (%)** |
| --- | --- | --- | --- | --- |
| Video | 1.0428 | 1.4748 | 65.07 | 52.41 |
| IMU | 0.4052 | 1.2468 | 91.83 | 64.47 |
| Combined | **0.2138** | **0.8753** | **94.58** | **74.46** |

Table 1: Supervised activity recognition for different modalities. Despite sharing the same embedding space, each modality still preserves unique information, as reflected in the increased performance when combining embeddings
Figure 5: A researcher investigating multi-modal robot perception for human-robot collaboration

Other avenues of future research include implementing the approach introduced in RT-2, which consists of utilizing robotic data through VLM pretraining. We introduce the feasibility of extending VLMs to any number of modalities and experiment with the viability of implementing modality extensions on VLAs. The study of contrastive alignment of modality encoders to a shared embedding space is an avenue we plan to explore with new training objectives and data-dependent significance analysis across multi-modal representations.
### Limitations
The Otter model uses MPT-7b as its backbone, making it fast for inference, but with technical limitations in the form of hallucinations. Further, because the dataset of video frame-IMU pairs used, MMAct, is relatively small, we do not attain a 1:1 representation between video and IMU data. This is expected, as IMU data intrinsically has a data distribution distinct from any correlated video frames. Another drawback of extending modalities without pretraining is that the learnable model parameters have not been trained for multi-modal processing, potentially causing an increase in hallucinations. Current research indicates that poor training and low-quality training data have a direct effect on the degree of hallucination [37].
**Conflict of Interest**
The authors declare that they do not identify any personal relationships or financial ties that would affect the contents or the publishing of this paper.
**Data Availability**
Source code and data will be made available upon request.
**CRediT authorship contribution statement**
**Riley Tavassoli** Conceptualization, Methodology, Software, Investigation, Validation, Writing - Original Draft, Writing - Review & Editing, Visualization.
**Mani Amani**: Investigation, Validation, Visualization, Writing - Original Draft, Writing - Review & Editing, Software.
**Reza Akhavian**: Project administration, Funding acquisition, Writing - Review & Editing, Supervision.
**Declaration of Funding**
The presented work has been supported by the U.S. National Science Foundation (NSF) CAREER Award through the grant # CMMI 2047138. The authors gratefully acknowledge the support from the NSF. Any opinions, findings, conclusions, and recommendations expressed in this paper are those of the authors and do not necessarily represent those of the NSF.
|
2301.13786 | Deep learning-based lung segmentation and automatic regional template in
chest X-ray images for pediatric tuberculosis | Tuberculosis (TB) is still considered a leading cause of death and a
substantial threat to global child health. Both TB infection and disease are
curable using antibiotics. However, most children who die of TB are never
diagnosed or treated. In clinical practice, experienced physicians assess TB by
examining chest X-rays (CXR). Pediatric CXR has specific challenges compared to
adult CXR, which makes TB diagnosis in children more difficult. Computer-aided
diagnosis systems supported by Artificial Intelligence have shown performance
comparable to experienced radiologist TB readings, which could ease mass TB
screening and reduce clinical burden. We propose a multi-view deep
learning-based solution which, by following a proposed template, aims to
automatically regionalize and extract lung and mediastinal regions of interest
from pediatric CXR images where key TB findings may be present. Experimental
results have shown accurate region extraction, which can be used for further
analysis to confirm TB finding presence and severity assessment. Code publicly
available at https://github.com/dani-capellan/pTB_LungRegionExtractor. | Daniel Capellán-Martín, Juan J. Gómez-Valverde, Ramon Sanchez-Jacob, David Bermejo-Peláez, Lara García-Delgado, Elisa López-Varela, Maria J. Ledesma-Carbayo | 2023-01-31T17:33:35Z | http://arxiv.org/abs/2301.13786v1 | Deep learning-based lung segmentation and automatic regional template in chest X-ray images for pediatric tuberculosis
###### Abstract
Tuberculosis (TB) is still considered a leading cause of death and a substantial threat to global child health. Both TB infection and disease are curable using antibiotics. However, most children who die of TB are never diagnosed or treated. In clinical practice, experienced physicians assess TB by examining chest X-rays (CXR). Pediatric CXR has specific challenges compared to adult CXR, which makes TB diagnosis in children more difficult. Computer-aided diagnosis systems supported by Artificial Intelligence have shown performance comparable to experienced radiologist TB readings, which could ease mass TB screening and reduce clinical burden. We propose a multi-view deep learning-based solution which, by following a proposed template, aims to automatically regionalize and extract lung and mediastinal regions of interest from pediatric CXR images where key TB findings may be present. Experimental results have shown accurate region extraction, which can be used for further analysis to confirm TB finding presence and severity assessment. Code publicly available at: [https://github.com/dani-capellan/pTB_LungRegionExtractor](https://github.com/dani-capellan/pTB_LungRegionExtractor).
Tuberculosis, semantic segmentation, pediatric chest X-ray, deep learning, computer vision

Further author information: (Correspondence: DCM and MJLC)
DCM: E-mail: [email protected]
MJLC: E-mail: [email protected]
## 1 Introduction
Despite being an ancient disease, tuberculosis (TB) remains a leading cause of death and a substantial threat to global child health, with an estimated annual burden of 1 million new pediatric cases worldwide and 250 000 children dying because of this disease [1, 2]. Of particular concern are the children under the age of five years, who account for the highest mortality and risk of TB progression [3]. TB is caused by a small, aerobic bacterium called _Mycobacterium Tuberculosis_ (Mtb), which generally affects the lungs, although other parts of the body can also be affected [1]. Most children who die of TB are never diagnosed or treated. Screening may be useful to identify children with possible TB and refer them for further testing; otherwise, they should be considered for preventive treatment [4]. Chest X-rays (CXR), along with symptom inquiry, are considered the best TB screening methods, due to their higher availability and lower cost compared to other imaging techniques [5]. In clinical practice, experienced physicians examine CXR for TB. However, this is a subjective, time-consuming process and carries a significant risk of misclassification of other diseases with similar radiological patterns [6, 7]. Besides, the diagnosis of TB is more difficult in young children, given the non-specific nature of their symptoms and the less specific radiological manifestation compared to adults [8]. The most frequent lesions in pediatric TB are lymphadenopathy, airway compression, air space consolidation, pleural effusion, cavities, miliary patterns and Ghon focus [9, 10, 11]. Due to the difficulty of evaluating lymphadenopathy, the most relevant sign to diagnose TB with confidence on CXR, the lateral view is usually considered to facilitate diagnosis [12].
In this context, computer-aided diagnosis (CAD) systems supported by Artificial Intelligence (AI) algorithms can play an important role in the mass screening of TB by analyzing the CXR images. In recent years, several
CE-certified and commercially available solutions have shown performance comparable to experienced radiologist readings [13, 14]. However, the existing methods do not perform well in pediatric patients, and only one system (RADIFY - www.edai.africa/) is currently being designed for children older than 2 years. Additionally, despite its relevance, this field of research has been scarcely tackled [15], showing an urgent need for the development of AI systems for infants and young children (\(<\)3 years). The first steps in a typical CAD system include preprocessing and region of interest (ROI) localization, so that further processing can be applied and the disease diagnosed more accurately. For TB, the target ROIs are the lungs and other structures in the mediastinal region. Most of the current algorithms for detecting and segmenting the lungs are trained and evaluated using healthy subjects, which could have an impact on the correct identification of areas affected by pathology.
As a first step toward tackling these challenges, in this work we propose a multi-view deep learning (DL)-based approach which aims to automatically extract lung and mediastinal regions of interest from pediatric CXR images where key TB findings may be present. The output of the proposed method is a standardized partition of the pediatric CXR that will enable further development of TB radiological sign detection methods as well as the potential assessment of severity.
## 2 Methodology
Figure 1 shows the main steps that make up the proposed solution.
### Datasets and Splits
For developing the solution, two datasets were used. Our target CXR dataset is from a pediatric (\(<\)3 years of age) cohort of 218 TB and non-TB children obtained from a study performed at CISM (Manhica Health Research Center, Mozambique), with both anteroposterior (AP) and lateral (LAT)-view CXR images [9]. Additionally, for development we used a subset from the public NIH ChestX-ray8 dataset (112,120 frontal-view CXR images from 30,805 patients) presenting common thoracic pathologies, such as lung nodules, pneumonia, fibrosis, edema or cardiomegaly [16]. To obtain a fully pediatric subset, only images of patients \(\leq\)11 years old were considered, which account for a total of 2330 images from 847 patients. Since further manual labeling of the images was required, a final subset of 210 images covering different ages and pathological findings was randomly selected.

Figure 1: Pipeline of the proposed solution. Images shown in the pipeline are real predictions and outputs made by the corresponding DL-based models and algorithms on an 8-month-old infant who belongs to the testing set. AP: Anteroposterior. LAT: Lateral.
In the experiments, training and validation splits were considered. The amount of training and validation data is specified later in each of the tasks. To test the proposed solution, an independent CISM subset of 30 patients with both AP and LAT chest X-rays was used.
### Preprocessing
To enable comparable contrast representation across the data, a first preprocessing step was applied to the images, mainly based on the application of an image contrast enhancement process with Contrast Limited Adaptive Histogram Equalization (CLAHE), capable of improving local contrast and edges adaptively with modest computational requirements, which has been shown to improve detection of TB and lung lesions on chest radiographs [17, 18, 19]. Preprocessing with CLAHE may also provide better performance in image segmentation [20].
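A minimal sketch of this step using OpenCV's CLAHE implementation; the clip limit and tile grid size are illustrative assumptions, as the paper does not report the exact values.

```python
import cv2

def preprocess_cxr(path, clip_limit=2.0, tile_grid=(8, 8)):
    """Apply CLAHE to a grayscale chest X-ray image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return clahe.apply(img)
```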
### Lung Region Detection & Cropping
In high-burden clinical scenarios, both digital and analog X-ray systems exist. To ensure the same field of view (FOV) and the proper processing of manually digitized X-rays, a lung region detection process was applied to both AP and LAT images. Indeed, initial experiments showed that the subsequent lung segmentation process was much more robust when a preliminary cropping step was included.
Consequently, two DL-based fully convolutional neural network (FCNN) object detection models, one for AP and another for LAT, based on YOLO (_You Only Look Once_) architecture were implemented. For this, Ultralytics' YOLOv5* implementation was used for training a lung detector for both AP and LAT images.
Footnote *: [https://github.com/ultralytics/yolov5](https://github.com/ultralytics/yolov5)
For AP images, a YOLOv5s model was trained on a subset of 254 AP images from the NIH and CISM datasets (210 and 44, respectively). For LAT images, another YOLOv5s model was trained, this time on 139 LAT images from CISM. All AP and LAT images were manually annotated using the CVAT annotation tool and checked by an expert radiologist (RSJ, from the author list).
The corresponding object detection outputs were then used to crop both AP and LAT images, narrowing down the field of study to our region of interest, the lungs, thus providing a more robust subsequent segmentation process.
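The detection-and-cropping step could look like the sketch below, which loads a custom-trained YOLOv5s checkpoint through torch.hub as supported by the Ultralytics repository; the weight file name is hypothetical, and the 0.7 confidence threshold anticipates the value reported in Section 3.1.

```python
import torch

# 'lungs_ap.pt' is a hypothetical weight file for the AP-view detector.
model = torch.hub.load("ultralytics/yolov5", "custom", path="lungs_ap.pt")
model.conf = 0.7  # confidence threshold used at inference

def crop_lungs(image):
    """Return the image cropped to the highest-confidence lung detection."""
    results = model(image)
    det = results.xyxy[0]      # (n, 6): x1, y1, x2, y2, confidence, class
    if det.shape[0] == 0:
        return image           # fall back to the full field of view
    x1, y1, x2, y2 = det[0, :4].int().tolist()
    return image[y1:y2, x1:x2]
```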
### Lung Segmentation
This step constitutes one of the most important parts of the proposed pipeline. The lung segmentation was defined to cover the full lung parenchymal extension, independently of the presence of overlapping structures. This is particularly important for pediatric TB cases, as some of the findings could appear behind or in front of other structures such as the heart, or at the lower posterior lobes of the lungs.
To tackle this, a comparison of three different state-of-the-art DL-based image segmentation architectures was carried out. Different models were trained and tested for each of the views (AP and LAT). Training was performed from scratch. All the data used for this task, including both training and test sets, were manually segmented using annotation tools. These were then checked by an expert radiologist (RSJ, from author list).
Two U-Net-based architectures and one Transformer-based architecture were used: Gated-Axial Attention UNet (GatedAxialUNet) [21], Medical Transformer (MedT) [21] and nnU-Net ("no-new-Net") [22, 23]. No major changes were made to the source code of each of the implementations, preserving as much as possible default settings.
In order to assess the performance of each of the models in relation to the amount of supervised data used to train the networks, an incremental learning approach was followed. Supervised training data was progressively
increased from 20 to 60 in 20-image steps, gathering segmentation performance results on the independent test set throughout each of the steps.
In the cases of GatedAxialUNet and MedT, input images were resized to an input size of \(256\times 256\), the default batch size of 4 was kept, the Adam optimizer was used with the default learning rate value of 0.001, and a total of 400 epochs were considered for training the models. The rest of the hyperparameters and configurations were kept at their default values. The validation set accounted for 20% of the initial training set. To train the GatedAxialUNet and MedT networks, the binary cross-entropy (CE) loss was used between the prediction and the ground truth, which has the following form:
\[\mathcal{L}_{CE}(p,\hat{p})=-\frac{1}{wh}\sum_{x=0}^{w-1}\sum_{y=0}^{h-1}\left[p\log(\hat{p})+(1-p)\log(1-\hat{p})\right] \tag{1}\]
where \(w\) and \(h\) are the dimensions of the image, \(p\), i.e. \((p(x,y))\), corresponds to the pixel label in the image and \(\hat{p}\), i.e. \(\hat{p}(x,y)\), denotes the output prediction at a specific location \((x,y)\) in the image.
In the case of nnU-Net, 2D U-Net models were trained on the data. Input images were automatically adapted by the implementation, with different patch sizes depending on the image type (AP images: \(768\times 896\), LAT images: \(1024\times 896\)). The input batch size was automatically set to 1 by the implementation, according to GPU memory requirements. 50 epochs were considered for training the models. Stochastic gradient descent (SGD) with Nesterov momentum (\(\mu=0.99\)) and an initial learning rate of 0.01, decayed following the 'poly' learning rate policy, were used for learning the network weights. The rest of the hyperparameters and configurations were kept at their default values. Validation sets accounted for 20% of the initial training set, as a 5-fold cross-validation approach for training the models was adopted, following the implementation's documentation and guidelines. To train the nnU-Net models, a combination of Dice and CE loss was used:
\[\mathcal{L}_{\text{total}}=\mathcal{L}_{\text{Dice}}+\mathcal{L}_{CE} \tag{2}\]
where \(\mathcal{L}_{CE}\) was defined above and \(\mathcal{L}_{Dice}\), for an image \(x\) with a binary prediction output \(\hat{y}\) and binary label \(y\), is defined as:
\[\mathcal{L}_{\text{Dice}}=-\frac{2\sum_{i}y_{i}\hat{y}_{i}}{\sum_{i}y_{i}+ \sum_{i}\hat{y}_{i}} \tag{3}\]
where \(i\) represents the \(i\)-th pixel of the image. For further details on the nnU-Net training process, please refer to Isensee et al. [22].
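For illustration, a binary-segmentation version of this combined objective can be sketched in PyTorch as follows; the smoothing term `eps` is our addition to avoid division by zero on empty masks and is not part of Eq. 2.

```python
import torch
import torch.nn.functional as F

def dice_ce_loss(logits, target, eps=1e-6):
    """Combined Dice + cross-entropy loss (Eqs. 1-3) for binary lung
    segmentation; `logits` and the float mask `target` are (B, 1, H, W)."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, target)
    inter = (p * target).sum(dim=(1, 2, 3))
    denom = p.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = -(2 * inter / (denom + eps)).mean()
    return dice + ce
```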
### Automatic LAT Orientation Correction
In clinical routine, LAT images can be acquired facing either the patient's right or left side. Depending on this, the vertebral column may appear at the right or left side of the image. Consequently, after segmenting the lungs, an automatic orientation correction of the LAT image was included in the pipeline. This provides the solution with robustness and homogeneity; otherwise, incorrect regions could be extracted in the subsequent steps.
To tackle this issue, a lightweight and efficient ResNet-based deep convolutional neural network was designed and trained from scratch, which learned to detect the vertebrae in the image. The model was trained on 111 CISM LAT images and validated on 28 CISM LAT images (20% validation split). A horizontal flip was then performed on those images in which the network detected the column at the left (see Figure 1), homogenizing the data for the automatic region extraction process and, thus, making the system more robust.
In order to make the training of the network more efficient, input images were first normalized to zero mean and unit variance, using z-normalization (\(X_{norm}=\frac{X-\mu}{\sigma+\epsilon}\)), where \(X\) is the image, \(\mu\) is the mean value of the image, \(\sigma\) its standard deviation and \(\epsilon\) a small (\(\epsilon\ll\sigma,\epsilon\approx e^{-10}\)) parameter that prevents division by zero.
DL models in this section were implemented using TensorFlow framework. Training and testing in this and previous sections were done using a workstation with NVIDIA TITAN X 12GB and TITAN Xp 12GB GPUs, 64GB RAM and Intel Core i7 @ 3.6 GHz CPU.
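A framework-agnostic sketch of the normalization and flip logic described above (the classifier itself is omitted; `column_on_left` stands in for its thresholded output):

```python
import numpy as np

def z_normalize(img, eps=1e-10):
    """Zero-mean, unit-variance normalization applied before the
    orientation-detection network."""
    return (img - img.mean()) / (img.std() + eps)

def correct_orientation(img, column_on_left):
    """Flip the LAT view horizontally when the vertebral column is detected
    on the left, so all images share one orientation."""
    return np.fliplr(img) if column_on_left else img
```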
### Standardized Template and Automatic Region Extraction
As a final step, an automatic standardized template, based on the proposals of Andronikou et al. [24] to regionalize the pediatric CXR, was constructed having as input the previously cropped AP and LAT images with their corresponding predicted lung segmentations.
To ensure the correspondence of regions across views, we first aligned the AP and LAT views. Subsequently, the AP image was automatically rotated to ensure lung verticality based on the orientation of the segmentations. To achieve this, a blob (Binary Large Object) detection followed by a principal component analysis (PCA) was applied to the AP predicted segmentation masks in order to estimate the rotation of each of the lungs.
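The rotation estimate can be sketched as PCA on the foreground pixel coordinates of each lung mask; this is a minimal NumPy illustration of the idea, not the authors' exact implementation.

```python
import numpy as np

def lung_rotation_deg(mask):
    """Estimate a lung's rotation from its binary segmentation mask via PCA:
    the first principal axis approximates the lung's long (vertical) axis."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(np.float64)
    pts -= pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(pts.T))
    major = eigvecs[:, np.argmax(eigvals)]       # axis with largest variance
    # Angle between the major axis and the image's vertical direction.
    return np.degrees(np.arctan2(major[0], major[1]))
```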
The AP and LAT bounding boxes enclosing the lung segmentations were extracted, and mediastinal regions were defined based on relative measures with respect to the lungs. The final regions extracted are detailed in Table 1. AP and LAT lungs were divided into thirds; the LAT thirds identify corresponding areas of potential pathology, not necessarily anatomical regions. APUM contains the respiratory tract and suprahiliar area; APMM mainly contains the parahiliar area; and LATMM gathers the parahiliar area, of vital importance for experienced radiologists when detecting parahiliar lymphadenopathies. This standard template and its partitions can be used for further analysis to confirm TB finding presence and severity assessment.
## 3 Experiments and Results
### Lung Region Detection & Cropping
Lung detection performance was satisfactory using YOLOv5s, the small version of YOLOv5 (7.2M parameters, 14 MB in size).
A confidence threshold of 0.7 was selected for inference, with the aim of properly detecting the lungs with these models. Figure 2 shows two examples of how YOLOv5 performs on both AP and LAT views from two testing cases, one non-TB and the other TB.
### Lung Segmentation
Results obtained throughout the different experiments carried out in this step are presented in Table 2. These results were obtained by testing the different trained configurations and architectures, following the mentioned incremental learning approach, on an independent CISM test set of 30 manually segmented cases (with their corresponding 30 AP and 30 LAT images). When computing the metrics, all predicted and reference masks were resized to \(256\times 256\) to avoid metric miscalculation due to differing resolutions.
| **View & Region(s)** | **Acronym(s)** |
| --- | --- |
| AP right lung thirds (upper, middle, lower) | APUR, APMR, APLR |
| AP left lung thirds (upper, middle, lower) | APUL, APML, APLL |
| AP upper and middle mediastinal regions | APUM, APMM |
| LAT lung thirds (upper, middle, lower) | LATULS, LATMLS, LATLLS |
| LAT middle mediastinal region | LATMM |

Table 1: Extracted regions and their acronyms.
Figure 2: Lung detections in both AP and LAT views of two cases from the test set.
#### 3.2.1 Incremental learning
Figure 3 shows how model performance varied depending on the amount of data (20, 40 or 60 images) used to train the models (see Table 2 for numerical results). nnU-Net provides greater stability than the other architectures, with good Dice (F1) metrics even at low amounts of training data. Both MedT and GatedAxialUNet yielded the expected results, with performance increasing incrementally for both AP and LAT views. MedT required a sufficient quantity of data to yield competitive results. This effect was more pronounced for LAT images.
Incremental learning showed, in general, significant improvement in performance for all architectures. The increase in model performance was more noticeable in MedT and GatedAxialUNet. nnU-Net proved to have greater stability with respect to variations in training data quantity, yielding promising results even with low training data availability. With only 20 images, nnU-Net performed similarly to how it performed with 60 images in both AP and LAT views.
#### 3.2.2 Results comparison
The most stable and best performing architecture was nnU-Net. Nonetheless, GatedAxialUNet and MedT also yielded good performance results, with an even more efficient training process than nnU-Net (training time was drastically reduced with MedT and GatedAxialUNet). However, the performance metrics provided by these last two models did not reach values as high as those of nnU-Net.
Figure 4 shows a visual comparison of the predictions obtained from each of the models in a non-TB case and a TB-compatible case from the independent test set.
Thus, nnU-Net demonstrated greater capacity in segmenting lungs in both AP and LAT views, even when fewer images were used for training. Nonetheless, training and inference times were much shorter in GatedAxialUNet and MedT.
### Automatic LAT Orientation Correction
The custom ResNet model implemented for this step provided an accuracy of 1.00 on the test set, correctly detecting whether the vertebral column was located at the right or the left in the LAT-view image. As expected, neither false positives nor false negatives were detected among the test set predictions, as the problem was relatively simple for the network, though the step was necessary to provide the system with greater robustness.
### Standardized Template and Automatic Region Extraction
Finally, the template construction and regional partition were tested on the independent CISM test set. As input, the predictions used for this final step corresponded to the output of the nnU-Net model trained with 60 images, which demonstrated the best performance on the lung segmentation task. An expert radiologist (RSJ, from the author list) performed a visual validation of the results. Of the 60 CISM AP and LAT test images corresponding to the 30 CISM test cases, 54 were marked as correct; in 5 images, minimal corrections (no substantial difference would be perceived in further region-linked TB finding assessment) were suggested; and only in 1 image were severe corrections (substantial difference would be perceived in further assessment) reported. Figure 5 presents four randomly selected cases from the test set, showing how these regions are extracted in different scenarios for patients of different ages.
Figure 4: Visual comparison of the predictions obtained from each of the models in a 29-month-old non-TB case (top) and an 8-month-old TB-compatible case (bottom). These cases belong to the independent CISM test set.
## 4 Conclusions
In this paper, we have proposed a multi-view deep learning-based pipeline which automatically extracts lung and mediastinal regions of interest from pediatric CXR images, based on a previously proposed standard template. This standard template and its partitions can be used for further analysis to confirm TB finding presence and assess severity given a pediatric CXR, for which TB assessment is a challenging task. The proposed system lays the groundwork for automatic approaches that may reduce the high clinical burden of assessing pediatric TB, especially in countries with low resources and a high prevalence of TB.
###### Acknowledgements.
This work was supported by H2020-MSCA-RISE-2018 INNOVA4TB (EU) project (ID 823854) and ADVANCE-TB Cost Action (EU) project (ID CA21164). DCM's PhD fellowship was supported by Universidad Politecnica de Madrid.
|
2306.17547 | Spaces of innovation and venture formation: the case of biotech in the
United Kingdom | Patents serve as valuable indicators of innovation and provide insights into
the spaces of innovation and venture formation within geographic regions. In
this study, we utilise patent data to examine the dynamics of innovation and
venture formation in the biotech sector across the United Kingdom (UK). By
analysing patents, we identify key regions that drive biotech innovation in the
UK. Our findings highlight the crucial role of biotech incubators in
facilitating knowledge exchange between scientific research and industry.
However, we observe that the incubators themselves do not significantly
contribute to the diversity of innovations which might be due to the underlying
effect of geographic proximity on the influences and impact of the patents.
These insights contribute to our understanding of the historical development
and future prospects of the biotech sector in the UK, emphasising the
importance of promoting innovation diversity and fostering inclusive enterprise
for achieving equitable economic growth. | Francesco Marzolla, Przemysław Nowak, Rohit Sahasrabuddhe, Chakresh Singh, Matteo Straccamore, Erik Zhivkoplias, Elsa Arcaute | 2023-06-30T11:04:41Z | http://arxiv.org/abs/2306.17547v1 | # Spaces of innovation and venture formation: the case of biotech in the United Kingdom
###### Abstract
Patents serve as valuable indicators of innovation and provide insights into the spaces of innovation and venture formation within geographic regions. In this study, we utilise patent data to examine the dynamics of innovation and venture formation in the biotech sector across the United Kingdom (UK). By analysing patents, we identify key regions that drive biotech innovation in the UK. Our findings highlight the crucial role of biotech incubators in facilitating knowledge exchange between scientific research and industry. However, we observe that the incubators themselves do not significantly contribute to the diversity of innovations which might be due to the underlying effect of geographic proximity on the influences and impact of the patents. These insights contribute to our understanding of the historical development and future prospects of the biotech sector in the UK, emphasising the importance of promoting innovation diversity and fostering inclusive enterprise for achieving equitable economic growth.
## Keywords
Innovation, diversity, knowledge spillovers, patents, startups, biotechnology.
## 1 Introduction
The contribution of industries to economic development varies significantly, and the emergence of the global biotechnology sector, which utilises living organisms and their compounds for diverse applications across industries, exemplifies this trend. The biotech sector in the US stands out as a remarkable success story, with revenues exceeding \(10^{5}\) billion within just three decades [1]. In the European context, the UK has garnered attention due to its position as the third-largest contributor to biomedical patents among 16 European countries. Additionally, the UK boasts the highest concentration of financially active biomedical startups and venture capital firms [2].
Biotechnology has emerged as a critical driver of innovation in fields such as medicine, agriculture, and environmental sciences. However, the role of inventions and knowledge diversity in the success of biomedical startups remains unclear. Understanding the dynamics and spatial patterns of biotech innovation is crucial for policymakers, entrepreneurs, and researchers aiming to foster and support the growth of this sector. This study focuses specifically on the biotech landscape in the United Kingdom (UK) and examines the spaces of innovation and venture formation within the country.
The UK is recognised as a biotech hub, hosting numerous research institutions, universities, and industry players. It offers a unique ecosystem that fosters collaboration, knowledge exchange, and entrepreneurial activities. By analysing patent data, this study aims to gain insights into the spatial distribution of biotech innovation across the UK, in order to identify the key regions and cities driving growth in this sector.
Innovation activity can be assessed through the analysis of patent data and technological advancements. Pugliese et al. demonstrated that technology serves as the most reliable predictor of industrial and scientific production in the coming decades [3]. The utilisation of patent data to monitor technological innovation is a well-established practice in academic research [4, 5, 6]. Thanks to the availability of different patent document databases and increased computational capabilities, patents have become a valuable resource for studying technological change [7]. Various entities, including academia (e.g., Hall et al. [8]), institutions (e.g., PATSTAT, REGPAT), and corporations (e.g., Google Patents), have contributed to the development of extensive collections of patent-related documents. This abundance of data has allowed researchers to explore multiple aspects of patented inventions, including their role in explaining technological change, their interconnections, and their association with inventors and applicants [7, 9, 10]. One notable characteristic of patent documents, particularly relevant for economic analysis, is the presence of codes associated with the claims made in patent applications. These codes categorise the scope of commercial rights sought by inventors. To facilitate examination by patent office officials, claims are classified into technological areas using classification systems such as the IPC classification [11] or the United States Patent Classification (USPC) [12, 13]. These classification systems employ hierarchical six-digit codes, which provide increasingly detailed definitions of technological domains. By mapping claims to classification codes, localised analysis of patents and patent applications within specific technology domains becomes possible. However, it is essential to recognise the limitations of using patents as a proxy for measuring innovation [14]. Estimating the value of patents presents a significant challenge [15]. While certain patents hold substantial market value, others may possess limited or no value. Furthermore, employing patent statistics as a comprehensive measure of economic and inventive activity is not without drawbacks [16, 5]. It is crucial to acknowledge that inventions do not encompass all forms of knowledge production in the economy, and patents do not cover the entirety of knowledge generated [17]. Additionally, patents represent just one among several indicators of knowledge and do not uniformly capture all sectors of the economy [18, 19].
This study builds upon previous research that explored knowledge spillovers in the UK based on patent citations, in which biotechnology showed a weaker effect than other technologies [20]. By focusing on the local level, specifically NUTS3 regions (Nomenclature of Territorial Units for Statistics, level 3), and incorporating information on startups, we aim to address this limitation and investigate the influence of biotechnology incubators. Furthermore, we examine the regions of the UK that demonstrate high intellectual property (IP) potential and explore their capacity to drive knowledge accumulation in other industries.
## 2 Data
### Patents
Sources: The patent data used in this work is the same as in [20]. It comes from the OECD REGPAT database [21] and covers the period from 1977 to 2019. It has been filtered so that only patents belonging to the UK, both cited and citing, are considered; for further details on the data manipulation please refer to [20]. In that work, 43,751 patents were considered. We further filtered the data to consider only patents
Figure 1: **Geographical distribution of patents in the UK.** NUTS3 regions with red boundaries are those with incubators. **a**: Number of patents active in the UK. **b**: Number of these patents that belong to the biotech sector. **c**: Share, i.e. the percentage of patents that are in the biotech sector.
that have been cited at least once or that cite at least once, resulting in a total of 25,852 patents, of which 12,543 are cited at least once and 15,745 cite at least once.
Biotechnology patents: Each patent in our dataset can be described by one or more technology codes (IPC codes), which identify the technological fields to which it belongs. Patents with at least one IPC code belonging to the biotech classification are considered biotech. Using the biotechnology classification of IPC codes (see Appendix), we identified 1,436 patents in this sector, of which 627 are cited at least once and 937 cite at least once; see Fig. 1.
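As an illustration, this tagging step can be sketched as a prefix match on IPC codes. The snippet below is a minimal sketch, assuming hypothetical column names (`appln_id`, `ipc_code`) and a simplified prefix list; the exact subgroup ranges listed in the Appendix (e.g., A61K35/[12-79]) would need dedicated handling.

```python
# Minimal sketch of the biotech filter: a patent is tagged biotech if any of
# its IPC codes falls under one of the biotech classes in the Appendix.
import pandas as pd

# Simplified prefixes; ranges such as A61K35/[12-79] need extra handling.
BIOTECH_PREFIXES = (
    "A01H1", "A01H4", "A01K67", "A61K38", "A61K39", "A61K48",
    "C07K", "C12M", "C12N", "C12P", "C12Q",
)

def is_biotech(ipc_codes):
    """Return True if any IPC code starts with a biotech prefix."""
    return any(code.replace(" ", "").startswith(BIOTECH_PREFIXES)
               for code in ipc_codes)

# One row per (patent, IPC code) pair, as in OECD REGPAT extracts.
ipc = pd.DataFrame({
    "appln_id": [1, 1, 2, 3],
    "ipc_code": ["C12N 15/00", "A61K 39/395", "G06F 17/30", "C07K 16/00"],
})
biotech_flag = ipc.groupby("appln_id")["ipc_code"].apply(is_biotech)
print(biotech_flag)  # patents 1 and 3 are biotech, patent 2 is not
```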
Citations: The citation network (see Section 3.1) is derived from the citation dataset included in the OECD REGPAT database [21]. For this work, we excluded citations from patents outside the UK to patents within the UK.
Geographical discrepancies: The UK patent database comprises patents from 1977 to 2018, a broad timeframe. Consequently, different patents are linked to different editions of the NUTS3 classification available on the Eurostat website. To address this issue, all iterations of NUTS3 were downloaded and the patents within each region were tallied. This approach introduces minimal overlap, as the different NUTS3 versions primarily entail minor boundary adjustments.
### Startups and Incubators
Startups: While the term "startup" has become increasingly ubiquitous, a precise definition remains elusive due to its dynamic nature. A startup can be characterised as a young, innovative business; for the purposes of this study, we selected only those companies that had been registered for no longer than 5 years, which allowed us to define them as startups and select them for further analysis. We extracted all new firms whose focus lies within the field of biotechnology, a multidisciplinary field that touches many areas of society, including medicine and environmental science, integrated with the engineering sciences. In our search for startups, we referred to the official list [22] compiled by the government of the United Kingdom, which consists of the SIC codes used to classify businesses by industry in administrative units. We associated biotechnology with the manufacture of basic pharmaceuticals, pharmaceutical preparations, irradiation, electromedical and electrotherapeutic equipment, and dental equipment; the full list is available at [22].
The firms' data was extracted from Companies House [23] for 2018, and we considered as startups all firms created in 2014 or later. Around 51% of the firms registered in 2018 can be considered startups, a total of 2,181,018. Of those, the share in biotech is 0.163%, a total of 3,548. See the Appendix for the distribution.
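The selection can be summarised in a short pandas sketch. The file and column names below are hypothetical, and the SIC codes shown are UK SIC 2007 codes consistent with the categories above; the exact list used in the study is the official one in [22].

```python
# Sketch of the startup selection, assuming a Companies House snapshot with
# 'incorporation_date' and 'sic_code' columns (names are illustrative).
import pandas as pd

SNAPSHOT_YEAR = 2018
MAX_AGE_YEARS = 5  # registered for no longer than 5 years

# Illustrative UK SIC 2007 codes for the biotech-related categories in [22].
BIOTECH_SIC = {"21100", "21200", "26600", "32500"}

firms = pd.read_csv("companies_house_2018.csv",  # hypothetical input file
                    parse_dates=["incorporation_date"])

# Firms incorporated in 2014 or later count as startups in 2018.
is_startup = (firms["incorporation_date"].dt.year
              >= SNAPSHOT_YEAR - MAX_AGE_YEARS + 1)
startups = firms[is_startup]
biotech_startups = startups[startups["sic_code"].astype(str).isin(BIOTECH_SIC)]

print(len(startups), len(biotech_startups),
      100 * len(biotech_startups) / len(startups))  # share in biotech (%)
```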
Figure 2: **Geographical distribution of startups in the UK.** **a**: Number of startups active in 2018 in the UK. **b**: Number of these startups that operate in the biotech sector. **c**: Share, i.e. the percentage of startups operating in the biotech sector. The NUTS3 regions with red boundaries are those with incubators.
Incubators: While a startup is typically a newly established business venture with a scalable business model and high growth potential, incubators are organisations or programmes designed to support and nurture startups during their early stages by providing resources, mentorship, and infrastructure. Technology incubators are established to promote the commercialisation of knowledge derived from university-industry partnerships and to accelerate business development by providing access to seed investment [24]. Information about 20 biotechnology incubators in the UK was collected from [25], including their geographic location and the type of institution providing the platform (university-based, hospital-based, large-pharma-based, or stand-alone). For 13 incubators, we also collected information about their size and number of tenant firms [24].
## 3 Methods
### Citation network
As mentioned previously, 15,745 patents cite at least one other patent. Using the unique identifiers of these patents, we create a directed network. Fig. 3 shows the giant connected component (GCC) of this network after removing all nodes (patents) that do not belong to biotechnology.
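A minimal sketch of this construction with `networkx` is given below; the input file names and column names are illustrative, and we take the weakly connected giant component since the network is directed.

```python
# Citation network: nodes are patents, an edge u -> v means u cites v.
import networkx as nx
import pandas as pd

edges = pd.read_csv("uk_citations.csv")  # hypothetical REGPAT-derived file
G = nx.DiGraph()
G.add_edges_from(edges[["citing", "cited"]].itertuples(index=False, name=None))

# Restrict to biotech patents, then keep the giant connected component
# (weakly connected, since the network is directed).
biotech_ids = set(pd.read_csv("biotech_patents.csv")["appln_id"])
B = G.subgraph(biotech_ids).copy()
gcc_nodes = max(nx.weakly_connected_components(B), key=len)
GCC = B.subgraph(gcc_nodes).copy()

print(GCC.number_of_nodes(), GCC.number_of_edges())
```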
### Precursors of innovation and their diversity
Diversity is considered an important driver of innovation. We explore the diversity of patents and of the innovations derived from biotech using a commonly employed measure of diversity [26], Shannon's entropy. We also explore the overlap between the technologies involved in citing and cited patents using the Jaccard index.
Shannon's entropy: Shannon entropy (SE) [27] measures the uncertainty or randomness in a dataset: it quantifies the average amount of information, or surprise, carried by each data point. Higher entropy signifies more unpredictability, while lower entropy indicates more structure. SE is widely used in fields such as data analysis, machine learning [28] and cryptography [29] to assess dataset complexity and information content. Here, we use SE to quantify the diversity of a patent, defined for each patent as:
\[SE=-\sum_{i=1}^{N}p_{i}\log(p_{i}) \tag{1}\]
Here, \(N\) represents the number of unique IPC codes appearing in the patent, and \(p_{i}\) denotes the relative frequency of IPC code \(i\) within the patent, i.e. its number of occurrences divided by the patent's total number of codes, so that \(\sum_{i}p_{i}=1\).
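For concreteness, Eq. (1) can be computed per patent as in the following sketch, which assumes the relative-frequency definition of \(p_{i}\) given above.

```python
# Shannon entropy of a patent's IPC-code distribution (Eq. 1).
import math
from collections import Counter

def shannon_entropy(ipc_codes):
    """Shannon entropy of the IPC codes of one patent."""
    counts = Counter(ipc_codes)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

# A patent with three distinct codes is more diverse than one with one code.
print(shannon_entropy(["C12N", "A61K", "G01N"]))  # log(3) ~ 1.099
print(shannon_entropy(["C12N", "C12N"]))          # 0.0
```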
Figure 3: **Biotechnology patents.** An unweighted, directed citation network of UK patents cited by other UK patents. Node size and colour (light to dark) are proportional to in-degree. In this study, in-degree serves as a proxy for success.
**Technological similarity:** The Jaccard index [30, 31] is a measure of the similarity between two sets, defined as the ratio of the size of their intersection to the size of their union. It ranges from 0 to 1, with 0 indicating no similarity and 1 indicating complete similarity. In our case, we compute the Jaccard index for a pair of patents \(X\) and \(Y\) with sets of 4-digit IPC codes \(X_{i}\) and \(Y_{j}\):
\[\text{Technological similarity}(X,Y)=\frac{|X_{i}\cap Y_{j}|}{|X_{i}\cup Y_{j}|}. \tag{2}\]
Technological similarity = 1 (0) for patents with identical (completely different) IPC codes.
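A direct implementation of Eq. (2) is sketched below; truncating IPC codes to their first four characters is our assumption for the 4-digit level.

```python
# Technological similarity (Eq. 2): Jaccard index of 4-digit IPC code sets.
def technological_similarity(codes_x, codes_y):
    """Jaccard index of two patents' sets of 4-digit IPC codes."""
    X = {code[:4] for code in codes_x}
    Y = {code[:4] for code in codes_y}
    return len(X & Y) / len(X | Y) if (X | Y) else 0.0

print(technological_similarity(["C12N 15/00"], ["C12N 9/00"]))   # 1.0
print(technological_similarity(["C12N 15/00"], ["G06F 17/30"]))  # 0.0
```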
## 4 Results
### Precursors to biotech innovation
To foster innovation, it is important to understand the conditions that give rise to the observed patents. In this section, we look at the precursors of innovation, i.e. the cited patents and their IPC codes, and explore whether or not they belong to the same industry. We also examine how these patterns evolve over time.
In Fig. 4(a), we plot the mean technological similarity of non-biotech and biotech patents to their precursors. From the large difference in the fraction of patents that are highly similar to their precursors, we see that innovations in biotech are more likely than those outside biotech to draw on different technologies.
As a further investigation, we check whether the precursors of biotech patents come from outside the biotech industry. Around 40% of biotech patents have exclusively non-biotech precursors, while around 55% stem only from other biotech patents. In Fig. 4(b), we plot the distribution of these two classes of biotech patents over time, finding that those with exclusively non-biotech precursors are more recent than those with exclusively biotech precursors.
### The role of incubators for innovation
The UK has invested in biotech incubators across its regions. The main role of these incubators is to provide an ecosystem that supports biotech startups with skills and expertise, in order to secure growth and advance the industry. In this section, we explore whether the regions that host these incubators show an advantage over those that do not. We assess the impact of the incubators by looking at biotech startups and patents (Fig. 6).
### Derived innovations
#### 4.3.1 Diversity
Considering all patents cited at least once, we observe a positive Spearman correlation of 0.352 (p-value \(<0.05\)) between the number of citations received and the diversity of the citing patents (Fig. 5(a)). The corresponding
Figure 4: **Precursors of biotech innovations.** **(a)**: Mean technological similarity to precursors for biotech and non-biotech patents. **(b)**: Distribution of biotech patents with and without biotech precursors over time.
correlation for biotech patents alone is 0.331 (p-value \(<0.05\)) (Fig. 5(b)). Next, we define a simple null model for our citation network using a directed configuration model [32, 33], which preserves the degree sequence of the directed network, i.e. the number of citations received and made by each patent. Comparing the observed correlation with 1,000 simulations of the null model, we find that it is lower than expected (Fig. 5(c)).
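A sketch of this null-model comparison with `networkx` and `scipy` follows. The `entropy` dictionary (patent \(\to\) Shannon entropy of its IPC codes) is assumed to be precomputed, and multi-edges and self-loops produced by the configuration model are discarded, a simplification relative to [32, 33].

```python
# Null model: directed configuration model preserving in-/out-degrees,
# with the diversity-vs-citations correlation recomputed on each rewiring.
import random
import networkx as nx
import numpy as np
from scipy.stats import spearmanr

def citation_diversity_correlation(G, entropy):
    """Spearman correlation between citations received (in-degree) and the
    mean diversity of the patents citing each cited patent."""
    cited = [n for n in G if G.in_degree(n) > 0]
    in_deg = [G.in_degree(n) for n in cited]
    mean_div = [np.mean([entropy[u] for u in G.predecessors(n)])
                for n in cited]
    return spearmanr(in_deg, mean_div)[0]

def null_distribution(G, entropy, n_sims=1000, seed=0):
    """Correlation under rewirings that preserve every patent's degrees."""
    rng = random.Random(seed)
    nodes = list(G)
    din = [G.in_degree(n) for n in nodes]
    dout = [G.out_degree(n) for n in nodes]
    corrs = []
    for _ in range(n_sims):
        R = nx.directed_configuration_model(din, dout,
                                            seed=rng.randrange(2**31))
        R = nx.DiGraph(R)                                # drop parallel edges
        R.remove_edges_from(list(nx.selfloop_edges(R)))  # drop self-loops
        R = nx.relabel_nodes(R, dict(enumerate(nodes)))  # restore patent IDs
        corrs.append(citation_diversity_correlation(R, entropy))
    return corrs
```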
To check whether biotechnology patents lead to more diverse innovation, we classify all citing patents into those with (982) and without (14,763) biotech precursors. For each citing patent, we compute the mean technological similarity to its precursors and find that 30% of the patents without biotech precursors are technologically identical to their precursors, while the same statistic for those with biotech precursors is 7.5%. This indicates that biotech patents combine with knowledge from other fields to create novel innovations.
Figure 5: **Diversity vs number of citations.** Correlation between the diversity of citing patents and the number of citations received, for all patents cited at least once. **(a)** All patents. **(b)** Biotech patents. The positive correlation indicates that highly cited patents are precursors to diverse innovations. **(c)** Compared with the null model, the correlation is lower than expected.
Figure 6: **Geographical distribution of biotech activity.** Each dot represents a NUTS3 region, showing its biotech share (a) and total biotech activity (b) in patents and startups. Red triangles correspond to regions with biotech incubators, amongst which we highlight Oxford and Cambridge.
## 5 Discussion
Over the last 40 years, the UK has produced a large number of biotechnological breakthroughs. Our analysis of the patent citation network identified important innovations, the most cited being fully humanised antibodies for therapeutic use [34]. We show that the mean technological similarity of biotech patents to their innovation precursors is lower than average, and that over the last decade progress in the biotech industry has mostly been driven by inventions from other fields (ecosystem-driven growth). The regional correlation between biotechnological patents and companies clearly highlights the importance of incubators, which are meant to facilitate knowledge exchange between science and industry. The most successful platforms are located in Oxford and Cambridge, and were already well established by the early 1990s [35]. Yet, at the regional level, the incubators themselves do not generate much novelty in the patents that build on biotechnological advances.
The technological diversity of patents that use biotechnological innovations correlates strongly with the importance of the innovation. However, our null model suggests that this correlation is lower than expected at random. This could be due to multiple reasons: first, by rewiring our network we lose its temporal structure; second, the shuffling of edges does not take the geographic location of patents into account, so in the randomised networks a patent can cite patents anywhere in the UK. While this is possible in reality, it is not what is most often observed: the influence of a patent is indeed driven by geographic proximity. To understand these regional effects, we examine the geographical distribution of these patents and their impact within the UK.
Understanding the historical development of biotechnology, a newly emerged and rapidly evolving sector of the economy, contributes to the prioritisation of real economic goals. Under time and cost constraints, analysis of technology development can positively inform policy-making and regulation. While the creation of biotechnological clusters positively affected economic growth in the past, future biotech policy must promote the innovation diversity that will unlock equity and inclusive enterprise in the UK economy.
## Acknowledgements
This work is the output of the Complexity72h workshop, held at IFISC in Palma, Spain, 26-30 June 2023. [https://www.complexity72h.com/](https://www.complexity72h.com/). It means that after many coffees and laughs (and some beers) we came up with a plan for a future paper. This is the seed, the very beginning of a wonderful collaboration.
## Appendix
### Biotechnological classification
List of IPC codes classified as biotech according to [21]: A01H1, A01H4, A01K67, A61K35/[12-79], A61K(38, 39), A61K48, C02F3/34, C07G(11, 13, 15), C07K(4, 14, 16, 17, 19), C12M, C12N, C12P, C12Q, C40B(10, 40/02-08, 50/06), G01N27/327, G01N33/(53,54,55,57,68,74,76,78,88,92), G06F19/[10-18,20-24]

Figure 7: **Average diversity of derived innovations of biotech patents.** The z-score of the mean diversity (defined using Shannon entropy) of innovations derived from the biotech patents in every NUTS3 region. Grey regions are those without any biotech patents; regions with dashed boundaries contain incubators. Incubator regions do not host patents that create more diverse innovations than other regions.
### Startups
In complex systems research, it is common practice to fit heavy-tailed distributions to real-world data. In Fig. 8, we show the best-fit power law to the number of startups in different UK regions 1, using the algorithms provided by Clauset et al. [36]. The number of startups appears to follow a power law with exponent \(\alpha=2.95\). Beyond visual inspection, the Kolmogorov-Smirnov test [37] returned a p-value of 0.627, meaning that there is no reason to reject the power-law hypothesis for this dataset.
Footnote 1: For the purpose of the analysis we used the NUTS3 territorial units of the UK, and for each of them we computed the number of startups. See Section **Startups and Incubators** for more details.
However, it is not rare in data analysis to consider only a truncated version of a given dataset. Following the recommendations of Clauset et al. [36], we focused solely on the right tail of the distribution and performed the fitting procedure for values greater than 13,966, keeping about 30% of the data. The high Kolmogorov-Smirnov p-value suggests that this tail indeed follows a power law. A direct consequence is visible in the map in Fig. 2(a), where one can see many green areas and only a few yellow ones: the occurrence of a power law means that a few regions host an extraordinarily large number of startups.
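For reference, the tail fit can be reproduced with the `powerlaw` Python package, which implements the Clauset et al. [36] procedure; the input file name is hypothetical, and the bootstrap p-value quoted above is computed separately.

```python
# Power-law fit to the number of startups per NUTS3 region.
import numpy as np
import powerlaw

counts = np.loadtxt("startups_per_nuts3.txt")  # hypothetical input file

# Fit with xmin chosen by minimising the KS distance (the default);
# alternatively, xmin can be fixed to the tail threshold quoted above.
fit = powerlaw.Fit(counts, discrete=True)
print(fit.power_law.alpha, fit.xmin)

# Goodness of fit: KS distance between the data tail and the fitted power
# law; the quoted p-value comes from bootstrapping this statistic.
print(fit.power_law.D)
```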